---
stage: none
group: unassigned
info: Any user with at least the Maintainer role can merge updates to this content.
  For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Lesson 1
---
<div class="video-fallback">
See the video: <a href="https://www.youtube.com/watch?v=k4C3-FKvZyI">Lesson 1 intro</a>.
</div>
<figure class="video-container">
<iframe src="https://www.youtube-nocookie.com/embed/k4C3-FKvZyI" frameborder="0" allowfullscreen> </iframe>
</figure>
In this lesson, you tackle the smallest of problems: a one-character text change. To do so, you need to learn:
- How to set up a GitLab Development Environment.
- How to navigate the GitLab code base.
- How to create a merge request in the GitLab project.
After you have learned these three things, a GitLab team member will do a live coding demo.
In the demo, they'll apply each of these skills by completing one of these small issues, so that you can then complete an issue by yourself.
A list of issues very similar to the one we'll be live coding is available [in the "Linked items" section of this issue](https://gitlab.com/gitlab-org/gitlab/-/issues/389920). It is worth commenting on one of them now to get yourself assigned, so that you can follow along.
## What is the GDK?
<div class="video-fallback">
See the video: <a href="https://www.youtube.com/watch?v=qXGXshfo934">What is the GDK</a>?
</div>
<figure class="video-container">
<iframe src="https://www.youtube-nocookie.com/embed/qXGXshfo934" frameborder="0" allowfullscreen> </iframe>
</figure>
The GDK (GitLab Development Kit) is a local instance of GitLab that allows developers to run and test GitLab on their own computers.
Unlike frontend-only applications, the GDK runs the entire GitLab application, including the backend services, APIs, and a local database.
This allows developers to make changes, test them in real time, and validate their modifications.
Tips for using the GDK:
- Troubleshooting documentation: When encountering issues with the GDK, refer to the troubleshooting documentation in the [GDK repository](https://gitlab.com/gitlab-org/gitlab-development-kit/-/tree/main/doc/troubleshooting).
These resources provide useful commands and tips to help resolve common problems.
- Using the Rails console: The Rails console is an essential tool for interacting with your local instance of GitLab.
You can access it by running `gdk rails c` and use it to enable or disable feature flags, perform backend operations, and more (see the sketch after this list).
- Stay updated: Regularly update your GDK by running `gdk update`.
This command fetches the latest branches of the GitLab project and the GDK, along with their dependencies.
Keeping your GDK up to date ensures you are working with the latest version of GitLab and have the latest bug fixes.
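As a minimal sketch of that console workflow (the feature flag name is a hypothetical placeholder):

```shell
# Open a Rails console against your local GDK instance.
# Run this from inside the gdk/ or gdk/gitlab/ directory.
gdk rails c

# Inside the console, you can then toggle a feature flag, for example:
#   Feature.enable(:my_feature_flag)    # :my_feature_flag is a placeholder name
#   Feature.disable(:my_feature_flag)
```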
Remember, if you need further assistance or have specific questions, you can reach out to the GitLab community through our [Discord](https://discord.com/invite/gitlab) or [other available support channels](https://about.gitlab.com/community/contribute/).
## Installing and using the GDK locally
<div class="video-fallback">
See the video: <a href="https://www.youtube.com/watch?v=fcOyjuCizmY">Installing the GDK</a>.
</div>
<figure class="video-container">
<iframe src="https://www.youtube-nocookie.com/embed/fcOyjuCizmY" frameborder="0" allowfullscreen> </iframe>
</figure>
For the latest installation instructions, refer to the [GitLab Development Kit documentation](https://gitlab.com/gitlab-org/gitlab-development-kit#installation).
Here's a step-by-step summary:
1. Prerequisites:
- 16 GB RAM. If you have less, consider [using Gitpod](#using-gitpod-instead-of-running-the-gdk-locally).
- Ensure that Git is installed on your machine.
- Install a code editor, such as Visual Studio Code.
- [Create an account](https://gitlab.com/users/sign_up) or [sign in](https://gitlab.com/users/sign_in) on GitLab.com and join the [community members group](https://gitlab.com/gitlab-community/meta#request-access-to-community-forks).
1. Installation:
- Choose a directory to install the GitLab Development Kit (GDK).
- Open your terminal and go to the chosen directory.
- Download and run the installation script from the terminal:
```shell
curl "https://gitlab.com/gitlab-org/gitlab-development-kit/-/raw/main/support/install" | bash
```
- Only run scripts from trusted sources to ensure your safety.
- The installation process may take around 20 minutes or more.
1. Choosing the repository:
- Instead of cloning the main GitLab repository, use the community fork, which is recommended for wider community members.
- Follow the instructions provided to install the community fork.
1. GDK structure:
- After the installation, the GDK directory is created.
- Inside the GDK directory, you'll find the GitLab project folder.
1. Working with the GDK:
- The GDK offers many commands you can use to interact with your installation. To run these commands, you must be inside the GDK or GitLab folder.
- To start the GDK, run the command `gdk start` in your terminal.
- You can explore available commands and options by running `gdk help` in the terminal; a few common commands are shown after this list.
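For reference, a few everyday commands look like this (a rough sketch; run them from inside the GDK or GitLab folder):

```shell
gdk start    # start all GDK services
gdk status   # show which services are running
gdk stop     # stop all GDK services
gdk update   # update GitLab, the GDK, and their dependencies
gdk help     # list all available GDK commands
```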
Remember to consult the documentation or seek community support if you have any further questions or issues.
## Using Gitpod instead of running the GDK locally
<div class="video-fallback">
See the video: <a href="https://www.youtube.com/watch?v=RI2kM5_oii4">Using Gitpod with GitLab</a>.
</div>
<figure class="video-container">
<iframe src="https://www.youtube-nocookie.com/embed/RI2kM5_oii4" frameborder="0" allowfullscreen> </iframe>
</figure>
Gitpod is a service that allows you to run a virtual machine, specifically the GitLab Development Kit (GDK), on the Gitpod server instead of running it on your own machine.
It provides a web-based Integrated Development Environment (IDE) where you can edit code and see the GDK in action.
Gitpod is useful for quickly getting a GDK environment up and running, for making small merge requests without installing the GDK locally, or for running GDK on a machine that may not have enough resources.
To use Gitpod:
1. [Request access to the GitLab community forks](https://gitlab.com/groups/gitlab-community/community-members/-/group_members/request_access).
Alternatively, you can create your own public fork, but you will miss out on [the benefits of the community forks](https://gitlab.com/gitlab-community/meta#why).
1. Go to the [GitLab community fork website](https://gitlab.com/gitlab-community/gitlab), select **Edit**, then select **Gitpod**.
1. Configure your settings, such as the editor (VS Code desktop or browser) and the context (usually the `main` or `master` branch).
1. Select **Open** to create your Gitpod workspace. This process may take up to 20 minutes. The GitLab Development Kit (GDK) will be installed in the Gitpod workspace. This installation is faster than downloading and installing the full GDK locally.
After the workspace is created, you'll find your chosen IDE running in your browser. You can also connect it to your desktop IDE if preferred.
Use Gitpod just as you would use VS Code locally: create branches, make code changes, commit them, and push them back to the community fork.
Other tips:
- Remember to push your code regularly to avoid the workspace timing out. Idle workspaces are eventually destroyed.
- Customize your Gitpod workspace settings if needed, such as making your instance of GitLab frontend publicly available.
- If you run out of minutes, contact the support team on the Discord server.
- Troubleshoot issues by using commands like `gdk start` and `gdk status` in the Gitpod workspace, just as you would if it were running locally.
By following these steps, you can leverage Gitpod to efficiently develop with the GitLab Development Kit without the need for local installation.
## Navigating the GitLab codebase
<div class="video-fallback">
See the video: <a href="https://www.youtube.com/watch?v=Wc5u879_0Aw">How to navigate the GitLab codebase</a>.
</div>
<figure class="video-container">
<iframe src="https://www.youtube-nocookie.com/embed/Wc5u879_0Aw" frameborder="0" allowfullscreen> </iframe>
</figure>
Understanding how to navigate the GitLab codebase is essential for contributors.
Navigating the codebase and locating specific files can be challenging but crucial for making changes and addressing issues effectively.
Here we'll explore a step-by-step process for finding files and finding where they are rendered in GitLab.
If you already know the file you are going to work on and now you want to find where it is rendered:
1. Start by gathering clues to understand the file's purpose. Look for relevant information within the file itself, such as keywords or specific content that might indicate its context.
1. You can also examine the file path (or folder structure) to gain insights into where the file might be rendered.
Much of GitLab's routing closely mirrors the folder structure.
1. If you can work out which feature (or one of the features) this component is used in, you can then use the GitLab user documentation to find out how to reach the feature page.
1. Follow the component hierarchy: do a global search for the filename to identify the parent component that renders the component.
Continue to follow the hierarchy of components to trace back to a feature you recognize or can search for in the GitLab user docs.
1. You can use `git blame` with an extension like GitLens to find a recent MR where this file was changed.
Most MRs have a "How to validate" section that you can follow. If the MR doesn't have one, look at earlier changes until you find one that has validation steps.
If you know which page you need to fix and you want to find the file path, here are some things you can try:
- Look for content that is unique and doesn't contain variables, so that you can search for its translation variable.
- Try using Vue Dev Tools to find the component name.
- Look for unique identifiers like a `data-testid`, an `id`, or a unique-looking CSS class in the HTML of the component, and then search the codebase globally for those identifying strings, as in the example below.
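A minimal sketch of that kind of global search from the command line, using a hypothetical `data-testid` value and a hypothetical piece of UI text:

```shell
# Search the repository for a data-testid seen in the rendered HTML
# ("example-widget" is a hypothetical value; replace it with the one you found)
git grep -n 'data-testid="example-widget"'

# Search for a unique piece of static UI text to locate its translation entry
git grep -n "Some unique piece of UI text"
```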
## Writing a good merge request
<div class="video-fallback">
See the video: <a href="https://www.youtube.com/watch?v=H5zozDNIn98">How to write a good MR</a>.
</div>
<figure class="video-container">
<iframe src="https://www.youtube-nocookie.com/embed/H5zozDNIn98" frameborder="0" allowfullscreen> </iframe>
</figure>
When writing a merge request there are some important things to be aware of:
- Your MR will become a permanent part of the documentation of the GitLab project.
It may be used in the future to help people understand why some code works the way it does and why it doesn't use an alternative solution.
- At least two other engineers are going to review your code. For the sake of efficiency, it is best to take a little longer to get your MR right (much like the code you have written) so that it is quicker and easier for others to read.
- The MRs that you create on GitLab are available to the public. This means you can add a link to MRs you are particularly proud of to your portfolio page when looking for a job.
- Since an MR is a technical document, you should try to use a technical writing style.
If you are unfamiliar with technical writing, this short course on [technical writing from Google](https://developers.google.com/tech-writing/one) is highly recommended.
If you are also contributing to the documentation at GitLab, there is a [Technical Writing Fundamentals course from GitLab](https://university.gitlab.com/courses/gitlab-technical-writing-fundamentals).
## Live coding
<div class="video-fallback">
See the video: <a href="https://www.youtube.com/watch?v=BJCCwc1Czt4">Lesson 1 code walkthrough</a>.
</div>
<figure class="video-container">
<iframe src="https://www.youtube-nocookie.com/embed/BJCCwc1Czt4" frameborder="0" allowfullscreen> </iframe>
</figure>
Now it is your turn to complete your first MR. A list of similar issues that still need completing is available [in the "Linked items" section of this issue](https://gitlab.com/gitlab-org/gitlab/-/issues/389920). Thanks for contributing! If there are none left, let us know on [Discord](https://discord.com/invite/gitlab) or through [other available support channels](https://about.gitlab.com/community/contribute/) and we'll find more for you.
---
stage: Monitor
group: Platform Insights
info: Any user with at least the Maintainer role can merge updates to this content.
  For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: GitLab instrumentation for OpenTelemetry
---
## Enable OpenTelemetry tracing, metrics, and logs in GDK development
{{< alert type="note" >}}
The default GDK environment is not currently set up to properly
collect and display OpenTelemetry data. Therefore, you should point the
`OTEL_EXPORTER_*_ENDPOINT` environment variables to a GitLab project that:
{{< /alert >}}
1. Has an Ultimate license.
1. Gives you at least the Maintainer role.
1. Lets you enable top-level group feature flags (or is under the `gitlab-org` or `gitlab-com` top-level groups, which already have the flags enabled).
Once you have identified a project to use:
1. Note the ID of the project (available from the three dots menu at the upper right of the main project page).
1. Note the ID of the top-level group that contains the project.
1. When setting the environment variables for the following steps, add them to `env.runit` in the root of the `gitlab-development-kit` folder.
1. Follow instructions to [configure distributed tracing for a project](../tracing.md), with the following custom settings:
- For the `OTEL_EXPORTER_OTLP_TRACES_ENDPOINT` environment variable, use the following value:
```shell
export OTEL_EXPORTER_OTLP_TRACES_ENDPOINT="https://<gitlab-host>/v3/<gitlab-top-level-group-id>/<gitlab-project-id>/ingest/traces"
```
1. Follow instructions to [configure distributed metrics for a project](../metrics.md), with the following custom settings:
- For the `OTEL_EXPORTER_OTLP_METRICS_ENDPOINT` environment variable, use the following value:
```shell
export OTEL_EXPORTER_OTLP_METRICS_ENDPOINT="https://<gitlab-host>/v3/<gitlab-top-level-group-id>/<gitlab-project-id>/ingest/metrics"
```
1. Follow instructions to [configure distributed logs for a project](../logs.md), with the following custom settings:
- For the `OTEL_EXPORTER_OTLP_LOGS_ENDPOINT` environment variable, use the following value:
```shell
export OTEL_EXPORTER_OTLP_LOGS_ENDPOINT="https://<gitlab-host>/v3/<gitlab-top-level-group-id>/<gitlab-project-id>/ingest/logs"
```
1. Also add the following to the `env.runit` file:
```shell
# GitLab-specific flag to enable the Rails initializer to set up OpenTelemetry exporters
export GITLAB_ENABLE_OTEL_EXPORTERS=true
```
1. Run `gdk restart`.
1. Go to your project and follow the instructions in the docs above to enable and view tracing, metrics, or logs.
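Putting the previous steps together, the relevant part of `env.runit` might look like the following sketch. The host, group ID (`1234`), and project ID (`5678`) are placeholders; the linked tracing, metrics, and logs docs describe any additional variables they require.

```shell
# env.runit (in the root of the gitlab-development-kit folder)
# Replace <gitlab-host>, 1234 (top-level group ID), and 5678 (project ID) with your values.
export OTEL_EXPORTER_OTLP_TRACES_ENDPOINT="https://<gitlab-host>/v3/1234/5678/ingest/traces"
export OTEL_EXPORTER_OTLP_METRICS_ENDPOINT="https://<gitlab-host>/v3/1234/5678/ingest/metrics"
export OTEL_EXPORTER_OTLP_LOGS_ENDPOINT="https://<gitlab-host>/v3/1234/5678/ingest/logs"

# GitLab-specific flag to enable the Rails initializer to set up OpenTelemetry exporters
export GITLAB_ENABLE_OTEL_EXPORTERS=true
```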
## References
- [Distributed Tracing](../tracing.md)
- [Metrics](../metrics.md)
- [Logs](../logs.md)
## Related design documents
- [GitLab Observability in GitLab.com and GitLab Self-Managed](https://handbook.gitlab.com/handbook/engineering/architecture/design-documents/observability_for_self_managed/)
- [GitLab Observability - Metrics](https://handbook.gitlab.com/handbook/engineering/architecture/design-documents/observability_metrics/)
- [GitLab Observability - Logging](https://handbook.gitlab.com/handbook/engineering/architecture/design-documents/observability_logging/)
---
stage: Platforms
group: Scalability
info: Any user with at least the Maintainer role can merge updates to this content.
  For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Observability for stage groups
---
Observability is about bringing visibility into a system to see and
understand the state of each component, with context, to support
performance tuning and debugging. To run a SaaS platform at scale, a
rich and detailed observability platform is needed.
To make information available to [stage groups](https://handbook.gitlab.com/handbook/product/categories/#hierarchy),
we aggregate metrics by feature category and then show
this information on [dashboards](dashboards/_index.md) tailored to the groups. Only metrics
for the features built by the group are visible on their
dashboards.
With a filtered view, groups can discover bugs and performance regressions that could otherwise
be missed when viewing aggregated data.
For more specific information on dashboards, see:
- [Dashboards](dashboards/_index.md): a general overview of where to find dashboards
and how to use them.
- [Stage group dashboard](dashboards/stage_group_dashboard.md): how to use and customize the stage group dashboard.
- [Error budget detail](dashboards/error_budget_detail.md): how to explore error budget over time.
## Error budget
The error budget is calculated from the same [Service Level Indicators](https://en.wikipedia.org/wiki/Service_level_indicator) (SLIs)
that we use to monitor GitLab.com. The 28-day availability number for a
stage group is comparable to the
[monthly availability](https://handbook.gitlab.com/handbook/engineering/infrastructure/performance-indicators/#gitlabcom-availability)
we calculate for GitLab.com, except it's scoped to the features of a group.
For more information about how we use error budgets, see the
[Engineering Error Budgets](https://handbook.gitlab.com/handbook/engineering/error-budgets/) handbook page.
By default, the first row of panels on both dashboards shows the
[error budget for the stage group](https://handbook.gitlab.com/handbook/engineering/error-budgets/#budget-spend-by-stage-group).
This row shows how features owned by the group contribute to our
[overall availability](https://handbook.gitlab.com/handbook/engineering/infrastructure/performance-indicators/#gitlabcom-availability).
The official budget is aggregated over 28 days. You can see it on the
[stage group dashboard](dashboards/stage_group_dashboard.md).
The [error budget detail dashboard](dashboards/error_budget_detail.md)
allows customizing the range.
We show the information in two formats:
- Availability: this number can be compared to the GitLab.com overall
availability target of 99.95% uptime.
- Budget Spent: the time over the past 28 days that features owned by the group have not been performing
adequately.
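As a rough illustration of how the two formats relate, with the 99.95% target over a 28-day window the full budget works out to about 20 minutes of spend:

```math
28 \times 24 \times 60 \text{ minutes} \times (1 - 0.9995) \approx 20 \text{ minutes}
```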
The budget is calculated based on indicators per component. Each
component can have two indicators:
- [Apdex](https://en.wikipedia.org/wiki/Apdex): the rate of operations that performed adequately.
The threshold for "performing adequately" is stored in our
[metrics catalog](https://gitlab.com/gitlab-com/runbooks/-/tree/master/metrics-catalog)
and depends on the service in question. For the Puma (Rails) component of the
[API](https://gitlab.com/gitlab-com/runbooks/-/blob/f22f40b2c2eab37d85e23ccac45e658b2c914445/metrics-catalog/services/api.jsonnet#L127),
[Git](https://gitlab.com/gitlab-com/runbooks/-/blob/f22f40b2c2eab37d85e23ccac45e658b2c914445/metrics-catalog/services/git.jsonnet#L216),
and
[Web](https://gitlab.com/gitlab-com/runbooks/-/blob/f22f40b2c2eab37d85e23ccac45e658b2c914445/metrics-catalog/services/web.jsonnet#L154)
services, that threshold is **5 seconds** when not opted in to the
[`rails_request` SLI](../application_slis/rails_request.md).
We've made this target configurable in [this project](https://gitlab.com/groups/gitlab-com/gl-infra/-/epics/525).
To customize the request Apdex, see
[Rails request SLIs](../application_slis/rails_request.md).
This new Apdex measurement is not part of the error budget until you
[opt in](https://gitlab.com/gitlab-com/gl-infra/scalability/-/issues/1451).
For Sidekiq job execution, the threshold depends on the
[job urgency](../sidekiq/worker_attributes.md#job-urgency). It is
[currently](https://gitlab.com/gitlab-com/runbooks/-/blob/f22f40b2c2eab37d85e23ccac45e658b2c914445/metrics-catalog/services/lib/sidekiq-helpers.libsonnet#L25-38)
**10 seconds** for high-urgency jobs and **5 minutes** for other jobs.
Some stage groups might have more services. The thresholds for them are also in the metrics catalog.
- Error rate: The rate of operations that had errors.
The ratio is calculated as follows:

<!--
To update this calculation, paste the following math block in a GitLab comment, update it,
and take a screenshot:
```math
\frac {operations\_meeting\_apdex + (total\_operations - operations\_with\_errors)} {total\_apdex\_measurements + total\_operations}
```
-->
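As a worked example with hypothetical numbers: if 990 out of 1,000 Apdex measurements met their target and 5 out of 1,000 operations had errors, the ratio would be:

```math
\frac{990 + (1000 - 5)}{1000 + 1000} = \frac{1985}{2000} = 99.25\%
```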
## Check where budget is being spent
Both the [stage group dashboard](dashboards/stage_group_dashboard.md)
and the [error budget detail dashboard](dashboards/error_budget_detail.md)
show panels to see where the error budget was spent. The stage group
dashboard always shows a fixed 28 days. The error budget detail
dashboard allows drilling down to the SLIs over time.
The row below the error budget row is collapsed by default. Expanding
it shows which component and violation type had the most offending
operations in the past 28 days.

The first panel on the left shows a table with the number of errors per
component. Digging into the first row of that table is the best place to start, because that
component has the biggest impact on the budget spent.
Commonly, the components that spend most of the budget are Sidekiq or Puma. The panel in
the center explains what different violation types mean and how to dig
deeper in the logs.
The panel on the right provides links to Kibana that should reveal
which endpoints or Sidekiq jobs are causing the errors.
<i class="fa fa-youtube-play youtube" aria-hidden="true"></i>
To learn how to use these panels and logs for
determining which Rails endpoints are slow,
see the [Error Budget Attribution for Purchase group](https://youtu.be/M9u6unON7bU) video.
Other components visible in the table come from
[service-level indicators](https://sre.google/sre-book/service-level-objectives/) (SLIs) defined
in the [metrics catalog](https://gitlab.com/gitlab-com/runbooks/-/blob/master/metrics-catalog/README.md).
For those types of failures, you can follow the link to the service
dashboard linked from the `type` column. The service dashboard
contains a row specifically for the SLI that is causing the budget
spent, with links to logs and a description of what the
component means.
For example, see the `server` component of the `web-pages` service:

To add more SLIs tailored to specific features, you can use an [Application SLI](../application_slis/_index.md).
## Kibana dashboard for error budgets
For a detailed analysis you can use [a specialized Kibana dashboard](https://log.gprd.gitlab.net/goto/771b5c10-c0ec-11ed-85ed-e7557b0a598c), like this:

Description:
- **Apdex requests over limit (graph)** - Displays only requests that exceeded their
target duration.
- **Apdex operations over-limit duration (graph)** - Displays the distribution of duration
components (database, Redis, Gitaly, and Rails app).
- **Apdex requests** (pie chart) - Displays the percentage of `2xx`, `3xx`, `4xx` and
`5xx` requests.
- **Slow request component distribution** - Highlights the component responsible
for Apdex violation.
- **Apdex operations over limit** (table) - Displays the number of operations over the
limit for each endpoint.
- **Apdex requests over limit** - Displays a list of individual requests responsible
for Apdex violation.
### Use the dashboard
1. Select the feature category you want to investigate.
1. Scroll to the **Feature Category** section. Enter the feature name.
1. Select **Apply changes**. Selected results contain only requests related to this feature category.
1. Select the time frame for the investigation.
1. Review the dashboard and pay attention to the types of failures.
Questions to answer:
1. Does the failure pattern look like a spike? Or does it persist?
1. Does the failure look related to a particular component? (database, Redis, ...)
1. Does the failure affect a specific endpoint? Or is it system-wide?
1. Does the failure appear to be caused by infrastructure incidents?
## GitLab instrumentation for OpenTelemetry
There is an ongoing effort to instrument the GitLab codebase for OpenTelemetry.
For more specific information on this effort, see [GitLab instrumentation for OpenTelemetry](gitlab_instrumentation_for_opentelemetry.md).
---
stage: Platforms
group: Scalability
info: Any user with at least the Maintainer role can merge updates to this content.
  For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Error budget detail dashboard
---
With error budget detail dashboards, you can explore the error budget
spent at specific moments in time. By default, the dashboard shows
the past 28 days. You can adjust it with the [time range controls](_index.md#time-range-controls)
or by selecting a range on one of the graphs.
This dashboard is the same kind of dashboard we use for service level
monitoring. For example, see the
[overview dashboard for the web service](https://dashboards.gitlab.net/d/web-main) (GitLab internal).
## Error budget panels
At the top of each dashboard is the same panel with the [error budget](../_index.md#error-budget).
Here, the time-based targets adjust depending on the selected range.
For example, while the budget is 20 minutes per 28 days, it is only a quarter of that for 7 days:

Also, keep in mind that Grafana rounds the numbers. In this example, the
total time spent is 5 minutes and 24 seconds, which is 24 seconds over
budget.
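The arithmetic behind this example, using the numbers above:

```math
20 \text{ min} \times \frac{7}{28} = 5 \text{ min}, \qquad 5 \text{ min } 24 \text{ s} - 5 \text{ min} = 24 \text{ s over budget}
```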
The attribution panels also show only failures that occurred
within the selected range.
These two panels represent a view of the "official" error budget: they
take into account whether an SLI was ignored.
The [attribution panels](../_index.md#check-where-budget-is-being-spent) show which components
contributed the most over the selected period.
The panels below take into account all SLIs that contribute to GitLab.com availability.
This includes SLIs that are ignored for the official error budget.
## Time series for aggregations
The time series panels for aggregations all contain three panels:
- Apdex: the [Apdex score](https://en.wikipedia.org/wiki/Apdex) for one or more SLIs. Higher score is better.
- Error Ratio: the error ratio for one or more SLIs. Lower is better.
- Requests Per Second: the number of operations per second. Higher means a bigger impact on the error budget.
The Apdex and error-ratio panels also contain two alerting thresholds:
- The one-hour threshold: the fast burn rate.
When this line is crossed, we've spent 2% of our monthly budget in the last hour.
- The six-hour threshold: the slow burn rate.
When this line is crossed, we've spent 2% of our budget in the last six hours.
If there is no error-ratio or Apdex for a certain SLI, the panel is hidden.
Read more about these alerting windows in the
[Google SRE workbook](https://sre.google/workbook/alerting-on-slos/#recommended_time_windows_and_burn_rates_f).
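As a rough sketch of what these thresholds imply, assuming a 28-day budget period, crossing the one-hour line means burning budget at roughly 13 times the sustainable rate:

```math
\text{burn rate} = \frac{\text{budget fraction spent}}{\text{window length} / \text{budget period}} = \frac{0.02}{1 / (28 \times 24)} \approx 13.4
```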
We don't have alerting on these metrics for stage groups.
This work is being discussed in [epic 615](https://gitlab.com/groups/gitlab-com/gl-infra/-/epics/615).
If this is something you would like for your group, let us know there.
### Stage group aggregation

The stage group aggregation shows a graph with the Apdex and errors
portion of the error budget over time. The lower a dip in the Apdex
graph or the higher a peak on the error ratio graph, the more budget
was spent at that moment.
The third graph shows the sum of all the request rates for all
SLIs. Higher means there was more traffic.
To zoom in on a particular moment where a lot of budget was spent, select the appropriate time in
the graph.
### Service-level indicators

This time series shows a breakdown of each SLI that could be contributing to the
error budget for a stage group. Similar to the stage group
aggregation, it contains an Apdex score, error ratio, and request
rate.
Here we also display an explanation panel, describing the SLI and
linking to other monitoring tools. The links to logs (📖) or
visualizations (📈) in Kibana are scoped to the feature categories
for your stage group, and limited to the range selected. Keep in mind
that we only keep logs in Kibana for seven days.
In the graphs, there is a single line per service. In the previous example image,
`rails_requests` is an SLI for the `web`, `api` and `git` services.
Sidekiq is not included in this dashboard. We're tracking this in
[epic 700](https://gitlab.com/groups/gitlab-com/gl-infra/-/epics/700).
---
stage: Platforms
group: Scalability
info: Any user with at least the Maintainer role can merge updates to this content. For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Stage group dashboard
---
The stage group dashboard is a generated dashboard that contains metrics
for common components used by most stage groups. The dashboard is
fully customizable and owned by the stage groups.
This page explains what is on these dashboards, how to use their
contents, and how they can be customized.
## Dashboard contents
### Error budget panels

The top panels display the [error budget](../_index.md#error-budget).
These panels always show the 28 days before the end time selected in the
[time range controls](_index.md#time-range-controls). This data doesn't
follow the selected range. It does respect the filters for environment
and stage.
### Metrics panels

Although most of the metrics displayed in the panels are self-explanatory in their title and nearby
description, note the following:
- The events are counted, measured, accumulated, collected, and stored as
[time series](https://prometheus.io/docs/concepts/data_model/). The data is calculated using
statistical methods to produce metrics. It means that metrics are approximately correct and
meaningful over a time period. They help you get an overview of the state of a system over time.
They are not meant to give you precise numbers of a discrete event.
If you need a higher level of accuracy, use another monitoring tool, such as
[logs](https://handbook.gitlab.com/handbook/engineering/monitoring/#logs).
Read the following examples for more explanations.
- All the rate metrics' units are `requests per second`. The default aggregate time frame is 1 minute.
For example, if a panel shows the requests per second at `2020-12-25 00:42:00` to be `34.13`,
it means that during minute 42 (from `2020-12-25 00:42:00` to `2020-12-25 00:42:59`) the web servers
processed approximately `34.13 * 60 = ~2047` requests (see the worked example after this list).
- You might frequently encounter gotchas related to decimal fractions and rounding, especially
in low-traffic cases. For example, the error rate of `RepositoryUpdateMirrorWorker` at
`2020-12-25 02:04:00` is `0.07`, equivalent to `4.2` jobs per minute. The raw result is
`0.06666666667`, equivalent to 4 jobs per minute.
- All the rate metrics are more accurate when the volume of data is large enough. The default floating-point
  precision is 2. In some extremely low-traffic panels, you can see `0.00` even though there is still some
  real traffic.
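A worked example of these conversions, using the numbers from the list above:

```plaintext
displayed request rate: 34.13 requests/second * 60 seconds = 2047.8, roughly 2047 requests in minute 42
displayed error rate:   0.07 jobs/second * 60 seconds = 4.2 failed jobs per minute
raw error rate:         0.0667 jobs/second * 60 seconds ≈ 4 failed jobs per minute (0.07 is the rounded value)
```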
To inspect a panel's raw data for further calculation, select **Inspect** from the panel's dropdown list.
Queries, raw data, and panel JSON structure are available.
Read more at [Grafana panel inspection](https://grafana.com/docs/grafana/latest/panels-visualizations/query-transform-data/).
All the dashboards are powered by [Grafana](https://grafana.com/), a frontend for displaying metrics.
Grafana consumes the data returned from queries to the backend Prometheus data source, then presents it
with visualizations. The stage group dashboards are built to serve the most common use cases with a
limited set of filters and pre-built queries. Grafana provides a way to explore and visualize the
metrics data with [Grafana Explore](https://grafana.com/docs/grafana/latest/explore/). This requires
some knowledge of the [Prometheus PromQL query language](https://prometheus.io/docs/prometheus/latest/querying/basics/).
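For example, a starting point for exploring the request rate of a single feature category in Grafana Explore might look like the following query. The metric and label names are taken from the customization example later on this page; treat the specific values as placeholders and adjust them for your own group:

```promql
sum (
  rate(gitlab_transaction_duration_seconds_count{
    env='gprd',
    feature_category='code_review'
  }[5m])
)
```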
## Example: Debugging with dashboards
Example debugging workflow:
1. A team member in the Code Review group has merged an MR which got deployed to production.
1. To verify the deployment, you can check the
[Code Review group's dashboard](https://dashboards.gitlab.net/d/stage-groups-code_review/stage-groups-group-dashboard-create-code-review?orgId=1).
1. The Sidekiq Error Rate panel shows an elevated error rate, specifically for `UpdateMergeRequestsWorker`.

1. If you select **Kibana: Kibana Sidekiq failed request logs** in the **Extra links** section, you can filter for `UpdateMergeRequestsWorker` and read through the logs.

1. In [Sentry](https://sentry.gitlab.net/gitlab/gitlabcom/), you can find the exception by filtering
   by transaction type and the `correlation_id` from Kibana's result item.

1. A precise exception, including a stack trace, job arguments, and other information should now appear.
Happy debugging!
## Customizing the dashboard
All Grafana dashboards at GitLab are generated from the [Jsonnet files](https://github.com/grafana/grafonnet-lib)
stored in [the runbooks project](https://gitlab.com/gitlab-com/runbooks/-/tree/master/dashboards).
Particularly, the stage group dashboards definitions are stored in
[`/dashboards/stage-groups`](https://gitlab.com/gitlab-com/runbooks/-/tree/master/dashboards/stage-groups).
By convention, each group has a corresponding Jsonnet file. The dashboards are synced with GitLab
[stage group data](https://gitlab.com/gitlab-com/www-gitlab-com/-/raw/master/data/stages.yml) every
month.
Expansion and customization are among the key principles we used when designing this system.
To customize your group's dashboard, edit the corresponding file and follow the
[Runbook workflow](https://gitlab.com/gitlab-com/runbooks/-/tree/master/dashboards#dashboard-source).
The dashboard is updated after the MR is merged.
Looking at an autogenerated file, for example,
[`product_planning.dashboard.jsonnet`](https://gitlab.com/gitlab-com/runbooks/-/blob/master/dashboards/stage-groups/product_planning.dashboard.jsonnet):
```jsonnet
// This file is autogenerated using scripts/update_stage_groups_dashboards.rb
// Feel free to customize this file.
local stageGroupDashboards = import './stage-group-dashboards.libsonnet';
stageGroupDashboards.dashboard('product_planning')
.stageGroupDashboardTrailer()
```
We provide basic customization to filter the dashboard down to the components essential to your group's activities.
By default, only the `web`, `api`, and `sidekiq` components are available in the dashboard, while
`git` is hidden. See [how to enable available components and optional graphs](#optional-graphs).
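For example, if your group only needs a subset of those components, the `components` parameter shown in [Optional graphs](#optional-graphs) suggests you can pass a list of component names. A sketch of that idea; treat the exact names and parameter behavior as an assumption and verify against `stage-group-dashboards.libsonnet`:

```jsonnet
local stageGroupDashboards = import './stage-group-dashboards.libsonnet';

// Assumption: `components` accepts a list of component names, as implied by
// the `components=stageGroupDashboards.supportedComponents` example in the
// optional graphs section.
stageGroupDashboards.dashboard('product_planning', components=['web', 'sidekiq'])
.stageGroupDashboardTrailer()
```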
You can also append further information or custom metrics to a dashboard. The following example
adds some links and a total request rate to the top of the page:
```jsonnet
local stageGroupDashboards = import './stage-group-dashboards.libsonnet';
local grafana = import 'github.com/grafana/grafonnet-lib/grafonnet/grafana.libsonnet';
local basic = import 'grafana/basic.libsonnet';
stageGroupDashboards.dashboard('source_code')
.addPanel(
grafana.text.new(
title='Group information',
mode='markdown',
content=|||
Useful link for the Source Code Management group dashboard:
- [Issue list](https://gitlab.com/groups/gitlab-org/-/issues?scope=all&state=opened&label_name%5B%5D=repository)
- [Epic list](https://gitlab.com/groups/gitlab-org/-/epics?label_name[]=repository)
|||,
),
gridPos={ x: 0, y: 0, w: 24, h: 4 }
)
.addPanel(
basic.timeseries(
title='Total Request Rate',
yAxisLabel='Requests per Second',
decimals=2,
query=|||
sum (
rate(gitlab_transaction_duration_seconds_count{
env='$environment',
environment='$environment',
feature_category=~'source_code_management',
}[$__interval])
)
|||
),
gridPos={ x: 0, y: 0, w: 24, h: 7 }
)
.stageGroupDashboardTrailer()
```

<i class="fa fa-youtube-play youtube" aria-hidden="true"></i>
If you want to see the workflow in action, we've recorded a pairing session on customizing a dashboard,
available on [GitLab Unfiltered](https://youtu.be/shEd_eiUjdI).
For deeper customization and more complicated metrics, see the
[Grafonnet lib](https://github.com/grafana/grafonnet-lib) project and the
[GitLab Prometheus Metrics](../../../administration/monitoring/prometheus/gitlab_metrics.md)
documentation.
### Optional graphs
Some graphs aren't relevant for all groups, so they aren't added to
the dashboard by default. They can be added by customizing the
dashboard.
By default, only the `web`, `api`, and `sidekiq` metrics are
shown. If you wish to see the metrics from the `git` fleet (or any
other component that might be added in the future), you can configure it as follows:
```jsonnet
stageGroupDashboards
.dashboard('source_code', components=stageGroupDashboards.supportedComponents)
.stageGroupDashboardTrailer()
```
If your group is interested in Sidekiq job durations and their
thresholds, you can add these graphs by calling the `.addSidekiqJobDurationByUrgency` function:
```jsonnet
stageGroupDashboards
.dashboard('access')
.addSidekiqJobDurationByUrgency()
.stageGroupDashboardTrailer()
```
---
stage: Platforms
group: Scalability
info: Any user with at least the Maintainer role can merge updates to this content. For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Dashboards for stage groups
---
We generate a lot of dashboards that act as windows into the metrics we
use to monitor GitLab.com. Most of our dashboards are generated from
Jsonnet in the
[runbooks repository](https://gitlab.com/gitlab-com/runbooks/-/tree/master/dashboards#dashboard-source).
Anyone can contribute to these, adding new dashboards or modifying
existing ones.
When adding new dashboards for your stage groups, tagging them with
`stage_group:<group name>` cross-links the dashboard on other
dashboards with the same tag. You can create dashboards for stage groups
in the [`dashboards/stage-groups`](https://gitlab.com/gitlab-com/runbooks/-/tree/master/dashboards/stage-groups)
directory. Directories can't be nested more than one level deep.
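For illustration only, with the upstream Grafonnet library a tag could be attached like this. The runbooks repository provides its own dashboard helpers, so treat this as a sketch and prefer those:

```jsonnet
local grafana = import 'github.com/grafana/grafonnet-lib/grafonnet/grafana.libsonnet';

// Sketch: tag a dashboard so it is cross-linked with other dashboards
// that carry the same `stage_group:<group name>` tag.
grafana.dashboard.new(
  'Code Review group dashboard',
  tags=['stage-groups', 'stage_group:code_review']
)
```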
To see a list of all the dashboards for your stage group:
1. In Grafana, go to the [Dashboard browser](https://dashboards.gitlab.net/dashboards?tag=stage-groups).
1. To see all of the dashboards for a specific group, filter for `stage_group:<group name>`.
Some generated dashboards are already available:
1. [Stage group dashboard](stage_group_dashboard.md): a customizable
dashboard with tailored metrics per group.
1. [Error budget detail dashboard](error_budget_detail.md): a
   dashboard that lets you explore the error budget spent over time and
   across multiple SLIs.
## Time range controls

By default, all times are in the UTC time zone.
[We use UTC when communicating in Engineering.](https://handbook.gitlab.com/handbook/communication/#writing-style-guidelines)
All metrics recorded in the GitLab production system have
[one-year retention](https://gitlab.com/gitlab-cookbooks/gitlab-prometheus/-/blob/31526b03fef823e2f9b3cda7c75dcd28a12418a3/attributes/prometheus.rb#L40).
You can also zoom in and filter the time range directly on a graph. For more information, see the
[Grafana Time Range Controls](https://grafana.com/docs/grafana/latest/dashboards/use-dashboards/#set-dashboard-time-range)
documentation.
## Filters and annotations
On each dashboard, there are filters and annotation switches at the top of the page.
Some special events are meaningful to development and operational activities.
[Grafana annotations](https://grafana.com/docs/grafana/latest/dashboards/build-dashboards/annotate-visualizations/) mark them
directly on the graphs.

| Name | Type | Description |
| --------------- | ---------- | ----------- |
| `PROMETHEUS_DS` | filter | Filter the selective [Prometheus data sources](https://handbook.gitlab.com/handbook/engineering/monitoring/#prometheus). The default value is `Global`, which aggregates the data from all available data sources. Most of the time, you don't need to care about this filter. |
| `environment` | filter | Filter the environment the metrics are fetched from. The default setting is production (`gprd`). For other options, see [Production Environment mapping](https://handbook.gitlab.com/handbook/engineering/infrastructure/environments/#environments). |
| `stage` | filter | Filter metrics by stage: `main` or `cny` for canary. Default is `main`. |
| `deploy` | annotation | Mark a deployment event on the GitLab.com SaaS platform. |
| `canary-deploy` | annotation | Mark a [canary deployment](https://handbook.gitlab.com/handbook/engineering/infrastructure/environments/canary-stage/) event on the GitLab.com SaaS platform. |
| `feature-flags` | annotation | Mark the time point when a feature flag is updated. |
Example of a feature flag annotation displayed on a dashboard panel:

---
stage: none
group: unassigned
info: For assistance with this Style Guide page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments-to-other-projects-and-subjects.
title: Metadata
---
Each documentation Markdown page contains YAML front matter.
All values in the metadata are treated as strings and are used for the
documentation website only.
## Stage and group metadata
Each page should have metadata related to the stage and group it
belongs to, an information block, and the page title. For example:
```yaml
---
stage: Example Stage
group: Example Group
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Example page title
---
```
To populate the metadata, include this information:
- `stage`: The [Stage](https://handbook.gitlab.com/handbook/product/categories/#devops-stages)
that the majority of the page's content belongs to.
- `group`: The [Group](https://handbook.gitlab.com/handbook/company/structure/#product-groups)
that the majority of the page's content belongs to.
- `info`: How to find the Technical Writer associated with the page's stage and
group.
- `title`: The page title that appears as the H1 (level one heading) at the top of the page.
### Exceptions
Documents in the `/development` directory get this metadata:
```yaml
---
stage: Example Stage
group: Example Group
info: Any user with at least the Maintainer role can merge updates to this content. For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Example page title
---
```
Documents in the `/solutions` directory get this metadata:
```yaml
---
stage: Solutions Architecture
group: Solutions Architecture
info: This page is owned by the Solutions Architecture team.
title: Example page title
---
```
## Title metadata
The `title` metadata:
- Generates the H1 (level one heading) at the top of the rendered page.
- Can be used to generate automated page listings.
- Replaces Markdown H1 headings (like `# Page title`).
## Description metadata
The `description` tag:
- Is used to populate text on the documentation home page.
- Is shown in social media previews.
- Can be used in search result snippets.
- Is shown when the page is included in a [`cards` shortcode](styleguide/_index.md#cards).
For the top-level pages, like **Use GitLab** and one level underneath,
the descriptions are lists of nouns. For example, for **Set up your organization**,
the description is `Users, groups, namespaces, SSH keys.`
For other pages, descriptions are not actively maintained. However, if you want to add one,
use a short description of what the page is about.
See the Google [Best practices for creating quality meta descriptions](https://developers.google.com/search/docs/appearance/snippet#meta-descriptions) for tips.
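For example, the front matter for a page with a description might look like this. The values below are taken from the redirects page in this documentation; a real page also sets `stage`, `group`, and `info`:

```yaml
---
description: Learn how to contribute to GitLab Documentation.
title: Redirects in GitLab documentation
---
```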
## Avoid pages being added to global navigation
If a specific page shouldn't be added to the global navigation (that is, shouldn't have an entry added to
[`navigation.yaml`](https://gitlab.com/gitlab-org/technical-writing/docs-gitlab-com/-/blob/main/data/en-us/navigation.yaml)), add
the following to the page's metadata:
```yaml
ignore_in_report: true
```
When this metadata is set on a page:
- The [`pages_not_in_nav.cjs`](https://gitlab.com/gitlab-org/technical-writing/docs-gitlab-com/-/blob/main/scripts/pages_not_in_nav.cjs)
script ignores the page when processing the documentation.
- Technical writers doing the Technical Writing team's monthly tasks aren't prompted to add the page to the global
navigation.
## Indicate GitLab Dedicated support
The `gitlab_dedicated` metadata indicates whether a documentation page applies to GitLab Dedicated.
Add this field to documentation pages when GitLab Dedicated availability status has been confirmed with the product team. This metadata should complement, not replace, the information from the **Offering** details.
For example, usually pages that apply to GitLab Self-Managed apply to GitLab Dedicated.
Use this metadata when they don't:
```yaml
gitlab_dedicated: no
```
When a page applies to GitLab Dedicated, use:
```yaml
gitlab_dedicated: yes
```
For pages with partial availability on GitLab Dedicated, use `gitlab_dedicated: yes`
and update the [product availability details](styleguide/availability_details.md)
for any topics that don't apply to GitLab Dedicated.
## Indicate lack of product availability details
On pages that purposely do not have availability details, add this metadata to the
top of the page:
```yaml
availability_details: no
```
## Additional metadata
The following metadata is optional and is not actively maintained. An illustrative combination of these fields follows this list.
- `feedback`: Set to `false` to not include the "Help & Feedback" footer.
- `noindex`: Set to `true` to prevent the page from being indexed by search engines.
- `redirect_to`: Used to control redirects. For more information, see [Redirects in GitLab documentation](redirects.md).
- `searchbar`: Set to `false` to not include the search bar in the page header.
- `toc`: Set to `false` to not include the "On this page" navigation.
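For example, a page that opts out of the feedback footer, search indexing, the search bar, and the "On this page" navigation might combine them like this (illustrative values only):

```yaml
---
title: Example page title
feedback: false
noindex: true
searchbar: false
toc: false
---
```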
## Batch updates for TW metadata
The [`CODEOWNERS`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.gitlab/CODEOWNERS)
file contains a list of files and the associated technical writers.
When a merge request contains documentation, the information in the `CODEOWNERS` file determines:
- The list of users in the **Approvers** section.
- The technical writer that the GitLab Bot pings for community contributions.
You can use a Rake task to [update the `CODEOWNERS` file](#update-the-codeowners-file).
### Update the `CODEOWNERS` file
When groups or [TW assignments](https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments)
change, you must update the `CODEOWNERS` file. To do this, you run the `codeowners.rake` Rake task.
This task checks all files in the `doc` directory, reads the metadata, and uses the information in
the `codeowners.rake` file to populate the `CODEOWNERS` file.
To update the `CODEOWNERS` file:
1. Update the [stage and group metadata](#stage-and-group-metadata) for any affected doc pages, if necessary. If there are many changes, you can do this step in a separate MR.
1. Update the [`codeowners.rake`](https://gitlab.com/gitlab-org/gitlab/blob/master/lib/tasks/gitlab/tw/codeowners.rake) file with the changes.
1. Go to the root of the `gitlab` repository.
1. Run the Rake task with this command: `bundle exec rake tw:codeowners`
1. Review the changes in the `CODEOWNERS` file.
1. Add and commit all your changes and push your branch up to `origin`.
1. Create a merge request and add the `~"pipeline:skip-undercoverage"` label to it.
Because this merge request modifies a code file, GitLab Bot runs a `tier-3`
pipeline when the MR is approved. The pipeline fails at
[`rspec:undercoverage`](../pipelines/_index.md#rspecundercoverage-job) because we don't have tests for
`codeowners.rake`. Add the label to skip the test coverage check.
1. Assign the merge request to a technical writing manager for review.
When you update the `codeowners.rake` file:
- To specify multiple writers for a single group, use a space between writer names. Files are assigned to both writers.
```ruby
CodeOwnerRule.new('Group Name', '@writer1 @writer2'),
```
- To assign different writers in a group to documentation in different directories, use the `path` parameter to specify a directory:
```ruby
CodeOwnerRule.new('Group Name', ->(path) { path.start_with?('/doc/user') ? '@writer1' : '@writer2' }),
```
In this example, `writer1` is a code owner for files related to this group that are in `/doc/user`.
For everything else, `writer2` is made code owner. For an example, see [MR 127903](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/127903).
- For a group that does not have an assigned writer, include the group name in the file and comment out the line:
```ruby
# CodeOwnerRule.new('Group Name', ''),
```
---
stage: none
group: Documentation Guidelines
info: For assistance with this Style Guide page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments-to-other-projects-and-subjects.
description: Learn how to contribute to GitLab Documentation.
title: Redirects in GitLab documentation
---
When you move, rename, or delete a page, you must add a redirect. Redirects reduce
how often users get 404s when they visit the documentation site from out-of-date links.
Add a redirect to ensure:
- Users see the new page and can update or delete their bookmark.
- External sites can update their links, especially sites that have automation that
checks for redirected links.
- The documentation site global navigation does not link to a missing page.
The links in the global navigation are already tested in the `docs-gitlab-com` project.
Be sure to assign a technical writer to any merge request that moves, renames, or deletes a page.
Technical Writers can help with any questions and can review your change.
{{< alert type="note" >}}
When you change the filename of a page, the Google Analytics data is removed
from the content audit and the page view counts start from scratch.
If you want to change the filename, edit the page first,
so you can ensure the new page name is as accurate as possible.
{{< /alert >}}
## Types of redirects
There are two types of redirects:
- [Redirects added into the documentation files themselves](#redirect-to-a-page-that-already-exists), for users who
view the docs in `/help` on GitLab Self-Managed instances. For example,
[`/help` on GitLab.com](https://gitlab.com/help). These must be added in the same
MR that renames or moves a doc. Redirects to internal pages expire after three months
and redirects to external pages (starting with `https:`) expire after a year.
- [GitLab Pages redirects](../../user/project/pages/redirects.md), which are added
automatically after redirect files expire. They must not be manually added by
contributors and expire after nine months. Redirects pointing to external sites
are not added to the GitLab Pages redirects.
Expired redirect files are removed from the documentation projects as part of the Technical Writing team's [monthly tasks](https://gitlab.com/gitlab-org/technical-writing/team-tasks/-/blob/main/.gitlab/issue_templates/tw-monthly-tasks.md).
## Redirect to a page that already exists
To redirect a page to another page in the same repository:
1. In the Markdown file that you want to direct to a new location:
- Delete all of the content.
- Add this content:
```markdown
---
redirect_to: '../newpath/to/file/_index.md'
remove_date: 'YYYY-MM-DD'
---
<!-- markdownlint-disable -->
This document was moved to [another location](../newpath/to/file/_index.md).
<!-- This redirect file can be deleted after <YYYY-MM-DD>. -->
<!-- Redirects that point to other docs in the same project expire in three months. -->
<!-- Redirects that point to docs in a different project or site (for example, link is not relative and starts with `https:`) expire in one year. -->
<!-- Before deletion, see: https://docs.gitlab.com/development/documentation/redirects -->
```
- Replace both instances of `../newpath/to/file/_index.md` with the new file path.
- Replace both instances of `YYYY-MM-DD` with the expiration date, as explained in the template.
1. If the page had images that aren't used on any other pages, delete them.
### Update links in other repositories
After your changes are committed, search for and update all other repositories that
might link to the old file:
1. In <https://gitlab.com/gitlab-com/www-gitlab-com>, search for full URLs:
```shell
grep -r "docs.gitlab.com/path/to/file" .
```
1. In <https://gitlab.com/gitlab-org/technical-writing/docs-gitlab-com/-/tree/main/data>,
search the navigation bar configuration files for the path:
```shell
grep -r "path/to/file" .
```
1. In [all of the doc projects](site_architecture/_index.md#source-files), search for links in the docs
and codebase. Search for all variations, including full URL and just the path.
For example, go to the root directory of the `gitlab` project and run:
```shell
grep -r "docs.gitlab.com/path/to/file" .
grep -r "path/to/file" .
grep -r "path/to/file.md" .
```
You might need to try variations of relative links, such as `../path/to/file` or
`../file` to find every case.
1. In <https://gitlab.com/gitlab-org/customers-gitlab-com>, search for full URLs:
```shell
grep -r "docs.gitlab.com/path/to/file" .
```
### Move a file's location
If you want to move a file from one location to another, you do not move it.
Instead, you duplicate the file, and add the redirect code to the old file.
1. Create the new file.
1. Copy the contents of the old file to the new one.
1. In the old file, delete all the content.
1. In the old file, add the redirect code and follow the rest of the steps in
the [Redirect to a page that already exists](#redirect-to-a-page-that-already-exists) topic.
## Use code to add a redirect
If you prefer to use a script, add the redirect code to the old documentation file by running the
following Rake task. The first argument is the path of the old file,
and the second argument is the path of the new file:
- To redirect to a page in the same project, use relative paths and
the `.md` extension. Both old and new paths start from the same location.
In the following example, both paths are relative to `doc/`:
```shell
bundle exec rake "gitlab:docs:redirect[doc/user/search/old_file.md, doc/api/new_file.md]"
```
- To redirect to a page in a different project or site, use the full URL (with `https://`):
```shell
bundle exec rake "gitlab:docs:redirect[doc/user/search/old_file.md, https://example.com]"
```
- Alternatively, you can omit the arguments and be prompted to enter the values:
```shell
bundle exec rake gitlab:docs:redirect
```
## Redirecting a page created before the release
If you create a new page and then rename it before it's added to a release on the 18th, you don't
need to follow the redirect procedure above. Instead, ask a Technical Writer to manually add the redirect
to [`redirects.yaml`](https://gitlab.com/gitlab-org/technical-writing/docs-gitlab-com/-/blob/main/data/redirects.yaml).
## Exceptions to creating a redirect
In some cases you can skip adding the redirect and just delete the file. The page
must have already been removed from (or never existed in) the navigation, and one
of the following must be true:
- The page was added and removed in the same release, so it was never included in
a GitLab Self-Managed release.
- The page does not contain any content of value, like a placeholder page or a page
with extremely low usage statistics.
# Set up your authoring environment
Set up your environment for writing and previewing GitLab documentation.
You can use whichever tools you're most comfortable with.
Use this guidance to help ensure you have the tools you need.
- Install a code editor, like VS Code or Sublime Text, to work with Markdown files.
- [Install Git](../../topics/git/how_to_install_git/_index.md) and
[add an SSH key to your GitLab profile](../../user/ssh.md#add-an-ssh-key-to-your-gitlab-account).
- Install documentation [linters](testing/_index.md) and configure them in your code editor (see the example after this list):
- [markdownlint](testing/markdownlint.md)
- [Vale](testing/vale.md)
- If you're using VS Code, [install the GitLab Workflow extension](../../editor_extensions/visual_studio_code/setup.md)
to get GitLab Duo Chat and other GitLab features in your editor.
- [Set up the docs site to build locally](https://gitlab.com/gitlab-org/technical-writing/docs-gitlab-com/-/blob/main/doc/setup.md).
- Optional. Install the [Conventional Comments](https://gitlab.com/conventionalcomments/conventional-comments-button) extension for Chrome.
The plugin adds **Conventional Comment** buttons to GitLab comments.
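If you also want to run the linters from the command line, the following is a minimal sketch. It assumes
Homebrew and npm are available and that you run the commands from the root of the `gitlab` repository, which
contains the Vale and markdownlint configuration; the file path is hypothetical:

```shell
# Install the linters (assumes Homebrew and npm are installed)
brew install vale
npm install -g markdownlint-cli2

# Check a page you're editing (hypothetical path)
vale --minAlertLevel suggestion doc/development/documentation/_index.md
markdownlint-cli2 doc/development/documentation/_index.md
```

The linked linter pages describe editor integration and other installation options.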
After you're comfortable with your toolset, you can [install the GitLab Development Kit (GDK)](https://gitlab.com/gitlab-org/gitlab-development-kit/-/blob/main/doc/_index.md), a fully functional self-managed version of GitLab.
You can use GDK to:
- [Preview documentation changes locally](https://gitlab.com/gitlab-org/gitlab-development-kit/-/blob/main/doc/howto/gitlab_docs.md).
- [Preview code changes locally](https://gitlab.com/gitlab-org/gitlab-development-kit/-/blob/main/doc/howto/preview_gitlab_changes.md).
# Documentation review apps
GitLab team members can deploy a [review app](../../ci/review_apps/_index.md) for merge requests with documentation
changes. The review app lets you preview how your changes appear on the [GitLab Docs site](https://docs.gitlab.com) before merging.
Review app deployments are available for these projects:
| Project | Configuration file |
| ----------------------------------------------------------------------------- | ------------------ |
| [GitLab](https://gitlab.com/gitlab-org/gitlab) | [`.gitlab/ci/docs.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/066d02834ef51ff7647672d1d9cc323256177580/.gitlab/ci/docs.gitlab-ci.yml#L1-34) |
| [Omnibus GitLab](https://gitlab.com/gitlab-org/omnibus-gitlab) | [`gitlab-ci-config/gitlab-com.yml`](https://gitlab.com/gitlab-org/omnibus-gitlab/-/blob/49ab057ecf75396a453e1e2981e0889a3818842b/gitlab-ci-config/gitlab-com.yml#L328-347) |
| [GitLab Runner](https://gitlab.com/gitlab-org/gitlab-runner) | [`.gitlab/ci/docs.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab-runner/-/blob/8e2e3b7ace350a8889ff0143a9a0ad3c46322786/.gitlab/ci/docs.gitlab-ci.yml) |
| [GitLab Charts](https://gitlab.com/gitlab-org/charts/gitlab) | [`.gitlab/ci/review-docs.yml`](https://gitlab.com/gitlab-org/charts/gitlab/-/blob/6e8270d0e7c51bdc3de8f8f1429ad68625621eb1/.gitlab/ci/review-docs.yml) |
| [GitLab Operator](https://gitlab.com/gitlab-org/cloud-native/gitlab-operator) | [`.gitlab-ci.yml`](https://gitlab.com/gitlab-org/cloud-native/gitlab-operator/-/blob/bbf52c863ce4b712369214474e47b3f989e52d48/.gitlab-ci.yml#L234-281) |
## Deploy a review app
You can deploy a review app by manually triggering the `review-docs-deploy` job in your merge request.
This job creates a preview of your documentation changes using the Hugo static site generation from
the [`docs-gitlab-com`](https://gitlab.com/gitlab-org/technical-writing/docs-gitlab-com) project.
Prerequisites:
- You must have the Developer role for the project.
External contributors cannot run this job. If you're an external contributor,
ask a GitLab team member to run it for you.
To deploy a review app:
1. From your merge request, [manually run](../../ci/jobs/job_control.md#run-a-manual-job) the `review-docs-deploy` job.
This job triggers a [multi-project pipeline](../../ci/pipelines/downstream_pipelines.md#multi-project-pipelines)
that builds and deploys the documentation site with your changes.
1. When the pipeline finishes, select **View app** to open the review app in your browser.
The `review-docs-cleanup` job is triggered automatically on merge. This job deletes
the review app.
## How documentation review apps work
Documentation review apps follow this process:
1. You manually run the `review-docs-deploy` job in a merge request.
1. The job downloads the
[`scripts/trigger-build.rb`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/scripts/trigger-build.rb) script
(when run outside the `gitlab` project) and runs it with the `docs deploy` flag, which triggers a pipeline
in the `gitlab-org/technical-writing/docs-gitlab-com` project.
The `DOCS_BRANCH` environment variable determines which branch of the
`gitlab-org/technical-writing/docs-gitlab-com` project to use. If not set, the `main` branch is used.
1. After the documentation preview site is built, it is [deployed in parallel to other review apps](../../user/project/pages/_index.md#parallel-deployments).
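The trigger step corresponds roughly to the following invocation. This is only an illustrative sketch of what
the CI job does; you don't run it yourself, and the exact arguments are defined in the job configuration:

```shell
# Run by the review-docs-deploy job (illustrative only)
DOCS_BRANCH="main" ./scripts/trigger-build.rb docs deploy
```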
## Troubleshooting
When working with review apps, you might encounter the following issues.
### Error: `401 Unauthorized` in documentation review app deployment jobs
You might get an error in a review app deployment job that states:
```plaintext
Server responded with code 401, message: 401 Unauthorized.
```
This issue occurs when the `DOCS_HUGO_PROJECT_API_TOKEN` has either:
- Expired or been revoked and must be regenerated.
- Been recreated, but the CI/CD variable in the projects that use it wasn't updated.
These conditions result in the deployment job for the documentation review app being unable to query the downstream project for
the status of the downstream pipeline.
To resolve this issue, contact the [Technical Writing team](https://handbook.gitlab.com/handbook/product/ux/technical-writing/#contact-us).
For more information on documentation review app tokens,
see [GitLab docs site maintenance](https://gitlab.com/gitlab-org/technical-writing/docs-gitlab-com/-/blob/main/doc/maintenance.md).
# Contribute to the GitLab documentation
The GitLab documentation is the single source of truth (SSoT)
for information about how to configure, use, and troubleshoot GitLab.
Everyone is welcome to contribute to the GitLab documentation.
The following instructions are for community contributors.
## Update the documentation
Prerequisites:
- [Request access to the GitLab community fork](https://gitlab.com/groups/gitlab-community/community-members/-/group_members/request_access).
The community fork is a shared copy of the main GitLab repository.
When you make the request, you're asked to answer a few questions. In your answers, mention
that you're interested in contributing to the GitLab documentation.
To update the documentation:
1. In the GitLab community fork, go to the [`/doc` directory](https://gitlab.com/gitlab-community/gitlab-org/gitlab/-/tree/master/doc).
1. Find the documentation page you want to update. If you're not sure where the page is,
look at the URL of the page on <https://docs.gitlab.com>.
The path is listed there.
1. In the upper right, select **Edit > Edit single file**.
1. Make your updates.
1. When you're done, in the **Commit message** text box, enter a commit message.
Use 3-5 words, start the first word with a capital letter, and do not end the phrase with a period.
1. Select **Commit changes**.
1. A new merge request opens.
1. On the **New merge request** page, select the **Documentation** template and select **Apply template**.
1. In the description, write a brief summary of the changes and link to the related issue, if there is one.
1. Select **Create merge request**.
After your merge request is created, look for a message from **GitLab Bot**. This message has instructions for what to do when you're ready for review.
## What to work on
You don't need an issue to update the documentation, but if you're looking for open issues to work on,
[review the list of documentation issues curated specifically for new contributors](https://gitlab.com/gitlab-org/gitlab/-/issues/?sort=created_date&state=opened&label_name%5B%5D=documentation&label_name%5B%5D=docs-only&label_name%5B%5D=Seeking%20community%20contributions&first_page_size=20).
When you find an issue you'd like to work on:
- If the issue is already assigned to someone, pick a different one.
- If the issue is unassigned, add a comment and ask to work on the issue. For a Hackathon, use `@docs-hackathon`. Otherwise, use `@gl-docsteam`. For example:
```plaintext
@docs-hackathon I would like to work on this issue
```
You can try installing and running the [Vale linting tool](testing/vale.md)
and fixing the resulting issues.
### Translated documentation
To make GitLab documentation easier to use around the world, we plan to have product documentation
translated and published in other languages.
The [file structure](site_architecture/_index.md#documentation-in-other-languages)
and initial translations have been created, but this project is not complete.
After the official public release of the translated documentation, we will share details
on how to help us improve our translations. But while this work is in progress,
we cannot accept contributions to any translations of product documentation.
Additionally, only localization team members can change localization-related files.
## Ask for help
Ask for help from the Technical Writing team if you:
- Need help to choose the correct place for documentation.
- Want to discuss a documentation idea or outline.
- Want to request any other help.
To identify someone who can help you:
1. Locate the Technical Writer for the relevant
[DevOps stage group](https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments).
1. Either:
- If urgent help is required, directly assign the Technical Writer in the issue or in the merge request.
- If non-urgent help is required, ping the Technical Writer in the issue or merge request.
If you are a member of the GitLab Slack workspace, you can request help in the `#docs` channel.
## Edit a document from your own fork
If you already have your own fork of the GitLab repository, you can use it,
rather than using the GitLab community fork.
1. On <https://docs.gitlab.com>, scroll to the bottom of the page you want to edit.
1. Select **View page source**.
1. In the upper-right corner, select **Edit > Edit single file**.
1. Make your updates.
1. When you're done, in the **Commit message** text box, enter a commit message.
Use 3-5 words, start the first word with a capital letter, and do not end the phrase with a period.
1. Note the name of your branch and then select **Commit changes**.
The changes were added to GitLab in your forked repository, in a branch with the name noted in the last step.
Now, create a merge request. This merge request is how the changes from your branch
are merged into the GitLab `master` branch.
1. On the left sidebar, select **Code > Merge requests**.
1. Select **New merge request**.
1. For the source branch, select your fork and branch.
1. For the target branch, select the [GitLab repository](https://gitlab.com/gitlab-org/gitlab) `master` branch.
1. Select **Compare branches and continue**. A new merge request opens.
1. On the **New merge request** page, select the **Documentation** template and select **Apply template**.
1. In the description, write a brief summary of the changes and link to the related issue, if there is one.
1. Select **Create merge request**.
After your merge request is created, look for a message from **GitLab Bot**. This message has instructions for what to do when you're ready for review.
# Documentation workflow
Documentation at GitLab follows a workflow.
The process for creating and maintaining GitLab product documentation depends on whether the documentation is:
- [A new feature or feature enhancement](#documentation-for-a-product-change): Delivered for a specific milestone and associated with specific code changes.
This documentation has the highest priority.
- [Changes outside a specific milestone](#documentation-feedback-and-improvements): Usually not associated with a specific code change, is of lower priority, and
is open to all GitLab contributors.
Documentation is [required](../contributing/merge_request_workflow.md#definition-of-done)
for a milestone when:
- A new or enhanced feature is shipped that impacts the user or administrator experience.
- There are changes to the user interface or API.
- A process, workflow, or previously documented feature is changed.
- A feature is deprecated or removed.
Documentation is not typically required when a **backend feature** is added or changed.
## Pipelines and branch naming
The CI/CD pipelines for the `gitlab` and `gitlab-runner` projects are configured to
run shorter, faster pipelines on merge requests that contain only documentation changes.
If you submit documentation-only changes to `omnibus-gitlab`, `charts/gitlab`, or `gitlab-operator`,
to make the shorter pipeline run, you must follow these guidelines when naming your branch:
| Branch name | Valid example |
|:----------------------|:-----------------------------|
| Starting with `docs/` | `docs/update-api-issues` |
| Starting with `docs-` | `docs-update-api-issues` |
| Ending in `-docs` | `123-update-api-issues-docs` |
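For example, any of the following commands creates a branch whose name triggers the shorter documentation
pipeline (the branch names are hypothetical):

```shell
git switch -c docs/update-api-issues
git switch -c docs-update-api-issues
git switch -c 123-update-api-issues-docs
```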
Additionally, changes to these files in the `gitlab` project automatically trigger a long pipeline
because some code tests use these files as examples:
- `doc/_index.md`
- `doc/api/settings.md`
When you edit these pages, the long pipeline appears the same as in a code MR,
but you do not need any additional approvals. If the `pre-merge-checks` job fails on merge with a
`Expected latest pipeline (link) to be a tier-3 pipeline!` message, add the `~"pipeline::tier-3"`
label to the MR and run a new pipeline.
If your merge requests include any code changes, long pipelines are run for them.
For more information on long pipelines, see [pipelines for the GitLab project](../pipelines/_index.md).
## Moving content
When you move content to a new location, and edit the content in the same merge request,
use separate commits.
Separate commits help the reviewer, because the MR diff for moved content
does not clearly highlight edits.
When you use separate commits, the reviewer can verify the location change
in the first commit diff, then the content changes in subsequent commits.
For example, if you move a page, but also update the content of the page:
1. In the first commit: Move the content to its new location and put [redirects](redirects.md) in place if required.
If you can, fix broken links in this commit.
1. In subsequent commits: Make content changes. Fix broken links if you haven't already.
1. In the merge request: Explain the commits in the MR description and in a
comment to the reviewer.
You can add as many commits as you want, but make sure the first commit only moves the content,
and does not edit it.
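For example, the commit sequence for a moved and edited page might look like this (paths and messages are
hypothetical):

```shell
# Commit 1: move the content and add the redirect, without editing the text
git add doc/user/old_page.md doc/user/project/new_page.md
git commit -m "Move page to project directory"

# Commit 2 and later: make the content changes
git add doc/user/project/new_page.md
git commit -m "Clarify prerequisites on moved page"
```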
## Documentation for a product change
Documentation is required for any new or changed feature, and is:
- Created or updated as part of feature development, and is almost always in
the same merge request as the feature code. Including documentation in the
same merge request as the code eliminates the possibility that code and
documentation get out-of-sync.
- Required with the delivery of a feature for a specific milestone as part of the
GitLab [definition of done](../contributing/merge_request_workflow.md#definition-of-done).
- Linked from the release post.
### Developer responsibilities
Developers are the primary authors of documentation for a feature or feature
enhancement. They are responsible for:
- Developing initial content required for a feature.
- Liaising with their product manager to understand what documentation must be
delivered, and when.
- Requesting technical reviews from other developers in their group.
- Requesting documentation reviews from the technical writer
[assigned to the DevOps stage group](https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments)
that is delivering the new feature or feature enhancements.
When possible, the merge request with the code should include the
documentation.
For features behind a feature flag, see the [feature flag documentation guidelines](feature_flags.md).
The author of this MR, either a frontend or backend developer, should write the documentation.
{{< alert type="note" >}}
Community Contributors can ask for additional help from GitLab team members.
{{< /alert >}}
#### Authoring
Because the documentation is an essential part of the product, if a `~"type::feature"`
issue also contains the `~documentation` label, you must ship the new or
updated documentation with the code of the feature.
Technical writers are happy to help, as requested and planned on an
issue-by-issue basis.
For feature issues requiring documentation, follow the process below unless
otherwise agreed with the product manager and technical writer:
- Include any new and edited documentation, either in:
- The merge request introducing the code.
- A separate merge request raised around the same time.
- Use the [documentation requirements](#documentation-requirements) developed
by the product manager in the issue and discuss any further documentation
plans or ideas as needed.
If the new or changed documentation requires extensive collaboration or
conversation, a separate, linked issue can be used for the planning process.
- Use the [Documentation guidelines](_index.md),
and other resources linked from there, including:
- [Documentation folder structure](site_architecture/folder_structure.md).
- [Documentation Style Guide](styleguide/_index.md).
- [Markdown Guide](../../user/markdown.md).
- Contact the technical writer for the relevant
[DevOps stage](https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments)
in your issue or merge request, or in the `#docs` Slack channel, if you:
- Need any help to choose the correct place for documentation.
- Want to discuss a documentation idea or outline.
- Want to request any other help.
- If you are working on documentation in a separate merge request, ensure the
documentation is merged as close as possible to the code merge.
- If the feature has a feature flag, [follow the policy for documenting feature-flagged issues](feature_flags.md).
#### Review
Before merging, documentation changes committed by the developer must be
reviewed by:
- The code reviewer for the merge request. This is known as a technical review.
- Optionally, others involved in the work such as other developers or the
product manager.
- The technical writer for the DevOps stage group, except in exceptional
circumstances where a [post-merge review](#post-merge-reviews)
can be requested.
- A maintainer of the project.
### Product manager responsibilities
Product managers are responsible for the
[documentation requirements](#documentation-requirements) for a feature or
feature enhancement. They can also:
- Connect with the technical writer for discussion and collaboration.
- Review documentation themselves.
For issues requiring any new or updated documentation, the product manager
must:
- Add the `~documentation` label.
- Confirm or add the documentation requirements.
- Ensure the issue contains:
- Any new or updated feature name.
- Overview, description, and use cases when applicable (as required by the
[documentation folder structure](site_architecture/folder_structure.md)).
Everyone is encouraged to draft the documentation requirements in the issue.
However, a product manager will:
- When the issue is assigned a release milestone, review and update the
Documentation details.
- By the kickoff, finalize the documentation details.
### Technical writer responsibilities
Technical writers are responsible for:
- Participating in issue discussions and reviewing MRs for the upcoming
milestone.
- Reviewing documentation requirements in issues when called upon.
- Answering questions, and helping and providing advice throughout the
authoring and editing process.
- Reviewing all significant new and updated documentation content, whether
before merge or after it is merged.
- Assisting the developer and product manager with feature documentation
delivery.
- Ensuring that issues and MRs are labeled appropriately, and that doc content has the correct [metadata](metadata.md).
#### Planning
The technical writer:
- Reviews their group's `~"type::feature"` issues that are part of the next milestone
to get a sense of the scope of content likely to be authored.
- Recommends the `~documentation` label on issues from that list which don't
have it but should, or inquires with the PM to determine if documentation is
truly required.
- For `~direction` issues from that list, reads the full issue and reviews its
Documentation requirements section. Addresses any recommendations or
questions with the PMs and others collaborating on the issue to
refine or expand the Documentation requirements.
- Updates the Technical Writing milestone plan ([example](https://gitlab.com/gitlab-org/technical-writing/team-tasks/-/issues/521) created from the [issue template](https://gitlab.com/gitlab-org/technical-writing/team-tasks/-/blob/main/.gitlab/issue_templates/tw-milestone-plan.md)).
- Add a link to the board or filter that shows the planned documentation and UI text work for the upcoming milestone.
- Confirm that the group PM or EM is aware of the planned work.
#### Collaboration
By default, the developer will work on documentation changes independently, but
the developer, product manager, or technical writer can propose a broader
collaboration for any given issue.
Additionally, technical writers are available for questions at any time.
#### Review
Technical writers provide non-blocking reviews of all documentation changes, before or after
the change is merged. Identified issues that would block or slow a change's
release are to be handled in linked, follow-up MRs.
### Documentation requirements
Feature documentation requirements should be included as part of
the issue for planning that feature in the **Documentation** section in the
issue description. Issues created using the
[**Feature Proposal** template](https://gitlab.com/gitlab-org/gitlab/-/raw/master/.gitlab/issue_templates/Feature%20proposal%20-%20detailed.md)
have this section by default.
Anyone can add these details, but the product manager who assigns the issue to
a specific release milestone will ensure these details are present and
finalized by the time of that milestone's kickoff.
Developers, technical writers, and others may help further refine this plan at
any time on request.
The following details should be included:
- What concepts and procedures should the documentation guide and enable the
user to understand or accomplish?
- To this end, what new pages are needed, if any? What pages or subsections
need updates? Consider changes and additions to user, admin, and API
documentation.
- For any guide or instruction set, should it help address a single use case,
or be flexible to address a certain range of use cases?
- Do we need to update a previously recommended workflow? Should we link the
new feature from various relevant locations? Consider all ways documentation
should be affected.
- Are there any key terms or task descriptions that should be included so that
the documentation is found in relevant searches?
- Include suggested titles of any pages or subsection headings, if applicable.
- List any documentation that should be cross-linked, if applicable.
### Including documentation with code
Currently, the Technical Writing team strongly encourages including
documentation in the same merge request as the code that it relates to, but
this isn't strictly mandatory. It's still common for documentation to be added
in an MR separate from the feature MR.
Engineering teams may elect to adopt a workflow where it is **mandatory** that
documentation is included in the code MR, as part of their
[definition of done](../contributing/merge_request_workflow.md#definition-of-done).
When a team adopts this workflow, that team's engineers must include their
documentation in the **same** MR as their feature code, at all times.
#### Downsides of separate documentation MRs
A workflow that has documentation separated into its own MR has many downsides.
If the documentation merges **before** the feature:
- GitLab.com users might try to use the feature before it's released, driving
support tickets.
- If the feature is delayed, the documentation might not be pulled/reverted in
time and could be accidentally included on GitLab Self-Managed for that
release.
If the documentation merges **after** the feature:
- The feature might be included on GitLab Self-Managed, but without any
documentation if the documentation MR misses the cutoff.
- A feature might show up in the GitLab.com user interface before any
documentation exists for it. Users surprised by this feature will search for
documentation and won't find it, possibly driving support tickets.
Having two separate MRs means:
- Two different people might be responsible for merging one feature, which
isn't workable with an asynchronous work style. The feature might merge while
the technical writer is asleep, creating a potentially lengthy delay between
the two merges.
- If the documentation MR is assigned to the same maintainer as responsible for
the feature code MR, they will have to review and juggle two MRs instead of
dealing with just one.
Documentation quality might be lower, because:
- Having documentation in a separate MR will mean far fewer people will see and
verify them, increasing the likelihood that issues will be missed.
- In a split workflow, engineers might only create the documentation MR after
the feature MR is ready, or almost ready. This gives the technical writer
little time to learn about the feature to do a good review. It also
increases pressure on them to review and merge faster than desired, letting
problems slip in due to haste.
#### Benefits of always including documentation with code
Including documentation with code (and doing it early in the development
process) has many benefits:
- There are no timing issues connected to releases:
- If a feature slips to the next release, the documentation slips too.
- If the feature just makes it into a release, the documentation just
makes it in too.
- If a feature makes it to GitLab.com early, the documentation will be ready
for our early adopters.
- Only a single person will be responsible for merging the feature (the code
maintainer).
- The technical writer will have more time to gain an understanding of the
feature and will be better able to verify the content of the documentation in
the Review App or GDK. They will also be able to offer advice for improving
the user interface text or offer additional use cases.
- The documentation will have increased visibility:
- Everyone involved in the merge request can review the documentation. This
could include product managers, multiple engineers with deep domain
knowledge, the code reviewers, and the maintainer. They will be more likely
to catch issues with examples, and background or concepts that the
technical writer may not be aware of.
- Increasing visibility of the documentation also has the side effect of
improving other engineers' documentation. By reviewing each other's MRs,
each engineer's own documentation skills will improve.
- Thinking about the documentation early can help engineers generate better
examples, as they will need to think about what examples a user will want,
and will need to ensure the code they write implements that example properly.
#### Documentation with code as a workflow
To have documentation included with code as a mandatory workflow, some
changes might need to happen to a team's current workflow:
- The engineers must strive to include the documentation early in the
development process, to give ample time for review, not just from the
technical writer, but also the code reviewer and maintainer.
- Reviewers and maintainers must also review the documentation during code
reviews to ensure the described processes match the expected use of the
feature and that examples are correct.
They do **not** need to worry about style or grammar.
- The technical writer must be assigned as a reviewer on the MR directly and not only pinged.
This can be done at any time, but must be before the code maintainer review.
It's common to have both the documentation and code reviews happening at the
same time, with the author, reviewer, and technical writer discussing the
documentation together.
- When the documentation is ready, the technical writer selects **Approve**
and usually will no longer be involved in the MR. If the feature changes
during code review and the documentation is updated, the technical writer
must be reassigned the MR to verify the update.
- Maintainers are allowed to merge features with the documentation *as-is*,
even if the technical writer hasn't given final approval yet. The
**documentation reviews must not be blockers**. Therefore, it's important to
get the documentation included and assigned to the technical writers early.
If the feature is merged before final documentation approval, the maintainer
must create a [post-merge follow-up issue](#post-merge-reviews),
and assign it to both the engineer and technical writer.
You can visualize the parallel workflow for code and documentation reviews as:
```mermaid
graph TD
A("Feature MR Created (Engineer)") --> |Assign| B("Code Review (reviewer)")
B --> |"Approve / Reassign"| C("Code Review (maintainer)")
C --> |Approve| F("Merge (maintainer)")
A --> D("Docs Added (Engineer)")
D --> |Assign| E("Docs Review (Tech Writer)")
E --> |Approve| F
```
For complex features split over multiple merge requests:
- If a merge request is implementing components for a future feature, but the
components aren't accessible to users yet, then no documentation should be
included.
- If a merge request will expose a feature to users in any way, such as an
enabled user interface element, an API endpoint, or anything similar, then
that MR **must** have documentation. This might mean multiple
documentation additions could happen in the buildup to the implementation of
a single large feature, for example API documentation and feature usage
documentation.
- If it's unclear which engineer should add the feature documentation into
their MR, the engineering manager should decide during planning, and tie the
documentation to the last MR that must be merged before a feature is
considered released. This is often, but not always, a frontend MR.
## UI text
### Planning and authoring
A product designer should consult the technical writer for their stage group when planning to add
or change UI text.
The technical writer can offer an initial review of any ideas, plans, or actual text.
The technical writer can be asked to draft text when provided with the context and goal of the text.
The context might include where the text would appear and what information to convey, which typically answers
one or more of these questions:
- What does this do?
- How do I use it?
- Why should I care?
Consider tagging the technical writer once in a review request with a message indicating where reviews are needed.
This will help manage the volume of notifications per review round.
### MR reviews
After the merge request is created, all changes and additions to text in the UI **must** be reviewed by
the technical writer.
These might include labels (buttons, menus, column headers, and UI sections) or any phrases that would be
displayed in the UI, such as microcopy or error messages.
<i class="fa-youtube-play" aria-hidden="true"></i>
For more information about writing and reviewing UI text,
see [Copy That: Helping your Users Succeed with Effective Product Copy](https://www.youtube.com/watch?v=M_Q1RO0ky2c).
<!-- Video published on 2016-05-26 -->
## Release posts
The technical writer for each [stage group](https://handbook.gitlab.com/handbook/product/categories/#devops-stages)
reviews their group's [feature blocks](https://handbook.gitlab.com/handbook/marketing/blog/release-posts/#content-reviews)
(release post items) authored by the product manager.
For each release, a single technical writer is also assigned as the [Technical Writing Lead](https://handbook.gitlab.com/handbook/marketing/blog/release-posts/#tw-lead) to perform the [structural check](https://handbook.gitlab.com/handbook/marketing/blog/release-posts/#structural-check) and other duties.
## Monthly documentation releases
When a new GitLab version is released, the Technical Writing team releases
[version-specific published documentation](https://gitlab.com/gitlab-org/technical-writing/docs-gitlab-com/-/blob/main/doc/releases.md).
## Documentation feedback and improvements
To make a documentation change that is not associated with a specific code change, the Technical Writing team encourages contributors to create an MR.
If you start with an issue rather than an MR, use the [documentation template](https://gitlab.com/gitlab-org/gitlab/-/issues/new?issuable_template=Documentation).
For the labels you should apply, see [labels](#labels).
Also include:
- Milestone: `Backlog` until the work is scheduled for a milestone.
- Assignee: `None` until the work is scheduled for a milestone.
In the issue description or comments, mention (`@username`) the technical writer assigned to the group for awareness.
- Description: starts with `Docs:` or `Docs feedback:`
- A task checklist or next step to deliver an MVC.
- Optional. If the issue is suitable for a community contributor: [`Seeking community contributions`](https://gitlab.com/groups/gitlab-org/-/issues/?sort=created_date&state=opened&label_name%5B%5D=Seeking%20community%20contributions&first_page_size=50) and [`quick win`](https://gitlab.com/groups/gitlab-org/-/issues/?sort=created_date&state=opened&label_name%5B%5D=quick%20win&first_page_size=50).
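For example, a documentation feedback issue that follows this format might look like the following sketch
(the page and tasks are hypothetical):

```plaintext
Docs feedback: clarify when redirect files can be deleted

- [ ] Confirm the expiration policy with the Technical Writing team
- [ ] Update doc/development/documentation/redirects.md
```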
If an issue requires input from the development team before a technical writer can start work, it should follow the stage and group's issue lifecycle.
For an example of an issue lifecycle, see [Plan stage issues](https://handbook.gitlab.com/handbook/engineering/development/dev/plan/#issues).
### Review and triage documentation-only backlog issues
Routine review and triage of documentation feedback and improvement issues for your groups helps us spend the [time we have](https://handbook.gitlab.com/handbook/product/ux/technical-writing/#prioritization) on actionable issues that improve the user experience.
#### Prerequisites
- An issue triage board for each group that you are the assigned technical writer for. If you don't have an issue triage board for your group, set one up called `Docs only backlog triage - group name`. See an [example board](https://gitlab.com/gitlab-org/gitlab/-/boards/8944610?not[label_name][]=type%3A%3Afeature¬[label_name][]=type%3A%3Abug&label_name[]=documentation&label_name[]=group%3A%3Aproject%20management) for the `Project Management` group.
- The filter criteria should include **Label=**`documentation`, **Label=**`group::groupname`, **Label!=**`type::feature`, **Label!=**`type::bug`.
- In **Edit board**, ensure **Show the Open list** is selected.
- On the issue board, select **Create list**, and set the label to `tw:triaged`.
To review and triage documentation feedback and improvement issues for your groups:
1. Once a month, on the issue triage boards for your groups, check the **Open** list for new issues.
1. Apply the labels described in [documentation feedback and improvements](#documentation-feedback-and-improvements).
1. Aim to keep the list of open, untriaged issues at **<10**.
1. Share the triaged list with the group and group PM.
## Hackathons
The Technical Writing team takes part in the [GitLab Hackathon](https://about.gitlab.com/community/hackathon/)
and sometimes hosts a documentation-only Hackathon.
### Create issues for a Hackathon
We often create documentation issues for a Hackathon. These issues are typically based on results found when you run Vale against the documentation.
1. Run Vale against the full docset. Go to the GitLab repo and run:
   ```shell
   find doc -name '*.md' | sort | xargs vale --minAlertLevel suggestion --output line > ../results.txt
   ```
1. Create issues. You have a few options:
- Use a [script](https://gitlab.com/gitlab-org/gitlab-docs/-/blob/main/scripts/create_issues.js) to create one issue for each Markdown file listed in the Vale results.
This script uses the [`Doc cleanup` issue template](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.gitlab/issue_templates/Doc_cleanup.md).
- Create issues one at a time by using the `Doc cleanup` issue template.
- Create issues in bulk by using the [Issues API](../../api/issues.md#new-issue).
Ensure that the labels assigned to the issues match those in the `Doc cleanup` issue template.
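If you use the Issues API, a request might look like the following sketch. The token, title, and labels are
placeholders; match the labels to the ones in the `Doc cleanup` issue template:

```shell
curl --request POST \
  --header "PRIVATE-TOKEN: <your_access_token>" \
  --data-urlencode "title=Docs cleanup: doc/user/example_page.md" \
  --data-urlencode "labels=documentation,docs-only" \
  "https://gitlab.com/api/v4/projects/<project_id>/issues"
```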
### Assign an issue to a community contributor
To assign an issue to a community contributor:
1. Remove the `Seeking community contributions` label.
1. Assign the user by typing `/assign @username` in a comment, where `@username` is the contributor's handle.
1. Mention the user in a comment, telling them the issue is now assigned to them.
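For example, you can complete all three steps in a single comment, because quick action lines run and are then
removed from the saved comment (the username is hypothetical):

```plaintext
/unlabel ~"Seeking community contributions"
/assign @community_contributor
Hi @community_contributor, this issue is now assigned to you. Thanks for contributing!
```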
Try to limit each contributor to no more than three issues at a time. You can assign another issue as soon as they've opened an MR for one of the issues they're already assigned to.
### Review Hackathon merge requests
When a community contributor opens a Hackathon merge request:
1. View the related issue. Ensure the user who authored the MR is the same user who asked to be assigned to the issue.
- If the user is not listed in the issue, and another user has asked to work on the issue, do not merge the MR.
Ask the MR author to find an issue that has not already been assigned or point them to [Contribute to GitLab development](../contributing/_index.md).
1. Work to merge the merge request.
1. When you merge, ensure you close the related issue.
## Labels
The Technical Writing team uses the following [labels](../../user/project/labels.md)
on issues and merge requests:
- A label for the type of change. The two labels used most often are:
- `~"type::feature"`
- `~"type::maintenance"` with `~"maintenance::refactor"`
- A stage and group label. For example:
- `~devops::create`
- `~group::source code`
- The `~documentation` specialization label.
- The `~Technical Writing` team label.
The [documentation merge request template](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.gitlab/merge_request_templates/Documentation.md)
includes some of these labels.
### Available labels
Any issue or merge request a technical writer works on must include the `Technical Writing` label.
To further classify the type of effort, include one or more of the following labels:
- [`Category:Docs Site`](https://gitlab.com/groups/gitlab-org/-/labels?subscribed=&sort=relevance&search=Category%3ADocs+Site): Documentation website infrastructure or code. This is not needed for issues related to the documentation itself. Issues with this label are included on the [Docs Workflow issue board](https://gitlab.com/groups/gitlab-org/-/boards/4340643?label_name[]=Category%3ADocs%20Site).
- [`development guidelines`](https://gitlab.com/gitlab-org/gitlab/-/labels?utf8=%E2%9C%93&subscribed=&search=development+guidelines): Files in the `/developer` directory.
- [`docs-missing`](https://gitlab.com/groups/gitlab-org/-/labels?subscribed=&sort=relevance&search=docs-missing): Documentation for a feature is missing. Documentation is required with the delivery of a feature for a specific milestone as part of the GitLab [definition of done](../contributing/merge_request_workflow.md#definition-of-done). Add this label to the original feature MR or issue where documentation is missing. Keep the label for historical tracking and use `tw::finished` to indicate when documentation is completed. Does not apply to [experiment features](../../policy/development_stages_support.md#experiment).
- [`documentation`](https://gitlab.com/groups/gitlab-org/-/labels?utf8=%E2%9C%93&subscribed=&sort=relevance&search=documentation): Files in the `/doc` directory.
- [`global nav`](https://gitlab.com/gitlab-org/technical-writing/docs-gitlab-com/-/labels?subscribed=&sort=relevance&search=global+nav): Left nav of the docs site. Used in the `docs-gitlab-com` project.
- [`L10N-docs`](https://gitlab.com/groups/gitlab-org/-/labels?subscribed=&sort=relevance&search=l10n-docs): Localization issue, MR, or epic that impacts the workflows of the Technical Writing team or the `docs.gitlab.com` site and infrastructure.
- [`release post item`](https://gitlab.com/groups/gitlab-org/-/labels?utf8=%E2%9C%93&subscribed=&search=release+post+item): Release post items.
- [`Technical Writing Leadership`](https://gitlab.com/gitlab-org/gitlab/-/labels?subscribed=&search=tech+writing+leadership): Work driven or owned by the Technical Writing leadership team, such as OKRs.
- [`tw-lead`](https://gitlab.com/groups/gitlab-org/-/labels?utf8=%E2%9C%93&subscribed=&search=tw-lead): MRs that are driven by or require input from one of the [stage leads](https://handbook.gitlab.com/handbook/product/ux/technical-writing/#stage-leads).
- [`tw-style`](https://gitlab.com/groups/gitlab-org/-/labels?utf8=%E2%9C%93&subscribed=&search=tw-style): Style standards for documentation and UI text.
- [`UI text`](https://gitlab.com/groups/gitlab-org/-/labels?utf8=%E2%9C%93&subscribed=&search=ui+text): Any user-facing text, such as UI text and error messages.
Other documentation labels include `vale`, `docs-only`, and `docs-channel`. These labels are optional.
### Type labels
All issues and merge requests must be classified into one of three work types: bug, feature, or maintenance.
Add one of the following labels to an issue or merge request:
- `type::feature`
- `type::bug`
- `type::maintenance`
For more information, see [work type classification](https://handbook.gitlab.com/handbook/product/groups/product-analysis/engineering/metrics/#work-type-classification).
The majority of documentation work uses the `type::maintenance` label.
You must also apply one of these subtype labels to further classify the type of maintenance work:
- `maintenance::refactor`: Edits and improvements of existing documentation.
- `maintenance::workflow`: Documentation changes that are not visible to readers, like linting and tooling updates, and metadata changes.
For example, if you open a merge request to refactor a page for CTRT (concept, task, reference, troubleshooting), apply the `type::maintenance` and `maintenance::refactor` labels.
If you open a merge request to modify the metadata, apply the `type::maintenance` and `maintenance::workflow` labels.
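One way to apply these labels is with quick actions in a comment on the issue or merge request. For example, for a documentation refactoring merge request:
```plaintext
/label ~documentation ~"Technical Writing" ~"type::maintenance" ~"maintenance::refactor"
```
Adjust the labels to match your change.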
### Workflow labels
Writers can use [these labels](https://gitlab.com/groups/gitlab-org/-/labels?utf8=✓&subscribed=&search=tw%3A%3A)
to describe the status of their work in an issue or merge request:
- `tw::doing`
- `tw::finished`
The technical writer who authors content usually adds the `tw::doing` label,
and the technical writer who does the review usually adds the `tw::finished` label.
For content submissions from community contributors,
the technical writer would add both labels as part of their review.
The workflow is:
1. An issue or merge request is assigned to the writer for review.
1. The writer adds the `tw::doing` label while actively working.
- If the writer stops work for more than a week,
they remove the `tw::doing` label.
- Whenever work restarts, the writer adds the `tw::doing` label again.
1. When work is complete on the issue or merge request, a technical writer (typically the
reviewer) adds the `tw::finished` label.
1. The issue or merge request is **Closed** or **Merged**.
The `tw::finished` label indicates that the writer is done with an issue or merge request
that they are not closing or merging themselves.
If the Technical Writing team is closing or merging, the issue or merge request
status overrides the scoped `tw` label status. The technical writer does not have to
use the `tw::finished` label.
If a technical writer is presented with an open issue or merge request with a
`tw::finished` label that needs more work, the writer should
re-add the `tw::doing` scoped label.
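For example, you can toggle the scoped workflow labels with quick actions in a comment, one command per line:
```plaintext
/label ~"tw::doing"
/unlabel ~"tw::doing"
/label ~"tw::finished"
```
Use the first when you start or resume work, the second if work pauses for more than a week, and the third when the review is complete.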
## Post-merge reviews
If a merge request is not assigned to a technical writer for review before merging, the developer or
maintainer must schedule a review immediately after merge. To do so,
create an issue using the [Doc Review description template](https://gitlab.com/gitlab-org/gitlab/-/issues/new?issuable_template=Doc%20Review)
and link to it from the merged merge request that introduced the documentation change.
Circumstances in which a regular pre-merge technical writer review might be skipped include:
- There is a short amount of time left before the milestone release. If fewer than three
days are remaining, seek a post-merge review and ping the writer via Slack to ensure the review is
completed as soon as possible.
- The size of the change is small and you have a high degree of confidence
that early users of the feature (for example, GitLab.com users) can easily
use the documentation as written.
Remember:
- At GitLab, we treat documentation like code. As with code, documentation must be reviewed to
ensure quality.
- Documentation forms part of the GitLab [definition of done](../contributing/merge_request_workflow.md#definition-of-done).
- Pre-merge technical writer reviews should be most common when the code is complete well in
  advance of a milestone release, and for larger documentation changes.
- You can request a post-merge technical writer review of documentation if it's important to get the
code with which it ships merged as soon as possible. In this case, the author of the original MR
can address the feedback provided by the technical writer in a follow-up MR.
- The technical writer can also help decide that documentation can be merged without technical
  writer review, with the review to occur soon after merge.
## Pages with no tech writer review
The documentation under `/doc/solutions` is created, maintained, copy edited,
and merged by the Solutions Architect team.
## Related topics
- [Technical Writing assignments](https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments)
- [Reviews and levels of edit](https://handbook.gitlab.com/handbook/product/ux/technical-writing/#reviews)
- [Documentation Style Guide](styleguide/_index.md)
- [Recommended word list](styleguide/word_list.md)
- [Product availability details](styleguide/availability_details.md)
---
stage: none
group: unassigned
info: For assistance with this Style Guide page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments-to-other-projects-and-subjects
title: Documenting experimental and beta features
breadcrumbs:
- doc
- development
- documentation
---
When you document an [experiment or beta](../../policy/development_stages_support.md) feature:
- Include the status in the [product availability details](styleguide/availability_details.md#status).
- Include [feature flag details](feature_flags.md) if behind a feature flag.
- [Update the feature status](styleguide/availability_details.md#changed-feature-status) when it changes.
## When features become generally available
When the feature changes from experiment or beta to generally available:
- Remove the **Status** from the product availability details.
- Remove any language about the feature not being ready for production.
- Update the [history](styleguide/availability_details.md#history).
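For example, when a feature reaches general availability, its history section might end up looking something like this (the links and version numbers are placeholders):
```markdown
{{</* history */>}}
- [Introduced](https://issue-link) in GitLab 15.10. This feature is an [experiment](<link_to>/policy/development_stages_support.md).
- [Changed](https://issue-link) from experiment to beta in GitLab 15.11.
- [Generally available](https://issue-link) in GitLab 16.0.
{{</* /history */>}}
```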
## Features that require user enrollment or feedback
To include details about how users should enroll or leave feedback,
add them below the `type=flag` alert.
For example:
```markdown
## Great new feature
{{</* details */>}}
Status: Experiment
{{</* /details */>}}
{{</* history */>}}
- [Introduced](https://issue-link) in GitLab 15.10. This feature is an [experiment](<link_to>/policy/development_stages_support.md).
{{</* /history */>}}
{{</* alert type="flag" */>}}
The availability of this feature is controlled by a feature flag.
For more information, see the history.
This feature is available for testing, but not ready for production use.
{{</* /alert */>}}
Use this new feature when you need to do this new thing.
This feature is an [experiment](<link_to>/policy/development_stages_support.md). To join
the list of users testing this feature, do this thing. If you find a bug,
[open an issue](https://link).
```
## GitLab Duo features
Follow these guidelines when you document GitLab Duo features.
### Experiment
When documenting a GitLab Duo experiment:
- On the [GitLab Duo feature summary page](../../user/gitlab_duo/feature_summary.md):
- Add a row to the table.
- Add the feature to an area at the top of the page, near other features that are available
during a similar stage of the software development lifecycle.
- Document the feature near other similar features.
- Make sure you add history and status values, including any
[add-on information](styleguide/availability_details.md#add-ons).
- For features that are part of the [Early Access Program](../../policy/early_access_program/_index.md#add-a-feature-to-the-program),
  post a comment in the `#developer-relations-early-access-program` Slack channel
  that mentions the feature and its status.
### Beta
When a GitLab Duo experiment moves to beta:
- On the [GitLab Duo feature summary page](../../user/gitlab_duo/feature_summary.md),
update the row in the table.
- Make sure you update the history and status values, including any
[add-on information](styleguide/availability_details.md#add-ons).
- For features that are part of the [Early Access Program](../../policy/early_access_program/_index.md#add-a-feature-to-the-program),
  post a comment in the `#developer-relations-early-access-program` Slack channel
  that mentions the feature and its status.
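For example, the updated status for a feature that has moved from experiment to beta might look like this (placeholders only; keep the history entries in step with the change):
```markdown
{{</* details */>}}
Status: Beta
{{</* /details */>}}

{{</* history */>}}
- [Introduced](https://issue-link) in GitLab 15.10. This feature is an [experiment](<link_to>/policy/development_stages_support.md).
- [Changed](https://issue-link) from experiment to beta in GitLab 15.11.
{{</* /history */>}}
```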
### Generally available
When a GitLab Duo feature becomes generally available:
- On the [GitLab Duo feature summary page](../../user/gitlab_duo/feature_summary.md),
move the feature to the GA table.
- Make sure you update the history and status values, including any
[add-on information](styleguide/availability_details.md#add-ons).
- For features that are part of the [Early Access Program](../../policy/early_access_program/_index.md#add-a-feature-to-the-program),
  post a comment in the `#developer-relations-early-access-program` Slack channel
  that mentions the feature and its status.
---
stage: none
group: Documentation Guidelines
info: For assistance with this Style Guide page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments-to-other-projects-and-subjects.
title: Create content for drawers
breadcrumbs:
- doc
- development
- documentation
---
In the GitLab UI, you can display help content in
[a drawer component](https://design.gitlab.com/components/drawer/).
The component for Markdown is
[in the storybook](https://gitlab-org.gitlab.io/gitlab/storybook/?path=/story/vue-shared-markdown-drawer--default).
The component points to a Markdown file. Any time you update the Markdown
file, the contents of the drawer are updated.
Drawer content is displayed in drawers only, and not on `docs.gitlab.com`.
The content is rendered in GitLab Flavored Markdown.
To create this content:
1. In the [GitLab](https://gitlab.com/gitlab-org/gitlab) repository,
go to the `/doc/drawers` folder.
1. Create a Markdown file. Use a descriptive filename.
Do not create subfolders.
1. Add the standard page metadata. Also, include:
```markdown
type: drawer
```
1. Author the content.
1. If the page includes content that is also on a page on `docs.gitlab.com`,
   include a path to the other file in the page's metadata. For example:
```markdown
source: /doc/user/search/global_search/advanced_search_syntax.md
```
1. Work with the developer to view the content in the drawer and
verify that the content appears correctly.
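Putting the metadata from these steps together, the front matter of a drawer file might look something like this (the stage, group, info, and heading values are illustrative placeholders; the `source` path is the example from the previous step):
```markdown
---
stage: <stage name>
group: <group name>
info: <the standard info line for your group>
type: drawer
source: /doc/user/search/global_search/advanced_search_syntax.md
---

# Advanced search syntax
```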
## Drawer content guidelines
- The headings in the file are used as headings in the drawer.
The `H1` heading is the drawer title.
- Do not include any characters other than plain text in the `H1`.
- The drawer component is narrow and not resizable.
- If you include tables, the content in the table should be brief.
- While no technical limitation exists on the number of characters
you can use, you should preview the drawer content to
ensure it renders well.
- To link from the drawer to other content, use absolute URLs.
- Do not include Hugo shortcodes, such as Alert boxes or SVG icons.
---
stage: none
group: Documentation Guidelines
info: For assistance with this Style Guide page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments-to-other-projects-and-subjects.
title: Hugo migration reference for writers
breadcrumbs:
- doc
- development
- documentation
---
We've rebuilt the GitLab Docs website on Hugo. This guide outlines the formatting
requirements for documentation after the relaunch.
While existing content will be automatically updated, any new or modified documentation must follow these guidelines to ensure proper building with Hugo.
## New project
The new Docs website is in the [`gitlab-org/technical-writing/docs-gitlab-com`](https://gitlab.com/gitlab-org/technical-writing/docs-gitlab-com) project.
After launch, all issues from the [original `gitlab-org/gitlab-docs` project](https://gitlab.com/gitlab-org/gitlab-docs)
will be moved over to the new one, or closed if they're no longer applicable.
## Formatting changes
### Page titles
Page titles move from `h1` tags to `title` front matter attributes.
For example, on Nanoc, a title is added as an `h1`, like this:
```markdown
---
stage: Systems
group: Cloud Connector
info: Any user with at least the Maintainer role can merge updates to this content. For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
---
# Cloud Connector: Configuration
A GitLab Rails instance accesses...
```
For Hugo, move the title into the page's front matter:
```markdown
---
stage: Systems
group: Cloud Connector
info: Any user with at least the Maintainer role can merge updates to this content. For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: 'Cloud Connector: Configuration'
---
A GitLab Rails instance accesses...
```
**Why**: Hugo can generate automated listings of pages. For these to work, Hugo needs the page title to be handled more like data than regular content.
We are not using these initially, but may do so in the future.
**Testing**: Error-level Vale rule ([`FrontMatter.yml`](https://gitlab.com/gitlab-org/cloud-native/gitlab-operator/-/blob/master/doc/.vale/gitlab_docs/FrontMatter.yml?ref_type=heads)).
### Shortcodes
Custom Markdown elements are now marked up using Hugo's shortcode syntax.
Our custom elements are:
- Alert boxes
- History details
- Feature availability details (tier, offering, status)
- GitLab SVG icons
- Tabs
For example:
```markdown
{{</* alert type="warning" */>}}
Don't delete your docs!
{{</* /alert */>}}
```
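Tabs follow the same pattern. For example (indicative only; the exact syntax is in the shortcodes reference linked after this example):
```markdown
{{</* tabs */>}}
{{</* tab title="Linux package (Omnibus)" */>}}
Steps for the Linux package.
{{</* /tab */>}}
{{</* tab title="Helm chart (Kubernetes)" */>}}
Steps for the Helm chart.
{{</* /tab */>}}
{{</* /tabs */>}}
```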
See the [Shortcodes reference](https://gitlab.com/gitlab-org/technical-writing/docs-gitlab-com/-/blob/main/doc/shortcodes.md) for syntax and examples.
**Why**: Shortcodes are the standard Hugo method for creating custom templated
bits of content.
**Testing**: Shortcodes are validated on docs pipelines (see [implementation issue](https://gitlab.com/gitlab-org/technical-writing/docs-gitlab-com/-/issues/161)).
#### Shortcodes in `/help`
Shortcodes, like our existing custom Markdown elements, will not render in `/help`.
`/help` is a built-in set of documentation pages available in GitLab Self-Managed instances
([learn more](help.md)).
Shortcodes have more verbose syntax, so we've modified `/help` to hide these
tags and show simplified plain text fallbacks for elements like tabs and alert boxes.
**Why**: `/help` only renders plain Markdown. It is not a static site generator with
functionality to transform content or render templated frontend code.
### Kramdown
Kramdown is no longer supported on the website.
A few example Kramdown tags that exist on the site right now:
```plaintext
{::options parse_block_html="true" /}
{: .alert .alert-warning}
```
With Hugo, these will no longer have any effect. They will render as plain text.
**Why**: Hugo uses the Goldmark Markdown rendering engine, not Kramdown.
**Testing**: We are running an audit job on the CI pipeline for Kramdown tags ([example](https://gitlab.com/gitlab-org/technical-writing/docs-gitlab-com/-/jobs/8885163533)).
These tags will be manually removed as part of launch.
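For example, an alert that was previously styled with a Kramdown attribute should be rewritten with the alert shortcode instead:
```markdown
<!-- Before: Kramdown attribute, now renders as plain text -->
Don't delete your docs!
{: .alert .alert-warning}

<!-- After: Hugo shortcode -->
{{</* alert type="warning" */>}}
Don't delete your docs!
{{</* /alert */>}}
```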
### Menu entries in `navigation.yaml`
1. We have simplified the structure of the `navigation.yaml` file. The valid
   property names are now `title`, `url`, and `submenu`, instead of different property
   names at each level of the hierarchy.
For example, the Nanoc site menu data looks like this:
```yaml
sections:
  - section_title: Tutorials
    section_url: 'ee/tutorials/'
    section_categories:
      - category_title: Find your way around GitLab
        category_url: 'ee/tutorials/gitlab_navigation.html'
        docs:
          - doc_title: 'Tutorial: Use the left sidebar to navigate GitLab'
            doc_url: 'ee/tutorials/left_sidebar/'
```
For Hugo, it looks like this:
```yaml
- title: Tutorials
url: 'tutorials/'
submenu:
- title: Find your way around GitLab
url: 'tutorials/gitlab_navigation/'
submenu:
- title: 'Tutorial: Use the left sidebar to navigate GitLab'
url: 'tutorials/left_sidebar/'
```
**Why**: Using the same property names at each level of the hierarchy significantly
simplifies everything we do programmatically with the menu. It also simplifies
menu edits for contributors.
1. As part of the change to `prettyURLs`, page paths should no longer
include a `.html` extension. End each URL with a trailing `/`.
For example:
```plaintext
# Before
- category_title: Find your way around GitLab
category_url: 'ee/tutorials/gitlab_navigation.html'
# After
- title: Find your way around GitLab
url: 'tutorials/gitlab_navigation/'
```
**Testing**: We run various checks on `navigation.yaml` in [this script](https://gitlab.com/gitlab-org/technical-writing/docs-gitlab-com/-/blob/main/scripts/check-navigation.sh?ref_type=heads),
which runs as a pipeline job when the YAML file is updated.
## File naming
### Index file names
All files previously named `index.md` need to be named `_index.md`. For example:
```plaintext
Before:
doc/
├── user/
│ ├── index.md # Must be renamed
│ └── feature/
│ └── index.md # Must be renamed
└── admin/
└── index.md # Must be renamed
After:
doc/
├── user/
│ ├── _index.md # Renamed
│ └── feature/
│ └── _index.md # Renamed
└── admin/
└── _index.md # Renamed
```
**Why**: Hugo requires this specific naming convention for section index pages (pages that serve as the main page for a directory).
See Hugo's documentation on [Page bundles](https://gohugo.io/content-management/page-bundles/) for more information.
**Testing**: We will test for this on the pipeline and prevent merges that include an `index.md` file (see [this issue](https://gitlab.com/gitlab-org/technical-writing/docs-gitlab-com/-/issues/161) for details).
### Clashing file names
Hugo is configured to use `prettyURLs`, which drops the `.html` extension from page URLs.
A _path clash_ occurs when two files would render at the same URL, making one of them
inaccessible.
```plaintext
# Example 1
- doc/development/project_templates.md
- doc/development/project_templates/index.md
# Resulting URL for both: /development/project_templates/
# Example 2
- doc/user/gitlab_duo_chat.md
- doc/user/gitlab_duo_chat/index.md
# Resulting URL for both: /user/gitlab_duo_chat/
# Example 3
- doc/administration/dedicated/configure_instance.md
- doc/administration/dedicated/configure_instance/index.md
# Resulting URL for both: /administration/dedicated/configure_instance/
```
**Why**: Hugo's options for URL paths are `prettyURLs` and `uglyURLs`. Both of these produce
somewhat different paths than the Nanoc website does. We've opted for `prettyURLs` because it's
Hugo's default, and Hugo's pattern for `uglyURLs` is different from most other static site generators.
**Testing**: After launch, Hugo will throw an error on docs pipelines if it detects a new path clash.
## Processes
### Cutting a release
Cutting a release no longer requires updating `latest.Dockerfile`. This file no longer exists in
the project, and the release template has been updated accordingly.
**Why**: We've refactored versioning to use the [Parallel Deployments](../../user/project/pages/_index.md#parallel-deployments) feature.
You can review the [new release process](https://gitlab.com/gitlab-org/technical-writing/docs-gitlab-com/-/blob/main/.gitlab/issue_templates/release.md).
### Monthly technical writing tasks
The [Docs project maintenance tasks rotation](https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments) will pause when we launch on Hugo.
For February 2025, run the checks for broken external links and `start_remove` content before Wednesday, February 12. Other tasks are fine to skip for now. From March onwards, the monthly maintenance task will be on hold until further notice.
{{< alert type="note" >}}
This does not impact the release post [structural check](https://handbook.gitlab.com/handbook/marketing/blog/release-posts/#structural-check) or [monthly documentation release](https://gitlab.com/gitlab-org/technical-writing/docs-gitlab-com/-/blob/main/doc/releases.md) tasks. The assigned Technical Writer should continue to do these tasks as previously scheduled.
{{< /alert >}}
**Why**: Some Ruby scripts need to be rewritten in Go, and the maintenance tasks are
low-priority enough that we can launch without them. There may be more opportunity
post-launch to share more of these scripts with the Handbook project.
**Testing**: Because we will pause on removing old redirects temporarily,
we've added a [test script](https://gitlab.com/gitlab-org/technical-writing/docs-gitlab-com/-/blob/main/scripts/redirect-threshold-check.sh?ref_type=heads) to warn if we get near the Pages redirect limit.
## User-facing changes
These changes take effect when we launch the new site.
They are viewable at [https://new.docs.gitlab.com](https://new.docs.gitlab.com).
### Page URLs
- `ee` prefix: We dropped the `ee` prefix from paths to pages
that come from the GitLab project.
The prefix was an artifact leftover from when pages were split
between `ce` and `ee`, and has been a source of confusion
for site visitors.
- Pretty URLs: Pages no longer have a `.html` extension in the URL.
A file located at `/foo/bar/baz.html` is available at `/foo/bar/baz`.
We have redirects in place at Cloudflare to redirect all URLs to their
new formats. See the [redirects documentation](https://gitlab.com/gitlab-org/technical-writing/docs-gitlab-com/-/blob/main/doc/redirects.md?ref_type=heads#cloudflare) in the Hugo project for more information.
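Taken together, these two changes mean a page URL changes like this (illustrative path):
```plaintext
# Before
https://docs.gitlab.com/ee/foo/bar/baz.html
# After
https://docs.gitlab.com/foo/bar/baz
```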
### Layout changes
We implemented the layout changes proposed in [this issue](https://gitlab.com/gitlab-org/gitlab-docs/-/issues/673), which aim to improve
readability.
The primary changes are:
- Main content column has a maximum width.
- Main content column (which includes the table of contents) is
centered, with extra space on either side of it, when the site
is viewed on a large screen.
---
stage: none
group: Documentation Guidelines
info: For assistance with this Style Guide page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments-to-other-projects-and-subjects.
title: Backport documentation changes
---
There are two types of backports:
- **Current stable release**: Any maintainer can backport
changes, usually bug fixes but also important documentation changes, into the
current stable release.
- **Older stable releases**: To guarantee the
[maintenance policy](../../policy/maintenance.md) is respected, merging to
older stable releases is restricted to release managers.
## Backport documentation changes to current stable release
To backport documentation changes to the current stable release,
follow the [standard process to contribute to documentation](_index.md).
## Backport documentation changes to older releases
{{< alert type="warning" >}}
You should only rarely consider backporting documentation to older stable releases. Legitimate reasons to backport documentation include legal issues, emergency security fixes, and fixes to content that might prevent users from upgrading or cause data loss.
{{< /alert >}}
To backport documentation changes in documentation releases older than the
current stable branch:
1. [Create an issue for the backport.](#create-an-issue)
1. [Create the merge request (MR) to backport the change.](#create-the-merge-request-to-backport-the-change)
1. [Deploy the backport change.](#deploy-the-backport-changes)
### Create an issue
Prerequisites:
- The person requesting the backport does this step. You must have at
least the Developer role for the [Technical Writing team tasks project](https://gitlab.com/gitlab-org/technical-writing/team-tasks).
1. Open an [issue in the Technical Writing team tasks project](https://gitlab.com/gitlab-org/technical-writing/team-tasks/-/issues/new)
using the [backport changes template](https://gitlab.com/gitlab-org/technical-writing/team-tasks/-/blob/main/.gitlab/issue_templates/backport_changes.md).
1. In the issue, state why the backport is needed. Include:
- The background to this change.
- Which specific documentation versions are changing.
- How the documentation will change.
- Links to any supporting issues or MRs.
1. Ask for the approval of technical writing leadership by creating a comment in
this issue with the following text:
```plaintext
@gitlab-org/tw-leadership could I get your approval for this documentation backport?
```
After the technical writing leadership approves the backport, you can create the
merge request to backport the change.
### Create the merge request to backport the change
Prerequisites:
- The person requesting the backport does this step. You must have at least the
Developer role on the project that needs the backport.
To backport a change, merge your changes into the stable branch of the version
where you want the changes to occur.
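For illustration only, preparing the backport branch usually means cherry-picking the already-merged commit onto the relevant stable branch. In this minimal sketch, the branch name and commit SHA are placeholders:
```shell
# Create a branch from the target stable branch and cherry-pick the merged docs commit
git fetch origin
git checkout -b docs-backport-17-8 origin/17-8-stable-ee
git cherry-pick <commit-sha>
git push origin docs-backport-17-8
```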
1. Open an MR with the backport by following the
[release docs guidelines](https://gitlab.com/gitlab-org/release/docs/-/blob/master/general/patch/engineers.md#backporting-a-bug-fix-in-the-gitlab-project),
and mention the issue you opened before so that they are linked.
1. Assign the MR to a technical writer for review.
1. After the technical writer approves the MR, assign the MR to a release manager
for review and merge.
Mention this issue to the release manager, and provide them with all the context
they need.
For the change to appear in:
- `docs.gitlab.com`, the release manager only has to merge the MR to the stable branch,
and the technical writer needs to [deploy the backport changes](#deploy-the-backport-changes).
- `gitlab.com/help`, the change needs to be part of a GitLab release. The release
manager can include the change in the next release they create. This step is optional.
### Deploy the backport changes
Prerequisites:
- The technical writer assigned to the backport does this step. You must have at
least the Maintainer role for the [Technical Writing team tasks project](https://gitlab.com/gitlab-org/technical-writing/team-tasks).
After the changes are merged to the appropriate stable branch,
you must deploy the backported changes.
#### Backport changes made in GitLab 17.9 and later
Run a [new pipeline](https://gitlab.com/gitlab-org/technical-writing/docs-gitlab-com/-/pipelines/new)
in `docs-gitlab-com`. Choose the branch name that matches the stable version, for example `17.9`.
- A parallel deployment for that branch is run and is deployed automatically.
- A Docker image is created that contains the versioned documentation and can
be used offline.
#### Backport changes made in GitLab 17.8 and earlier
Run a [new pipeline](https://gitlab.com/gitlab-org/gitlab-docs/-/pipelines/new)
in `gitlab-docs`. Choose the branch name that matches the stable version, for example `17.8` or `16.0`.
- A Docker image is created that contains the versioned documentation and can
be used offline.
#### Backport changes made to a version other than the last three stable branches
If the backport change was made to a version other than the last three stable
branches, update the docs archives site:
1. Make sure the Docker images from the previous instructions are built.
1. Run a [new pipeline](https://gitlab.com/gitlab-org/gitlab-docs-archives/-/pipelines/new)
in the `gitlab-docs-archives` repository.
1. After the pipeline finishes, go to `https://archives.docs.gitlab.com` and verify
that the changes are available for the correct version.
## View older documentation versions
Previous versions of the documentation are available on `docs.gitlab.com`.
To view a previous version, in the upper-right corner, select the version
number from the dropdown list.
To view versions that are not available on `docs.gitlab.com`:
- View the [documentation archives](https://archives.docs.gitlab.com).
- Go to the GitLab repository and select the version-specific branch. For example,
the [13.2 branch](https://gitlab.com/gitlab-org/gitlab/-/tree/13-2-stable-ee/doc) has the
documentation for GitLab 13.2.
---
stage: none
group: unassigned
info: For assistance with this Style Guide page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments-to-other-projects-and-subjects.
description: Writing styles, markup, formatting, and other standards for the GitLab
RESTful APIs.
title: Documenting REST API resources
---
REST API resources are documented in Markdown under
[`/doc/api`](https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/api). Each
resource has its own Markdown file, which is linked from
[`api_resources.md`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/api/api_resources.md).
When modifying the Markdown or API code, also update the corresponding
[OpenAPI definition](https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/api/openapi) by running `bin/rake gitlab:openapi:generate`.
To check if the OpenAPI definition needs to be updated, you can run `bin/rake gitlab:openapi:check_docs`.
This is also checked by the `openapi-doc-check` CI/CD job that runs for commits that modify API code or documentation.
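For example, from the root of the GitLab repository:
```shell
# Regenerate the OpenAPI definition after changing API code or its documentation
bin/rake gitlab:openapi:generate
# Check whether the committed OpenAPI definition is up to date
bin/rake gitlab:openapi:check_docs
```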
In the Markdown doc for a resource (the API endpoint):
- Every method must have the REST API request. The request should include the HTTP method
(like GET, PUT, DELETE) followed by the request path. The path should always start with a `/`. For example:
```plaintext
GET /api/v4/projects/:id/repository/branches
```
- Every method must have a detailed [description of the attributes](#method-description).
- Every method must have a cURL example.
- Every method must have a detailed [description of the response body](#response-body-description).
- Every method must have a response body example (in JSON format).
- If an attribute is available only to higher level subscription tiers, add the appropriate tier to the **Description**. If an attribute is
for Premium, include that it's also available for Ultimate.
- If an attribute is available only in certain offerings, add the offerings to the **Description**. If the attribute's
description also has both offering and tier, combine them. For
example: _GitLab Self-Managed, Premium and Ultimate only._
After a new API documentation page is added, [add an entry in the global navigation](site_architecture/global_nav.md#add-a-navigation-entry). [Examples](https://gitlab.com/gitlab-org/technical-writing/docs-gitlab-com/-/commits/main/data/en-us/navigation.yaml).
## API topic template
Use the following template to help you get started. Be sure to list any
required attributes first in the table.
````markdown
## API name
{{</* history */>}}
- History note.
{{</* /history */>}}
One or two sentence description of what the endpoint does.
### Method title
{{</* history */>}}
- History note.
{{</* /history */>}}
Description of the method.
```plaintext
METHOD /api/v4/endpoint
```
Supported attributes:
| Attribute | Type | Required | Description |
|--------------------------|----------|----------|-----------------------|
| `attribute` | datatype | Yes | Detailed description. |
| `attribute` | datatype | No | Detailed description. |
| `attribute` | datatype | No | Detailed description. |
| `attribute` | datatype | No | Detailed description. |
If successful, returns [`<status_code>`](rest/troubleshooting.md#status-codes) and the following
response attributes:
| Attribute | Type | Description |
|--------------------------|----------|-----------------------|
| `attribute` | datatype | Detailed description. |
| `attribute` | datatype | Detailed description. |
Example request:
```shell
curl --header "PRIVATE-TOKEN: <your_access_token>" \
--url "https://gitlab.example.com/api/v4/endpoint?parameters"
```
Example response:
```json
[
{
}
]
```
````
## History
Add [history](styleguide/availability_details.md#history)
to describe new or updated API calls.
To add history for an individual attribute, include it in the history
for the section. For example:
```markdown
### Edit a widget
{{</* history */>}}
- `widget_message` [introduced](https://link-to-issue) in GitLab 14.3.
{{</* /history */>}}
```
If the API or attribute is deployed behind a feature flag,
[include the feature flag information](feature_flags.md) in the history.
## Deprecations
To document the deprecation of an API endpoint, follow the steps to
[deprecate a page or topic](styleguide/deprecations_and_removals.md).
To deprecate an attribute:
1. Add a history note.
```markdown
{{</* history */>}}
- `widget_name` [deprecated](https://link-to-issue) in GitLab 14.7.
{{</* /history */>}}
```
1. Add inline deprecation text to the description.
```markdown
| Attribute | Type | Required | Description |
|---------------|--------|----------|-------------|
| `widget_name` | string | No | [Deprecated](https://link-to-issue) in GitLab 14.7. Use `widget_id` instead. The name of the widget. |
```
To widely announce a deprecation, [update the REST API deprecations page](../../api/rest/deprecations.md).
## Method description
Use the following table headers to describe the methods. Attributes should
always be in code blocks using backticks (`` ` ``).
Sort the table by required attributes first, then alphabetically.
```markdown
| Attribute | Type | Required | Description |
|------------------------------|---------------|----------|-----------------------------------------------------|
| `title` | string | Yes | Title of the issue. |
| `assignee_ids` | integer array | No | IDs of the users to assign the issue to. Ultimate only. |
| `confidential` | boolean | No | Sets the issue to confidential. Default is `false`. |
```
Rendered example:
| Attribute | Type | Required | Description |
|------------------------------|---------------|----------|-----------------------------------------------------|
| `title` | string | Yes | Title of the issue. |
| `assignee_ids` | integer array | No | IDs of the users to assign the issue to. Premium and Ultimate only. |
| `confidential` | boolean | No | Sets the issue to confidential. Default is `false`. |
For information about writing attribute descriptions, see the [GraphQL API description style guide](../api_graphql_styleguide.md#description-style-guide).
### Conditionally required attributes
If there are attributes where either one or both are required to make an API
request:
1. Add `Conditionally` in the `Required` column.
1. Clearly describe the related attributes in the description.
You can use the following template:
```markdown
At least one of `attribute1` or `attribute2` must be included in the API call. Both may be used if needed.
```
For example:
| Attribute | Type | Required | Description |
|:---------------------------|:---------------|:---------------|:--------------------------------------------------------------------------------------------------- |
| `include_saml_users` | boolean | Conditionally | Include users with a SAML identity. At least one of `include_saml_users` or `include_service_accounts` must be `true`. Both may be used if needed. |
| `include_service_accounts` | boolean | Conditionally | Include service account users. At least one of `include_saml_users` or `include_service_accounts` must be `true`. Both may be used if needed. |
## Response body description
Start the description with the following sentence, replacing `status code` with the
relevant [HTTP status code](../../api/rest/troubleshooting.md#status-codes), for example:
```markdown
If successful, returns [`200 OK`](../../api/rest/troubleshooting.md#status-codes) and the
following response attributes:
```
Use the following table headers to describe the response bodies. Attributes should
always be in code blocks using backticks (`` ` ``).
If the attribute is a complex type, like another object, represent sub-attributes
with dots (`.`), like `project.name` or `projects[].name` in case of an array.
Sort the table alphabetically.
```markdown
| Attribute | Type | Description |
|------------------------------|---------------|-------------------------------------------|
| `assignee_ids` | integer array | IDs of the users to assign the issue to. Premium and Ultimate only. |
| `confidential` | boolean | Whether the issue is confidential or not. |
| `title` | string | Title of the issue. |
```
Rendered example:
| Attribute | Type | Description |
|------------------------------|---------------|-------------------------------------------|
| `assignee_ids` | integer array | IDs of the users to assign the issue to. Premium and Ultimate only. |
| `confidential` | boolean | Whether the issue is confidential or not. |
| `title` | string | Title of the issue. |
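As noted above, sub-attributes of complex types use dot notation. An illustrative fragment with hypothetical attribute names:
```markdown
| Attribute | Type | Description |
|------------------------------|---------------|-------------------------------------------|
| `project` | object | Project the issue belongs to. |
| `project.name` | string | Name of the project. |
```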
For information about writing attribute descriptions, see the [GraphQL API description style guide](../api_graphql_styleguide.md#description-style-guide).
## cURL commands
- Use `https://gitlab.example.com/api/v4/` as an endpoint.
- Wherever needed, use this personal access token: `<your_access_token>`.
- Always put the request first. `GET` is the default so you don't have to
include it.
- Use long option names (`--header` instead of `-H`) for legibility. (Tested in
[`scripts/lint-doc.sh`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/scripts/lint-doc.sh).)
- Declare URLs with the `--url` parameter, and wrap the URL in double quotes (`"`).
- Prefer examples that use the personal access token, and don't pass username
  and password data.
- For legibility, use the ` \ ` character and indentation to break long single-line
commands apart into multiple lines.
| Methods | Description |
|-------------------------------------------------|--------------------------------------------------------|
| `--header "PRIVATE-TOKEN: <your_access_token>"` | Use this method as is, whenever authentication is needed. |
| `--request POST` | Use this method when creating new objects. |
| `--request PUT` | Use this method when updating existing objects. |
| `--request DELETE` | Use this method when removing existing objects. |
## cURL Examples
The following sections include a set of [cURL](https://curl.se/) examples
you can use in the API documentation.
{{< alert type="warning" >}}
Do not use information for real users, URLs, or tokens. For documentation, refer to our
relevant style guide sections on [fake user information](styleguide/_index.md#fake-user-information),
[fake URLs](styleguide/_index.md#fake-urls), and [fake tokens](styleguide/_index.md#fake-tokens).
{{< /alert >}}
### Simple cURL command
Get the details of a group:
```shell
curl --header "PRIVATE-TOKEN: <your_access_token>" \
--url "https://gitlab.example.com/api/v4/groups/gitlab-org"
```
### cURL example with parameters passed in the URL
Create a new project under the authenticated user's namespace:
```shell
curl --request POST --header "PRIVATE-TOKEN: <your_access_token>" \
--url "https://gitlab.example.com/api/v4/projects?name=foo"
```
### Post data using cURL's `--data`
Instead of using `--request POST` and appending the parameters to the URI, you
can use cURL's `--data` option. The example below will create a new project
`foo` under the authenticated user's namespace.
```shell
curl --data "name=foo" \
--header "PRIVATE-TOKEN: <your_access_token>" \
--url "https://gitlab.example.com/api/v4/projects"
```
### Post data using JSON content
This example creates a new group. Be aware of the use of single (`'`) and double
(`"`) quotes.
```shell
curl --request POST \
--header "PRIVATE-TOKEN: <your_access_token>" \
--header "Content-Type: application/json" \
--data '{"path": "my-group", "name": "My group"}' \
--url "https://gitlab.example.com/api/v4/groups"
```
For readability, you can also set up the `--data` by using the following format:
```shell
curl --request POST \
--url "https://gitlab.example.com/api/v4/groups" \
--header "content-type: application/json" \
--header "PRIVATE-TOKEN: <your_access_token>" \
--data '{
"path": "my-group",
"name": "My group"
}'
```
### Post data using form-data
Instead of using JSON or URL-encoding data, you can use `multipart/form-data` which
properly handles data encoding:
```shell
curl --request POST \
--header "PRIVATE-TOKEN: <your_access_token>" \
--form "title=ssh-key" \
--form "key=ssh-rsa AAAAB3NzaC1yc2EA..." \
--url "https://gitlab.example.com/api/v4/users/25/keys"
```
The above example adds an SSH public key titled `ssh-key` to the account of
a user with ID 25. The operation requires administrator access.
### Escape special characters
Spaces or slashes (`/`) can sometimes result in errors, so you should
escape them when possible. In the example below, we create a new issue that
contains spaces in its title. Observe how the spaces are percent-encoded as `%20`.
```shell
curl --request POST \
--header "PRIVATE-TOKEN: <your_access_token>" \
--url "https://gitlab.example.com/api/v4/projects/42/issues?title=Hello%20GitLab"
```
Use `%2F` for slashes (`/`).
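For example, a project path such as `my-group/my-project` can be passed in a URL with the slash encoded (the path here is a placeholder):
```shell
curl --header "PRIVATE-TOKEN: <your_access_token>" \
  --url "https://gitlab.example.com/api/v4/projects/my-group%2Fmy-project"
```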
### Pass arrays to API calls
The GitLab API sometimes accepts arrays of strings or integers. For example, to
exclude specific users when requesting a list of users for a project, you would
do something like this:
```shell
curl --request PUT \
  --header "PRIVATE-TOKEN: <your_access_token>" \
--data "skip_users[]=<user_id>" \
--data "skip_users[]=<user_id>" \
--url "https://gitlab.example.com/api/v4/projects/<project_id>/users"
```
|
https://docs.gitlab.com/development/ai_guide
|
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/development/ai_guide.md
|
2025-08-13
|
doc/development/documentation
|
[
"doc",
"development",
"documentation"
] |
ai_guide.md
|
none
|
Documentation Guidelines
|
For assistance with this Style Guide page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments-to-other-projects-and-subjects.
|
Use of AI
| null |
Community members can make AI-generated contributions to GitLab documentation, provided they follow the guidelines in our [DCO or our CLA terms](https://about.gitlab.com/community/contribute/dco-cla/).
GitLab team members must follow the guidelines documented in the [internal handbook](https://internal.gitlab.com/handbook/product/ai-strategy/ai-integration-effort/legal_restrictions/).
AI is a productivity multiplier and creative catalyst for the Technical Writing team at GitLab. Examples include:
- Write and refactor documentation
  - Create initial drafts from outlines and screenshots.
  - Generate ideas to restructure content for scannability.
  - Draft tutorial content.
  - Convert list items to Markdown tables.
  - Rephrase and simplify language for better readability.
  - Edit UI text to make it more succinct.
  - Suggest alternatives and improvements to error messages.
  - Restructure pages based on user feedback with specific improvement recommendations.
- Support technical tasks and automation
  - Troubleshoot failed pipelines.
  - Write Python scripts for data analysis and content auditing.
  - Help with rebasing and Git operations.
  - Troubleshoot GDK update errors.
  - Create Mermaid diagrams.
- Analysis and research
  - Analyze documentation sets and identify pain points and improvement areas.
  - Summarize long documents.
  - Generate lists of content by topic or category.
All content, AI-generated or human-created, is reviewed for accuracy and readability by a GitLab team member.
---
stage: none
group: unassigned
info: Any user with at least the Maintainer role can merge updates to this content. For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
description: Writing styles, markup, formatting, and other standards for GraphQL API's GitLab Documentation.
title: Creating a GraphQL example page
---
GraphQL APIs are different from [RESTful APIs](restful_api_styleguide.md). Reference
information is generated in our [GraphQL API resources](../../api/graphql/reference/_index.md) page.
However, it's helpful to include examples for how to use GraphQL for different
use cases, with samples that readers can use directly in the GraphQL explorer, called
[GraphiQL](../api_graphql_styleguide.md#graphiql).
This section describes the steps required to add your GraphQL examples to
GitLab documentation.
For information about adding a resource to the
[GraphQL API resources](../../api/graphql/reference/_index.md) page,
see the [description style guide](../api_graphql_styleguide.md#description-style-guide).
## Add a dedicated GraphQL page
To create a dedicated GraphQL page, create a new `.md` file in the
`doc/api/graphql/` directory. Give the file a functional name, like
`import_from_specific_location.md`.
## Add metadata
Add descriptive content and a title at the top of the page, for example:
```markdown
---
stage: Create
group: Source Code
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: List branch rules for a project by using GraphQL
---
{{</* details */>}}
- Tier: Free, Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated
{{</* /details */>}}
```
For help editing this content for your use case, ask a technical writer.
## Add content
Now add the body text. You can use this content as a starting point
and replace the text with your own information.
```markdown
You can query for branch rules in a given project by using:
- GraphiQL.
- [`cURL`](getting_started.md#command-line).
## Use GraphiQL
You can use GraphiQL to list the branch rules for a project.
1. Open GraphiQL:
- For GitLab.com, use: `https://gitlab.com/-/graphql-explorer`
- For GitLab Self-Managed, use: `https://gitlab.example.com/-/graphql-explorer`
1. Copy the following text and paste it in the left window.
<graphql codeblock here>
1. Select **Play**.
## Related topics
- [GraphQL API reference](reference/_index.md)
```
## Add the GraphQL example to the global navigation
Include a link to your new document in the global navigation (the list on the
left side of the documentation website). To do so, open a second MR, against the
[GitLab documentation repository](https://gitlab.com/gitlab-org/technical-writing/docs-gitlab-com/).
The global navigation is set in the
[`navigation.yaml`](https://gitlab.com/gitlab-org/technical-writing/docs-gitlab-com/-/blob/main/data/en-us/navigation.yaml) file,
in the `content/data/en-us` subdirectory. You can find the GraphQL section under the
following line:
```yaml
- title: GraphQL
```
Be aware that CI tests for that second MR will fail with a bad link until the
main MR that adds the new GraphQL page is merged. Therefore, only merge the MR against the
[`docs-gitlab-com`](https://gitlab.com/gitlab-org/technical-writing/docs-gitlab-com/) repository after the content has
been merged and is live on `docs.gitlab.com`.
---
stage: none
group: unassigned
info: For assistance with this Style Guide page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments-to-other-projects-and-subjects.
description: GitLab development - how to document features deployed behind feature flags
title: Document features deployed behind feature flags
---
GitLab uses [feature flags](../feature_flags/_index.md) to roll
out the deployment of its own features.
{{< alert type="note" >}}
The developer who changes the state of a feature flag is responsible for
updating the documentation.
{{< /alert >}}
## When to document features behind a feature flag
Before a feature flag is enabled for all customers in an environment (GitLab Self-Managed, GitLab.com, or GitLab Dedicated),
the feature must be documented.
For all other features behind flags, the PM or EM for the group determines whether
to document the feature.
Even when a flag is not documented alongside the feature, it is
[automatically documented on a central page](../../administration/feature_flags/list.md).
## How to add feature flag documentation
To document feature flags:
- [Add history text](#add-history-text).
- [Add a flag note](#add-a-flag-note).
## Offerings
When documenting the [offerings](styleguide/availability_details.md#offering), for features
**disabled on GitLab Self-Managed**, don't list `GitLab Dedicated` as the feature's offering.
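For example, the details block for a feature that is disabled by default on GitLab Self-Managed might look like the following. The tier values are illustrative; GitLab Self-Managed stays in the list because administrators can still enable the flag, while GitLab Dedicated is omitted:
```markdown
{{</* details */>}}
- Tier: Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed
{{</* /details */>}}
```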
## Add history text
When the state of a flag changes (for example, from disabled by default to enabled by default), add the change to the
[history](styleguide/availability_details.md#history).
Possible history entries are:
```markdown
{{</* history */>}}
- [Introduced](https://issue-link) in GitLab X.X [with a flag](../../administration/feature_flags/_index.md) named `flag_name`. Disabled by default.
- [Enabled on GitLab.com](https://issue-link) in GitLab X.X.
- [Enabled on GitLab Self-Managed and GitLab Dedicated](https://issue-link) in GitLab X.X.
- [Enabled on GitLab.com, GitLab Self-Managed, and GitLab Dedicated](https://issue-link) in GitLab X.X.
- [Generally available](https://issue-link) in GitLab X.Y. Feature flag `flag_name` removed.
{{</* /history */>}}
```
These entries might not fit every scenario. You can adjust to suit your needs.
For example, a flag might be enabled for a group, project, or subset of users only.
In that case, you can use a history entry like:
`- [Enabled on GitLab.com](https://issue-link) in GitLab X.X for a subset of users.`
## Add a flag note
Add this feature flag note at the start of the topic, just below the history.
The final sentence (`not ready for production use`) is optional.
```markdown
{{</* alert type="flag" */>}}
The availability of this feature is controlled by a feature flag.
For more information, see the history.
This feature is available for testing, but not ready for production use.
{{</* /alert */>}}
```
This note renders on the GitLab documentation site as:
{{< alert type="flag" >}}
The availability of this feature is controlled by a feature flag.
For more information, see the history.
This feature is available for testing, but not ready for production use.
{{< /alert >}}
## History examples
The following examples show the progression of a feature flag. Update the history with every change:
```markdown
{{</* history */>}}
- [Introduced](https://issue-link) in GitLab 13.7 [with a flag](../../administration/feature_flags/_index.md) named `forti_token_cloud`. Disabled by default.
{{</* /history */>}}
{{</* alert type="flag" */>}}
The availability of this feature is controlled by a feature flag. For more information, see the history.
{{</* /alert */>}}
```
When the feature is enabled by default on GitLab.com:
```markdown
{{</* history */>}}
- [Introduced](https://issue-link) in GitLab 13.7 [with a flag](../../administration/feature_flags/_index.md) named `forti_token_cloud`. Disabled by default.
- [Enabled on GitLab.com](https://issue-link) in GitLab 13.8.
{{</* /history */>}}
{{</* alert type="flag" */>}}
The availability of this feature is controlled by a feature flag. For more information, see the history.
{{</* /alert */>}}
```
When the feature is enabled by default for all offerings:
```markdown
{{</* history */>}}
- [Introduced](https://issue-link) in GitLab 13.7 [with a flag](../../administration/feature_flags/_index.md) named `forti_token_cloud`. Disabled by default.
- [Enabled on GitLab.com](https://issue-link) in GitLab 13.8.
- [Enabled on GitLab Self-Managed and GitLab Dedicated](https://issue-link) in GitLab 13.9.
{{</* /history */>}}
{{</* alert type="flag" */>}}
The availability of this feature is controlled by a feature flag. For more information, see the history.
{{</* /alert */>}}
```
When the flag is removed, add a `Generally available` entry. Ensure that you also delete the flag note:
```markdown
{{</* history */>}}
- [Introduced](https://issue-link) in GitLab 13.7 [with a flag](../../administration/feature_flags/_index.md) named `forti_token_cloud`. Disabled by default.
- [Enabled on GitLab.com](https://issue-link) in GitLab 13.8.
- [Enabled on GitLab Self-Managed and GitLab Dedicated](https://issue-link) in GitLab 13.9.
- [Generally available](https://issue-link) in GitLab 14.0. Feature flag `forti_token_cloud` removed.
{{</* /history */>}}
```
## Simplify long history
The history can get long, but you can sometimes simplify or delete entries.
Combine entries if they happened in the same release:
- Before:
```markdown
- [Introduced](https://issue-link) in GitLab 14.2 [with a flag](../../administration/feature_flags/_index.md) named `ci_include_rules`. Disabled by default.
- [Enabled on GitLab.com](https://issue-link) in GitLab 14.3.
- [Enabled on GitLab Self-Managed and GitLab Dedicated](https://issue-link) in GitLab 14.3.
```
- After:
```markdown
- [Introduced](https://issue-link) in GitLab 14.2 [with a flag](../../administration/feature_flags/_index.md) named `ci_include_rules`. Disabled by default.
- [Enabled on GitLab.com, GitLab Self-Managed, and GitLab Dedicated](https://issue-link) in GitLab 14.3.
```
If the feature flag is introduced and enabled in the same release, combine the entries:
```markdown
- [Introduced](https://issue-link) in GitLab 17.7 [with a flag](../../administration/feature_flags/_index.md) named `forti_token_cloud`. Enabled by default.
```
Delete `Enabled on GitLab.com` entries only when the feature is enabled by default for all offerings and the flag is removed:
- Before:
```markdown
{{</* history */>}}
- [Introduced](https://issue-link) in GitLab 15.6 [with a flag](../../administration/feature_flags/_index.md) named `ci_hooks_pre_get_sources_script`. Disabled by default.
- [Enabled on GitLab.com](https://issue-link) in GitLab 15.7.
- [Enabled on GitLab Self-Managed and GitLab Dedicated](https://issue-link) in GitLab 15.8.
- [Generally available](https://issue-link) in GitLab 15.9. Feature flag `ci_hooks_pre_get_sources_script` removed.
{{</* /history */>}}
```
- After:
```markdown
{{</* history */>}}
- [Introduced](https://issue-link) in GitLab 15.6 [with a flag](../../administration/feature_flags/_index.md) named `ci_hooks_pre_get_sources_script`. Disabled by default.
- [Enabled on GitLab Self-Managed and GitLab Dedicated](https://issue-link) in GitLab 15.8.
- [Generally available](https://issue-link) in GitLab 15.9. Feature flag `ci_hooks_pre_get_sources_script` removed.
{{</* /history */>}}
```
---
stage: none
group: Documentation Guidelines
info: For assistance with this Style Guide page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments-to-other-projects-and-subjects.
title: GitLab /help
---
Every GitLab instance includes documentation at `/help` (`https://gitlab.example.com/help`)
that matches the version of the instance. For example, <https://gitlab.com/help>.
The documentation available online at <https://docs.gitlab.com> is deployed every
hour from the default branch of GitLab, Omnibus, Runner, Charts, and Operator.
After a merge request that updates documentation is merged, it is available online
in an hour or less.
However, it's only available at `/help` on GitLab Self-Managed instances in the next released
version. The date an update is merged can impact which GitLab Self-Managed release the update
is present in.
For example:
1. A merge request in `gitlab` updates documentation. It has a milestone of 14.4,
with an expected release date of 2021-10-22.
1. It is merged on 2021-10-19 and available online the same day at <https://docs.gitlab.com>.
1. GitLab 14.4 is released on 2021-10-22, based on the `gitlab` codebase from 2021-10-18
(one day before the update was merged).
1. The change shows up in the 14.5 GitLab Self-Managed release, due to missing the release cutoff
for 14.4.
If it is important that a documentation update is present in that month's release,
merge it as early as possible.
## Page mapping
Requests to `/help` can be [redirected](../../administration/settings/help_page.md#redirect-help-pages). If redirection
is turned off, `/help` maps requests for help pages to specific files in the `doc`
directory. For example:
- Requested URLs: `<gdk_instance>/help/topics/plan_and_track.md`, `<gdk_instance>/help/topics/plan_and_track.html`
and `<gdk_instance>/help/topics/plan_and_track`.
- Mapping: `doc/topics/plan_and_track.md`.
### `_index.md` files
{{< history >}}
- Support for `_index.md` files [introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/144419) in GitLab 16.10.
{{< /history >}}
The Hugo static site generator makes use of `_index.md` files. To allow for index pages to be
named either `index.md` or `_index.md` in `/help`, GitLab maps requests for `index.md`, `index.html`, or `index`:
- To `index.md` if the file exists at the requested location.
- Otherwise, to `_index.md`.
For example:
- Requested URLs: `<gdk_instance>/help/user/index.md`, `<gdk_instance>/help/user/index.html`, and
`<gdk_instance>/help/user/index`.
- Mapping:
- `doc/user/index.md` if it exists.
- Otherwise, to `doc/user/_index.md`.
## Linking to `/help`
When you're building a new feature, you may need to link to the documentation
from the GitLab application. This is usually done in files inside the
`app/views/` directory, with the help of the `help_page_path` helper method.
The argument to `help_page_path` is the path to the document you want to link to,
with the following conventions:
- It's relative to the `doc/` directory in the GitLab repository.
- For clarity, it should end with the `.md` file extension.
The help text follows the [Pajamas guidelines](https://design.gitlab.com/patterns/contextual-help#formatting-help-content).
### Linking to `/help` in HAML
Use the following special cases depending on the context, ensuring all link text
is inside `_()` so it can be translated:
- Linking to a doc page. In its most basic form, the HAML code to generate a
link to the `/help` page is:
```haml
= link_to _('Learn more.'), help_page_path('user/permissions.md'), target: '_blank', rel: 'noopener noreferrer'
```
- Linking to an anchor link. Use `anchor` as part of the `help_page_path`
method:
```haml
= link_to _('Learn more.'), help_page_path('user/permissions.md', anchor: 'anchor-link'), target: '_blank', rel: 'noopener noreferrer'
```
- Using links inline with other text. First, define the link, and then use it. In
this example, `link_start` is the name of the variable that contains the
link:
```haml
- link = link_to('', help_page_path('user/permissions.md'), target: '_blank', rel: 'noopener noreferrer')
%p= safe_format(_("This is a text describing the option/feature in a sentence. %{link_start}Learn more.%{link_end}"), tag_pair(link, :link_start, :link_end))
```
- Using a button link. Useful in places where text would be out of context with
the rest of the page layout:
```haml
= render Pajamas::ButtonComponent.new(href: help_page_path('user/group/import/_index.md'), target: '_blank') do
= _('Learn more')
```
### Linking to `/help` in JavaScript
To link to the documentation from a JavaScript or a Vue component, use the `helpPagePath` function from [`help_page_helper.js`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/assets/javascripts/helpers/help_page_helper.js):
```javascript
import { helpPagePath } from '~/helpers/help_page_helper';
helpPagePath('user/permissions.md', { anchor: 'anchor-link' })
// evaluates to '/help/user/permissions#anchor-link' for GitLab.com
```
This is preferred over static paths, as the helper also works on instances installed under a [relative URL](../../install/relative_url.md).
### Linking to `/help` in Ruby
To link to the documentation from within Ruby code, use the following code block as a guide, ensuring all link text is inside `_()` so it can
be translated:
```ruby
docs_link = link_to _('Learn more.'), help_page_url('user/permissions.md', anchor: 'anchor-link'), target: '_blank', rel: 'noopener noreferrer'
safe_format(_('This is a text describing the option/feature in a sentence. %{docs_link}'), docs_link: docs_link)
```
In cases where you need to generate a link from outside of views/helpers, where the `link_to` and `help_page_url` methods are not available, use the following code block
as a guide where the methods are fully qualified:
```ruby
docs_link = ActionController::Base.helpers.link_to _('Learn more.'), Rails.application.routes.url_helpers.help_page_url('user/permissions.md', anchor: 'anchor-link'), target: '_blank', rel: 'noopener noreferrer'
safe_format(_('This is a text describing the option/feature in a sentence. %{docs_link}'), docs_link: docs_link)
```
Do not use `include ActionView::Helpers::UrlHelper` just to make the `link_to` method available as you might see in some existing code. Read more in
[issue 340567](https://gitlab.com/gitlab-org/gitlab/-/issues/340567).
## `/help` tests
Several [RSpec tests](https://gitlab.com/gitlab-org/gitlab/-/blob/master/spec/features/help_pages_spec.rb)
are run to ensure GitLab documentation renders and works correctly. In particular, they verify that the [main docs landing page](../../_index.md) works correctly from `/help`
(for example, [GitLab.com's `/help`](https://gitlab.com/help)).
---
stage: none
group: unassigned
info: For assistance with this Style Guide page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments-to-other-projects-and-subjects.
description: Writing styles, markup, formatting, and other standards for GitLab Documentation.
title: Product availability details
---
Product availability details provide information about a feature.
- If the details apply to the whole page, place them at the top
of the page, but after the front matter.
- If they apply to a specific section, place the details under the applicable
section titles.
Availability details include the tier, offering, status, and history.
The Markdown for availability details should look like the following:
```markdown
title: 'Topic title'
---
{{</* details */>}}
- Tier: Free, Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated
- Status: Experiment
{{</* /details */>}}
{{</* history */>}}
- [Introduced](https://link-to-issue) in GitLab 16.3.
- Updated in GitLab 16.4.
{{</* /history */>}}
```
## Available options
Use the following text for the tier, offering, add-on, status, and version history.
### Offering
For offering, use any combination of these entries, in this order, separated by commas:
- `GitLab.com`
- `GitLab Self-Managed`
- `GitLab Dedicated`
For example:
- `GitLab.com`
- `GitLab.com, GitLab Self-Managed`
- `GitLab Self-Managed`
- `GitLab Self-Managed, GitLab Dedicated`
{{< alert type="note" >}}
If you have reviewed a page and it specifically doesn't apply to GitLab Dedicated,
[assign metadata](../metadata.md#indicate-gitlab-dedicated-support).
{{< /alert >}}
### Tier
For tier, choose one:
- `Free, Premium, Ultimate`
- `Premium, Ultimate`
- `Ultimate`
{{< alert type="note" >}}
GitLab Dedicated always includes an Ultimate subscription.
{{< /alert >}}
#### Add-ons
For add-ons, the possibilities are:
```markdown
- Add-on: GitLab Duo Pro
- Add-on: GitLab Duo Enterprise
- Add-on: GitLab Duo Pro or Enterprise
- Add-on: GitLab Duo with Amazon Q
```
### Status
For status, choose one:
- `Beta`
- `Experiment`
- `Limited availability`
Generally available features should not have a status.
### History
The documentation site uses [shortcodes](../hugo_migration.md#shortcodes) to render the version history,
for example:
```markdown
{{</* history */>}}
- [Introduced](https://issue-link) in GitLab 16.3.
- [Changed](https://issue-link) in GitLab 16.4.
{{</* /history */>}}
```
In addition:
- Ensure that history notes are listed after the details (if any), and immediately
after the heading.
- Ensure that the output generates properly.
- Ensure the version history begins with `-`.
- If possible, include a link to the related issue. If there is no related issue, link to a merge request or epic.
- Do not link to [confidential issues](_index.md#confidential-or-restricted-access-links).
- Do not link to the pricing page. Do not include the subscription tier.
#### Updated features
For features that have changed or been updated, add a new list item.
Start the sentence with the feature name or a gerund.
For example:
```markdown
- [Introduced](https://issue-link) in GitLab 13.1.
- Creating an issue from an issue board [introduced](https://issue-link) in GitLab 14.1.
```
Or:
```markdown
- [Introduced](https://issue-link) in GitLab 13.1.
- Notifications for expiring tokens [introduced](https://issue-link) in GitLab 14.3.
```
#### Moved subscription tiers
For features that move to another subscription tier, use `moved`:
```markdown
- [Moved](https://issue-link) from GitLab Ultimate to GitLab Premium in 11.8.
- [Moved](https://issue-link) from GitLab Premium to GitLab Free in 12.0.
```
#### Changed feature status
For a feature status change from experiment to beta, use `changed`:
```markdown
- [Introduced](https://issue-link) as an [experiment](../../policy/development_stages_support.md) in GitLab 15.7.
- [Changed](https://issue-link) from experiment to beta in GitLab 16.0.
```
For a feature status change from beta to limited availability, use `changed`:
```markdown
- [Changed](https://issue-link) from experiment to beta in GitLab 16.0.
- [Changed](https://issue-link) from beta to limited availability in GitLab 16.3.
```
For a change to generally available, use:
```markdown
- [Generally available](https://issue-link) in GitLab 16.10.
```
#### Features made available as part of a program
For features made available to users as part of a program, add a new list item and link to the program.
```markdown
- [Introduced](https://issue-link) in GitLab 15.1.
- Merged results pipelines [added](https://issue-link) to the [Registration Features Program](https://page-link) in GitLab 16.7.
```
#### Features behind feature flags
For features introduced behind feature flags, add details about the feature flag. For more information, see [Document features deployed behind feature flags](../feature_flags.md).
#### Removing versions
Remove history items and inline text that refer to unsupported versions.
GitLab supports the current major version and two previous major versions.
For example, if 18.0 is the current major version, all major and minor releases of
GitLab 18.0, 17.0, and 16.0 are supported.
For the list of current supported versions, see [Version support](https://about.gitlab.com/support/statement-of-support/#version-support).
Remove information about [features behind feature flags](../feature_flags.md)
only if all events related to the feature flag happened in unsupported versions.
If the flag hasn't been removed, readers should know when it was introduced.
#### Timing version removals
When a new major version is about to be released, create merge
requests to remove mentions of the last unsupported version. Only merge
them during the milestone of the new major release.
For example, if GitLab 19.0 is the next major upcoming release:
- The supported versions are 18, 17, and 16.
- When GitLab 19.0 is released, GitLab 16 is no longer supported.
Create merge requests to remove mentions of GitLab 16, but only
merge them during the 19.0 milestone, after 18.11 is released.
## When to add availability details
Assign availability details under:
- Most H1 topic titles, except the pages under `doc/development/*` and `doc/solutions/*`.
- Topic titles for features that have different availability details than the H1 title.
The H1 availability details should be the details that apply to the widest availability
for the features on the page. For example:
- If some sections apply to Premium and Ultimate, and others apply to just Ultimate,
the H1 `Tier:` should be `Premium, Ultimate`.
- If some sections apply to all instances, and others apply to only `GitLab Self-Managed`,
the `Offering:` should be `GitLab.com, GitLab Self-Managed, GitLab Dedicated`.
- If some sections are beta, and others are experiment, the H1 `Status:` should be `Beta`.
If some sections are beta, and others are generally available, then there should
be no `Status:` for the H1.
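Putting these rules together, the H1 details block for a page like the one described in the preceding examples might look like this. The values are illustrative only:
```markdown
{{</* details */>}}
- Tier: Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated
- Status: Beta
{{</* /details */>}}
```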
## When not to add availability details
Do not assign availability details to the following pages:
- Tutorials.
- Pages that compare features from different tiers.
- Pages in the `/development` folder. These pages are automatically assigned a `Contribute` badge.
- Pages in the `/solutions` folder. These pages are automatically assigned a `Solutions` badge.
Also, do not assign them when a feature does not have one obvious subscription tier or offering.
For example, a feature might apply to one tier on GitLab.com and a different tier on GitLab Self-Managed.
In this case, do any or all of the following:
- Use [metadata](../metadata.md#indicate-lack-of-product-availability-details)
to indicate that the page has been reviewed and does not need availability details.
- Use a [`type="note"`](_index.md#note) alert box to describe the availability details.
- Add availability details under other topic titles where this information makes more sense.
- Do not add availability details under the H1.
### Duplicating tier, offering, or status on subheadings
If a subheading has the same tier, offering, or status as its parent
topic, you don't need to repeat the information in the subheading's
badge.
For example, subheadings that have `Tier: Premium, Ultimate` and `Offering: GitLab.com`
don't need to duplicate the details if the page details match:
```markdown
title: My title
---
{{</* details */>}}
- Tier: Premium, Ultimate
- Offering: GitLab.com
{{</* /details */>}}
```
Any lower-level heading that applies to a different tier but same offering would be:
```markdown
## My title
{{</* details */>}}
- Tier: Ultimate
{{</* /details */>}}
```
## Inline availability details
Generally, you should not add availability details inline with other text.
The single source of truth for a feature should be the topic where the
functionality is described.
If you do need to mention availability details inline, write them in plain text.
For example, for an API topic:
```markdown
IDs of the users to assign the issue to. Ultimate only.
```
For more examples, see the [REST API style guide](../restful_api_styleguide.md).
## Inline history text
If you're adding content to an existing topic, add historical information
inline with the existing text. If possible, include a link to the related issue,
merge request, or epic. For example:
```markdown
The voting strategy [in GitLab 13.4 and later](https://issue-link) requires the primary and secondary
voters to agree.
```
## Administrator documentation for availability details
Topics that are only for instance administrators should have the `GitLab Self-Managed` offering.
Instance administrator documentation often includes sections that mention:
- Changing the `gitlab.rb` or `gitlab.yml` files.
- Accessing the Rails console or running Rake tasks.
- Doing things in the **Admin** area.
These pages should also mention if the tasks can only be accomplished by an
instance administrator.
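For example, a minimal details block for an administrator-only topic might look like this, assuming the feature is available in all tiers:
```markdown
{{</* details */>}}
- Tier: Free, Premium, Ultimate
- Offering: GitLab Self-Managed
{{</* /details */>}}
```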
|
---
stage: none
group: unassigned
info: For assistance with this Style Guide page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments-to-other-projects-and-subjects.
description: Writing styles, markup, formatting, and other standards for GitLab Documentation.
title: Product availability details
breadcrumbs:
- doc
- development
- documentation
- styleguide
---
Product availability details provide information about a feature.
- If the details apply to the whole page, place them at the top
of the page, but after the front matter.
- If they apply to a specific section, place the details under the applicable
section titles.
Availability details include the tier, offering, status, and history.
The Markdown for availability details should look like the following:
```markdown
title: 'Topic title'
---
{{</* details */>}}
- Tier: Free, Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated
- Status: Experiment
{{</* /details */>}}
{{</* history */>}}
- [Introduced](https://link-to-issue) in GitLab 16.3.
- Updated in GitLab 16.4.
{{</* /history */>}}
```
## Available options
Use the following text for the tier, offering, add-on, status, and version history.
### Offering
For offering, use any combination of these entries, in this order, separated by commas:
- `GitLab.com`
- `GitLab Self-Managed`
- `GitLab Dedicated`
For example:
- `GitLab.com`
- `GitLab.com, GitLab Self-Managed`
- `GitLab Self-Managed`
- `GitLab Self-Managed, GitLab Dedicated`
{{< alert type="note" >}}
If you have reviewed a page and it specifically doesn't apply to GitLab Dedicated,
[assign metadata](../metadata.md#indicate-gitlab-dedicated-support).
{{< /alert >}}
### Tier
For tier, choose one:
- `Free, Premium, Ultimate`
- `Premium, Ultimate`
- `Ultimate`
{{< alert type="note" >}}
GitLab Dedicated always includes an Ultimate subscription.
{{< /alert >}}
#### Add-ons
For add-ons, the possibilities are:
```markdown
- Add-on: GitLab Duo Pro
- Add-on: GitLab Duo Enterprise
- Add-on: GitLab Duo Pro or Enterprise
- Add-on: GitLab Duo with Amazon Q
```
### Status
For status, choose one:
- `Beta`
- `Experiment`
- `Limited availability`
Generally available features should not have a status.
### History
The documentation site uses [shortcodes](../hugo_migration.md#shortcodes) to render the version history,
for example:
```markdown
{{</* history */>}}
- [Introduced](https://issue-link) in GitLab 16.3.
- [Changed](https://issue-link) in GitLab 16.4.
{{</* /history */>}}
```
In addition:
- Ensure that history notes are listed after the details (if any), and immediately
after the heading.
- Ensure that the output generates properly.
- Ensure the version history begins with `-`.
- If possible, include a link to the related issue. If there is no related issue, link to a merge request, or epic.
- Do not link to [confidential issues](_index.md#confidential-or-restricted-access-links).
- Do not link to the pricing page. Do not include the subscription tier.
#### Updated features
For features that have changed or been updated, add a new list item.
Start the sentence with the feature name or a gerund.
For example:
```markdown
- [Introduced](https://issue-link) in GitLab 13.1.
- Creating an issue from an issue board [introduced](https://issue-link) in GitLab 14.1.
```
Or:
```markdown
- [Introduced](https://issue-link) in GitLab 13.1.
- Notifications for expiring tokens [introduced](https://issue-link) in GitLab 14.3.
```
#### Moved subscription tiers
For features that move to another subscription tier, use `moved`:
```markdown
- [Moved](https://issue-link) from GitLab Ultimate to GitLab Premium in 11.8.
- [Moved](https://issue-link) from GitLab Premium to GitLab Free in 12.0.
```
#### Changed feature status
For a feature status change from experiment to beta, use `changed`:
```markdown
- [Introduced](https://issue-link) as an [experiment](../../policy/development_stages_support.md) in GitLab 15.7.
- [Changed](https://issue-link) from experiment to beta in GitLab 16.0.
```
For a feature status change from beta to limited availability, use `changed`:
```markdown
- [Changed](https://issue-link) from experiment to beta in GitLab 16.0.
- [Changed](https://issue-link) from beta to limited availability in GitLab 16.3.
```
For a change to generally available, use:
```markdown
- [Generally available](https://issue-link) in GitLab 16.10.
```
#### Features made available as part of a program
For features made available to users as part of a program, add a new list item and link to the program.
```markdown
- [Introduced](https://issue-link) in GitLab 15.1.
- Merged results pipelines [added](https://issue-link) to the [Registration Features Program](https://page-link) in GitLab 16.7.
```
#### Features behind feature flags
For features introduced behind feature flags, add details about the feature flag. For more information, see [Document features deployed behind feature flags](../feature_flags.md).
#### Removing versions
Remove history items and inline text that refer to unsupported versions.
GitLab supports the current major version and two previous major versions.
For example, if 18.0 is the current major version, all major and minor releases of
GitLab 18.0, 17.0, and 16.0 are supported.
For the list of current supported versions, see [Version support](https://about.gitlab.com/support/statement-of-support/#version-support).
Remove information about [features behind feature flags](../feature_flags.md)
only if all events related to the feature flag happened in unsupported versions.
If the flag hasn't been removed, readers should know when it was introduced.
#### Timing version removals
When a new major version is about to be released, create merge
requests to remove mentions of the last unsupported version. Only merge
them during the milestone of the new major release.
For example, if GitLab 19.0 is the next major upcoming release:
- The supported versions are 18, 17, and 16.
- When GitLab 19.0 is released, GitLab 16 is no longer supported.
Create merge requests to remove mentions of GitLab 16, but only
merge them during the 19.0 milestone, after 18.11 is released.
## When to add availability details
Assign availability details under:
- Most H1 topic titles, except the pages under `doc/development/*` and `doc/solutions/*`.
- Topic titles for features that have different availability details than the H1 title.
The H1 availability details should be the details that apply to the widest availability
for the features on the page. For example:
- If some sections apply to Premium and Ultimate, and others apply to just Ultimate,
the H1 `Tier:` should be `Premium, Ultimate`.
- If some sections apply to all instances, and others apply to only `GitLab Self-Managed`,
the `Offering:` should be `GitLab.com, GitLab Self-Managed, GitLab Dedicated`.
- If some sections are beta, and others are experiment, the H1 `Status:` should be `Beta`.
If some sections are beta, and others are generally available, then there should
be no `Status:` for the H1.
## When not to add availability details
Do not assign availability details to the following pages:
- Tutorials.
- Pages that compare features from different tiers.
- Pages in the `/development` folder. These pages are automatically assigned a `Contribute` badge.
- Pages in the `/solutions` folder. These pages are automatically assigned a `Solutions` badge.
Also, do not assign them when a feature does not have one obvious subscription tier or offering.
For example, if a feature applies to one tier for GitLab.com and a different availability for GitLab Self-Managed.
In this case, do any or all of the following:
- Use [metadata](../metadata.md#indicate-lack-of-product-availability-details)
to indicate that the page has been reviewed and does not need availability details.
- Use a [`type="note"`](_index.md#note) alert box to describe the availability details, as shown in the example after this list.
- Add availability details under other topic titles where this information makes more sense.
- Do not add availability details under the H1.
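For example, a note that describes availability details might look like this:

```markdown
{{</* alert type="note" */>}}
On GitLab.com, this feature is available for the Ultimate tier only.
{{</* /alert */>}}
```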
### Duplicating tier, offering, or status on subheadings
If a subheading has the same tier, offering, or status as its parent
topic, you don't need to repeat the information in the subheading's
badge.
For example, subheadings that have `Tier: Premium, Ultimate` and `Offering: GitLab.com`
don't need to duplicate the details if the page details match:
```markdown
---
title: My title
---
{{</* details */>}}
- Tier: Premium, Ultimate
- Offering: GitLab.com
{{</* /details */>}}
```
Any lower-level heading that applies to a different tier but same offering would be:
```markdown
## My title
{{</* details */>}}
- Tier: Ultimate
{{</* /details */>}}
```
## Inline availability details
Generally, you should not add availability details inline with other text.
The single source of truth for a feature should be the topic where the
functionality is described.
If you do need to mention availability details inline, write them in plain text.
For example, for an API topic:
```markdown
IDs of the users to assign the issue to. Ultimate only.
```
For more examples, see the [REST API style guide](../restful_api_styleguide.md).
## Inline history text
If you're adding content to an existing topic, add historical information
inline with the existing text. If possible, include a link to the related issue,
merge request, or epic. For example:
```markdown
The voting strategy [in GitLab 13.4 and later](https://issue-link) requires the primary and secondary
voters to agree.
```
## Administrator documentation for availability details
Topics that are only for instance administrators should have the `GitLab Self-Managed` tier.
Instance administrator documentation often includes sections that mention:
- Changing the `gitlab.rb` or `gitlab.yml` files.
- Accessing the rails console or running Rake tasks.
- Doing things in the **Admin** area.
These pages should also mention if the tasks can only be accomplished by an
instance administrator.
---
title: Documentation Style Guide
description: Writing styles, markup, formatting, and other standards for GitLab Documentation.
---
This document defines the standards for GitLab documentation, including grammar, formatting, and more.
For guidelines on specific words, see [the word list](word_list.md).
## The GitLab voice
The GitLab brand guidelines define the
[voice used by the larger organization](https://design.gitlab.com/brand-messaging/brand-voice).
Building on that guidance, the voice in the GitLab documentation strives to be concise,
direct, and precise. The goal is to provide information that's easy to search and scan.
The voice in the documentation should be conversational but brief, friendly but succinct.
## Documentation is the single source of truth (SSoT)
The GitLab documentation is the SSoT for all product information related to implementation,
use, and troubleshooting. The documentation evolves continuously. It is updated with
new products and features, and with improvements for clarity, accuracy, and completeness.
This policy:
- Prevents information silos and makes it easier to find information about GitLab products.
- Does not mean that content cannot be duplicated in multiple places in the documentation.
## Topic types
GitLab uses [topic types](../topic_types/_index.md) to organize the product documentation.
Topic types help users digest information more quickly. They also help address these issues:
- **Content is hard to find.** The GitLab documentation is comprehensive and includes a large amount of
useful information. Topic types create repeatable patterns that make the content easier
to scan and parse.
- **Content is often written from the contributor's point of view.** The GitLab documentation is
written by a variety of contributors. Topic types (tasks, specifically) help put
information into a format that is geared toward helping others, rather than
documenting how a feature was implemented.
## Docs-first methodology
The product documentation should be a complete and trusted resource.
- If the answer to a question exists in documentation, share the link to the
documentation instead of rephrasing the information.
- When you encounter information that's not available in GitLab documentation,
create a merge request (MR) to add the information to the
documentation. Then share the MR to communicate the information.
The more we reflexively add information to the documentation, the more
the documentation helps others efficiently accomplish tasks and solve problems.
## Writing for localization
The GitLab documentation is not localized, but we follow guidelines that help us write for a global audience.
[The GitLab voice](#the-gitlab-voice) dictates that we write clearly and directly with translation in mind.
Our style guide, [word list](word_list.md), and [Vale rules](../testing/_index.md) ensure consistency in the documentation.
When documentation is translated into other languages, the meaning of each word must be clear.
The increasing use of machine translation, GitLab Duo Chat, and other AI tools
means that consistency is even more important.
The following rules can help documentation be translated more efficiently.
Avoid:
- Phrases that hide the subject like [**there is** and **there are**](word_list.md#there-is-there-are).
- Ambiguous pronouns like [**it**](word_list.md#it).
- Words that end in [**-ing**](word_list.md#-ing-words).
- Words that can be confused with one another like [**since**](word_list.md#since) and **because**.
- Latin abbreviations like [**e.g.**](word_list.md#eg) and [**i.e.**](word_list.md#ie).
- Culture-specific references like **kill two birds with one stone**.
Use:
- Standard [text for links](#text-for-links).
- [Lists](#lists) and [tables](#tables) instead of complex sentences and paragraphs.
- Common abbreviations like [**AI**](word_list.md#ai-artificial-intelligence) and
[**CI/CD**](word_list.md#cicd) and abbreviations you've previously spelled out.
Also, keep the following guidance in mind:
- Be consistent with [feature names](#feature-names) and how to interact with them.
- Break up noun strings. For example, instead of **project integration custom settings**,
use **custom settings for project integrations**.
- Format [dates and times](https://learn.microsoft.com/en-us/style-guide/a-z-word-list-term-collections/term-collections/date-time-terms)
consistently and for an international audience.
- Use [illustrations](#illustrations), including screenshots, sparingly.
- For [UI text](#ui-text), allow for up to 30% expansion and contraction in translation.
To see how much a string expands or contracts in another language, paste the string
into [Google Translate](https://translate.google.com/) and review the results.
Ask a colleague who speaks the language to verify if the translation is clear.
## Markdown
All GitLab documentation is written in [Markdown](https://en.wikipedia.org/wiki/Markdown).
The [documentation website](https://docs.gitlab.com) uses the [Hugo](https://gohugo.io/) static site generator with its default Markdown engine, [Goldmark](https://gohugo.io/content-management/formats/#markdown).
Markdown formatting is tested by using [markdownlint](../testing/markdownlint.md) and [Vale](../testing/vale.md).
### HTML in Markdown
Hard-coded HTML is valid, although it's discouraged for a few reasons:
- Custom markup has potential to break future site-wide changes or design system updates.
- Custom markup does not have test coverage to ensure consistency across the site.
- Custom markup might not be responsive or accessible.
- Custom markup might not adhere to Pajamas guidelines.
- HTML and CSS in Markdown do not render on `/help`.
- Hand-coding HTML can be error-prone. It's possible to break the page layout or other components with malformed HTML.
HTML is permitted if:
- No equivalent exists in Markdown.
- The content is reviewed and approved by a technical writer.
- The need for a custom element is urgent and cannot wait for implementation by Technical Writing engineers.
If you have an idea or request for a new element that would be useful on the Docs site,
submit a [feature request](https://gitlab.com/gitlab-org/technical-writing/docs-gitlab-com/-/issues/new?issuable_template=Default).
### Heading levels in Markdown
Each documentation page must include a `title` attribute in its [metadata](../metadata.md).
The `title` becomes the `H1` element when rendered to HTML.
Do not add an `H1` heading in Markdown because there can be only one for each page.
- For each subsection, increment the heading level. In other words, increment the number of `#` characters
  in front of the topic title, as shown in the example after this list.
- Avoid heading levels greater than `H5` (`#####`). If you need more than five heading levels, move the topics to a new page instead.
Heading levels greater than `H4` do not display in the right sidebar navigation.
- Do not skip a level. For example, do not go from `##` directly to `####`.
- Leave one blank line before and after the topic title.
- If you use code in topic titles, ensure the code is in backticks.
- Do not use bold text in topic titles.
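For example, a page with one level of subtopics uses heading levels like this:

```markdown
## Main topic

### Subtopic

### Another subtopic
```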
### Description lists in Markdown
To define terms or differentiate between options, use description lists. For a list of UI elements,
use a regular [list](#lists) instead of a description list.
Do not mix description lists with other styles.
```markdown
Term 1
: Definition of Term 1
Term 2
: Definition of Term 2
```
These lists render like this:
Term 1
: Definition of Term 1
Term 2
: Definition of Term 2
### Shortcodes
[Shortcodes](https://gohugo.io/content-management/shortcodes/) are snippets of template code that we can include in our Markdown content to display non-standard elements on a page, such as alert boxes or tabs.
GitLab documentation uses the following shortcodes:
- [Alert boxes](#alert-boxes)
- Note
- Warning
- Flag
- Disclaimer
- Details
- [Availability details](availability_details.md)
- [Version history](availability_details.md#history)
- [Icons](#gitlab-svg-icons)
- [Tabs](#tabs)
- [Cards](#cards)
- [Maintained versions](#maintained-versions)
## Language
GitLab documentation should be clear and easy to understand.
- Avoid unnecessary words.
- Be clear, concise, and stick to the goal of the topic.
- Write in US English with US grammar. (Tested in [`British.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/.vale/gitlab_base/British.yml).)
### Active voice
In most cases, text is easier to understand and to translate if you use active voice instead of passive.
For example, use:
- The developer writes code for the application.
Instead of:
- Application code is written by the developer.
Sometimes, `GitLab` as the subject can be awkward. For example, `GitLab exports the report`.
In this case, use passive voice instead. For example, `The report is exported`.
### Customer perspective
Focus on the functionality and benefits that GitLab brings to customers,
rather than what GitLab has created.
For example, use:
- Use merge requests to compare code in the source and target branches.
Instead of:
- GitLab allows you to compare code.
- GitLab created the ability to let you compare code.
- Merge requests let you compare code.
Words that indicate you are not writing from a customer perspective are
[allow and enable](word_list.md#allow-enable). Try instead to use
[you](word_list.md#you-your-yours) and to speak directly to the user.
### Building trust
Product documentation should be focused on providing clear, concise information,
without the addition of sales or marketing text.
- Do not use words like [easily](word_list.md#easily) or [simply](word_list.md#simply-simple).
- Do not use marketing phrases like "This feature will save you time and money."
Instead, focus on facts and achievable goals. Be specific. For example:
- The build time can decrease when you use this feature.
- Use this feature to save time when you create a project. The API creates the file and you
do not have to manually intervene.
### Self-referential writing
Avoid writing about the document itself. For example, do not use:
- This page shows...
- This guide explains...
These phrases slow the user down. Instead, get right to the point. For example, instead of:
- This page explains different types of pipelines.
Use:
- GitLab has different types of pipelines to help address your development needs.
Tested in [`SelfReferential.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/.vale/gitlab_base/SelfReferential.yml).
### Capitalization
As a company, we tend toward lowercase.
#### Topic titles
Use sentence case for topic titles. For example:
- `# Use variables to configure pipelines`
- `## Use the To-Do List`
#### UI text
When referring to specific user interface text, like a button label, page, tab,
or menu item, use the same capitalization that's displayed in the user interface.
The only exception is text that's all uppercase (for example, `RECENT FLOWS`).
In this case, use sentence case.
If you think the user interface text contains style mistakes,
create an issue or an MR to propose a change to the user interface text.
#### Feature names
Feature names should be lowercase.
However, in a few rare cases, features can be title case. These exceptions are:
- Added as a proper name to [markdownlint](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.markdownlint.yml),
so they can be consistently applied across all documentation.
- Added to the [word list](word_list.md).
If the term is not in the word list, ask a GitLab Technical Writer for advice.
For assistance naming a feature and ensuring it meets GitLab standards, see
[the handbook](https://handbook.gitlab.com/handbook/product/categories/gitlab-the-product/#naming-features).
Do not match the capitalization of terms or phrases on the [Features page](https://about.gitlab.com/features/)
or [`features.yml`](https://gitlab.com/gitlab-com/www-gitlab-com/blob/master/data/features.yml)
by default.
#### Other terms
Capitalize names of:
- GitLab [product tiers](https://about.gitlab.com/pricing/). For example,
GitLab Free and GitLab Ultimate.
- Third-party organizations, software, and products. For example, Prometheus,
Kubernetes, Git, and The Linux Foundation.
- Methods or methodologies. For example, Continuous Integration,
Continuous Deployment, Scrum, and Agile.
Follow the capitalization style listed at the authoritative source
for the entity, which might use non-standard case styles. For example: GitLab and
npm.
### Fake user information
Do not include real usernames or email addresses in the documentation.
For text:
- Use diverse or non-gendered names with common surnames, like `Sidney Jones`, `Zhang Wei`, or `Alex Garcia`.
- Make fake email addresses end in `example.com`.
For screenshots:
- Temporarily edit the page before you take the screenshot:
1. Right-click the text you want to change.
1. Select **Inspect**.
1. In the **Elements** dialog, edit the HTML to replace text that contains real user information with example data.
1. Close the dialog. All of the user data in the web page should now be replaced with the example data you entered.
1. Take the screenshot.
- Alternatively, create example accounts in a test environment, and take the screenshot there.
- If you can't reproduce the environment, blur the user data by using an image editing tool like Preview on macOS.
### Fake URLs
When including sample URLs in the documentation, use:
- `example.com` when the domain name is generic.
- `gitlab.example.com` when referring only to GitLab Self-Managed.
Use `gitlab.com` for GitLab.com.
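For example, an instruction that applies to GitLab Self-Managed might read:

```markdown
Sign in to your GitLab instance at `https://gitlab.example.com`.
```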
### Fake tokens
Do not use real tokens in the documentation.
Use these fake tokens as examples:
| Token type | Token value |
|:----------------------|:------------|
| Personal access token | `<your_access_token>` |
| Application ID | `2fcb195768c39e9a94cec2c2e32c59c0aad7a3365c10892e8116b5d83d4096b6` |
| Application secret | `04f294d1eaca42b8692017b426d53bbc8fe75f827734f0260710b83a556082df` |
| CI/CD variable | `Li8j-mLUVA3eZYjPfd_H` |
| Project runner token | `yrnZW46BrtBFqM7xDzE7dddd` |
| Instance runner token | `6Vk7ZsosqQyfreAxXTZr` |
| Trigger token | `be20d8dcc028677c931e04f3871a9b` |
| Webhook secret token | `6XhDroRcYPM5by_h-HLY` |
| Health check token | `Tu7BgjR9qeZTEyRzGG2P` |
### Contractions
Contractions are encouraged, and can create a friendly and informal tone,
especially in tutorials, instructional documentation, and
[user interfaces](https://design.gitlab.com/content/punctuation/#contractions).
Some contractions, however, should be avoided:
<!-- vale gitlab_base.Possessive = NO -->
| Do not use a contraction | Example | Use instead |
|-------------------------------|-------------------------------------------|-------------|
| With a proper noun and a verb | **Terraform's** a helpful tool. | **Terraform** is a helpful tool. |
| To emphasize a negative | **Don't** install X with Y. | **Do not** install X with Y. |
| In reference documentation | **Don't** set a limit. | **Do not** set a limit. |
| In error messages | Requests to localhost **aren't** allowed. | Requests to localhost **are not** allowed. |
<!-- vale gitlab_base.Possessive = YES -->
### Possessives
Do not use possessives (`'s`) for proper nouns, like organization or product names.
For example, instead of `Docker's CLI`, use `the Docker CLI`.
For details, see [the Google documentation style guide](https://developers.google.com/style/possessives#product,-feature,-and-company-names).
### Prepositions
Use prepositions at the end of the sentence when needed.
Dangling or stranded prepositions are fine. For example:
- You can leave the group you're a member of.
- Share the credentials with users you want to give access to.
These constructions are more casual than the alternatives:
- You can leave the group of which you're a member.
- Share the credentials with users to which you want to give access.
### Acronyms
If you use an acronym, spell it out on first use on a page. Do not spell it out more than once on a page.
- **Titles**: Try to avoid acronyms in topic titles, especially if the acronym is not widely used.
- **Plurals**: Try not to make acronyms plural. For example, use `YAML files`, not `YAMLs`. If you must make an acronym plural, do not use an apostrophe. For example, use `APIs`, not `API's`.
- **Possessives**: Use caution when making an acronym possessive. If possible,
write the sentence to avoid making the acronym possessive. If you must make the
acronym possessive, consider spelling out the words.
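For example, a first use of a less common acronym might read:

```markdown
Configure single sign-on (SSO) for the group. After SSO is configured, users sign in with their identity provider.
```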
### Numbers
For numbers in text, spell out zero through nine and use numbers for 10 and greater. For more information, see the [Microsoft Style Guide](https://learn.microsoft.com/en-us/style-guide/numbers).
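For example:

```markdown
The group has three projects and 12 members.
```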
## Text
- [Write in Markdown](#markdown).
- Insert an empty line for new paragraphs.
- Insert an empty line between different markups (for example, after every
paragraph, heading, and list). Example:
```markdown
## Heading

Paragraph.

- List item 1
- List item 2
```
### Line length
To make the source content easy to read, and to compare diffs,
follow these best practices.
- Split long lines at approximately 100 characters. (Exception: Do not split links.)
- Start each new sentence on a new line. See the example after this list.
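Two sentences of the same paragraph look like this in the Markdown source:

```markdown
Each sentence starts on a new line in the source file.
Both sentences still render as a single paragraph.
```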
### Comments
To embed comments in Markdown, use standard HTML comments that are not rendered
when published. Example:
```html
<!-- This is a comment that is not rendered -->
```
### Punctuation
Follow these guidelines for punctuation.
<!-- vale gitlab_base.Repetition = NO -->
- End full sentences with a period, including full sentences in tables.
- Use serial (Oxford) commas before the final **and** or **or** in a list of three or more items. (Tested in [`OxfordComma.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/.vale/gitlab_base/OxfordComma.yml).)
<!-- vale gitlab_base.Repetition = YES -->
When spacing content:
- Use one space between sentences. (Use of more than one space is tested in [`SentenceSpacing.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/.vale/gitlab_base/SentenceSpacing.yml).)
- Do not use non-breaking spaces. Use standard spaces instead. (Tested in [`lint-doc.sh`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/scripts/lint-doc.sh).)
- Do not use tabs for indentation. Use spaces instead. Consider configuring your code editor to output spaces instead of tabs when pressing the <kbd>Tab</kbd> key.
Do not use these punctuation characters:
- `;` (semicolon): Use two sentences instead.
- `–` (en dash) or `—` (em dash): Use separate sentences, or commas, instead.
- `“` `”` `‘` `’`: Double or single typographer's ("curly") quotation marks. Use straight quotes instead. (Tested in [`NonStandardQuotes.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/.vale/gitlab_base/NonStandardQuotes.yml).)
### Placeholder text
In a code block, you might want to provide a command or configuration that
uses specific values.
In these cases, use [`<` and `>`](https://en.wikipedia.org/wiki/Usage_message#Pattern)
to call out where a reader must replace text with their own value.
For example:
```shell
cp <your_source_directory> <your_destination_directory>
```
If the placeholder is not in a code block, use `<` and `>` and wrap the placeholder
in a single backtick. For example:
```plaintext
Select **Grant admin consent for `<application_name>`**.
```
### Quotation marks
Follow [the Microsoft guidance for quotation marks](https://learn.microsoft.com/en-us/style-guide/punctuation/quotation-marks).
Try to avoid quotation marks for user input. Use backticks instead.
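For example, use:

```markdown
In the **Name** text box, enter `test`.
```

Instead of:

```markdown
In the **Name** text box, enter "test".
```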
## Text formatting
When formatting text, use:
- [Bold](#bold) for UI elements and pages.
- [Inline code style](#inline-code) for inputs, outputs, code, and similar.
- [Code blocks](#code-blocks) for command line examples, and multi-line inputs, outputs, code, and similar.
- [`<kbd>`](#keyboard-commands) for keyboard commands.
### Bold
Use bold for:
- UI elements with a visible label. Match the text and capitalization of the label.
- Navigation paths.
Do not use bold for keywords or emphasis.
UI elements include:
- Buttons
- Checkboxes
- Settings
- Menus
- Pages
- Tabs
For example:
- Select **Cancel**.
- On the **Issues** page...
- On the **Pipelines** tab...
To make text bold, wrap it with double asterisks (`**`). For example:
```markdown
1. Select **Cancel**.
```
When you use bold format for UI elements, place any punctuation outside the bold tag.
This rule includes periods, commas, colons, and right-angle brackets (`>`).
The punctuation is part of the sentence structure rather than the UI element that you're emphasizing.
Include punctuation in the bold tag when it's part of the UI element itself.
For example:
- `**Start a review**: This is a description of the button that starts a review.`
- `Select **Overview** > **Users**.`
### Inline code
Inline code is text that's wrapped in single backticks (`` ` ``). For example:
```markdown
In the **Name** text box, enter `test`.
```
Use inline code for:
- Text a user enters in the UI.
- Short inputs and outputs like `true`, `false`, `Job succeeded`, and similar.
- Filenames, configuration parameters, keywords, and code. For example,
`.gitlab-ci.yml`, `--version`, or `rules:`.
- Short error messages.
- API and HTTP methods (`POST`).
- HTTP status codes. Full (`404 File Not Found`) and abbreviated (`404`).
- HTML elements. For example, `<sup>`. Include the angle brackets.
For example:
- In the **Name** text box, enter `test`.
- Use the `rules:` CI/CD keyword to control when to add jobs to a pipeline.
- Send a `DELETE` request to delete the runner. Send a `POST` request to create one.
- The job log displays `Job succeeded` when complete.
### Code blocks
Code blocks separate code text from regular text, and can be copy-pasted by users.
Use code blocks for:
- CLI and [cURL commands](../restful_api_styleguide.md#curl-commands).
- Multi-line inputs, outputs, and code samples that are too large for [inline code](#inline-code).
To add a code block, add triple backticks (```` ``` ````) above and below the text,
with a syntax name at the top for proper syntax highlighting. For example:
````markdown
```markdown
This is a code block that uses Markdown to demonstrate **bold** and `backticks`.
```
````
When you use code blocks:
- Add a blank line above and below code blocks.
- Use one of the [supported syntax names](https://gohugo.io/content-management/syntax-highlighting/#languages).
Use `plaintext` if no better option is available.
- Use quadruple backticks (````` ```` `````) when the code block contains another (nested) code block
which has triple backticks already. The example above uses quadruple backticks internally
to illustrate the code block format.
To represent missing information in a code block, use a comment or an [ellipsis](word_list.md#ellipsis-ellipses). For example:
- `# Removed for readability`
- `// ...`
### Keyboard commands
Use the HTML `<kbd>` tag when referring to keystroke presses. For example:
```plaintext
To stop the command, press <kbd>Control</kbd>+<kbd>C</kbd>.
```
This example renders as:
To stop the command, press <kbd>Control</kbd>+<kbd>C</kbd>.
### Italics and emphasis
Avoid [italics for emphasis](../../../user/markdown.md#emphasis) in product documentation.
Instead, write content that is clear enough that emphasis is not needed. GitLab and
<https://docs.gitlab.com> use a sans-serif font, but italic text [does not stand out in a page using sans-serif](https://practicaltypography.com/bold-or-italic.html).
## Lists
Use lists to present information in a format that is easier to scan.
- Make all items in the list parallel.
For example, do not start some items with nouns and others with verbs.
- Start all items with a capital letter.
- Give all items the same punctuation.
- Do not use a period if the item is not a full sentence.
- Use a period after every full sentence.
Do not use semicolons or commas.
- Add a colon (`:`) after the introductory phrase.
For example:
```markdown
To complete a task:
- Do this thing.
- Do this other thing.
```
- Do not use [bold](#bold) formatting to define keywords or concepts in a list. Use bold for UI element labels only. For example:
  - `**Start a review**: This is a description of the button that starts a review.`
  - `Offline environments: This is a description of offline environments.`
For keywords and concepts, consider a [reference topic](../topic_types/reference.md) or
[description list](#description-lists-in-markdown) for alternative formatting.
### Choose between an ordered or unordered list
Use ordered lists for a sequence of steps. For example:
```markdown
Follow these steps to do something.
1. First, do the first step.
1. Then, do the next step.
1. Finally, do the last step.
```
Use an unordered list when the steps do not need to be completed in order. For example:
```markdown
These things are imported:
- Thing 1
- Thing 2
- Thing 3
```
### List markup
- Use dashes (`-`) for unordered lists instead of asterisks (`*`).
- Start every item in an ordered list with `1.`. When rendered, the list items
are sequential.
- Leave a blank line before and after a list.
- Begin a line with spaces (not tabs) to denote a [nested sub-item](#nesting-inside-a-list-item).
### Nesting inside a list item
The following items can be nested under a list item, so they render with the same
indentation as the list item:
- [Code blocks](#code-blocks)
- [Blockquotes](#blockquotes)
- [Alert boxes](#alert-boxes)
- [Illustrations](#illustrations)
- [Tabs](#tabs)
Nested items should always align with the first character of the list
item. For unordered lists (using `-`), use two spaces for each level of
indentation:
````markdown
- Unordered list item 1

  A nested line that uses 2 spaces to align with the `U` above.

- Unordered list item 2

  > A quote block that will nest
  > inside list item 2.

- Unordered list item 3

  ```plaintext
  a code block that nests inside list item 3
  ```

- Unordered list item 4

  ![an image that will nest inside list item 4](image.png)
````
For ordered lists, use three spaces for each level of indentation:
````markdown
1. Ordered list item 1

   A nested line that uses 3 spaces to align with the `O` above.
````
You can nest lists in other lists.
```markdown
1. Ordered list item one.
1. Ordered list item two.
   - Nested unordered list item one.
   - Nested unordered list item two.
1. Ordered list item three.

- Unordered list item one.
- Unordered list item two.
  1. Nested ordered list item one.
  1. Nested ordered list item two.
- Unordered list item three.
```
## Tables
Tables should be used to describe complex information in a straightforward
manner. In many cases, an unordered list is sufficient to describe a
list of items with a single description for each item. But, if you have data
that's best described by a matrix, tables are the best choice.
### Creation guidelines
To keep tables accessible and scannable, tables should not have any
empty cells. If no otherwise meaningful value for a cell exists, consider entering
**N/A** for 'not applicable' or **None**.
To make tables easier to maintain:
- If the table has a `Description` column, make it the right-most column if possible.
- Add additional spaces to make the column widths consistent. For example:
```markdown
| Parameter | Default      | Requirements |
|-----------|--------------|--------------|
| `param1`  | `true`       | A and B.     |
| `param2`  | `gitlab.com` | None         |
```
- Skip the additional spaces in the rightmost column for tables that are very wide.
For example:
```markdown
| Setting   | Default | Description |
|-----------|---------|-------------|
| Setting 1 | `1000`  | A short description. |
| Setting 2 | `2000`  | A long description that would make the table too wide and add too much whitespace if every cell in this column was aligned. |
| Setting 3 | `0`     | Another short description. |
```
- The header (first) row and the delimiter (second) row of the table should be the same length.
Do not use shortened delimiter rows like `|-|-|-|` or `|--|--|`.
- If a large table does not auto-format well, you can skip the auto-format but:
- Make the first two rows the same length.
- Put spaces between the `|` characters and cell contents.
For example `| Cell 1 | Cell 2 |`, not `|Cell1|Cell2|`.
### Editor extensions for table formatting
To ensure consistent table formatting across all Markdown files, consider formatting your tables
with the VS Code [Markdown Table Formatter](https://github.com/fcrespo82/vscode-markdown-table-formatter).
To configure this extension to follow the guidelines above, turn on the **Follow header row length** setting.
To turn on the setting:
- In the UI:
  1. In the VS Code menu, go to **Code** > **Settings** > **Settings**.
  1. Search for `Limit Last Column Length`.
  1. In the **Limit Last Column Length** dropdown list, select **Follow header row length**.
- In your VS Code `settings.json`, add a new line with:

  ```json
  {
    "markdown-table-formatter.limitLastColumnLength": "Follow header row length"
  }
  ```
To format a table with this extension, select the entire table, right-click the selection,
and select **Format Selection With**. Select **Markdown Table Formatter** in the VS Code Command Palette.
If you use Sublime Text, try the
[Markdown Table Formatter](https://packagecontrol.io/packages/Markdown%20Table%20Formatter)
plugin, but it does not have a **Follow header row length** setting.
### Updates to existing tables
When you add or edit rows in an existing table, some rows might not be aligned anymore.
Don't realign the entire table if only changing a few rows.
If you realign the columns to account for the width, the diff becomes difficult to read,
because the entire table shows as modified.
Markdown tables naturally fall out of alignment over time, but still render correctly
on `docs.gitlab.com`. The technical writing team can realign cells the next time
the page is refactored.
### Table headers
Use sentence case for table headers. For example, `Keyword value` or `Project name`.
### Feature tables
When creating tables of lists of features (such as the features
available to each role on the [Permissions](../../../user/permissions.md#project-members-permissions)
page), use these phrases:
| Option | Markdown | Displayed result |
|--------|---------------------------------------------------|------------------|
| No | `{{</* icon name="dash-circle" */>}} No` | {{< icon name="dash-circle" >}} No |
| Yes | `{{</* icon name="check-circle-filled" */>}} Yes` | {{< icon name="check-circle-filled" >}} Yes |
Do not use these SVG icons in API documentation.
Instead, follow the [API topic template](../restful_api_styleguide.md#api-topic-template).
### Footnotes
Use footnotes below tables only when you cannot include the content in the table itself.
For example, use footnotes when you must:
- Provide the same information in several table cells.
- Include content that would disrupt the table's layout.
#### Footnote format
In the table, use the HTML superscript tag `<sup>` for each footnote.
Put the tag at the end of the sentence. Leave one space between the sentence and the tag.
For example:
```markdown
| App name | Description |
|:---------|:------------|
| App A | Description text. <sup>1</sup> |
| App B | Description text. <sup>2</sup> |
```
When you add a footnote, do not re-sort the existing tags in the table.
For the footnotes below the table, use `**Footnotes**:` followed by an ordered list.
For example:
```markdown
**Footnotes**:
1. This is the first footnote.
1. This is the second footnote.
```
The table and footnotes would render as follows:
| App name | Description |
|:---------|:------------|
| App A | Description text. <sup>1</sup> |
| App B | Description text. <sup>2</sup> |
**Footnotes**:
1. This is the first footnote.
1. This is the second footnote.
##### Five or more footnotes
If you have five or more footnotes that you cannot include in the table itself,
use consecutive numbers for the list items.
If you use consecutive numbers, you must disable Markdown rule `029`:
```markdown
**Footnotes**:
<!-- Disable ordered list rule https://github.com/DavidAnson/markdownlint/blob/main/doc/Rules.md#md029---ordered-list-item-prefix -->
<!-- markdownlint-disable MD029 -->
1. This is the first footnote.
2. This is the second footnote.
3. This is the third footnote.
4. This is the fourth footnote.
5. This is the fifth footnote.
<!-- markdownlint-enable MD029 -->
```
## Links
Links are an important way to help readers find what they need.
However, most content is found by searching, and you should avoid putting too many links on any page.
Too many links can hinder readability.
- Do not duplicate links on the same page. For example, on **Page A**, do not link to **Page B** multiple times.
- Do not use links in headings. Headings that contain links cause errors.
- Do not use a hard line wrap between any words in a link.
- Avoid multiple links in a single paragraph.
- Avoid multiple links in a single task.
- On any one page, try not to use more than 15 links to other pages.
- Consider the use of [Related topics](../topic_types/_index.md#related-topics) to reduce links that interrupt the flow of a task.
- Try to avoid anchor links to sections on the same page. Let users rely on the right navigation instead.
### Inline links
Use inline links instead of reference links. Inline links are easier to parse
and edit.
([Vale](../testing/vale.md) rule: [`ReferenceLinks.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/.vale/gitlab_docs/ReferenceLinks.yml))
- Do:

  ```markdown
  For more information, see [merge requests](path/to/merge_requests.md)
  ```

- Don't:

  ```markdown
  For more information, see [merge requests][1].

  [1]: path/to/merge_requests.md
  ```
### Links in the same repository
To link to another documentation (`.md`) file in the same repository:
- Use an inline link with a relative file path. For example, `[GitLab.com settings](../user/gitlab_com/_index.md)`.
- Put the entire link on a single line, even if the link is very long. ([Vale](../testing/vale.md) rule: [`MultiLineLinks.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/.vale/gitlab_base/MultiLineLinks.yml)).
{{< alert type="note" >}}
In the GitLab repository, do not link to the `/development` directory from any other directory.
{{< /alert >}}
To link to a file outside of the documentation files, for example to link from development
documentation to a specific code file:
- Use a full URL. For example: ``[`app/views/help/show.html.haml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/views/help/show.html.haml)``
- Optional. Use a full URL with a specific ref. For example: ``[`app/views/help/show.html.haml`](https://gitlab.com/gitlab-org/gitlab/-/blob/6d01aa9f1cfcbdfa88edf9d003bd073f1a6fff1d/app/views/help/show.html.haml)``
### Links in separate repositories
To link to a page in a different repository, use a full URL.
For example, to link from a page in the GitLab repository to the Charts repository,
use a URL like `[GitLab Charts documentation](https://docs.gitlab.com/charts/)`.
### Anchor links
Each topic title has an anchor link. For example, a topic with the title
`## This is an example` has the anchor `#this-is-an-example`.
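For example, to link to that topic from the same page or from another page:

```markdown
- [This is an example](#this-is-an-example)
- [This is an example](path/to/page.md#this-is-an-example)
```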
When you change topic title text, the anchor link changes. To avoid broken links:
- Do not use step numbers in topic titles.
- When possible, do not use words that might change in the future.
#### Changing links and titles
When you change a topic title, the anchor link changes. If other documentation pages
or code files link to this anchor, [pipeline jobs could fail](../testing/_index.md).
Consider [running the link checks locally](../testing/links.md) before pushing your changes
to prevent failing pipelines.
### Text for links
Follow these guidelines for link text.
#### Standard text
Use text that follows one of these patterns:
- `For more information, see [link text](link.md)`.
- `To [DO THIS THING], see [link text](link.md)`
For example:
- `For more information, see [merge requests](link.md).`
- `To create a review app, see [review apps](link.md).`
To expand on this text, use phrases like
`For more information about this feature, see...`
Do not use the following constructions:
- `Learn more about...`
- `To read more...`.
- `For more information, see the [Merge requests](link.md) page.`
- `For more information, see the [Merge requests](link.md) documentation.`
#### Descriptive text rather than `here`
Use descriptive text for links, rather than words like `here` or `this page.`
For the name of a topic or page, use lowercase.
You don't have to match the text to the topic or page name exactly.
Edit the text to be descriptive and fit the guidelines.
Do:
- `For more information, see [merge requests](link.md)`.
- `For more information, see [roles and permissions](link.md)`.
- `For more information, see [how to configure common settings](link.md)`.
Don't:
- `For more information, see [this page](link.md).`
- `For more information, go [here](link.md).`
- `For more information, see [this documentation](link.md).`
#### Links to issues
When linking to an issue, include the issue number in the link. For example:
- `For more information, see [issue 12345](link.md).`
Do not use the pound sign (`issue #12345`).
### Links to external documentation
When possible, avoid links to external documentation. These links can become outdated and are difficult to maintain.
- [They lead to link rot](https://en.wikipedia.org/wiki/Link_rot).
- [They create issues with maintenance](https://gitlab.com/gitlab-org/gitlab/-/issues/368300).
Sometimes links are required. They might clarify troubleshooting steps or help prevent duplication of content.
Sometimes they are more precise and will be maintained more actively.
For each external link you add, weigh the customer benefit with the maintenance difficulties.
### Links to handbook
Limit links to the handbook. Some links are unavoidable, like licensing terms, data usage and access policies,
testing agreements, and terms and conditions.
### Confidential or restricted access links
Don't link directly to:
- [Confidential issues](../../../user/project/issues/confidential_issues.md).
- Internal handbook pages.
- Project features that require [special permissions](../../../user/permissions.md)
to view.
These links fail for:
- Those without sufficient permissions.
- Automated link checkers.
If you must use one of these links:
- If the link is to a confidential issue or internal handbook page, mention that the issue or page is visible only to GitLab team members.
- If the link requires a specific role or permissions, mention that information.
- Put the link in backticks so that it does not cause link checkers to fail.
Examples:
- ```markdown
  GitLab team members can view more information in this confidential issue:
  `https://gitlab.com/gitlab-org/gitlab/-/issues/<issue_number>`
  ```

- ```markdown
  GitLab team members can view more information in this internal handbook page:
  `https://internal.gitlab.com/handbook/<link>`
  ```

- ```markdown
  Users with the Maintainer role for the project can use the pipeline editor:
  `https://gitlab.com/gitlab-org/gitlab/-/ci/editor`
  ```
### Link to specific lines of code
When linking to specific lines in a file, link to a commit instead of to the
branch. Lines of code change over time. Linking to a line by using
the commit link ensures the user lands on the line you're referring to. The
**Permalink** dropdown item in the ellipsis menu, displayed when viewing a file in a project,
provides a link to the most recent commit of that file.
- Do: `[link to line 3](https://gitlab.com/gitlab-org/gitlab/-/blob/11f17c56d8b7f0b752562d78a4298a3a95b5ce66/.gitlab/issue_templates/Feature%20proposal.md#L3)`
- Don't: `[link to line 3](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.gitlab/issue_templates/Feature%20proposal.md#L3).`
If that linked expression has changed line numbers due to additional
commits, you can still search the file for that query. In this case, update the
document to ensure it links to the most recent version of the file.
## Navigation
When documenting how to navigate the GitLab UI:
- Always use location, then action.
  - From the **Visibility** dropdown list (location), select **Public** (action).
- Be brief and specific. For example:
  - Do: Select **Save**.
  - Do not: Select **Save** for the changes to take effect.
- If a step must include a reason, start the step with it. This helps the user scan more quickly.
  - Do: To view the changes, in the merge request, select the link.
  - Do not: Select the link in the merge request to view the changes.
### Names for menus
Use these terms when referring to the main GitLab user interface
elements:
- **Left sidebar**: This is the navigation sidebar on the left of the user
  interface.
  - Do not use the phrase `context switcher` or `switch contexts`. Instead, try to direct the user to the exact location with a set of repeatable steps.
  - Do not use the phrase `the **Explore** menu` or `the **Your work** sidebar`. Instead, use `the left sidebar`.
- **Right sidebar**: This is the navigation sidebar on the right of the user
  interface, specific to the open issue, merge request, or epic.
### Names for UI elements
All UI elements [should be **bold**](#bold). The `>` in the navigation path should not be bold.
Guidance for individual UI elements is in [the word list](word_list.md).
### How to write navigation task steps
To be consistent, use these examples to write navigation steps in a task topic.
Although alternative steps might exist, including items pinned by default,
use these steps instead.
To open project settings:
```markdown
1. On the left sidebar, select **Search or go to** and find your project.
1. Select **Settings** > **CI/CD**.
1. Expand **General pipelines**.
```
To open group settings:
```markdown
1. On the left sidebar, select **Search or go to** and find your group.
1. Select **Settings** > **CI/CD**.
1. Expand **General pipelines**.
```
To open settings for a top-level group:
```markdown
1. On the left sidebar, select **Search or go to** and find your group.
This group must be at the top level.
1. Select **Settings** > **CI/CD**.
1. Expand **General pipelines**.
```
To open either project or group settings:
```markdown
1. On the left sidebar, select **Search or go to** and find your project or group.
1. Select **Settings** > **CI/CD**.
1. Expand **General pipelines**.
```
To create a project:
```markdown
1. On the left sidebar, at the top, select **Create new** ({{< icon name="plus" >}}) and **New project/repository**.
```
To create a group:
```markdown
1. On the left sidebar, at the top, select **Create new** ({{< icon name="plus" >}}) and **New group**.
```
To open the **Admin** area:
```markdown
1. On the left sidebar, at the bottom, select **Admin**.
1. Select **Settings** > **CI/CD**.
```
You do not have to repeat `On the left sidebar` in your second step.
To open the **Your work** menu item:
```markdown
1. On the left sidebar, select **Search or go to**.
1. Select **Your work**.
```
To select your avatar:
```markdown
1. On the left sidebar, select your avatar.
```
To save the selection in some dropdown lists:
```markdown
1. Go to your issue.
1. On the right sidebar, in the **Iteration** section, select **Edit**.
1. From the dropdown list, select the iteration to associate this issue with.
1. Select any area outside the dropdown list.
```
To view all your projects:
```markdown
1. On the left sidebar, select **Search or go to**.
1. Select **View all my projects**.
```
To view all your groups:
```markdown
1. On the left sidebar, select **Search or go to**.
1. Select **View all my groups**.
```
### Optional steps
If a step is optional, start the step with the word `Optional` followed by a period.
For example:
```markdown
1. Optional. Enter a description for the job.
```
### Recommended steps
If a step is recommended, start the step with the word `Recommended` followed by a period.
For example:
```markdown
1. Recommended. Enter a description for the job.
```
### Documenting keyboard shortcuts and commands
Write UI instructions instead of keyboard commands when both options exist.
This guideline applies to GitLab and third-party applications, like VS Code.
Keyboard commands for GitLab are documented in [GitLab keyboard shortcuts](../../../user/shortcuts.md).
### Documenting multiple fields at once
If the UI text sufficiently explains the fields in a section, do not include a task step for every field.
Instead, summarize multiple fields in a single task step.
Use the phrase **Complete the fields**.
For example:
1. On the left sidebar, select **Search or go to** and find your project.
1. Select **Settings** > **Repository**.
1. Expand **Push rules**.
1. Complete the fields.
If you are documenting multiple fields and only one field needs explanation, do it in the same step:
1. Expand **Push rules**.
1. Complete the fields. **Branch name** must be a regular expression.
To describe multiple fields, use unordered list items:
1. Expand **General pipelines**.
1. Complete the fields.
   - **Branch name** must be a regular expression.
   - **User** must be a user with at least the **Maintainer** role.
## Illustrations
GitLab documentation uses two illustration types:
- Screenshots, used to show a portion of the GitLab user interface.
- Diagrams, used to illustrate processes or relationships between entities.
Illustrations can help the reader understand a concept, where they are in a complicated process,
or how they should interact with the application. Use illustrations sparingly because:
- They become outdated.
- They are difficult and expensive to localize.
- They cannot be read by screen readers.
If you must use illustrations in documentation, they should:
- Supplement the text, not replace it.
The reader should not have to rely only on the illustration to get the needed information.
- Have an introductory sentence in the preceding text.
For example, `The following diagram illustrates the product analytics flow:`.
- Be accessible. For more information, see the guidelines specific to screenshots and diagrams.
- Exclude personally identifying information.
### Screenshots
Use screenshots to show a portion of the GitLab user interface, if some relevant information
can't be conveyed in text.
#### Capture the screenshot
When you take screenshots:
- Ensure the content in the screenshot adheres to the
[GitLab SAFE framework](https://handbook.gitlab.com/handbook/legal/safe-framework/). To check,
follow the
[SAFE flowchart](https://handbook.gitlab.com/handbook/legal/safe-framework/#safe-flowchart).
- **Ensure it provides value.** Don't use `lorem ipsum` text.
Try to replicate how the feature would be used in a real-world scenario, and
[use realistic text](#fake-user-information).
- **Capture only the relevant UI.** Don't include unnecessary white
space or areas of the UI that don't help illustrate the point. The
sidebars in GitLab can change, so don't include
them in screenshots unless absolutely necessary.
- **Keep it small.** If you don't have to show the full width of the screen, don't.
Reduce the size of your browser window as much as possible to keep elements close
together and reduce empty space. Try to keep the screenshot dimensions as small as possible.
- **Review how the image renders on the page.** Preview the image locally or use the
review app in the merge request. Make sure the image isn't blurry or overwhelming.
- **Be consistent.** Coordinate screenshots with the other screenshots already on
a documentation page for a consistent reading experience. Ensure your navigation theme
is set to the default preference **Indigo** and the syntax highlighting theme is also set to the default preference **Light**.
#### Add callouts
To emphasize an area in a screenshot, use an arrow.
- For color, use `#EE2604`. If you use the Preview application on macOS, this is the default red.
- For the line width, use 3 pt. If you use the Preview application on macOS, this is the third line in the list.
- Use the arrow style shown in the following image.
- If you have multiple arrows, make them parallel when possible.
![Example of the arrow style for callouts.](img/callout_arrow_example_vX_Y.png)
#### Image requirements
- Resize any wide or tall screenshots.
- Width should be 1000 pixels or less.
- Height should be 500 pixels or less.
- Make sure the screenshot is still clear after being resized and compressed.
- All images **must** be [compressed](#compress-images) to 100 KB or less.
In many cases, 25-50 KB or less is often possible without reducing image quality.
- Save the image with a lowercase filename that's descriptive of the feature
or concept in the image:
- If the image is of the GitLab interface, append the GitLab version to the filename,
based on this format: `image_name_vX_Y.png`. For example, for a screenshot taken
from the pipelines page of GitLab 11.1, a valid name is `pipelines_v11_1.png`.
- If you're adding an illustration that doesn't include parts of the user interface,
add the release number corresponding to the release the image was added to.
For an MR added to 11.1's milestone, a valid name for an illustration is `devops_diagram_v11_1.png`.
- Place images in a separate directory named `img/` in the same directory where
the `.md` document that you're working on is located.
- Do not link to externally-hosted images. Download a copy and store it in the appropriate `img` directory within the docs directory.
- Consider PNG images instead of JPEG.
- Compress GIFs with <https://ezgif.com/optimize> or similar tool.
See also how to link and embed [videos](#videos) to illustrate the documentation.
#### Compress images
You should always compress any new images you add to the documentation. One
known tool is [`pngquant`](https://pngquant.org/), which is cross-platform and
open source. Install it by visiting the official website and following the
instructions for your OS.
If you use macOS and want all screenshots to be compressed automatically, read
[One simple trick to make your screenshots 80% smaller](https://about.gitlab.com/blog/2020/01/30/simple-trick-for-smaller-screenshots/).
GitLab has a [Ruby script](https://gitlab.com/gitlab-org/gitlab/-/blob/master/bin/pngquant)
to simplify the manual process. In the root directory of your local
copy of `https://gitlab.com/gitlab-org/gitlab`, run in a terminal:
- Before compressing, if you want, check that all documentation PNG images have
  been compressed:

  ```shell
  bin/pngquant lint
  ```

- Compress all documentation PNG images by using `pngquant`:

  ```shell
  bin/pngquant compress
  ```

- Compress specific files:

  ```shell
  bin/pngquant compress doc/user/img/award_emoji_select.png doc/user/img/markdown_logo.png
  ```

- Compress all PNG files in a specific directory:

  ```shell
  bin/pngquant compress doc/user/img
  ```
#### Animated images
Avoid animated images (such as animated GIFs). They can be distracting
and annoying for users.
If you're describing a complicated interaction in the user interface and want to
include a visual representation to help readers understand it, you can:
- Use a static image (screenshot) and if necessary, add callouts to emphasize an area of the screen.
- Create a short video of the interaction and link to it.
#### Add the image link to content
The Markdown code for including an image in a document is:
`![Alt text that describes the context of the image.](img/image_name_vX_Y.png)`
#### Alternative text
Alt text provides an accessible experience.
Screen readers use alt text to describe the image, and alt text displays
if an image fails to download.
Alt text should describe the context of the image, not the content. Add context that
relates to the topic of the page or section. Consider what you would say about the image
if you were helping someone read and interact with the page and they couldn't see it.
Do:

`![Issue board with a column for each stage of the team's workflow.](img/issue_board_vX_Y.png)`

Do not:

`![Image of an issue board. Board, issues, workflow, agile.](img/issue_board_vX_Y.png)`
When writing alt text:
- Write short, descriptive alt text in 155 characters or fewer.
Screen readers typically stop reading after this many characters.
- If the image has complex information like a workflow diagram, use short alt text
to identify the image and include detailed information in the text.
- Use a period at the end of the string, whether it's a sentence or not.
- Use sentence case and avoid all caps.
Some screen readers read capitals as individual letters.
- Do not use phrases like **Image of** or **Graphic of**.
- Do not use a string of keywords.
Include keywords in the text to enhance context.
- Introduce the image in the topic, not the alt text.
- Try to avoid repeating text you've already used in the topic.
- Do not use inline styling like bold, italics, or backticks.
Screen readers read `**text**` as `star star text star star`.
- Use an empty alt text tag (`alt=""`) instead of omitting the tag altogether when the image does not add any unique information to the page. For example, when the image is decorative or is already fully described in the body text or caption. An empty alt tag tells assistive technologies that you have omitted the text intentionally, while a missing alt tag is ambiguous.
#### Automatic screenshot generator
You can use an automatic screenshot generator to take and compress screenshots.
1. Set up the [GitLab Development Kit (GDK)](https://gitlab.com/gitlab-org/gitlab-development-kit/blob/main/doc/howto/gitlab_docs.md).
1. Go to the subdirectory with your cloned GitLab repository, typically `gdk/gitlab`.
1. Make sure that your GDK database is fully migrated: `bin/rake db:migrate RAILS_ENV=development`.
1. Install [`pngquant`](https://pngquant.org/). For more information, see the tool website.
1. Run `scripts/docs_screenshots.rb spec/docs_screenshots/<name_of_screenshot_generator>.rb <milestone-version>`.
1. Identify the location of the screenshots, based on the `gitlab/doc` location defined by the `it` parameter in your script.
1. Commit the newly created screenshots.
##### Extending the tool
To add an additional screenshot generator:
1. In the `spec/docs_screenshots` directory, add a new file with a `_docs.rb` extension.
1. Add the following information to your file:
```ruby
require 'spec_helper'

RSpec.describe '<What I am taking screenshots of>', :js do
  include DocsScreenshotHelpers # Helper that enables the screenshots taking mechanism

  before do
    page.driver.browser.manage.window.resize_to(1366, 1024) # length and width of the page
  end
```
1. To each `it` block, add the path where the screenshot is saved:
```ruby
it '<path/to/images/directory>'
```
You can take a screenshot of a page with `visit <path>`.
To avoid blank screenshots, use `expect` to wait for the content to load.
###### Single-element screenshots
You can take a screenshot of a single element.
- Add the following to your screenshot generator file:
```ruby
screenshot_area = find('<element>') # Find the element
scroll_to screenshot_area # Scroll to the element
expect(screenshot_area).to have_content '<content>' # Wait for the content you want to capture
set_crop_data(screenshot_area, <padding>) # Capture the element with added padding
```
Use `spec/docs_screenshots/container_registry_docs.rb` as a guide to create your own scripts.
### Diagrams
Use a diagram to illustrate a process or the relationship between entities, if the information is too
complex to be understood from text only.
To create a diagram, use either [Mermaid](https://mermaid.js.org/#/) (recommended) or [Draw.io](https://draw.io).
Mermaid is the recommended diagramming tool, but it is not suitable for all situations. For example,
complex diagram requirements might result in a layout that is difficult to understand.
GUI diagramming tools can help authors overcome the complexity and layout issues of Mermaid. Draw.io is
the preferred GUI tool because, when you use the editor, both the diagram and its definition are
stored in the SVG file, so the diagram can be edited later. Draw.io is also integrated with the GitLab wiki.
| Feature | Mermaid | Draw.io |
|-------------------------------------------|-------------------------------------------------------------------------|---------|
| **Editor required** | Text editor | Draw.io editor |
| **WYSIWYG editing** | {{< icon name="dash-circle" >}} No | {{< icon name="check-circle-filled" >}} Yes |
| **Text content findable by `grep`** | {{< icon name="check-circle-filled" >}} Yes | {{< icon name="dash-circle" >}} No |
| **Appearance controlled by** | Web site's CSS | Diagram's author |
| **File format** | SVG | SVG |
| **VS Code integration (with extensions)** | {{< icon name="check-circle-filled" >}} Yes (Preview and local editing) | {{< icon name="check-circle-filled" >}} Yes (Preview and local editing) |
| **Generated dynamically** | {{< icon name="check-circle-filled" >}} Yes | {{< icon name="dash-circle" >}} No |
#### Guidelines
To create accessible and maintainable diagrams, follow these guidelines:
- Keep diagrams simple and focused. Include only essential elements and information.
- Use different but consistent visual cues (such as shape, color, and font) to distinguish between categories:
- Rectangles for processes or steps.
- Diamonds for decision points.
- Solid lines for direct relationships between elements.
- Dotted lines for indirect relationships between elements.
- Arrows for flow or direction in a process.
- GitLab Sans font.
- Add clear labels and brief descriptions to diagram elements.
- Include a title and brief description for the diagram.
- For complex processes, consider creating multiple simple diagrams instead of one large diagram.
- Validate diagrams work well when viewed on different devices and screen sizes.
- Do not include links. Links embedded in diagrams with [`click` actions](https://mermaid.js.org/syntax/classDiagram.html#interaction) are not testable with our link checking tools.
- Update diagrams along with documentation or code when processes change to maintain accuracy.
#### Create a diagram with Mermaid
To learn how to create diagrams with the [Mermaid syntax](https://mermaid.js.org/intro/syntax-reference.html),
see the [Mermaid user guide](https://mermaid.js.org/intro/getting-started.html)
and the examples on the Mermaid site.
To create a diagram for GitLab documentation with Mermaid:
1. In the [Mermaid Live Editor](https://mermaid.live/), create the diagram.
1. Copy the content of the **Code** pane and paste it in the Markdown file, wrapped in a `mermaid` code block. For more
details, see [GitLab Flavored Markdown for Mermaid](../../../user/markdown.md#mermaid).
1. To add GitLab font styling to your diagram, between the Mermaid code block declaration
and the type of diagram, add the following line:
```plaintext
%%{init: { "fontFamily": "GitLab Sans" }}%%
```
1. On the next line after declaring the type of diagram
(like `flowchart` or `sequenceDiagram`), add the following lines for accessibility:
```yaml
accTitle: your diagram title here
accDescr: describe what your diagram does in a single sentence, with no line breaks.
```
Make sure the title and description follow the [alternative text guidelines](#alternative-text).
For example, this flowchart contains both accessibility and font information:
````markdown
```mermaid
%%{init: { "fontFamily": "GitLab Sans" }}%%
flowchart TD
accTitle: Example diagram title
accDescr: A description of your diagram
A[Start here] -->|action| B[next step]
```
````
#### Create a diagram with Draw.io
Use either the [Draw.io](https://draw.io) web application or the (unofficial)
VS Code [Draw.io Integration](https://marketplace.visualstudio.com/items?itemName=hediet.vscode-drawio)
extension to create the diagram. Each tool provides the same diagram editing experience, but the web
application provides editable example diagrams.
##### Use the web application
To create a diagram by using the Draw.io web application:
1. In the [Draw.io](https://draw.io) web application, create the diagram.
Follow the [style guidelines](#style-guidelines).
1. Save the diagram:
1. In the Draw.io web application, select **File** > **Export as** > **SVG**.
1. Select the **Include a copy of my diagram: All pages** checkbox, then select **Export**. Use
the file extension `drawio.svg` to indicate it can be edited in Draw.io.
1. [Add the SVG to the docs as an image](#add-the-image-link-to-content).
These SVGs use the same Markdown as other non-SVG images.
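For example, an exported diagram might be added like this. The filename is only a placeholder:

```markdown
<!-- deployment_diagram.drawio.svg is a placeholder filename -->
![Deployment workflow from commit to production.](img/deployment_diagram.drawio.svg)
```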
##### Use the VS Code extension
To create a diagram by using the Draw.io Integration extension for VS Code:
1. In the directory that will contain the diagram, create an empty file with the suffix
`drawio.svg`.
1. Open the file in VS Code then create the diagram.
Follow the [style guidelines](#style-guidelines).
1. Save the file.
The diagram's definition is stored in Draw.io-compatible format in the SVG file.
1. [Add the SVG to the docs as an image](#add-the-image-link-to-content).
These SVGs use the same Markdown as other non-SVG images.
##### Style guidelines
When you create a diagram in Draw.io, it should be visually consistent with a diagram you would create with Mermaid.
The following rules are an addition to the general [style guidelines](#guidelines).
Fonts:
- Use the Inter font for all text. This font is not included in the default fonts.
To add Inter font as a custom font:
1. From the font dropdown list, select **Custom**.
1. Select **Google fonts** and in the **Font name** text box, enter `Inter`.
Shapes:
- For elements, use the rectangle shape.
- For flowcharts, use shapes from the **Flowchart** shape collection.
- Shapes that represent the same element should have the same shape and size.
- For elements that have text, ensure adequate white space exists between the text and the
shape's outline. If required, increase the size of the shape and **all** similar shapes in the diagram.
Colors:
- Use colors in the [GitLab Design System color range](https://design.gitlab.com/brand-design/color/) only.
- For all elements, shapes, arrows, and text, follow the
[Pajamas guidelines for illustration](https://design.gitlab.com/product-foundations/illustration/).
## Emoji
Don't use the Markdown emoji format, for example `:smile:`, for any purpose. Use
[GitLab SVG icons](#gitlab-svg-icons) instead.
## GitLab SVG icons
You can use icons from the [GitLab SVG library](https://gitlab-org.gitlab.io/gitlab-svgs/)
directly in the documentation. For example, `{{</* icon name="tanuki" */>}}` renders as: {{< icon name="tanuki" >}}.
In most cases, avoid icons in text.
However, use the icon when hover text is the only
available way to describe a UI element. For example, **Delete** or **Edit** buttons
often have hover text only.
When you do use an icon, start with the hover text and follow it with the SVG reference in parentheses.
- Avoid: `Select {{</* icon name="pencil" */>}} **Edit**.` This generates as: Select {{< icon name="pencil" >}} **Edit**.
- Use instead: `Select **Edit** ({{</* icon name="pencil" */>}}).` This generates as: Select **Edit** ({{< icon name="pencil" >}}).
Do not use words to describe the icon:
- Avoid: `Select **Erase job log** (the trash icon).`
- Use instead: `Select **Erase job log** ({{</* icon name="remove" */>}}).` This generates as: Select **Erase job log** ({{< icon name="remove" >}}).
When the button doesn't have any hover text, describe the icon.
Follow up by creating a
[UX bug issue](https://gitlab.com/gitlab-org/gitlab/-/issues/new?issuable_template=Bug)
to add hover text to the button to improve accessibility.
- Avoid: `Select {{</* icon name="ellipsis_v" */>}}.`
- Use instead: `Select the vertical ellipsis ({{</* icon name="ellipsis_v" */>}}).` This generates as: Select the vertical ellipsis ({{< icon name="ellipsis_v" >}}).
## Videos
Adding GitLab YouTube video tutorials to the documentation is highly
encouraged, unless the video is outdated. Videos should not replace
documentation, but complement or illustrate it. If content in a video is
fundamental to a feature and its key use cases, but isn't adequately
covered in the documentation, you should:
- Add this detail to the documentation text.
- Create an issue to review the video and update the page.
Do not upload videos to the product repositories. [Add a link](#link-to-video) or
[embed](#embed-videos) them instead.
### Link to video
To link to a video, include a YouTube icon so that readers can scan the page
for videos before reading. Include the video's publication date after the link, to help identify
videos that might be out-of-date.
```markdown
<i class="fa-youtube-play" aria-hidden="true"></i>
For an overview, see [Video Title](https://link-to-video).
<!-- Video published on YYYY-MM-DD -->
```
You can link any up-to-date video that's useful to the GitLab user.
### Embed videos
The [GitLab documentation site](https://docs.gitlab.com) supports embedded
videos.
You can embed videos from [the official YouTube account for GitLab](https://www.youtube.com/channel/UCnMGQ8QHMAnVIsI3xJrihhg) only.
For videos from other sources, [link them](#link-to-video) instead.
In most cases, [link to a video](#link-to-video), because
embedded videos take up a lot of space on the page and can be distracting to readers.
To embed a video:
1. Copy the code from this procedure and paste it into your Markdown file. Leave a
blank line above and below it. Do not edit the code (don't remove or add any spaces).
1. In YouTube, visit the video URL you want to display. Copy the regular URL
from your browser (`https://www.youtube.com/watch?v=VIDEO-ID`) and replace
the video title and link in the line under `<div class="video-fallback">`.
1. In YouTube, select **Share**, and then select **Embed**.
1. Copy the `<iframe>` source (`src`) **URL only**
(`https://www.youtube-nocookie.com/embed/VIDEO-ID`),
and paste it, replacing the content of the `src` field in the
`iframe` tag.
1. Include the video's publication date below the link, to help identify
videos that might be out-of-date.
```html
leave a blank line here
<div class="video-fallback">
See the video: <a href="https://www.youtube.com/watch?v=MqL6BMOySIQ">Video title</a>.
</div>
<figure class="video-container">
<iframe src="https://www.youtube-nocookie.com/embed/MqL6BMOySIQ" frameborder="0" allowfullscreen> </iframe>
</figure>
<!-- Video published on YYYY-MM-DD -->
leave a blank line here
```
This is how it renders on the GitLab documentation site:
<div class="video-fallback">
See the video: <a href="https://www.youtube.com/watch?v=enMumwvLAug">What is GitLab</a>.
</div>
<figure class="video-container">
<iframe src="https://www.youtube-nocookie.com/embed/MqL6BMOySIQ" frameborder="0" allowfullscreen> </iframe>
</figure>
With this formatting:
- The `figure` tag is required for semantic SEO and the `video-container`
class is necessary to make sure the video is responsive and displays on
different mobile devices.
- The `<div class="video-fallback">` is a fallback necessary for
`/help`, because the GitLab Markdown processor doesn't support iframes. It's
hidden on the documentation site, but is displayed by `/help`.
- The `www.youtube-nocookie.com` domain enables the [Privacy Enhanced Mode](https://support.google.com/youtube/answer/171780?hl=en#zippy=%2Cturn-on-privacy-enhanced-mode)
of the YouTube embedded player. This mode allows users with restricted cookie preferences to view embedded videos.
## Link to click-through demos
Linking to click-through demos should follow similar guidelines to [videos](#videos).
```markdown
For a click-through demo, see [Demo Title](https://link-to-demo).
<!-- Demo published on YYYY-MM-DD -->
```
## Alert boxes
Use alert boxes to call attention to information. Use them sparingly, and never have an alert box immediately follow another alert box.
Alert boxes are generated by using a Hugo shortcode:
```plaintext
{{</* alert type="note" */>}}
This is something to note.
{{</* /alert */>}}
```
The valid alert types are `flag`, `note`, `warning`, and `disclaimer`.
Alert boxes render only on the GitLab documentation site (<https://docs.gitlab.com>).
In the GitLab product help, alert boxes appear as plain text.
### Flag
Use this alert type to describe a feature's availability. For information about how to format
`flag` alerts, see [Document features deployed behind feature flags](../feature_flags.md).
### Note
Use notes sparingly. Too many notes can make topics difficult to scan.
Instead of adding a note:
- Re-write the sentence as part of a paragraph.
- Put the information into its own paragraph.
- Put the content under a new topic title.
If you must use a note, use this format:
```markdown
{{</* alert type="note" */>}}
This is something to note.
{{</* /alert */>}}
```
It renders on the GitLab documentation site as:
{{< alert type="note" >}}
This is something to note.
{{< /alert >}}
### Warning
Use a warning to indicate deprecated features, or to provide a warning about
procedures that have the potential for data loss.
```markdown
{{</* alert type="warning" */>}}
This is something to be warned about.
{{</* /alert */>}}
```
It renders on the GitLab documentation site as:
{{< alert type="warning" >}}
This is something to be warned about.
{{< /alert >}}
### Disclaimer
If you **must** write about features we have not yet delivered, add a disclaimer about forward-looking statements near the content it applies to.
Disclaimer alerts are populated by using a [template](https://gitlab.com/gitlab-org/technical-writing/docs-gitlab-com/-/blob/main/themes/gitlab-docs/layouts/shortcodes/alert.html) and should not include
any other text.
Add a disclaimer like this:
```plaintext
{{</* alert type="disclaimer" /*/>}}
```
It renders on the GitLab documentation site as:
{{< alert type="disclaimer" />}}
If all of the content on the page is not available, use the disclaimer about forward-looking statements once at the top of the page.
If the content in a topic is not ready, use the disclaimer in the topic.
For more information, see [Promising features in future versions](#promising-features-in-future-versions).
## Blockquotes
Avoid using [blockquotes](../../../user/markdown.md#blockquotes) in the product documentation.
They can make text difficult to scan. Instead of a blockquote, consider using:
- A [code block](#code-blocks).
- An [alert box](#alert-boxes).
- No special styling at all.
The [GitLab Flavored Markdown (GLFM)](../../../user/markdown.md) page is a rare case that
uses blockquotes to differentiate between plain text and rendered examples. However, in most cases,
you should avoid them.
## Tabs
On the documentation site, you can format text to display as tabs.
{{< alert type="warning" >}}
Do not put version history bullets, topic headings, HTML, or tabs in tabs. Only use paragraphs, lists, alert boxes, and code blocks. Other styles might not render properly. When in doubt, keep things simple.
{{< /alert >}}
To create a set of tabs, follow this example:
```plaintext
{{</* tabs */>}}
{{</* tab title="Tab one" */>}}
Here's some content in tab one.
{{</* /tab */>}}
{{</* tab title="Tab two" */>}}
Here's some other content in tab two.
{{</* /tab */>}}
{{</* /tabs */>}}
```
This code renders on the GitLab documentation site as:
{{< tabs >}}
{{< tab title="Tab one" >}}
Here's some content in tab one.
{{< /tab >}}
{{< tab title="Tab two" >}}
Here's some other content in tab two.
{{< /tab >}}
{{< /tabs >}}
For tab titles, be brief and consistent. Ensure they are parallel, and start each with a capital letter.
For example:
- `Linux package (Omnibus)`, `Helm chart (Kubernetes)` (when documenting configuration edits, follow the
[configuration edits guide](#how-to-document-different-installation-methods))
- `15.1 and earlier`, `15.2 and later`
Until we implement automated testing for broken links to tabs, do not link directly to a single tab.
For more information, see [issue 225](https://gitlab.com/gitlab-org/technical-writing/docs-gitlab-com/-/issues/225).
See [Pajamas](https://design.gitlab.com/components/tabs/#guidelines) for more
details on tabs.
## Cards
Use cards to create landing pages with links to sub-pages.
To create a set of cards, follow this example:
```markdown
{{</* cards */>}}
- [The first page](first_page.md)
- [Another page](another/page.md)
- [One more page](one_more.md)
{{</* /cards */>}}
```
Cards render only on the GitLab documentation site (`https://docs.gitlab.com`).
In the GitLab product help, a set of cards appears as an unordered list of links.
Card descriptions are populated from the `description` metadata on the Markdown page headers.
Use cards on top-level pages where the cards are the only content on the page.
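Because card descriptions come from page metadata, each page that you link to from a card should
define a `description` in its front matter. For example, with placeholder values:

```markdown
---
title: The first page
description: A one-line summary that appears on the card for this page.
---
```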
## Maintained versions
Use the maintained versions shortcode to create an unordered list of the currently
maintained GitLab versions as specified by the
[maintenance policy](../../../policy/maintenance.md):
```markdown
{{</* maintained-versions */>}}
```
Maintained versions render only on the pre-release version of the GitLab
documentation site (`https://docs.gitlab.com`). In all other cases and in
`/help`, a link to the documentation site is shown instead.
## Plagiarism
Do not copy and paste content from other sources unless it is a limited
quotation with the source cited. Typically it is better to rephrase
relevant information in your own words or link out to the other source.
## Promising features in future versions
Do not promise to deliver features in a future release. For example, avoid phrases like,
"Support for this feature is planned."
We cannot guarantee future feature work, and promises
like these can raise legal issues. Instead, say that an issue exists.
For example:
- Support for improvements is proposed in `[issue <issue_number>](https://link-to-issue)`.
- You cannot do this thing, but `[issue 12345](https://link-to-issue)` proposes to change this behavior.
You can say that we plan to remove a feature.
If you must document a future feature, use the [disclaimer](#disclaimer).
## Products and features
Refer to the information in this section when describing products and features
in the GitLab product documentation.
### Avoid line breaks in names
If a feature or product name contains spaces, don't split the name with a line break.
When names change, it is more complicated to search or grep text that has line breaks.
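For example, with the package registry feature name, and an otherwise illustrative sentence:

```markdown
<!-- Do: keep the feature name on one line -->
Use the package registry to publish and share packages.

<!-- Do not: the feature name is split across a line break -->
Use the package
registry to publish and share packages.
```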
### Product availability details
Product availability details provide information about a feature and are displayed under the topic title.
Read more about [product availability details](availability_details.md).
## Specific sections
Certain styles should be applied to specific sections. Styles for specific
sections are outlined in this section.
### Help and feedback section
This section is displayed at the end of each document and can be omitted
by adding a key into the front matter:
```yaml
---
feedback: false
---
```
The default is to leave it there. If you want to omit it from a document, you
must check with a technical writer before doing so.
### GitLab restart
When a restart or reconfigure of GitLab is required, avoid duplication by linking
to [`doc/administration/restart_gitlab.md`](../../../administration/restart_gitlab.md)
with text like this, replacing 'reconfigure' with 'restart' as needed:
```markdown
Save the file and [reconfigure GitLab](../../../administration/restart_gitlab.md)
for the changes to take effect.
```
If the document resides outside of the `doc/` directory, use the full path
instead of the relative link:
`https://docs.gitlab.com/administration/restart_gitlab`.
### How to document different installation methods
GitLab supports five official installation methods. If you're referring to
words as part of sentences and titles, use the following phrases:
- Linux package
- Helm chart
- GitLab Operator
- Docker
- Self-compiled
It's OK to add the explanatory parentheses when
[you use tabs](#use-tabs-to-describe-a-gitlab-self-managed-configuration-procedure):
- Linux package (Omnibus)
- Helm chart (Kubernetes)
- GitLab Operator (Kubernetes)
- Docker
- Self-compiled (source)
### Use tabs to describe a GitLab Self-Managed configuration procedure
Configuration procedures can require users to edit configuration files, reconfigure
GitLab, or restart GitLab. In this case:
- Use [tabs](#tabs) to differentiate among the various installation methods.
- Use the installation method names exactly as described in the previous list.
- Use them in the order described below.
- Indent the code blocks to line up with the list item they belong to.
- Use the appropriate syntax highlighting for each code block (`ruby`, `shell`, or `yaml`).
- For the YAML files, always include the parent settings.
- The final step to reconfigure or restart GitLab can be used verbatim because it's
the same every time.
When describing a configuration edit, use this snippet, editing it as needed:
````markdown
{{</* tabs */>}}
{{</* tab title="Linux package (Omnibus)" */>}}
1. Edit `/etc/gitlab/gitlab.rb`:
```ruby
external_url "https://gitlab.example.com"
```
1. Save the file and reconfigure GitLab:
```shell
sudo gitlab-ctl reconfigure
```
{{</* /tab */>}}
{{</* tab title="Helm chart (Kubernetes)" */>}}
1. Export the Helm values:
```shell
helm get values gitlab > gitlab_values.yaml
```
1. Edit `gitlab_values.yaml`:
```yaml
global:
hosts:
gitlab:
name: gitlab.example.com
```
1. Save the file and apply the new values:
```shell
helm upgrade -f gitlab_values.yaml gitlab gitlab/gitlab
```
{{</* /tab */>}}
{{</* tab title="Docker" */>}}
1. Edit `docker-compose.yml`:
```yaml
version: "3.6"
services:
gitlab:
environment:
GITLAB_OMNIBUS_CONFIG: |
external_url "https://gitlab.example.com"
```
1. Save the file and restart GitLab:
```shell
docker compose up -d
```
{{</* /tab */>}}
{{</* tab title="Self-compiled (source)" */>}}
1. Edit `/home/git/gitlab/config/gitlab.yml`:
```yaml
production: &base
gitlab:
host: "gitlab.example.com"
```
1. Save the file and restart GitLab:
```shell
# For systems running systemd
sudo systemctl restart gitlab.target
# For systems running SysV init
sudo service gitlab restart
```
{{</* /tab */>}}
{{</* /tabs */>}}
````
It renders as:
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
1. Edit `/etc/gitlab/gitlab.rb`:
```ruby
external_url "https://gitlab.example.com"
```
1. Save the file and reconfigure GitLab:
```shell
sudo gitlab-ctl reconfigure
```
{{< /tab >}}
{{< tab title="Helm chart (Kubernetes)" >}}
1. Export the Helm values:
```shell
helm get values gitlab > gitlab_values.yaml
```
1. Edit `gitlab_values.yaml`:
```yaml
global:
hosts:
gitlab:
name: gitlab.example.com
```
1. Save the file and apply the new values:
```shell
helm upgrade -f gitlab_values.yaml gitlab gitlab/gitlab
```
{{< /tab >}}
{{< tab title="Docker" >}}
1. Edit `docker-compose.yml`:
```yaml
version: "3.6"
services:
gitlab:
environment:
GITLAB_OMNIBUS_CONFIG: |
external_url "https://gitlab.example.com"
```
1. Save the file and restart GitLab:
```shell
docker compose up -d
```
{{< /tab >}}
{{< tab title="Self-compiled (source)" >}}
1. Edit `/home/git/gitlab/config/gitlab.yml`:
```yaml
production: &base
gitlab:
host: "gitlab.example.com"
```
1. Save the file and restart GitLab:
```shell
# For systems running systemd
sudo systemctl restart gitlab.target
# For systems running SysV init
sudo service gitlab restart
```
{{< /tab >}}
{{< /tabs >}}
---
stage: none
group: unassigned
info: For assistance with this Style Guide page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments-to-other-projects-and-subjects.
description: Writing styles, markup, formatting, and other standards for GitLab Documentation.
title: Documentation Style Guide
---
This document defines the standards for GitLab documentation, including grammar, formatting, and more.
For guidelines on specific words, see [the word list](word_list.md).
## The GitLab voice
The GitLab brand guidelines define the
[voice used by the larger organization](https://design.gitlab.com/brand-messaging/brand-voice).
Building on that guidance, the voice in the GitLab documentation strives to be concise,
direct, and precise. The goal is to provide information that's easy to search and scan.
The voice in the documentation should be conversational but brief, friendly but succinct.
## Documentation is the single source of truth (SSoT)
The GitLab documentation is the SSoT for all product information related to implementation,
use, and troubleshooting. The documentation evolves continuously. It is updated with
new products and features, and with improvements for clarity, accuracy, and completeness.
This policy:
- Prevents information silos and makes it easier to find information about GitLab products.
- Does not mean that content cannot be duplicated in multiple places in the documentation.
## Topic types
GitLab uses [topic types](../topic_types/_index.md) to organize the product documentation.
Topic types help users digest information more quickly. They also help address these issues:
- **Content is hard to find.** The GitLab documentation is comprehensive and includes a large amount of
useful information. Topic types create repeatable patterns that make the content easier
to scan and parse.
- **Content is often written from the contributor's point of view.** The GitLab documentation is
written by a variety of contributors. Topic types (tasks, specifically) help put
information into a format that is geared toward helping others, rather than
documenting how a feature was implemented.
## Docs-first methodology
The product documentation should be a complete and trusted resource.
- If the answer to a question exists in documentation, share the link to the
documentation instead of rephrasing the information.
- When you encounter information that's not available in GitLab documentation,
create a merge request (MR) to add the information to the
documentation. Then share the MR to communicate the information.
The more we reflexively add information to the documentation, the more
the documentation helps others efficiently accomplish tasks and solve problems.
## Writing for localization
The GitLab documentation is not localized, but we follow guidelines that help us write for a global audience.
[The GitLab voice](#the-gitlab-voice) dictates that we write clearly and directly with translation in mind.
Our style guide, [word list](word_list.md), and [Vale rules](../testing/_index.md) ensure consistency in the documentation.
When documentation is translated into other languages, the meaning of each word must be clear.
The increasing use of machine translation, GitLab Duo Chat, and other AI tools
means that consistency is even more important.
The following rules can help documentation be translated more efficiently.
Avoid:
- Phrases that hide the subject like [**there is** and **there are**](word_list.md#there-is-there-are).
- Ambiguous pronouns like [**it**](word_list.md#it).
- Words that end in [**-ing**](word_list.md#-ing-words).
- Words that can be confused with one another like [**since**](word_list.md#since) and **because**.
- Latin abbreviations like [**e.g.**](word_list.md#eg) and [**i.e.**](word_list.md#ie).
- Culture-specific references like **kill two birds with one stone**.
Use:
- Standard [text for links](#text-for-links).
- [Lists](#lists) and [tables](#tables) instead of complex sentences and paragraphs.
- Common abbreviations like [**AI**](word_list.md#ai-artificial-intelligence) and
[**CI/CD**](word_list.md#cicd) and abbreviations you've previously spelled out.
Also, keep the following guidance in mind:
- Be consistent with [feature names](#feature-names) and how to interact with them.
- Break up noun strings. For example, instead of **project integration custom settings**,
use **custom settings for project integrations**.
- Format [dates and times](https://learn.microsoft.com/en-us/style-guide/a-z-word-list-term-collections/term-collections/date-time-terms)
consistently and for an international audience.
- Use [illustrations](#illustrations), including screenshots, sparingly.
- For [UI text](#ui-text), allow for up to 30% expansion and contraction in translation.
To see how much a string expands or contracts in another language, paste the string
into [Google Translate](https://translate.google.com/) and review the results.
Ask a colleague who speaks the language to verify if the translation is clear.
## Markdown
All GitLab documentation is written in [Markdown](https://en.wikipedia.org/wiki/Markdown).
The [documentation website](https://docs.gitlab.com) uses the [Hugo](https://gohugo.io/) static site generator with its default Markdown engine, [Goldmark](https://gohugo.io/content-management/formats/#markdown).
Markdown formatting is tested by using [markdownlint](../testing/markdownlint.md) and [Vale](../testing/vale.md).
### HTML in Markdown
Hard-coded HTML is valid, although it's discouraged for a few reasons:
- Custom markup has potential to break future site-wide changes or design system updates.
- Custom markup does not have test coverage to ensure consistency across the site.
- Custom markup might not be responsive or accessible.
- Custom markup might not adhere to Pajamas guidelines.
- HTML and CSS in Markdown do not render on `/help`.
- Hand-coding HTML can be error-prone. It's possible to break the page layout or other components with malformed HTML.
HTML is permitted if:
- No equivalent exists in Markdown.
- The content is reviewed and approved by a technical writer.
- The need for a custom element is urgent and cannot wait for implementation by Technical Writing engineers.
If you have an idea or request for a new element that would be useful on the Docs site,
submit a [feature request](https://gitlab.com/gitlab-org/technical-writing/docs-gitlab-com/-/issues/new?issuable_template=Default).
### Heading levels in Markdown
Each documentation page must include a `title` attribute in its [metadata](../metadata.md).
The `title` becomes the `H1` element when rendered to HTML.
Do not add an `H1` heading in Markdown because there can be only one for each page.
- For each subsection, increment the heading level. In other words, increment the number of `#` characters
in front of the topic title.
- Avoid heading levels greater than `H5` (`#####`). If you need more than five heading levels, move the topics to a new page instead.
Heading levels greater than `H4` do not display in the right sidebar navigation.
- Do not skip a level. For example, do not jump from `##` to `####`.
- Leave one blank line before and after the topic title.
- If you use code in topic titles, ensure the code is in backticks.
- Do not use bold text in topic titles.
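For example, a minimal page might be structured like this, where the `title` metadata becomes the
`H1` and the Markdown topic titles start at `H2`:

```markdown
---
title: Page title
---

## First topic

### Subtopic of the first topic

## Second topic
```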
### Description lists in Markdown
To define terms or differentiate between options, use description lists. For a list of UI elements,
use a regular [list](#lists) instead of a description list.
Do not mix description lists with other styles.
```markdown
Term 1
: Definition of Term 1
Term 2
: Definition of Term 2
```
These lists render like this:
Term 1
: Definition of Term 1
Term 2
: Definition of Term 2
### Shortcodes
[Shortcodes](https://gohugo.io/content-management/shortcodes/) are snippets of template code that we can include in our Markdown content to display non-standard elements on a page, such as alert boxes or tabs.
GitLab documentation uses the following shortcodes:
- [Alert boxes](#alert-boxes)
- Note
- Warning
- Flag
- Disclaimer
- Details
- [Availability details](availability_details.md)
- [Version history](availability_details.md#history)
- [Icons](#gitlab-svg-icons)
- [Tabs](#tabs)
- [Cards](#cards)
- [Maintained versions](#maintained-versions)
## Language
GitLab documentation should be clear and easy to understand.
- Avoid unnecessary words.
- Be clear, concise, and stick to the goal of the topic.
- Write in US English with US grammar. (Tested in [`British.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/.vale/gitlab_base/British.yml).)
### Active voice
In most cases, text is easier to understand and to translate if you use active voice instead of passive.
For example, use:
- The developer writes code for the application.
Instead of:
- Application code is written by the developer.
Sometimes, `GitLab` as the subject can be awkward. For example, `GitLab exports the report`.
In this case, use passive voice instead. For example, `The report is exported`.
### Customer perspective
Focus on the functionality and benefits that GitLab brings to customers,
rather than what GitLab has created.
For example, use:
- Use merge requests to compare code in the source and target branches.
Instead of:
- GitLab allows you to compare code.
- GitLab created the ability to let you compare code.
- Merge requests let you compare code.
Words that indicate you are not writing from a customer perspective are
[allow and enable](word_list.md#allow-enable). Try instead to use
[you](word_list.md#you-your-yours) and to speak directly to the user.
### Building trust
Product documentation should be focused on providing clear, concise information,
without the addition of sales or marketing text.
- Do not use words like [easily](word_list.md#easily) or [simply](word_list.md#simply-simple).
- Do not use marketing phrases like "This feature will save you time and money."
Instead, focus on facts and achievable goals. Be specific. For example:
- The build time can decrease when you use this feature.
- Use this feature to save time when you create a project. The API creates the file and you
do not have to manually intervene.
### Self-referential writing
Avoid writing about the document itself. For example, do not use:
- This page shows...
- This guide explains...
These phrases slow the user down. Instead, get right to the point. For example, instead of:
- This page explains different types of pipelines.
Use:
- GitLab has different types of pipelines to help address your development needs.
Tested in [`SelfReferential.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/.vale/gitlab_base/SelfReferential.yml).
### Capitalization
As a company, we tend toward lowercase.
#### Topic titles
Use sentence case for topic titles. For example:
- `# Use variables to configure pipelines`
- `## Use the To-Do List`
#### UI text
When referring to specific user interface text, like a button label, page, tab,
or menu item, use the same capitalization that's displayed in the user interface.
The only exception is text that's all uppercase (for example, `RECENT FLOWS`).
In this case, use sentence case.
If you think the user interface text contains style mistakes,
create an issue or an MR to propose a change to the user interface text.
#### Feature names
Feature names should be lowercase.
However, in a few rare cases, features can be title case. These exceptions are:
- Added as a proper name to [markdownlint](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.markdownlint.yml),
so they can be consistently applied across all documentation.
- Added to the [word list](word_list.md).
If the term is not in the word list, ask a GitLab Technical Writer for advice.
For assistance naming a feature and ensuring it meets GitLab standards, see
[the handbook](https://handbook.gitlab.com/handbook/product/categories/gitlab-the-product/#naming-features).
Do not match the capitalization of terms or phrases on the [Features page](https://about.gitlab.com/features/)
or [`features.yml`](https://gitlab.com/gitlab-com/www-gitlab-com/blob/master/data/features.yml)
by default.
#### Other terms
Capitalize names of:
- GitLab [product tiers](https://about.gitlab.com/pricing/). For example,
GitLab Free and GitLab Ultimate.
- Third-party organizations, software, and products. For example, Prometheus,
Kubernetes, Git, and The Linux Foundation.
- Methods or methodologies. For example, Continuous Integration,
Continuous Deployment, Scrum, and Agile.
Follow the capitalization style listed at the authoritative source
for the entity, which might use non-standard case styles. For example: GitLab and
npm.
### Fake user information
Do not include real usernames or email addresses in the documentation.
For text:
- Use diverse or non-gendered names with common surnames, like `Sidney Jones`, `Zhang Wei`, or `Alex Garcia`.
- Make fake email addresses end in `example.com`.
For screenshots:
- Temporarily edit the page before you take the screenshot:
1. Right-click the text you want to change.
1. Select **Inspect**.
1. In the **Elements** dialog, edit the HTML to replace text that contains real user information with example data.
1. Close the dialog. All of the user data in the web page should now be replaced with the example data you entered.
1. Take the screenshot.
- Alternatively, create example accounts in a test environment, and take the screenshot there.
- If you can't reproduce the environment, blur the user data by using an image editing tool like Preview on macOS.
### Fake URLs
When including sample URLs in the documentation, use:
- `example.com` when the domain name is generic.
- `gitlab.example.com` when referring only to GitLab Self-Managed.
Use `gitlab.com` for GitLab.com.
### Fake tokens
Do not use real tokens in the documentation.
Use these fake tokens as examples:
| Token type | Token value |
|:----------------------|:------------|
| Personal access token | `<your_access_token>` |
| Application ID | `2fcb195768c39e9a94cec2c2e32c59c0aad7a3365c10892e8116b5d83d4096b6` |
| Application secret | `04f294d1eaca42b8692017b426d53bbc8fe75f827734f0260710b83a556082df` |
| CI/CD variable | `Li8j-mLUVA3eZYjPfd_H` |
| Project runner token | `yrnZW46BrtBFqM7xDzE7dddd` |
| Instance runner token | `6Vk7ZsosqQyfreAxXTZr` |
| Trigger token | `be20d8dcc028677c931e04f3871a9b` |
| Webhook secret token | `6XhDroRcYPM5by_h-HLY` |
| Health check token | `Tu7BgjR9qeZTEyRzGG2P` |
### Contractions
Contractions are encouraged, and can create a friendly and informal tone,
especially in tutorials, instructional documentation, and
[user interfaces](https://design.gitlab.com/content/punctuation/#contractions).
Some contractions, however, should be avoided:
<!-- vale gitlab_base.Possessive = NO -->
| Do not use a contraction | Example | Use instead |
|-------------------------------|-------------------------------------------|-------------|
| With a proper noun and a verb | **Terraform's** a helpful tool. | **Terraform** is a helpful tool. |
| To emphasize a negative | **Don't** install X with Y. | **Do not** install X with Y. |
| In reference documentation | **Don't** set a limit. | **Do not** set a limit. |
| In error messages | Requests to localhost **aren't** allowed. | Requests to localhost **are not** allowed. |
<!-- vale gitlab_base.Possessive = YES -->
### Possessives
Do not use possessives (`'s`) for proper nouns, like organization or product names.
For example, instead of `Docker's CLI`, use `the Docker CLI`.
For details, see [the Google documentation style guide](https://developers.google.com/style/possessives#product,-feature,-and-company-names).
### Prepositions
Use prepositions at the end of the sentence when needed.
Dangling or stranded prepositions are fine. For example:
- You can leave the group you're a member of.
- Share the credentials with users you want to give access to.
These constructions are more casual than the alternatives:
- You can leave the group of which you're a member.
- Share the credentials with users to which you want to give access.
### Acronyms
If you use an acronym, spell it out on first use on a page. Do not spell it out more than once on a page.
- **Titles**: Try to avoid acronyms in topic titles, especially if the acronym is not widely used.
- **Plurals**: Try not to make acronyms plural. For example, use `YAML files`, not `YAMLs`. If you must make an acronym plural, do not use an apostrophe. For example, use `APIs`, not `API's`.
- **Possessives**: Use caution when making an acronym possessive. If possible,
write the sentence to avoid making the acronym possessive. If you must make the
acronym possessive, consider spelling out the words.
### Numbers
For numbers in text, spell out zero through nine and use numbers for 10 and greater. For more information, see the [Microsoft Style Guide](https://learn.microsoft.com/en-us/style-guide/numbers).
## Text
- [Write in Markdown](#markdown).
- Insert an empty line for new paragraphs.
- Insert an empty line between different markups (for example, after every
paragraph, heading, and list). Example:
```markdown
## Heading
Paragraph.
- List item 1
- List item 2
```
### Line length
To make the source content easy to read, and to compare diffs,
follow these best practices.
- Split long lines at approximately 100 characters. (Exception: Do not split links.)
- Start each new sentence on a new line.
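For example, this source follows both practices and still renders as a single paragraph:

```markdown
This first sentence is long enough that it is split at approximately 100 characters and it then
continues on the next line.
This second sentence starts on a new line.
```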
### Comments
To embed comments in Markdown, use standard HTML comments that are not rendered
when published. Example:
```html
<!-- This is a comment that is not rendered -->
```
### Punctuation
Follow these guidelines for punctuation.
<!-- vale gitlab_base.Repetition = NO -->
- End full sentences with a period, including full sentences in tables.
- Use serial (Oxford) commas before the final **and** or **or** in a list of three or more items. (Tested in [`OxfordComma.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/.vale/gitlab_base/OxfordComma.yml).)
<!-- vale gitlab_base.Repetition = YES -->
When spacing content:
- Use one space between sentences. (Use of more than one space is tested in [`SentenceSpacing.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/.vale/gitlab_base/SentenceSpacing.yml).)
- Do not use non-breaking spaces. Use standard spaces instead. (Tested in [`lint-doc.sh`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/scripts/lint-doc.sh).)
- Do not use tabs for indentation. Use spaces instead. Consider configuring your code editor to output spaces instead of tabs when pressing the <kbd>Tab</kbd> key.
Do not use these punctuation characters:
- `;` (semicolon): Use two sentences instead.
- `–` (en dash) or `—` (em dash): Use separate sentences, or commas, instead.
- `“` `”` `‘` `’`: Double or single typographer's ("curly") quotation marks. Use straight quotes instead. (Tested in [`NonStandardQuotes.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/.vale/gitlab_base/NonStandardQuotes.yml).)
### Placeholder text
In a code block, you might want to provide a command or configuration that
uses specific values.
In these cases, use [`<` and `>`](https://en.wikipedia.org/wiki/Usage_message#Pattern)
to call out where a reader must replace text with their own value.
For example:
```shell
cp <your_source_directory> <your_destination_directory>
```
If the placeholder is not in a code block, use `<` and `>` and wrap the placeholder
in a single backtick. For example:
```plaintext
Select **Grant admin consent for `<application_name>`**.
```
### Quotation marks
Follow [the Microsoft guidance for quotation marks](https://learn.microsoft.com/en-us/style-guide/punctuation/quotation-marks).
Try to avoid quotation marks for user input and use backticks instead.
## Text formatting
When formatting text, use:
- [Bold](#bold) for UI elements and pages.
- [Inline code style](#inline-code) for inputs, outputs, code, and similar.
- [Code blocks](#code-blocks) for command line examples, and multi-line inputs, outputs, code, and similar.
- [`<kbd>`](#keyboard-commands) for keyboard commands.
### Bold
Use bold for:
- UI elements with a visible label. Match the text and capitalization of the label.
- Navigation paths.
Do not use bold for keywords or emphasis.
UI elements include:
- Buttons
- Checkboxes
- Settings
- Menus
- Pages
- Tabs
For example:
- Select **Cancel**.
- On the **Issues** page...
- On the **Pipelines** tab...
To make text bold, wrap it with double asterisks (`**`). For example:
```markdown
1. Select **Cancel**.
```
When you use bold format for UI elements, place any punctuation outside the bold tag.
This rule includes periods, commas, colons, and right-angle brackets (`>`).
The punctuation is part of the sentence structure rather than the UI element that you're emphasizing.
Include punctuation in the bold tag when it's part of the UI element itself.
For example:
- `**Start a review**: This is a description of the button that starts a review.`
- `Select **Overview** > **Users**.`
### Inline code
Inline code is text that's wrapped in single backticks (`` ` ``). For example:
```markdown
In the **Name** text box, enter `test`.
```
Use inline code for:
- Text a user enters in the UI.
- Short inputs and outputs like `true`, `false`, `Job succeeded`, and similar.
- Filenames, configuration parameters, keywords, and code. For example,
`.gitlab-ci.yml`, `--version`, or `rules:`.
- Short error messages.
- API and HTTP methods (`POST`).
- HTTP status codes. Full (`404 File Not Found`) and abbreviated (`404`).
- HTML elements. For example, `<sup>`. Include the angle brackets.
For example:
- In the **Name** text box, enter `test`.
- Use the `rules:` CI/CD keyword to control when to add jobs to a pipeline.
- Send a `DELETE` request to delete the runner. Send a `POST` request to create one.
- The job log displays `Job succeeded` when complete.
### Code blocks
Code blocks separate code text from regular text, and can be copy-pasted by users.
Use code blocks for:
- CLI and [cURL commands](../restful_api_styleguide.md#curl-commands).
- Multi-line inputs, outputs, and code samples that are too large for [inline code](#inline-code).
To add a code block, add triple backticks (```` ``` ````) above and below the text,
with a syntax name at the top for proper syntax highlighting. For example:
````markdown
```markdown
This is a code block that uses Markdown to demonstrate **bold** and `backticks`.
```
````
When you use code blocks:
- Add a blank line above and below code blocks.
- Use one of the [supported syntax names](https://gohugo.io/content-management/syntax-highlighting/#languages).
Use `plaintext` if no better option is available.
- Use quadruple backticks (````` ```` `````) when the code block contains another (nested) code block
which has triple backticks already. The example above uses quadruple backticks internally
to illustrate the code block format.
To represent missing information in a code block, use a comment or an [ellipsis](word_list.md#ellipsis-ellipses). For example:
- `# Removed for readability`
- `// ...`
### Keyboard commands
Use the HTML `<kbd>` tag when referring to keystroke presses. For example:
```plaintext
To stop the command, press <kbd>Control</kbd>+<kbd>C</kbd>.
```
This example renders as:
To stop the command, press <kbd>Control</kbd>+<kbd>C</kbd>.
### Italics and emphasis
Avoid [italics for emphasis](../../../user/markdown.md#emphasis) in product documentation.
Instead, write content that is clear enough that emphasis is not needed. GitLab and
<https://docs.gitlab.com> use a sans-serif font, but italic text [does not stand out in a page using sans-serif](https://practicaltypography.com/bold-or-italic.html).
## Lists
Use lists to present information in a format that is easier to scan.
- Make all items in the list parallel.
For example, do not start some items with nouns and others with verbs.
- Start all items with a capital letter.
- Give all items the same punctuation.
- Do not use a period if the item is not a full sentence.
- Use a period after every full sentence.
Do not use semicolons or commas.
- Add a colon (`:`) after the introductory phrase.
For example:
```markdown
To complete a task:
- Do this thing.
- Do this other thing.
```
- Do not use [bold](#bold) formatting to define keywords or concepts in a list. Use bold for UI element labels only. For example:
- `**Start a review**: This is a description of the button that starts a review.`
- `Offline environments: This is a description of offline environments.`
For keywords and concepts, consider a [reference topic](../topic_types/reference.md) or
[description list](#description-lists-in-markdown) for alternative formatting.
### Choose between an ordered or unordered list
Use ordered lists for a sequence of steps. For example:
```markdown
Follow these steps to do something.
1. First, do the first step.
1. Then, do the next step.
1. Finally, do the last step.
```
Use an unordered list when the steps do not need to be completed in order. For example:
```markdown
These things are imported:
- Thing 1
- Thing 2
- Thing 3
```
### List markup
- Use dashes (`-`) for unordered lists instead of asterisks (`*`).
- Start every item in an ordered list with `1.`. When rendered, the list items
are sequential.
- Leave a blank line before and after a list.
- Begin a line with spaces (not tabs) to denote a [nested sub-item](#nesting-inside-a-list-item).
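For example, this source uses dashes for the unordered list, starts every ordered item with `1.`
(the items still render as 1, 2, 3), and leaves a blank line before and after each list:

```markdown
Some introductory text.

- First unordered item
- Second unordered item

1. First step.
1. Second step.
1. Third step.

Some closing text.
```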
### Nesting inside a list item
The following items can be nested under a list item, so they render with the same
indentation as the list item:
- [Code blocks](#code-blocks)
- [Blockquotes](#blockquotes)
- [Alert boxes](#alert-boxes)
- [Illustrations](#illustrations)
- [Tabs](#tabs)
Nested items should always align with the first character of the list
item. For unordered lists (using `-`), use two spaces for each level of
indentation:
````markdown
- Unordered list item 1

  A nested line that uses 2 spaces to align with the `U` above.

- Unordered list item 2

  > A quote block that will nest
  > inside list item 2.

- Unordered list item 3

  ```plaintext
  a code block that nests inside list item 3
  ```

- Unordered list item 4

  ![an image that will nest inside list item 4](image.png)
````
For ordered lists, use three spaces for each level of indentation:
````markdown
1. Ordered list item 1

   A nested line that uses 3 spaces to align with the `O` above.
````
You can nest lists in other lists.
```markdown
1. Ordered list item one.
1. Ordered list item two.
   - Nested unordered list item one.
   - Nested unordered list item two.
1. Ordered list item three.

- Unordered list item one.
- Unordered list item two.
  1. Nested ordered list item one.
  1. Nested ordered list item two.
- Unordered list item three.
```
## Tables
Tables should be used to describe complex information in a straightforward
manner. In many cases, an unordered list is sufficient to describe a
list of items with a single description for each item. But, if you have data
that's best described by a matrix, tables are the best choice.
### Creation guidelines
To keep tables accessible and scannable, tables should not have any
empty cells. If no otherwise meaningful value for a cell exists, consider entering
**N/A** for 'not applicable' or **None**.
To make tables easier to maintain:
- If the table has a `Description` column, make it the right-most column if possible.
- Add additional spaces to make the column widths consistent. For example:
```markdown
| Parameter | Default | Requirements |
|-----------|--------------|--------------|
| `param1` | `true` | A and B. |
| `param2` | `gitlab.com` | None |
```
- Skip the additional spaces in the rightmost column for tables that are very wide.
For example:
```markdown
| Setting | Default | Description |
|-----------|---------|-------------|
| Setting 1 | `1000` | A short description. |
| Setting 2 | `2000` | A long description that would make the table too wide and add too much whitespace if every cell in this column was aligned. |
| Setting 3 | `0` | Another short description. |
```
- The header (first) row and the delimiter (second) row of the table should be the same length.
Do not use shortened delimiter rows like `|-|-|-|` or `|--|--|`.
- If a large table does not auto-format well, you can skip the auto-format but:
- Make the first two rows the same length.
- Put spaces between the `|` characters and cell contents.
For example `| Cell 1 | Cell 2 |`, not `|Cell1|Cell2|`.
### Editor extensions for table formatting
To ensure consistent table formatting across all Markdown files, consider formatting your tables
with the VS Code [Markdown Table Formatter](https://github.com/fcrespo82/vscode-markdown-table-formatter).
To configure this extension to follow the guidelines above, turn on the **Follow header row length** setting.
To turn on the setting:
- In the UI:
1. In the VS Code menu, go to **Code** > **Settings** > **Settings**.
1. Search for `Limit Last Column Length`.
1. In the **Limit Last Column Length** dropdown list, select **Follow header row length**.
- In your VS Code `settings.json`, add a new line with:
```json
{
"markdown-table-formatter.limitLastColumnLength": "Follow header row length"
}
```
To format a table with this extension, select the entire table, right-click the selection,
and select **Format Selection With**. Select **Markdown Table Formatter** in the VS Code Command Palette.
If you use Sublime Text, try the
[Markdown Table Formatter](https://packagecontrol.io/packages/Markdown%20Table%20Formatter)
plugin, but it does not have a **Follow header row length** setting.
### Updates to existing tables
When you add or edit rows in an existing table, some rows might not be aligned anymore.
Don't realign the entire table if only changing a few rows.
If you realign the columns to account for the width, the diff becomes difficult to read,
because the entire table shows as modified.
Markdown tables naturally fall out of alignment over time, but still render correctly
on `docs.gitlab.com`. The technical writing team can realign cells the next time
the page is refactored.
### Table headers
Use sentence case for table headers. For example, `Keyword value` or `Project name`.
### Feature tables
When creating tables of lists of features (such as the features
available to each role on the [Permissions](../../../user/permissions.md#project-members-permissions)
page), use these phrases:
| Option | Markdown | Displayed result |
|--------|---------------------------------------------------|------------------|
| No | `{{</* icon name="dash-circle" */>}} No` | {{< icon name="dash-circle" >}} No |
| Yes | `{{</* icon name="check-circle-filled" */>}} Yes` | {{< icon name="check-circle-filled" >}} Yes |
Do not use these SVG icons in API documentation.
Instead, follow the [API topic template](../restful_api_styleguide.md#api-topic-template).
### Footnotes
Use footnotes below tables only when you cannot include the content in the table itself.
For example, use footnotes when you must:
- Provide the same information in several table cells.
- Include content that would disrupt the table's layout.
#### Footnote format
In the table, use the HTML superscript tag `<sup>` for each footnote.
Put the tag at the end of the sentence. Leave one space between the sentence and the tag.
For example:
```markdown
| App name | Description |
|:---------|:------------|
| App A | Description text. <sup>1</sup> |
| App B | Description text. <sup>2</sup> |
```
When you add a footnote, do not re-sort the existing tags in the table.
For the footnotes below the table, use `**Footnotes**:` followed by an ordered list.
For example:
```markdown
**Footnotes**:
1. This is the first footnote.
1. This is the second footnote.
```
The table and footnotes would render as follows:
| App name | Description |
|:---------|:------------|
| App A | Description text. <sup>1</sup> |
| App B | Description text. <sup>2</sup> |
**Footnotes**:
1. This is the first footnote.
1. This is the second footnote.
##### Five or more footnotes
If you have five or more footnotes that you cannot include in the table itself,
use consecutive numbers for the list items.
If you use consecutive numbers, you must disable Markdown rule `029`:
```markdown
**Footnotes**:
<!-- Disable ordered list rule https://github.com/DavidAnson/markdownlint/blob/main/doc/Rules.md#md029---ordered-list-item-prefix -->
<!-- markdownlint-disable MD029 -->
1. This is the first footnote.
2. This is the second footnote.
3. This is the third footnote.
4. This is the fourth footnote.
5. This is the fifth footnote.
<!-- markdownlint-enable MD029 -->
```
## Links
Links are an important way to help readers find what they need.
However, most content is found by searching, and you should avoid putting too many links on any page.
Too many links can hinder readability.
- Do not duplicate links on the same page. For example, on **Page A**, do not link to **Page B** multiple times.
- Do not use links in headings. Headings that contain links cause errors.
- Do not use a hard line wrap between any words in a link.
- Avoid multiple links in a single paragraph.
- Avoid multiple links in a single task.
- On any one page, try not to use more than 15 links to other pages.
- Consider the use of [Related topics](../topic_types/_index.md#related-topics) to reduce links that interrupt the flow of a task.
- Try to avoid anchor links to sections on the same page. Let users rely on the right navigation instead.
### Inline links
Use inline links instead of reference links. Inline links are easier to parse
and edit.
([Vale](../testing/vale.md) rule: [`ReferenceLinks.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/.vale/gitlab_docs/ReferenceLinks.yml))
- Do:
```markdown
For more information, see [merge requests](path/to/merge_requests.md)
```
- Don't:
```markdown
For more information, see [merge requests][1].
[1]: path/to/merge_requests.md
```
### Links in the same repository
To link to another documentation (`.md`) file in the same repository:
- Use an inline link with a relative file path. For example, `[GitLab.com settings](../user/gitlab_com/_index.md)`.
- Put the entire link on a single line, even if the link is very long. ([Vale](../testing/vale.md) rule: [`MultiLineLinks.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/.vale/gitlab_base/MultiLineLinks.yml)).
{{< alert type="note" >}}
In the GitLab repository, do not link to the `/development` directory from any other directory.
{{< /alert >}}
To link to a file outside of the documentation files, for example to link from development
documentation to a specific code file:
- Use a full URL. For example: ``[`app/views/help/show.html.haml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/views/help/show.html.haml)``
- Optional. Use a full URL with a specific ref. For example: ``[`app/views/help/show.html.haml`](https://gitlab.com/gitlab-org/gitlab/-/blob/6d01aa9f1cfcbdfa88edf9d003bd073f1a6fff1d/app/views/help/show.html.haml)``
### Links in separate repositories
To link to a page in a different repository, use a full URL.
For example, to link from a page in the GitLab repository to the Charts repository,
use a URL like `[GitLab Charts documentation](https://docs.gitlab.com/charts/)`.
### Anchor links
Each topic title has an anchor link. For example, a topic with the title
`## This is an example` has the anchor `#this-is-an-example`.
When you change topic title text, the anchor link changes. To avoid broken links:
- Do not use step numbers in topic titles.
- When possible, do not use words that might change in the future.
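For example, a topic with this title can be linked to by its anchor from another page
(`other_page.md` and the topic title are illustrative):

```markdown
<!-- In other_page.md -->
## This is an example

<!-- In the page that links to the topic -->
For more information, see [this example](other_page.md#this-is-an-example).
```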
#### Changing links and titles
When you change a topic title, the anchor link changes. If other documentation pages
or code files link to this anchor, [pipeline jobs could fail](../testing/_index.md).
Consider [running the link checks locally](../testing/links.md) before pushing your changes
to prevent failing pipelines.
### Text for links
Follow these guidelines for link text.
#### Standard text
Use text that follows one of these patterns:
- `For more information, see [link text](link.md)`.
- `To [DO THIS THING], see [link text](link.md)`
For example:
- `For more information, see [merge requests](link.md).`
- `To create a review app, see [review apps](link.md).`
To expand on this text, use phrases like
`For more information about this feature, see...`
Do not use the following constructions:
- `Learn more about...`
- `To read more...`.
- `For more information, see the [Merge requests](link.md) page.`
- `For more information, see the [Merge requests](link.md) documentation.`
#### Descriptive text rather than `here`
Use descriptive text for links, rather than words like `here` or `this page`.
For the name of a topic or page, use lowercase.
You don't have to match the text to the topic or page name exactly.
Edit the text to be descriptive and fit the guidelines.
Do:
- `For more information, see [merge requests](link.md)`.
- `For more information, see [roles and permissions](link.md)`.
- `For more information, see [how to configure common settings](link.md)`.
Don't:
- `For more information, see [this page](link.md).`
- `For more information, go [here](link.md).`
- `For more information, see [this documentation](link.md).`
#### Links to issues
When linking to an issue, include the issue number in the link. For example:
- `For more information, see [issue 12345](link.md).`
Do not use the pound sign (`issue #12345`).
### Links to external documentation
When possible, avoid links to external documentation. These links can become outdated and are difficult to maintain.
- [They lead to link rot](https://en.wikipedia.org/wiki/Link_rot).
- [They create issues with maintenance](https://gitlab.com/gitlab-org/gitlab/-/issues/368300).
Sometimes links are required. They might clarify troubleshooting steps or help prevent duplication of content.
Sometimes they are more precise and more actively maintained.
For each external link you add, weigh the customer benefit with the maintenance difficulties.
### Links to handbook
Limit links to the handbook. Some links are unavoidable, like links to licensing terms, data usage and access policies,
testing agreements, and terms and conditions.
### Confidential or restricted access links
Don't link directly to:
- [Confidential issues](../../../user/project/issues/confidential_issues.md).
- Internal handbook pages.
- Project features that require [special permissions](../../../user/permissions.md)
to view.
These links fail for:
- Those without sufficient permissions.
- Automated link checkers.
If you must use one of these links:
- If the link is to a confidential issue or internal handbook page, mention that the issue or page is visible only to GitLab team members.
- If the link requires a specific role or permissions, mention that information.
- Put the link in backticks so that it does not cause link checkers to fail.
Examples:
- ```markdown
GitLab team members can view more information in this confidential issue:
`https://gitlab.com/gitlab-org/gitlab/-/issues/<issue_number>`
```
- ```markdown
GitLab team members can view more information in this internal handbook page:
`https://internal.gitlab.com/handbook/<link>`
```
- ```markdown
Users with the Maintainer role for the project can use the pipeline editor:
`https://gitlab.com/gitlab-org/gitlab/-/ci/editor`
```
### Link to specific lines of code
When linking to specific lines in a file, link to a commit instead of to the
branch. Lines of code change over time. Linking to a line by using
the commit link ensures the user lands on the line you're referring to. The
**Permalink** dropdown item in the ellipsis menu, displayed when viewing a file in a project,
provides a link to the most recent commit of that file.
- Do: `[link to line 3](https://gitlab.com/gitlab-org/gitlab/-/blob/11f17c56d8b7f0b752562d78a4298a3a95b5ce66/.gitlab/issue_templates/Feature%20proposal.md#L3)`
- Don't: `[link to line 3](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.gitlab/issue_templates/Feature%20proposal.md#L3).`
If that linked expression has changed line numbers due to additional
commits, you can still search the file for that query. In this case, update the
document to ensure it links to the most recent version of the file.
## Navigation
When documenting how to navigate the GitLab UI:
- Always use location, then action.
- From the **Visibility** dropdown list (location), select **Public** (action).
- Be brief and specific. For example:
- Do: Select **Save**.
- Do not: Select **Save** for the changes to take effect.
- If a step must include a reason, start the step with it. This helps the user scan more quickly.
- Do: To view the changes, in the merge request, select the link.
- Do not: Select the link in the merge request to view the changes.
### Names for menus
Use these terms when referring to the main GitLab user interface
elements:
- **Left sidebar**: This is the navigation sidebar on the left of the user
interface.
- Do not use the phrase `context switcher` or `switch contexts`. Instead, try to direct the user to the exact location with a set of repeatable steps.
- Do not use the phrase `the **Explore** menu` or `the **Your work** sidebar`. Instead, use `the left sidebar`.
- **Right sidebar**: This is the navigation sidebar on the right of the user
interface, specific to the open issue, merge request, or epic.
### Names for UI elements
All UI elements [should be **bold**](#bold). The `>` in the navigation path should not be bold.
Guidance for individual UI elements is in [the word list](word_list.md).
### How to write navigation task steps
To be consistent, use these examples to write navigation steps in a task topic.
Although alternative steps might exist (for example, using items pinned by default),
use these steps instead.
To open project settings:
```markdown
1. On the left sidebar, select **Search or go to** and find your project.
1. Select **Settings** > **CI/CD**.
1. Expand **General pipelines**.
```
To open group settings:
```markdown
1. On the left sidebar, select **Search or go to** and find your group.
1. Select **Settings** > **CI/CD**.
1. Expand **General pipelines**.
```
To open settings for a top-level group:
```markdown
1. On the left sidebar, select **Search or go to** and find your group.
This group must be at the top level.
1. Select **Settings** > **CI/CD**.
1. Expand **General pipelines**.
```
To open either project or group settings:
```markdown
1. On the left sidebar, select **Search or go to** and find your project or group.
1. Select **Settings** > **CI/CD**.
1. Expand **General pipelines**.
```
To create a project:
```markdown
1. On the left sidebar, at the top, select **Create new** ({{< icon name="plus" >}}) and **New project/repository**.
```
To create a group:
```markdown
1. On the left sidebar, at the top, select **Create new** ({{< icon name="plus" >}}) and **New group**.
```
To open the **Admin** area:
```markdown
1. On the left sidebar, at the bottom, select **Admin**.
1. Select **Settings** > **CI/CD**.
```
You do not have to repeat `On the left sidebar` in your second step.
To open the **Your work** menu item:
```markdown
1. On the left sidebar, select **Search or go to**.
1. Select **Your work**.
```
To select your avatar:
```markdown
1. On the left sidebar, select your avatar.
```
To save the selection in some dropdown lists:
```markdown
1. Go to your issue.
1. On the right sidebar, in the **Iteration** section, select **Edit**.
1. From the dropdown list, select the iteration to associate this issue with.
1. Select any area outside the dropdown list.
```
To view all your projects:
```markdown
1. On the left sidebar, select **Search or go to**.
1. Select **View all my projects**.
```
To view all your groups:
```markdown
1. On the left sidebar, select **Search or go to**.
1. Select **View all my groups**.
```
### Optional steps
If a step is optional, start the step with the word `Optional` followed by a period.
For example:
```markdown
1. Optional. Enter a description for the job.
```
### Recommended steps
If a step is recommended, start the step with the word `Recommended` followed by a period.
For example:
```markdown
1. Recommended. Enter a description for the job.
```
### Documenting keyboard shortcuts and commands
Write UI instructions instead of keyboard commands when both options exist.
This guideline applies to GitLab and third-party applications, like VS Code.
Keyboard commands for GitLab are documented in [GitLab keyboard shortcuts](../../../user/shortcuts.md).
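For example, if an action can be completed either with a button or with a keyboard
shortcut, document the button (the step text is illustrative):

- Do:

  ```markdown
  1. Select **Commit changes**.
  ```

- Don't:

  ```markdown
  1. To commit the changes, press <kbd>Control</kbd>+<kbd>Enter</kbd>.
  ```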
### Documenting multiple fields at once
If the UI text sufficiently explains the fields in a section, do not include a task step for every field.
Instead, summarize multiple fields in a single task step.
Use the phrase **Complete the fields**.
For example:
1. On the left sidebar, select **Search or go to** and find your project.
1. Select **Settings** > **Repository**.
1. Expand **Push rules**.
1. Complete the fields.
If you are documenting multiple fields and only one field needs explanation, do it in the same step:
1. Expand **Push rules**.
1. Complete the fields. **Branch name** must be a regular expression.
To describe multiple fields, use unordered list items:
1. Expand **General pipelines**.
1. Complete the fields.
- **Branch name** must be a regular expression.
- **User** must be a user with at least the **Maintainer** role.
## Illustrations
GitLab documentation uses two illustration types:
- Screenshots, used to show a portion of the GitLab user interface.
- Diagrams, used to illustrate processes or relationships between entities.
Illustrations can help the reader understand a concept, where they are in a complicated process,
or how they should interact with the application. Use illustrations sparingly because:
- They become outdated.
- They are difficult and expensive to localize.
- They cannot be read by screen readers.
If you must use illustrations in documentation, they should:
- Supplement the text, not replace it.
The reader should not have to rely only on the illustration to get the needed information.
- Have an introductory sentence in the preceding text.
For example, `The following diagram illustrates the product analytics flow:`.
- Be accessible. For more information, see the guidelines specific to screenshots and diagrams.
- Exclude personally identifying information.
### Screenshots
Use screenshots to show a portion of the GitLab user interface, if some relevant information
can't be conveyed in text.
#### Capture the screenshot
When you take screenshots:
- Ensure the content in the screenshot adheres to the
[GitLab SAFE framework](https://handbook.gitlab.com/handbook/legal/safe-framework/). To check,
follow the
[SAFE flowchart](https://handbook.gitlab.com/handbook/legal/safe-framework/#safe-flowchart).
- **Ensure it provides value.** Don't use `lorem ipsum` text.
Try to replicate how the feature would be used in a real-world scenario, and
[use realistic text](#fake-user-information).
- **Capture only the relevant UI.** Don't include unnecessary white
space or areas of the UI that don't help illustrate the point. The
sidebars in GitLab can change, so don't include
them in screenshots unless absolutely necessary.
- **Keep it small.** If you don't have to show the full width of the screen, don't.
Reduce the size of your browser window as much as possible to keep elements close
together and reduce empty space. Try to keep the screenshot dimensions as small as possible.
- **Review how the image renders on the page.** Preview the image locally or use the
review app in the merge request. Make sure the image isn't blurry or overwhelming.
- **Be consistent.** Coordinate screenshots with the other screenshots already on
a documentation page for a consistent reading experience. Ensure your navigation theme
is set to the default preference **Indigo** and the syntax highlighting theme is also set to the default preference **Light**.
#### Add callouts
To emphasize an area in a screenshot, use an arrow.
- For color, use `#EE2604`. If you use the Preview application on macOS, this is the default red.
- For the line width, use 3 pt. If you use the Preview application on macOS, this is the third line in the list.
- Use the arrow style shown in the following image.
- If you have multiple arrows, make them parallel when possible.

#### Image requirements
- Resize any wide or tall screenshots.
- Width should be 1000 pixels or less.
- Height should be 500 pixels or less.
- Make sure the screenshot is still clear after being resized and compressed.
- All images **must** be [compressed](#compress-images) to 100 KB or less.
In many cases, 25-50 KB or less is possible without reducing image quality.
- Save the image with a lowercase filename that's descriptive of the feature
or concept in the image:
- If the image is of the GitLab interface, append the GitLab version to the filename,
based on this format: `image_name_vX_Y.png`. For example, for a screenshot taken
from the pipelines page of GitLab 11.1, a valid name is `pipelines_v11_1.png`.
- If you're adding an illustration that doesn't include parts of the user interface,
add the release number corresponding to the release the image was added to.
For an MR added to 11.1's milestone, a valid name for an illustration is `devops_diagram_v11_1.png`.
- Place images in a separate directory named `img/` in the same directory where
the `.md` document that you're working on is located.
- Do not link to externally-hosted images. Download a copy and store it in the appropriate `img` directory within the docs directory.
- Consider PNG images instead of JPEG.
- Compress GIFs with <https://ezgif.com/optimize> or a similar tool.
See also how to link and embed [videos](#videos) to illustrate the documentation.
#### Compress images
You should always compress any new images you add to the documentation. One
known tool is [`pngquant`](https://pngquant.org/), which is cross-platform and
open source. Install it by visiting the official website and following the
instructions for your OS.
If you use macOS and want all screenshots to be compressed automatically, read
[One simple trick to make your screenshots 80% smaller](https://about.gitlab.com/blog/2020/01/30/simple-trick-for-smaller-screenshots/).
GitLab has a [Ruby script](https://gitlab.com/gitlab-org/gitlab/-/blob/master/bin/pngquant)
to simplify the manual process. In the root directory of your local
copy of `https://gitlab.com/gitlab-org/gitlab`, run in a terminal:
- Before compressing, if you want, check that all documentation PNG images have
been compressed:
```shell
bin/pngquant lint
```
- Compress all documentation PNG images by using `pngquant`:
```shell
bin/pngquant compress
```
- Compress specific files:
```shell
bin/pngquant compress doc/user/img/award_emoji_select.png doc/user/img/markdown_logo.png
```
- Compress all PNG files in a specific directory:
```shell
bin/pngquant compress doc/user/img
```
#### Animated images
Avoid animated images (such as animated GIFs). They can be distracting
and annoying for users.
If you're describing a complicated interaction in the user interface and want to
include a visual representation to help readers understand it, you can:
- Use a static image (screenshot) and if necessary, add callouts to emphasize an area of the screen.
- Create a short video of the interaction and link to it.
#### Add the image link to content
The Markdown code for including an image in a document is:
`` ``
#### Alternative text
Alt text provides an accessible experience.
Screen readers use alt text to describe the image, and alt text displays
if an image fails to download.
Alt text should describe the context of the image, not the content. Add context that
relates to the topic of the page or section. Consider what you would say about the image
if you were helping someone read and interact with the page and they couldn't see it.
Do:
`` ``
Do not:
`` ``
When writing alt text:
- Write short, descriptive alt text in 155 characters or fewer.
Screen readers typically stop reading after this many characters.
- If the image has complex information like a workflow diagram, use short alt text
to identify the image and include detailed information in the text.
- Use a period at the end of the string, whether it's a sentence or not.
- Use sentence case and avoid all caps.
Some screen readers read capitals as individual letters.
- Do not use phrases like **Image of** or **Graphic of**.
- Do not use a string of keywords.
Include keywords in the text to enhance context.
- Introduce the image in the topic, not the alt text.
- Try to avoid repeating text you've already used in the topic.
- Do not use inline styling like bold, italics, or backticks.
Screen readers read `**text**` as `star star text star star`.
- Use an empty alt text tag (`alt=""`) instead of omitting the tag altogether when the image does not add any unique information to the page. For example, when the image is decorative or is already fully described in the body text or caption. An empty alt tag tells assistive technologies that you have omitted the text intentionally, while a missing alt tag is ambiguous.
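For example, this image uses intentionally empty alt text because the surrounding text
already describes it (the filename is illustrative):

```markdown

```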
#### Automatic screenshot generator
You can use an automatic screenshot generator to take and compress screenshots.
1. Set up the [GitLab Development Kit (GDK)](https://gitlab.com/gitlab-org/gitlab-development-kit/blob/main/doc/howto/gitlab_docs.md).
1. Go to the subdirectory with your cloned GitLab repository, typically `gdk/gitlab`.
1. Make sure that your GDK database is fully migrated: `bin/rake db:migrate RAILS_ENV=development`.
1. Install [`pngquant`](https://pngquant.org/). For more information, see the tool's website.
1. Run `scripts/docs_screenshots.rb spec/docs_screenshots/<name_of_screenshot_generator>.rb <milestone-version>`.
1. Identify the location of the screenshots, based on the `gitlab/doc` location defined by the `it` parameter in your script.
1. Commit the newly created screenshots.
##### Extending the tool
To add an additional screenshot generator:
1. In the `spec/docs_screenshots` directory, add a new file with a `_docs.rb` extension.
1. Add the following information to your file:
```ruby
require 'spec_helper'
RSpec.describe '<What I am taking screenshots of>', :js do
include DocsScreenshotHelpers # Helper that enables the screenshots taking mechanism
before do
page.driver.browser.manage.window.resize_to(1366, 1024) # width and height of the page
end
```
1. To each `it` block, add the path where the screenshot is saved:
```ruby
it '<path/to/images/directory>'
```
You can take a screenshot of a page with `visit <path>`.
To avoid blank screenshots, use `expect` to wait for the content to load.
###### Single-element screenshots
You can take a screenshot of a single element.
- Add the following to your screenshot generator file:
```ruby
screenshot_area = find('<element>') # Find the element
scroll_to screenshot_area # Scroll to the element
expect(screenshot_area).to have_content '<content>' # Wait for the content you want to capture
set_crop_data(screenshot_area, <padding>) # Capture the element with added padding
```
Use `spec/docs_screenshots/container_registry_docs.rb` as a guide to create your own scripts.
### Diagrams
Use a diagram to illustrate a process or the relationship between entities, if the information is too
complex to be understood from text only.
To create a diagram, use either [Mermaid](https://mermaid.js.org/#/) (recommended) or [Draw.io](https://draw.io).
Mermaid is the recommended diagramming tool, but it is not suitable for all situations. For example,
complex diagram requirements might result in a layout that is difficult to understand.
GUI diagramming tools can help authors overcome Mermaid's complexity and layout issues. Draw.io is
the preferred GUI tool because, when you use the editor, both the diagram and its definition are
stored in the SVG file, so it can be edited. Draw.io is also integrated with the GitLab wiki.
| Feature | Mermaid | Draw.io |
|-------------------------------------------|-------------------------------------------------------------------------|---------|
| **Editor required** | Text editor | Draw.io editor |
| **WYSIWYG editing** | {{< icon name="dash-circle" >}} No | {{< icon name="check-circle-filled" >}} Yes |
| **Text content findable by `grep`** | {{< icon name="check-circle-filled" >}} Yes | {{< icon name="dash-circle" >}} No |
| **Appearance controlled by** | Website's CSS | Diagram's author |
| **File format** | SVG | SVG |
| **VS Code integration (with extensions)** | {{< icon name="check-circle-filled" >}} Yes (Preview and local editing) | {{< icon name="check-circle-filled" >}} Yes (Preview and local editing) |
| **Generated dynamically** | {{< icon name="check-circle-filled" >}} Yes | {{< icon name="dash-circle" >}} No |
#### Guidelines
To create accessible and maintainable diagrams, follow these guidelines:
- Keep diagrams simple and focused. Include only essential elements and information.
- Use different but consistent visual cues (such as shape, color, and font) to distinguish between categories:
- Rectangles for processes or steps.
- Diamonds for decision points.
- Solid lines for direct relationships between elements.
- Dotted lines for indirect relationships between elements.
- Arrows for flow or direction in a process.
- GitLab Sans font.
- Add clear labels and brief descriptions to diagram elements.
- Include a title and brief description for the diagram.
- For complex processes, consider creating multiple simple diagrams instead of one large diagram.
- Validate diagrams work well when viewed on different devices and screen sizes.
- Do not include links. Links embedded in diagrams with [`click` actions](https://mermaid.js.org/syntax/classDiagram.html#interaction) are not testable with our link checking tools.
- Update diagrams along with documentation or code when processes change to maintain accuracy.
#### Create a diagram with Mermaid
To learn how to create diagrams with the [Mermaid syntax](https://mermaid.js.org/intro/syntax-reference.html),
see the [Mermaid user guide](https://mermaid.js.org/intro/getting-started.html)
and the examples on the Mermaid site.
To create a diagram for GitLab documentation with Mermaid:
1. In the [Mermaid Live Editor](https://mermaid.live/), create the diagram.
1. Copy the content of the **Code** pane and paste it in the Markdown file, wrapped in a `mermaid` code block. For more
details, see [GitLab Flavored Markdown for Mermaid](../../../user/markdown.md#mermaid).
1. To add GitLab font styling to your diagram, between the Mermaid code block declaration
and the type of diagram, add the following line:
```plaintext
%%{init: { "fontFamily": "GitLab Sans" }}%%
```
1. On the next line after declaring the type of diagram
(like `flowchart` or `sequenceDiagram`), add the following lines for accessibility:
```yaml
accTitle: your diagram title here
accDescr: describe what your diagram does in a single sentence, with no line breaks.
```
Make sure the title and description follow the [alternative text guidelines](#alternative-text).
For example, this flowchart contains both accessibility and font information:
````markdown
```mermaid
%%{init: { "fontFamily": "GitLab Sans" }}%%
flowchart TD
accTitle: Example diagram title
accDescr: A description of your diagram
A[Start here] -->|action| B[next step]
```
````
#### Create a diagram with Draw.io
Use either the [Draw.io](https://draw.io) web application or the (unofficial)
VS Code [Draw.io Integration](https://marketplace.visualstudio.com/items?itemName=hediet.vscode-drawio)
extension to create the diagram. Each tool provides the same diagram editing experience, but the web
application provides editable example diagrams.
##### Use the web application
To create a diagram by using the Draw.io web application:
1. In the [Draw.io](https://draw.io) web application, create the diagram.
Follow the [style guidelines](#style-guidelines).
1. Save the diagram:
1. In the Draw.io web application, select **File** > **Export as** > **SVG**.
1. Select the **Include a copy of my diagram: All pages** checkbox, then select **Export**. Use
the file extension `drawio.svg` to indicate it can be edited in Draw.io.
1. [Add the SVG to the docs as an image](#add-the-image-link-to-content).
These SVGs use the same Markdown as other non-SVG images.
##### Use the VS Code extension
To create a diagram by using the Draw.io Integration extension for VS Code:
1. In the directory that will contain the diagram, create an empty file with the suffix
`drawio.svg`.
1. Open the file in VS Code then create the diagram.
Follow the [style guidelines](#style-guidelines).
1. Save the file.
The diagram's definition is stored in Draw.io-compatible format in the SVG file.
1. [Add the SVG to the docs as an image](#add-the-image-link-to-content).
These SVGs use the same Markdown as other non-SVG images.
##### Style guidelines
When you create a diagram in Draw.io, it should be visually consistent with a diagram you would create with Mermaid.
The following rules are an addition to the general [style guidelines](#guidelines).
Fonts:
- Use the Inter font for all text. This font is not included in the default fonts.
To add Inter font as a custom font:
1. From the font dropdown list, select **Custom**.
1. Select **Google fonts** and in the **Font name** text box, enter `Inter`.
Shapes:
- For elements, use the rectangle shape.
- For flowcharts, use shapes from the **Flowchart** shape collection.
- Shapes that represent the same element should have the same shape and size.
- For elements that have text, ensure adequate white space exists between the text and the
shape's outline. If required, increase the size of the shape and **all** similar shapes in the diagram.
Colors:
- Use colors in the [GitLab Design System color range](https://design.gitlab.com/brand-design/color/) only.
- For all elements, shapes, arrows, and text, follow the
[Pajamas guidelines for illustration](https://design.gitlab.com/product-foundations/illustration/).
## Emoji
Don't use the Markdown emoji format, for example `:smile:`, for any purpose. Use
[GitLab SVG icons](#gitlab-svg-icons) instead.
## GitLab SVG icons
You can use icons from the [GitLab SVG library](https://gitlab-org.gitlab.io/gitlab-svgs/)
directly in the documentation. For example, `{{</* icon name="tanuki" */>}}` renders as: {{< icon name="tanuki" >}}.
In most cases, avoid icons in text.
However, use the icon when hover text is the only
available way to describe a UI element. For example, **Delete** or **Edit** buttons
often have hover text only.
When you do use an icon, start with the hover text and follow it with the SVG reference in parentheses.
- Avoid: `Select {{</* icon name="pencil" */>}} **Edit**.` This generates as: Select {{< icon name="pencil" >}} **Edit**.
- Use instead: `Select **Edit** ({{</* icon name="pencil" */>}}).` This generates as: Select **Edit** ({{< icon name="pencil" >}}).
Do not use words to describe the icon:
- Avoid: `Select **Erase job log** (the trash icon).`
- Use instead: `Select **Erase job log** ({{</* icon name="remove" */>}}).` This generates as: Select **Erase job log** ({{< icon name="remove" >}}).
When the button doesn't have any hover text, describe the icon.
Follow up by creating a
[UX bug issue](https://gitlab.com/gitlab-org/gitlab/-/issues/new?issuable_template=Bug)
to add hover text to the button to improve accessibility.
- Avoid: `Select {{</* icon name="ellipsis_v" */>}}.`
- Use instead: `Select the vertical ellipsis ({{</* icon name="ellipsis_v" */>}}).` This generates as: Select the vertical ellipsis ({{< icon name="ellipsis_v" >}}).
## Videos
Adding GitLab YouTube video tutorials to the documentation is highly
encouraged, unless the video is outdated. Videos should not replace
documentation, but complement or illustrate it. If content in a video is
fundamental to a feature and its key use cases, but isn't adequately
covered in the documentation, you should:
- Add this detail to the documentation text.
- Create an issue to review the video and update the page.
Do not upload videos to the product repositories. [Add a link](#link-to-video) or
[embed](#embed-videos) them instead.
### Link to video
To link to a video, include a YouTube icon so that readers can scan the page
for videos before reading. Include the video's publication date after the link, to help identify
videos that might be out-of-date.
```markdown
<i class="fa-youtube-play" aria-hidden="true"></i>
For an overview, see [Video Title](https://link-to-video).
<!-- Video published on YYYY-MM-DD -->
```
You can link any up-to-date video that's useful to the GitLab user.
### Embed videos
The [GitLab documentation site](https://docs.gitlab.com) supports embedded
videos.
You can embed videos from [the official YouTube account for GitLab](https://www.youtube.com/channel/UCnMGQ8QHMAnVIsI3xJrihhg) only.
For videos from other sources, [link them](#link-to-video) instead.
In most cases, [link to a video](#link-to-video), because
embedded videos take up a lot of space on the page and can be distracting to readers.
To embed a video:
1. Copy the code from this procedure and paste it into your Markdown file. Leave a
blank line above and below it. Do not edit the code (don't remove or add any spaces).
1. In YouTube, visit the video URL you want to display. Copy the regular URL
from your browser (`https://www.youtube.com/watch?v=VIDEO-ID`) and replace
the video title and link in the line under `<div class="video-fallback">`.
1. In YouTube, select **Share**, and then select **Embed**.
1. Copy the `<iframe>` source (`src`) **URL only**
(`https://www.youtube-nocookie.com/embed/VIDEO-ID`),
and paste it, replacing the content of the `src` field in the
`iframe` tag.
1. Include the video's publication date below the link, to help identify
videos that might be out-of-date.
```html
leave a blank line here
<div class="video-fallback">
See the video: <a href="https://www.youtube.com/watch?v=MqL6BMOySIQ">Video title</a>.
</div>
<figure class="video-container">
<iframe src="https://www.youtube-nocookie.com/embed/MqL6BMOySIQ" frameborder="0" allowfullscreen> </iframe>
</figure>
<!-- Video published on YYYY-MM-DD -->
leave a blank line here
```
This is how it renders on the GitLab documentation site:
<div class="video-fallback">
See the video: <a href="https://www.youtube.com/watch?v=enMumwvLAug">What is GitLab</a>.
</div>
<figure class="video-container">
<iframe src="https://www.youtube-nocookie.com/embed/enMumwvLAug" frameborder="0" allowfullscreen> </iframe>
</figure>
With this formatting:
- The `figure` tag is required for semantic SEO and the `video-container`
class is necessary to make sure the video is responsive and displays on
different mobile devices.
- The `<div class="video-fallback">` is a fallback necessary for
`/help`, because the GitLab Markdown processor doesn't support iframes. It's
hidden on the documentation site, but is displayed by `/help`.
- The `www.youtube-nocookie.com` domain enables the [Privacy Enhanced Mode](https://support.google.com/youtube/answer/171780?hl=en#zippy=%2Cturn-on-privacy-enhanced-mode)
of the YouTube embedded player. This mode allows users with restricted cookie preferences to view embedded videos.
## Link to click-through demos
Linking to click-through demos should follow similar guidelines to [videos](#videos).
```markdown
For a click-through demo, see [Demo Title](https://link-to-demo).
<!-- Demo published on YYYY-MM-DD -->
```
## Alert boxes
Use alert boxes to call attention to information. Use them sparingly, and never have an alert box immediately follow another alert box.
Alert boxes are generated by using a Hugo shortcode:
```plaintext
{{</* alert type="note" */>}}
This is something to note.
{{</* /alert */>}}
```
The valid alert types are `flag`, `note`, `warning`, and `disclaimer`.
Alert boxes render only on the GitLab documentation site (<https://docs.gitlab.com>).
In the GitLab product help, alert boxes appear as plain text.
### Flag
Use this alert type to describe a feature's availability. For information about how to format
`flag` alerts, see [Document features deployed behind feature flags](../feature_flags.md).
### Note
Use notes sparingly. Too many notes can make topics difficult to scan.
Instead of adding a note:
- Re-write the sentence as part of a paragraph.
- Put the information into its own paragraph.
- Put the content under a new topic title.
If you must use a note, use this format:
```markdown
{{</* alert type="note" */>}}
This is something to note.
{{</* /alert */>}}
```
It renders on the GitLab documentation site as:
{{< alert type="note" >}}
This is something to note.
{{< /alert >}}
### Warning
Use a warning to indicate deprecated features, or to provide a warning about
procedures that have the potential for data loss.
```markdown
{{</* alert type="warning" */>}}
This is something to be warned about.
{{</* /alert */>}}
```
It renders on the GitLab documentation site as:
{{< alert type="warning" >}}
This is something to be warned about.
{{< /alert >}}
### Disclaimer
If you **must** write about features we have not yet delivered, add a disclaimer about forward-looking statements near the content it applies to.
Disclaimer alerts are populated by using a [template](https://gitlab.com/gitlab-org/technical-writing/docs-gitlab-com/-/blob/main/themes/gitlab-docs/layouts/shortcodes/alert.html) and should not include
any other text.
Add a disclaimer like this:
```plaintext
{{</* alert type="disclaimer" /*/>}}
```
It renders on the GitLab documentation site as:
{{< alert type="disclaimer" />}}
If all of the content on the page is not available, use the disclaimer about forward-looking statements once at the top of the page.
If the content in a topic is not ready, use the disclaimer in the topic.
For more information, see [Promising features in future versions](#promising-features-in-future-versions).
## Blockquotes
Avoid using [blockquotes](../../../user/markdown.md#blockquotes) in the product documentation.
They can make text difficult to scan. Instead of a blockquote, consider using:
- A [code block](#code-blocks).
- An [alert box](#alert-boxes).
- No special styling at all.
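For example, instead of styling a requirement as a blockquote, state it as plain text
(the sentence is illustrative):

- Do:

  ```markdown
  You must have the Maintainer role for the project.
  ```

- Don't:

  ```markdown
  > You must have the Maintainer role for the project.
  ```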
The [GitLab Flavored Markdown (GLFM)](../../../user/markdown.md) page is a rare case that
uses blockquotes to differentiate between plain text and rendered examples. However, in most cases,
you should avoid them.
## Tabs
On the documentation site, you can format text to display as tabs.
{{< alert type="warning" >}}
Do not put version history bullets, topic headings, HTML, or tabs in tabs. Only use paragraphs, lists, alert boxes, and code blocks. Other styles might not render properly. When in doubt, keep things simple.
{{< /alert >}}
To create a set of tabs, follow this example:
```plaintext
{{</* tabs */>}}
{{</* tab title="Tab one" */>}}
Here's some content in tab one.
{{</* /tab */>}}
{{</* tab title="Tab two" */>}}
Here's some other content in tab two.
{{</* /tab */>}}
{{</* /tabs */>}}
```
This code renders on the GitLab documentation site as:
{{< tabs >}}
{{< tab title="Tab one" >}}
Here's some content in tab one.
{{< /tab >}}
{{< tab title="Tab two" >}}
Here's some other content in tab two.
{{< /tab >}}
{{< /tabs >}}
For tab titles, be brief and consistent. Ensure they are parallel, and start each with a capital letter.
For example:
- `Linux package (Omnibus)`, `Helm chart (Kubernetes)` (when documenting configuration edits, follow the
[configuration edits guide](#how-to-document-different-installation-methods))
- `15.1 and earlier`, `15.2 and later`
Until we implement automated testing for broken links to tabs, do not link directly to a single tab.
For more information, see [issue 225](https://gitlab.com/gitlab-org/technical-writing/docs-gitlab-com/-/issues/225).
See [Pajamas](https://design.gitlab.com/components/tabs/#guidelines) for more
details on tabs.
## Cards
Use cards to create landing pages with links to sub-pages.
To create a set of cards, follow this example:
```markdown
{{</* cards */>}}
- [The first page](first_page.md)
- [Another page](another/page.md)
- [One more page](one_more.md)
{{</* /cards */>}}
```
Cards render only on the GitLab documentation site (`https://docs.gitlab.com`).
In the GitLab product help, a set of cards appears as an unordered list of links.
Card descriptions are populated from the `description` metadata on the Markdown page headers.
Use cards on top-level pages where the cards are the only content on the page.
## Maintained versions
Use the maintained versions shortcode to create an unordered list of the currently
maintained GitLab versions as specified by the
[maintenance policy](../../../policy/maintenance.md):
```markdown
{{</* maintained-versions */>}}
```
Maintained versions render only on the pre-release version of the GitLab
documentation site (`https://docs.gitlab.com`). In all other cases and in
`/help`, a link to the documentation site is shown instead.
## Plagiarism
Do not copy and paste content from other sources unless it is a limited
quotation with the source cited. Typically it is better to rephrase
relevant information in your own words or link out to the other source.
## Promising features in future versions
Do not promise to deliver features in a future release. For example, avoid phrases like,
"Support for this feature is planned."
We cannot guarantee future feature work, and promises
like these can raise legal issues. Instead, say that an issue exists.
For example:
- Support for improvements is proposed in `[issue <issue_number>](https://link-to-issue)`.
- You cannot do this thing, but `[issue 12345](https://link-to-issue)` proposes to change this behavior.
You can say that we plan to remove a feature.
If you must document a future feature, use the [disclaimer](#disclaimer).
## Products and features
Refer to the information in this section when describing products and features
in the GitLab product documentation.
### Avoid line breaks in names
If a feature or product name contains spaces, don't split the name with a line break.
When names change, it is more complicated to search or grep text that has line breaks.
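For example, keep a name like **GitLab Duo Code Suggestions** on a single line
(the sentence is illustrative):

- Do:

  ```markdown
  GitLab Duo Code Suggestions helps you write code faster.
  ```

- Don't:

  ```markdown
  GitLab Duo Code
  Suggestions helps you write code faster.
  ```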
### Product availability details
Product availability details provide information about a feature and are displayed under the topic title.
Read more about [product availability details](availability_details.md).
## Specific sections
Certain styles should be applied to specific sections. Styles for specific
sections are outlined in this section.
### Help and feedback section
This section is displayed at the end of each document and can be omitted
by adding a key into the front matter:
```yaml
---
feedback: false
---
```
The default is to leave it there. If you want to omit it from a document, you
must check with a technical writer before doing so.
### GitLab restart
When a restart or reconfigure of GitLab is required, avoid duplication by linking
to [`doc/administration/restart_gitlab.md`](../../../administration/restart_gitlab.md)
with text like this, replacing 'reconfigure' with 'restart' as needed:
```markdown
Save the file and [reconfigure GitLab](../../../administration/restart_gitlab.md)
for the changes to take effect.
```
If the document resides outside of the `doc/` directory, use the full path
instead of the relative link:
`https://docs.gitlab.com/administration/restart_gitlab`.
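For example, the same step with the full path might look like this:

```markdown
Save the file and [reconfigure GitLab](https://docs.gitlab.com/administration/restart_gitlab)
for the changes to take effect.
```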
### How to document different installation methods
GitLab supports five official installation methods. When you refer to them
in sentences and titles, use the following phrases:
- Linux package
- Helm chart
- GitLab Operator
- Docker
- Self-compiled
It's OK to add the explanatory parentheses when
[you use tabs](#use-tabs-to-describe-a-gitlab-self-managed-configuration-procedure):
- Linux package (Omnibus)
- Helm chart (Kubernetes)
- GitLab Operator (Kubernetes)
- Docker
- Self-compiled (source)
### Use tabs to describe a GitLab Self-Managed configuration procedure
Configuration procedures can require users to edit configuration files, reconfigure
GitLab, or restart GitLab. In this case:
- Use [tabs](#tabs) to differentiate among the various installation methods.
- Use the installation method names exactly as described in the previous list.
- Use them in the order shown in the following example.
- Indent the code blocks to line up with the list item they belong to.
- Use the appropriate syntax highlighting for each code block (`ruby`, `shell`, or `yaml`).
- For the YAML files, always include the parent settings.
- The final step to reconfigure or restart GitLab can be used verbatim because it's
the same every time.
When describing a configuration edit, use this snippet, editing it as needed:
````markdown
{{</* tabs */>}}
{{</* tab title="Linux package (Omnibus)" */>}}
1. Edit `/etc/gitlab/gitlab.rb`:
```ruby
external_url "https://gitlab.example.com"
```
1. Save the file and reconfigure GitLab:
```shell
sudo gitlab-ctl reconfigure
```
{{</* /tab */>}}
{{</* tab title="Helm chart (Kubernetes)" */>}}
1. Export the Helm values:
```shell
helm get values gitlab > gitlab_values.yaml
```
1. Edit `gitlab_values.yaml`:
```yaml
global:
hosts:
gitlab:
name: gitlab.example.com
```
1. Save the file and apply the new values:
```shell
helm upgrade -f gitlab_values.yaml gitlab gitlab/gitlab
```
{{</* /tab */>}}
{{</* tab title="Docker" */>}}
1. Edit `docker-compose.yml`:
```yaml
version: "3.6"
services:
gitlab:
environment:
GITLAB_OMNIBUS_CONFIG: |
external_url "https://gitlab.example.com"
```
1. Save the file and restart GitLab:
```shell
docker compose up -d
```
{{</* /tab */>}}
{{</* tab title="Self-compiled (source)" */>}}
1. Edit `/home/git/gitlab/config/gitlab.yml`:
```yaml
production: &base
gitlab:
host: "gitlab.example.com"
```
1. Save the file and restart GitLab:
```shell
# For systems running systemd
sudo systemctl restart gitlab.target
# For systems running SysV init
sudo service gitlab restart
```
{{</* /tab */>}}
{{</* /tabs */>}}
````
It renders as:
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
1. Edit `/etc/gitlab/gitlab.rb`:
```ruby
external_url "https://gitlab.example.com"
```
1. Save the file and reconfigure GitLab:
```shell
sudo gitlab-ctl reconfigure
```
{{< /tab >}}
{{< tab title="Helm chart (Kubernetes)" >}}
1. Export the Helm values:
```shell
helm get values gitlab > gitlab_values.yaml
```
1. Edit `gitlab_values.yaml`:
```yaml
global:
hosts:
gitlab:
name: gitlab.example.com
```
1. Save the file and apply the new values:
```shell
helm upgrade -f gitlab_values.yaml gitlab gitlab/gitlab
```
{{< /tab >}}
{{< tab title="Docker" >}}
1. Edit `docker-compose.yml`:
```yaml
version: "3.6"
services:
gitlab:
environment:
GITLAB_OMNIBUS_CONFIG: |
external_url "https://gitlab.example.com"
```
1. Save the file and restart GitLab:
```shell
docker compose up -d
```
{{< /tab >}}
{{< tab title="Self-compiled (source)" >}}
1. Edit `/home/git/gitlab/config/gitlab.yml`:
```yaml
production: &base
gitlab:
host: "gitlab.example.com"
```
1. Save the file and restart GitLab:
```shell
# For systems running systemd
sudo systemctl restart gitlab.target
# For systems running SysV init
sudo service gitlab restart
```
{{< /tab >}}
{{< /tabs >}}
# Recommended word list
To help ensure consistency in the documentation, the Technical Writing team
recommends these word choices. In addition:
- The GitLab handbook contains a list of
[top misused terms](https://handbook.gitlab.com/handbook/communication/top-misused-terms/).
- The documentation [style guide](_index.md#language) includes details
about language and capitalization.
- The GitLab handbook provides guidance on the [use of third-party trademarks](https://handbook.gitlab.com/handbook/legal/policies/product-third-party-trademarks-guidelines/#process-for-adding-third-party-trademarks-to-gitlab).
For guidance not on this page, we defer to these style guides:
- [Microsoft Style Guide](https://learn.microsoft.com/en-us/style-guide/welcome/)
- [Google Developer Documentation Style Guide](https://developers.google.com/style)
<!-- vale off -->
<!-- Disable trailing punctuation in heading rule https://github.com/DavidAnson/markdownlint/blob/main/doc/Rules.md#md026---trailing-punctuation-in-heading -->
<!-- markdownlint-disable MD026 -->
## `.gitlab-ci.yml` file
Use backticks and lowercase for **the `.gitlab-ci.yml` file**.
When possible, use the full phrase: **the `.gitlab-ci.yml` file**
Although users can specify another name for their CI/CD configuration file,
in most cases, use **the `.gitlab-ci.yml` file** instead.
## `&` (ampersand)
Do not use Latin abbreviations. Use **and** instead, unless you are documenting a UI element that uses an `&`.
## `@mention`
Try to avoid **`@mention`**. Say **mention** instead, and consider linking to the
[mentions topic](../../../user/discussions/_index.md#mentions).
Don't use backticks.
## 2FA, two-factor authentication
Spell out **two-factor authentication** in sentence case for the first use and in topic titles, and **2FA**
thereafter. If the term starts a sentence, capitalize only the first word; do not capitalize `factor` or `authentication`. For example:
- Two-factor authentication (2FA) helps secure your account. Set up 2FA when you first sign in.
## ability, able
Try to avoid using **ability** or **able** because they can be ambiguous.
The usage of these words is similar to [allow and enable](#allow-enable).
Instead of talking about the abilities of the user, or
the capabilities of the product, be direct and specific.
You can, however, use these terms when you're talking about security, or
preventing someone from being able to complete a task in the UI.
Do not confuse **ability** or **able** with [permissions](#permissions) or [roles](#roles).
Use:
- You cannot change this setting.
- To change this setting, you must have the Maintainer role.
- Confirm you can sign in.
- The external load balancer cannot connect.
- Option to delete branches introduced in GitLab 17.1.
Instead of:
- You are not able to change this setting.
- You must have the ability to change this setting.
- Verify you are able to sign in.
- The external load balancer will not be able to connect.
- Ability to delete branches introduced in GitLab 17.1.
## above
Try to avoid using **above** when referring to an example or table in a documentation page. If required, use **previous** instead. For example:
- In the previous example, the dog had fleas.
Do not use **above** when referring to versions of the product. Use [**later**](#later) instead.
Use:
- In GitLab 14.4 and later...
Instead of:
- In GitLab 14.4 and above...
- In GitLab 14.4 and higher...
- In GitLab 14.4 and newer...
## access level
Access levels are different from [roles](#roles) or [permissions](#permissions).
When you create a user, you choose an access level: **Regular**, **Auditor**, or **Administrator**.
Capitalize these words when you refer to the UI. Otherwise, use lowercase.
## add
Use **add** when an object already exists. If the object does not exist yet, use [**create**](#create) instead.
**Add** is the opposite of [remove](#remove).
For example:
- Add a user to the list.
- Add an issue to the epic.
Do not confuse **add** with [**create**](#create).
Do not use **Add new**.
## Admin area
Use:
- **Admin** area, to describe this area of the UI.
- **Admin** for the UI button.
Instead of:
- **Admin area** (with both words as bold)
- **Admin Area** (with **Area** capitalized)
- **Admin** Area (with Area capitalized)
- **administrator area**
- or other variants
## Admin Mode
Use title case for **Admin Mode**. The UI uses title case.
## administrator
Use **administrator access** instead of **admin** when talking about a user's access level.

An **administrator** is not a [role](#roles) or [permission](#permissions).
Use:
- To do this thing, you must be an administrator.
- To do this thing, you must have administrator access.
Instead of:
- To do this thing, you must have the Admin role.
## advanced search
Use lowercase for **advanced search** to refer to the faster, more efficient search across the entire GitLab instance.
## agent for Kubernetes
Use lowercase to refer to the [GitLab agent for Kubernetes](https://gitlab.com/gitlab-org/cluster-integration/gitlab-agent).
For example:
- To connect your cluster to GitLab, use the GitLab agent for Kubernetes.
- Install the agent in your cluster.
- Select an agent from the list.
Do not use title case for **GitLab Agent** or **GitLab Agent for Kubernetes**.
When referring to the specific component in technical contexts, use `agentk` in backticks.
## agent for workspace
Use lowercase **agent for workspace** when referring to the component that runs
in a workspace and is used to access the workspace. Do not use title case for **Workspace**. For example:
- The agent for workspace handles GitLab integration tasks in the workspace.
- Configure the agent for workspace to connect your development environment.
When referring to the specific component in technical contexts, use `agentw` in backticks.
Do not confuse with [agent for Kubernetes](#agent-for-kubernetes).
## agent access token
The token generated when you create an agent for Kubernetes. Use **agent access token**, not:
- registration token
- secret token
- authentication token
## Agentic Chat, GitLab Duo Agentic Chat
GitLab Duo Agentic Chat is an experimental, enhanced version of [GitLab Duo Chat](#chat-gitlab-duo-chat).
Use **Agentic Chat** with a capital `a` and `c` for **Agentic Chat** or **GitLab Duo Agentic Chat**.
On first use on a page, use **GitLab Duo Agentic Chat**.
Thereafter, use **Agentic Chat** by itself.
Do not use **Duo Agentic Chat**.
## agnostic
Instead of **agnostic**, use **platform-independent** or **vendor-neutral**.
([Vale](../testing/vale.md) rule: [`SubstitutionWarning.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/.vale/gitlab_base/SubstitutionWarning.yml))
## AI, artificial intelligence
Use **AI**. Do not spell out **artificial intelligence**.
## AI agent
When writing about AI, the **agent** is an entity that performs actions for the user.
You can use **AI agent** if **agent** on its own is not clear.
When you're interacting with an AI agent, a [**session**](#session) is running.
The user can stop a session.
One or more AI agents can be part of a [**flow**](#flows), where they are orchestrated to work together on a problem.
## AI gateway
Use lowercase for **AI gateway** and do not hyphenate.
## AI Impact Dashboard
Use title case for **AI Impact Dashboard**.
On first mention on a page, use **GitLab Duo AI Impact Dashboard**.
Thereafter, use **AI Impact Dashboard** by itself.
## AI-powered, AI-native
Use **AI-native** instead of **AI-powered**. For example, **Code Suggestions is an AI-native feature**.
## air gap, air-gapped
Use **offline environment** to describe installations that have physical barriers or security policies that prevent or limit internet access. Do not use **air gap**, **air gapped**, or **air-gapped**. For example:
- The firewall policies in an offline environment prevent the computer from accessing the internet.
## allow, enable
Try to avoid **allow** and **enable**, unless you are talking about security-related features or the
state of a feature flag.
Use:
- You can add a file to your repository.
Instead of:
- This feature allows you to add a file to your repository.
- This feature enables users to add files to their repository.
This phrasing is more active and is written from the user's perspective, rather than that of the person who implemented the feature.
For more information, see the [Microsoft Style Guide](https://learn.microsoft.com/en-us/style-guide/a-z-word-list-term-collections/a/allow-allows).
## analytics
Use lowercase for **analytics** and its variations, like **contribution analytics** and **issue analytics**.
However, if the UI has different capitalization, make the documentation match the UI.
For example:
- You can view merge request analytics for a project. They are displayed on the Merge Request Analytics dashboard.
## ancestor
To refer to a [parent item](#parent) that's one or more level above in the hierarchy,
use **ancestor**.
Do not use **grandparent**.
Examples:
- An ancestor group, a group in the project's hierarchy.
- An ancestor epic, an epic in the issue's hierarchy.
- A group and all its ancestors.
See also: [child](#child), [descendant](#descendant), and [subgroup](#subgroup).
## and/or
Instead of **and/or**, use **or** or rewrite the sentence to spell out both options.
## and so on
Do not use **and so on**. Instead, be more specific. For more information, see the
[Microsoft Style Guide](https://learn.microsoft.com/en-us/style-guide/a-z-word-list-term-collections/a/and-so-on).
## area
Use [**section**](#section) instead of **area**. The only exception is [the **Admin** area](#admin-area).
## as
Do not use **as** to mean **because**.
Use:
- Because none of the endpoints return an ID...
Instead of:
- As none of the endpoints return an ID...
## as well as
Instead of **as well as**, use **and**.
## associate
Do not use **associate** when describing adding issues to epics, or users to issues, merge requests,
or epics.
Instead, use **assign**. For example:
- Assign the issue to an epic.
- Assign a user to the issue.
## authenticated user
Use **authenticated user** instead of other variations, like **signed in user** or **logged in user**.
## authenticate
Try to use the most suitable preposition when you use **authenticate** as a verb.
Use **authenticate with** when referring to a system or provider that
performs the authentication, like a token or a service like OAuth.
For example:
- Authenticate with a deploy token.
- Authenticate with your credentials.
- Authenticate with OAuth.
- The runner uses an authentication token to authenticate with GitLab.
Use **authenticate against** when referring to a resource that contains
credentials that are checked for validation.
For example:
- The client authenticates against the LDAP directory.
- The script authenticates against the local user database.
## before you begin
Use **before you begin** when documenting the tasks that must be completed or the conditions that must be met before a user can complete a tutorial. Do not use **requirements** or **prerequisites**.
For more information, see [the tutorial page type](../topic_types/tutorial.md).
For task topic types, use [**prerequisites**](#prerequisites) instead.
## below
Try to avoid **below** when referring to an example or table in a documentation page. If required, use **following** instead. For example:
- In the following example, the dog has fleas.
## beta
Use lowercase for **beta**. For example:
- The feature is in beta.
- This is a beta feature.
- This beta release is ready to test.
You might also want to link to [this topic](../../../policy/development_stages_support.md#beta)
when writing about beta features.
## blacklist
Do not use **blacklist**. Another option is **denylist**. ([Vale](../testing/vale.md) rule: [`InclusiveLanguage.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/.vale/gitlab_base/InclusiveLanguage.yml))
## board
Use lowercase for **boards**, **issue boards**, and **epic boards**.
## box
Use **text box** to refer to the UI field. Do not use **field** or **box**. For example:
- In the **Variable name** text box, enter a value.
## branch
Use **branch** by itself to describe a branch. For specific branches, use these terms only:
- **default branch**: The primary branch in the repository. Users can use the UI to set the default
branch. For examples that use the default branch, use `main` instead of [`master`](#master).
- **source branch**: The branch you're merging from.
- **target branch**: The branch you're merging to.
- **current branch**: The branch you have checked out.
This branch might be the default branch, a branch you've created, a source branch, or some other branch.
Do not use the terms **feature branch** or **merge request branch**. Be as specific as possible. For example:
- The branch you have checked out...
- The branch you added commits to...
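For example, a docs sentence that uses these terms might read like the following sketch (the branch name `main` is only the sample default branch name):
```markdown
Merge the source branch into the target branch. In this project, the default branch is `main`.
```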
## bullet
Don't refer to individual items in an ordered or unordered list as **bullets**. Use **list item** instead. To be less ambiguous, you can use:
- **Ordered list item** for items in an ordered list.
- **Unordered list item** for items in an unordered list.
## button
Don't use a descriptor with **button**.
Use:
- Select **Run pipelines**.
Instead of:
- Select the **Run pipelines** button.
## cannot, can not
Use **cannot** instead of **can not**.
See also [contractions](_index.md#contractions).
## card
Although the UI term might be **card**, do not use it in the documentation.
Avoid the descriptor if you can.
Use:
- By **Seat utilization**, select **Assign seats**.
Instead of:
- On the **Seat utilization** card, select **Assign seats**.
## Chat, GitLab Duo Chat
GitLab Duo Chat is an AI-native assistant that helps developers with contextual,
conversational code explanations, troubleshooting, and guidance.
It is different from [GitLab Duo Agentic Chat](#agentic-chat-gitlab-duo-agentic-chat).
Use **Chat** with a capital `C` for **Chat** or **GitLab Duo Chat**.
On first use on a page, use **GitLab Duo Chat**.
Thereafter, use **Chat** by itself.
Do not use **Duo Chat**.
## checkbox
Use one word for **checkbox**. Do not use **check box**.
You **select** (not **check** or **enable**) and **clear** (not **deselect** or **disable**) checkboxes. For example:
- Select the **Protect environment** checkbox.
- Clear the **Protect environment** checkbox.
If you must refer to the checkbox, you can say it is selected or cleared. For example:
- Ensure the **Protect environment** checkbox is cleared.
- Ensure the **Protect environment** checkbox is selected.
(For `deselect`, [Vale](../testing/vale.md) rule: [`SubstitutionWarning.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/.vale/gitlab_base/SubstitutionWarning.yml))
## checkout, check out
Use **check out** as a verb. For the Git command, use `checkout`.
- Use `git checkout` to check out a branch locally.
- Check out the files you want to edit.
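For example, a docs sentence might combine both forms, as in the following sketch (the branch name `my-branch` is only a placeholder):
```markdown
To check out the branch locally, run `git checkout my-branch`.
```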
## cherry-pick, cherry pick
Use the hyphenated version of **cherry-pick**. Do not use **cherry pick**.
## CI, CD
When talking about GitLab features, use **CI/CD**. Do not use **CI** or **CD** alone.
## CI/CD
**CI/CD** is always uppercase. You are not required to spell it out on first use.
You can omit **CI/CD** when the context is clear, especially after the first use. For example:
- Test your code in a **CI/CD pipeline**. Configure the **pipeline** to run for merge requests.
- Store the value in a **CI/CD variable**. Set the **variable** to masked.
## CI/CD minutes
Do not use **CI/CD minutes**. This term was renamed to [**compute minutes**](#compute-minutes).
## child
Always use **child** as a compound noun.
Examples:
- child issue
- child epic
- child objective
- child key result
- child pipeline
See also: [descendant](#descendant), [parent](#parent) and [subgroup](#subgroup).
## click
Do not use **click**. Instead, use **select** with buttons, links, menu items, and lists.
**Select** applies to more devices, while **click** is more specific to a mouse.
However, you can make an exception for **right-click** and **click-through demo**.
## cloud licensing
Avoid the phrase **cloud licensing**, except when you have to describe the process
of synchronizing an activation code over the internet.
If you can, rather focus on the fact that this subscription is synchronized with GitLab.
For example:
- Your instance must be able to synchronize your subscription data with GitLab.
## cloud-native
When you're talking about using a Kubernetes cluster to host GitLab, you're talking about a **cloud-native version of GitLab**.
This version is different from the larger, more monolithic **Linux package** that is used to deploy GitLab.
You can also use **cloud-native GitLab** for short. It should be hyphenated and lowercase.
## code completion
Code Suggestions has evolved to include two primary features:
- **code completion**
- **code generation**
Use lowercase for **code completion**. Do not use **GitLab Duo Code Completion**.
GitLab Duo is reserved for Code Suggestions only.
**Code completion** must always be singular.
Example:
- Use code completion to populate the file.
## Code Explanation
Use title case for **Code Explanation**.
On first mention on a page, use **GitLab Duo Code Explanation**.
Thereafter, use **Code Explanation** by itself.
## code generation
Code Suggestions has evolved to include two primary features:
- **code completion**
- **code generation**
Use lowercase for **code generation**. Do not use **GitLab Duo Code Generation**.
GitLab Duo is reserved for Code Suggestions only.
**Code generation** must always be singular.
Examples:
- Use code generation to create code based on your comments.
- Adjust your code generation results by adding code comments to your file.
## Code Owners, code owner, `CODEOWNERS`
Use **Code Owners** to refer to the feature name or concept. For example:
- Use the Code Owners approval rules to protect your code.
Use **code owner** or **code owners**, lowercase, to refer to a person or group with code ownership responsibilities.
For example:
- Assign a code owner to the project.
- Contact the code owner for a review.
Do not use **codeowner**, **CodeOwner**, or **code-owner**.
Use `CODEOWNERS`, uppercase and in backticks, to refer to the filename. For example:
- Edit the `CODEOWNERS` file to define the code ownership rules.
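For example, a single passage might use all three forms, as in the following sketch (the wording is illustrative only):
```markdown
Code Owners approval rules are defined in the `CODEOWNERS` file.
To request a review, contact a code owner for the affected files.
```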
## Code Review Summary
Use title case for **Code Review Summary**.
On first mention on a page, use **GitLab Duo Code Review Summary**.
Thereafter, use **Code Review Summary** by itself.
## Code Suggestions
Use title case for **Code Suggestions**. On first mention on a page, use **GitLab Duo Code Suggestions**.
**Code Suggestions**, the feature, should always end in an `s`. However, write about it as if it
is singular. For example:
- Code Suggestions is turned on for the instance.
When generically referring to the suggestions that the feature outputs, use lowercase.
Examples:
- Use Code Suggestions to display suggestions as you type. (This phrase describes the feature.)
- As you type, suggestions are displayed. (This phrase is generic.)
**Code Suggestions** has evolved to include two primary features:
- [**code completion**](#code-completion)
- [**code generation**](#code-generation)
## collapse
Use **collapse** instead of **close** when you are talking about expanding or collapsing a section in the UI.
## command line
Use **From the command line** to introduce commands.
Hyphenate when you use it as an adjective. For example, **a command-line tool**.
## compute
Use **compute** for the resources used by runners to run CI/CD jobs.
Related terms:
- [**compute minutes**](#compute-minutes): How compute usage is calculated. For example, `400 compute minutes`.
- [**compute quota**](../../../ci/pipelines/compute_minutes.md): The limit of compute minutes that a namespace can use each month.
- **compute usage**: The number of compute minutes that the namespace has used from the monthly quota.
## compute minutes
Use **compute minutes** instead of these (or similar) terms:
- **CI/CD minutes**
- **CI minutes**
- **pipeline minutes**
- **CI pipeline minutes**
For more information, see [epic 2150](https://gitlab.com/groups/gitlab-com/-/epics/2150).
## configuration
When you edit a collection of settings, call it a **configuration**.
## configure
Use **configure** after a feature or product has been [set up](#setup-set-up).
For example:
1. Set up your installation.
1. Configure your installation.
## confirmation dialog
Use **confirmation dialog** to describe the dialog that asks you to confirm an action. For example:
- On the confirmation dialog, select **OK**.
Do not use **confirmation box** or **confirmation dialog box**. See also [**dialog**](#dialog).
## container registry
When documenting the GitLab container registry features and functionality, use lowercase.
Use:
- The GitLab container registry supports A, B, and C.
- You can push a Docker image to your project's container registry.
## create
Use **create** when an object does not exist and you are creating it for the first time. **Create** is the opposite of [delete](#delete).
For example:
- Create an issue.
Do not confuse **create** with [**add**](#add).
Do not use **create new**. The word **create** implies that the object is new, and the extra word is not necessary.
## currently
Do not use **currently** when talking about the product or its features. The documentation describes the product as it is today.
([Vale](../testing/vale.md) rule: [`CurrentStatus.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/.vale/gitlab_base/CurrentStatus.yml))
## custom role
Use **custom role** when referring to a role created with specific customized permissions.
When referring to a non-custom role, use [**default role**](#default-role).
## data
Use **data** as a singular noun.
Use:
- Data is collected.
- The data shows a performance increase.
Instead of:
- Data are collected.
- The data show a performance increase.
## deadline
Do not use **deadline**. Use **due date** instead.
## default role
Use **default role** when referring to the following predefined roles that have
no customized permissions added:
- Guest
- Planner
- Reporter
- Developer
- Maintainer
- Owner
- Minimal Access
Do not use **static role**, **built-in role**, or **predefined role**.
## delete
Use **delete** when an object is completely deleted. **Delete** is the opposite of [create](#create).
When the object continues to exist, use [**remove**](#remove) instead.
For example, you can remove an issue from an epic, but the issue still exists.
## Dependency Proxy
Use title case for the GitLab Dependency Proxy.
## deploy board
Use lowercase for **deploy board**.
## descendant
To refer to a [child item](#child) that's one or more levels below in the hierarchy,
use **descendant**.
Do not use **grandchild**.
Examples:
- A descendant project, a project in the group's hierarchy.
- A descendant issue, an issue in the epic's hierarchy.
- A group and all its descendants.
See also: [ancestor](#ancestor), [child](#child), and [subgroup](#subgroup).
## Developer
When writing about the Developer role:
- Use a capital **D**.
- Write it out.
- Use: if you are assigned the Developer role
- Instead of: if you are a Developer
- When the Developer role is the minimum required role:
- Use: at least the Developer role
- Instead of: the Developer role or higher
Do not use bold.
Do not use **Developer permissions**. A user who is assigned the Developer role has a set of associated permissions.
## dialog
Use **dialog** rather than any of these alternatives:
- **dialog box**
- **modal**
- **modal dialog**
- **modal window**
- **pop-up**
- **pop-up window**
- **window**
See also [**confirmation dialog**](#confirmation-dialog). For more information, see the [Microsoft Style Guide](https://learn.microsoft.com/en-us/style-guide/a-z-word-list-term-collections/d/dialog-box-dialog-dialogue).
Before you use this term, confirm whether **dialog** or [**drawer**](#drawer) is
the correct term for your use case.
When the dialog is the location of an action, use **on** as a preposition. For example:
- On the **Grant permission** dialog, select **Group**.
See also [**on**](#on).
## disable
Do not use **disable** to describe making a setting or feature unavailable. Use alternatives like **turn off**, **hide**,
**make unavailable**, or **remove** instead.
To describe a state, use **off**, **inactive**, or **unavailable**.
This guidance is based on the
[Microsoft Style Guide](https://learn.microsoft.com/en-us/style-guide/a-z-word-list-term-collections/d/disable-disabled).
## disallow
Use **prevent** instead of **disallow**. ([Vale](../testing/vale.md) rule: [`Substitutions.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/.vale/gitlab_base/Substitutions.yml))
## Discussion Summary
Use title case for **Discussion Summary**.
On first mention on a page, use **GitLab Duo Discussion Summary**.
Thereafter, use **Discussion Summary** by itself.
## Docker-in-Docker, `dind`
Use **Docker-in-Docker** when you are describing running a Docker container by using the Docker executor.
Use `dind` in backticks to describe the container name: `docker:dind`. Otherwise, spell it out.
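For example, a docs sentence might read like the following sketch (the configuration detail is assumed for illustration, not prescribed here):
```markdown
To use Docker-in-Docker, add the `docker:dind` container as a service for your CI/CD job.
```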
## downgrade
To be more upbeat and precise, do not use **downgrade**. Focus instead on the action the user is taking.
- For changing to earlier GitLab versions, use [**roll back**](#roll-back).
- For changing to lower GitLab tiers, use **change the subscription tier**.
## download
Use **download** to describe saving data to a user's device. For details, see
[the Microsoft style guide](https://learn.microsoft.com/en-us/style-guide/a-z-word-list-term-collections/d/download).
Do not confuse download with [export](#export).
## drawer
Use **drawer** to describe a [drawer UI component](../drawers.md) that:
- Appears from the right side of the screen.
- Displays context-specific information or actions without the user having to
leave the current page.
To see examples of drawers:
- Go to the [Technical Writing Pipeline Editor](https://gitlab.com/gitlab-org/technical-writing/team-tasks/-/ci/editor?branch_name=main) and select **Help** ({{< icon name="information-o" >}}).
- Open GitLab Duo Chat.
Before you use this term, confirm whether **drawer** or [**dialog**](#dialog) is
the correct term for your use case.
## dropdown list
Use **dropdown list** to refer to the UI element. Do not use **dropdown** without **list** after it.
Do not use **drop-down** (hyphenated), **dropdown menu**, or other variants.
For example:
- From the **Visibility** dropdown list, select **Public**.
## earlier
Use **earlier** when talking about version numbers.
Use:
- In GitLab 14.1 and earlier.
Instead of:
- In GitLab 14.1 and lower.
- In GitLab 14.1 and older.
## easily
Do not use **easily**. If the user doesn't find the process to be easy, we lose their trust.
## edit
Use **edit** for UI documentation and user actions.
For example:
- To edit your profile settings, select **Edit**.
For API documentation and programmatic changes, use **[update](#update)**.
## e.g.
Do not use Latin abbreviations. Use **for example**, **such as**, **for instance**, or **like** instead. ([Vale](../testing/vale.md) rule: [`LatinTerms.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/.vale/gitlab_base/LatinTerms.yml))
## ellipsis, ellipses
Avoid ellipses when you can. If you must include them, for example as part of a code block or other CLI response,
use three periods with no space (`...`) instead of the `…` HTML entity or the `…` HTML code.
For more information, see [code blocks](_index.md#code-blocks).
Do not include any ellipses when documenting UI text. For example, use:
- **Search or go to**
Instead of:
- **Search or go to...**
For more information, see the [Microsoft Style Guide](https://learn.microsoft.com/en-us/style-guide/punctuation/ellipses).
## email
Do not use **e-mail** with a hyphen. When plural, use **emails** or **email messages**. ([Vale](../testing/vale.md) rule: [`SubstitutionWarning.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/.vale/gitlab_base/SubstitutionWarning.yml))
## email address
Use **email address** when referring to addresses used in emails. Do not shorten it to **email**, which refers to the messages.
## emoji
Use **emoji** to refer to the plural form of **emoji**.
## enable
Do not use **enable** to describe making a setting or feature available. Use **turn on** instead.
To describe a state, use **on** or **active**.
This guidance is based on the
[Microsoft Style Guide](https://learn.microsoft.com/en-us/style-guide/a-z-word-list-term-collections/d/disable-disabled).
## enter
In most cases, use **enter** rather than **type**.
- **Enter** encompasses multiple ways to enter information, including speech and keyboard.
- **Enter** assumes that the user puts a value in a field and then moves the cursor outside the field (or presses <kbd>Enter</kbd>).
**Enter** includes both the entering of the content and the action to validate the content.
For example:
- In the **Variable name** text box, enter a value.
- In the **Variable name** text box, enter `my text`.
When you use **Enter** to refer to the key on a keyboard, use the HTML `<kbd>` tag:
- To view the list of results, press <kbd>Enter</kbd>.
See also [**type**](#type).
## epic
Use lowercase for **epic**.
See also [associate](#associate).
## epic board
Use lowercase for **epic board**.
## etc.
Try to avoid **etc.**. Be as specific as you can. Do not use
[**and so on**](#and-so-on) as a replacement.
Use:
- You can edit objects, like merge requests and issues.
Instead of:
- You can edit objects, like merge requests, issues, etc.
## expand
Use **expand** instead of **open** when you are talking about expanding or collapsing a section in the UI.
## experiment
Use lowercase for **experiment**. For example:
- This feature is an experiment.
- These features are experiments.
- This experiment is ready to test.
If you must, you can use **experimental**.
You might also want to link to [this topic](../../../policy/development_stages_support.md#experiment)
when writing about experimental features.
## export
Use **export** to indicate translating raw data,
which is not represented by a file in GitLab, into a standard file format.
You can differentiate **export** from **download** because:
- Often, you can use export options to change the output.
- Exported data is not necessarily downloaded to a user's device.
For example:
- Export the contents of your report to CSV format.
Do not confuse with [download](#download).
## FAQ
We want users to find information quickly, and they rarely search for the term **FAQ**.
Information in FAQs belongs with other similar information, under a searchable topic title.
## feature
You should rarely use the word **feature**. Instead, explain what GitLab does.
For example, use:
- Use merge requests to incorporate changes into the target branch.
Instead of:
- Use the merge request feature to incorporate changes into the target branch.
## feature branch
Do not use **feature branch**. See [branch](#branch).
## field
Use **text box** instead of **field** or **box**.
Use:
- In the **Variable name** text box, enter `my text`.
Instead of:
- In the **Variable name** field, enter `my text`.
However, you can make an exception when you are writing a task and you want to refer to all
of the fields at once. For example:
1. On the left sidebar, select **Search or go to** and find your project.
1. Select **Settings** > **CI/CD**.
1. Expand **General pipelines**.
1. Complete the fields.
Learn more about [documenting multiple fields at once](_index.md#documenting-multiple-fields-at-once).
## filename
Use one word for **filename**. When you use filename as a variable, use `<filename>`.
([Vale](../testing/vale.md) rule: [`SubstitutionWarning.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/.vale/gitlab_base/SubstitutionWarning.yml))
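For example, when the filename is a placeholder that the reader supplies, the Markdown source might look like the following sketch:
```markdown
To stage your changes, run `git add <filename>`.
```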
## filter
When you are viewing a list of items, like issues or merge requests, you filter the list by
the available attributes. For example, you might filter by assignee or reviewer.
Filtering is different from [searching](#search).
## flows
GitLab provides multiple **flows** that are run by [agents](#ai-agent).
Do not use **agent flow**.
You choose a flow. You start a [**session**](#session).
## foo
Do not use **foo** in product documentation. You can use it in our API and contributor documentation, but try to use a clearer and more meaningful example instead.
## fork
A **fork** is a project that was created from an **upstream project** by using the
forking process.
The **upstream project** (also known as the **source project**) and the **fork** have a **fork relationship** and are
**linked**.
If the **fork relationship** is removed, the
**fork** is **unlinked** from the **upstream project**.
## Free
Use **Free**, in uppercase, for the subscription tier. When you refer to **Free**
in the context of other subscription tiers, follow [the subscription tier](#subscription-tier) guidance.
## full screen
Use two words for **full screen**.
([Vale](../testing/vale.md) rule: [`SubstitutionWarning.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/.vale/gitlab_base/SubstitutionWarning.yml))
## future tense
When possible, use present tense instead of future tense. For example, use **after you execute this command, GitLab displays the result** instead of **after you execute this command, GitLab will display the result**. ([Vale](../testing/vale.md) rule: [`FutureTense.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/.vale/gitlab_base/FutureTense.yml))
## GB, gigabytes
For **GB** and **MB**, follow the [Microsoft guidance](https://learn.microsoft.com/en-us/style-guide/a-z-word-list-term-collections/term-collections/bits-bytes-terms).
## Geo
Use title case for **Geo**.
## generally available, general availability
Use lowercase for **generally available** and **general availability**.
For example:
- This feature is generally available.
Use **generally available** rather than **general availability** where possible. For example,
do not say:
- This feature has reached general availability.
Do not use **GA** to abbreviate general availability.
## GitLab
Do not make **GitLab** possessive (GitLab's). This guidance follows [GitLab Trademark Guidelines](https://handbook.gitlab.com/handbook/marketing/brand-and-product-marketing/brand/brand-activation/trademark-guidelines/).
Do not put **GitLab** next to the name of another third-party tool or brand.
For example, do not use:
- GitLab Chrome extension
- GitLab Kubernetes agent
Instead, use:
- GitLab extension for Chrome
- GitLab agent for Kubernetes
Putting the brand names next to each other can imply ownership or partnership, which we don't want to do,
unless we've gone through a legal review and have been told to promote the partnership.
This guidance follows the [Use of Third-party Trademarks](https://handbook.gitlab.com/handbook/legal/policies/product-third-party-trademarks-guidelines/#dos--donts-for-use-of-third-party-trademarks-in-gitlab).
## GitLab AI vendor model
Use **GitLab AI vendor model** to refer to a [language model](#language-model-large-language-model)
that is hosted by a third-party provider, and that customers access by using the GitLab
[AI gateway](#ai-gateway) through the [Cloud Connector](../../cloud_connector/architecture.md).
Do not use this term when the [language model is hosted by a customer](#self-hosted-model),
or when the customer uses the [GitLab Duo Self-Hosted](#gitlab-duo-self-hosted)
feature.
## GitLab Dedicated
Use **GitLab Dedicated** to refer to the product offering. It refers to a GitLab instance that's hosted and managed by GitLab for customers.
GitLab Dedicated can be referred to as a single-tenant SaaS service.
Do not use **Dedicated** by itself. Always use **GitLab Dedicated**.
## GitLab Duo
Do not use **Duo** by itself. Always use **GitLab Duo**.
On first use on a page, use **GitLab Duo `<featurename>`**. As of August 2024,
the following are the names of GitLab Duo features:
- GitLab Duo AI Impact Dashboard
- GitLab Duo Chat
- GitLab Duo Code Explanation
- GitLab Duo Code Review
- GitLab Duo Code Review Summary
- GitLab Duo Code Suggestions
- GitLab Duo for the CLI
- GitLab Duo Issue Description Generation
- GitLab Duo Issue Discussion Summary
- GitLab Duo Merge Commit Message Generation
- GitLab Duo Merge Request Summary
- GitLab Duo Product Analytics
- GitLab Duo Root Cause Analysis
- GitLab Duo Self-Hosted
- GitLab Duo Test Generation
- GitLab Duo Vulnerability Explanation
- GitLab Duo Vulnerability Resolution
Excluding GitLab Duo Self-Hosted, after the first use, use the feature name
without **GitLab Duo**.
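For example, the first and subsequent mentions on a page might read like the following sketch (the sentences are illustrative only, not product claims):
```markdown
Use GitLab Duo Code Suggestions to display suggestions as you type.
Code Suggestions is turned on for the instance.
```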
## GitLab Duo Agent Platform
Use **GitLab Duo Agent Platform**. After first use, use **Agent Platform**.
Do not use **Duo Agent Platform** by itself.
## GitLab Duo Core
Use **GitLab Duo Core** for the add-on. Do not use **Duo Core** by itself.
You can also use **the GitLab Duo Core add-on** but omit **add-on** when you can.
In marketing materials, like release posts or blogs, use
**Premium and Ultimate with GitLab Duo** instead of **GitLab Duo Core**.
For example:
- [Blog: Unlocking AI for every GitLab Premium and Ultimate customer](https://about.gitlab.com/blog/gitlab-premium-with-duo/)
- [Release post: Group and project controls for Premium and Ultimate with GitLab Duo](https://about.gitlab.com/releases/2025/07/17/gitlab-18-2-released/#group-and-project-controls-for-premium-and-ultimate-with-gitlab-duo)
## GitLab Duo Enterprise
Always use **GitLab Duo Enterprise** for the add-on. Do not use **Duo Enterprise** unless approved by legal.
You can use **the GitLab Duo Enterprise add-on** (with this capitalization) but you are not required to use **add-on**
and should leave it off when you can.
## GitLab Duo Pro
Always use **GitLab Duo Pro** for the add-on. Do not use **Duo Pro** unless approved by legal.
You can use **the GitLab Duo Pro add-on** (with this capitalization) but you are not required to use **add-on**
and should leave it off when you can.
## GitLab Duo Self-Hosted
When referring to the feature, always write **GitLab Duo Self-Hosted** in full
and in title case, unless you are
[referring to a language model that's hosted by a customer, rather than GitLab](#self-hosted-model).
Do not use **Self-Hosted** by itself.
## GitLab Flavored Markdown
When possible, spell out [**GitLab Flavored Markdown**](../../../user/markdown.md).
If you must abbreviate, do not use **GFM**. Use **GLFM** instead.
## GitLab for Eclipse plugin, Eclipse
Use **GitLab for Eclipse plugin** to refer to the editor extension.
Use **Eclipse** to refer to the IDE.
## GitLab Helm chart, GitLab chart
To deploy a cloud-native version of GitLab, use:
- The GitLab Helm chart (long version)
- The GitLab chart (short version)
Do not use **the `gitlab` chart**, **the GitLab Chart**, or **the cloud-native chart**.
You use the **GitLab Helm chart** to deploy **cloud-native GitLab** in a Kubernetes cluster.
If you use it in the context of describing the
[different installation methods](_index.md#how-to-document-different-installation-methods),
use `Helm chart (Kubernetes)`.
## GitLab Pages
For consistency and branding, use **GitLab Pages** rather than **Pages**.
However, if you use **GitLab Pages** for the first mention on a page or in the UI,
you can use **Pages** thereafter.
## GitLab Runner
Use title case for **GitLab Runner**. This is the product you install. For more information about the decision for this usage,
see [this issue](https://gitlab.com/gitlab-org/gitlab/-/issues/233529).
See also:
- [runners](#runner-runners)
- [runner managers](#runner-manager-runner-managers)
- [runner workers](#runner-worker-runner-workers)
## GitLab SaaS
**GitLab SaaS** refers to both [GitLab.com](#gitlabcom) (multi-tenant SaaS) and [GitLab Dedicated](#gitlab-dedicated) (single-tenant SaaS).
Try to avoid **GitLab SaaS**. Instead, refer to the [specific offering](#offerings).
## GitLab Self-Managed
Use **GitLab Self-Managed** to refer to an installation of GitLab that customers manage.
Use the descriptor of **instance** as needed. Do not use **installation**.
Use:
- GitLab Self-Managed
- a GitLab Self-Managed instance
Instead of:
- A GitLab Self-Managed installation
- A Self-Managed GitLab installation
- A self-managed GitLab installation
- A GitLab instance that is GitLab Self-Managed
You can use **instance** on its own to describe GitLab Self-Managed. For example:
- On your instance, ensure the port is open.
- Verify that the instance is publicly accessible.
See also [self-managed](#self-managed).
## GitLab.com
Use **GitLab.com** to refer to the URL or product offering. GitLab.com is the instance that's managed by GitLab.
## GitLab Workflow extension for VS Code
Use **GitLab Workflow extension for VS Code** to refer to the extension.
You can also use **GitLab Workflow for VS Code** or **GitLab Workflow**.
For terms in VS Code, see [VS Code user interface](#vs-code-user-interface).
## GraphiQL
Use **GraphiQL** or **GraphQL explorer** to refer to this tool.
In most cases, you should use **GraphiQL** on its own with no descriptor.
Do not use:
- GraphiQL explorer tool
- GraphiQL explorer
## group access token
Use sentence case for **group access token**.
Capitalize the first word when you refer to the UI.
## guide
We want to speak directly to users. On `docs.gitlab.com`, do not use **guide** as part of a page title.
For example, **Snowplow Guide**. Instead, speak about the feature itself, and how to use it. For example, **Use Snowplow to do xyz**.
## Guest
When writing about the Guest role:
- Use a capital **G**.
- Write it out:
- Use: if you are assigned the Guest role
- Instead of: if you are a guest
- When the Guest role is the minimum required role:
- Use: at least the Guest role
- Instead of: the Guest role or higher
Do not use bold.
Do not use **Guest permissions**. A user who is assigned the Guest role has a set of associated permissions.
## handy
Do not use **handy**. If the user doesn't find the feature or process to be handy, we lose their trust. ([Vale](../testing/vale.md) rule: [`Simplicity.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/.vale/gitlab_base/Simplicity.yml))
## high availability, HA
Do not use **high availability** or **HA**, except in the GitLab [reference architectures](../../../administration/reference_architectures/_index.md#high-availability-ha). Instead, direct readers to the reference architectures for more information about configuring GitLab for handling greater amounts of users.
Do not use phrases like **high availability setup** to mean a multiple node environment. Instead, use **multi-node setup** or similar.
## higher
Do not use **higher** when talking about version numbers.
Use:
- In GitLab 14.4 and later...
Instead of:
- In GitLab 14.4 and higher...
- In GitLab 14.4 and above...
## hit
Don't use **hit** to mean **press**.
Use:
- Press **ENTER**.
Instead of:
- Hit the **ENTER** button.
## I
Do not use first-person singular. Use **you** or rewrite the phrase instead.
## i.e.
Do not use Latin abbreviations. Use **that is** instead. ([Vale](../testing/vale.md) rule: [`LatinTerms.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/.vale/gitlab_base/LatinTerms.yml))
## in order to
Do not use **in order to**. Use **to** instead. ([Vale](../testing/vale.md) rule: [`Wordy.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/.vale/gitlab_base/Wordy.yml))
## indexes, indices
For the plural of **index**, use **indexes**.
However, for Elasticsearch, use [**indices**](https://www.elastic.co/blog/what-is-an-elasticsearch-index).
## Installation from source
To refer to the installation method that uses the self-compiled code, use **self-compiled**.
Use:
- For self-compiled installations...
Instead of:
- For installations from source...
For more information, see the
[different installation methods](_index.md#how-to-document-different-installation-methods).
## -ing words
Remove **-ing** words whenever possible. They can be difficult to translate,
and more precise terms are usually available. For example:
- Instead of **The files using storage are deleted**, use **The files that use storage are deleted**.
- Instead of **Delete files using the Edit button**, use **Use the Edit button to delete files**.
- Instead of **Replicating your server is required**, use **You must replicate your server**.
## issue
Use lowercase for **issue**.
## issue board
Use lowercase for **issue board**.
## Issue Description Generation
Use title case for **Issue Description Generation**.
On first mention on a page, use **GitLab Duo Issue Description Generation**.
Thereafter, use **Issue Description Generation** by itself.
## Issue Discussion Summary
Use title case for **Issue Discussion Summary**.
On first mention on a page, use **GitLab Duo Issue Discussion Summary**.
Thereafter, use **Issue Discussion Summary** by itself.
## issue weights
Use lowercase for **issue weights**.
## IP address
Use **IP address** when referring to addresses used with Internet Protocol (IP). Do not refer to an IP address as an
**IP**.
## it
When you use the word **it**, ensure the word it refers to is obvious.
If it's not obvious, repeat the word rather than using **it**.
Use:
- The field returns a connection. The field accepts four arguments.
Instead of:
- The field returns a connection. It accepts four arguments.
See also [this, these, that, those](#this-these-that-those).
## job
Do not use **build** to be synonymous with **job**. A job is defined in the `.gitlab-ci.yml` file and runs as part of a pipeline.
If you want to use **CI** with the word **job**, use **CI/CD job** rather than **CI job**.
## Kubernetes executor
GitLab Runner can run jobs on a Kubernetes cluster. To do this, GitLab Runner uses the Kubernetes executor.
When referring to this feature, use:
- Kubernetes executor for GitLab Runner
- Kubernetes executor
Do not use:
- GitLab Runner Kubernetes executor, because this can infringe on the Kubernetes trademark.
## language model, large language model
When referring to language models, be precise. Not all language models are large,
and not all models are language models. When in doubt, ask a developer or PM for confirmation.
You can use LLM to refer to a large language model if you spell it out on first use.
## later
Use **later** when talking about version numbers.
Use:
- In GitLab 14.1 and later...
Instead of:
- In GitLab 14.1 and higher...
- In GitLab 14.1 and above...
- In GitLab 14.1 and newer...
## level
If you can, avoid **level** in the context of an instance, project, or group.
Use:
- This setting is turned on for the instance.
- This setting is turned on for the group and its subgroups.
- This setting is turned on for projects.
Instead of:
- This setting is turned on at the instance level.
- This setting is turned on at the group level.
- This is a project-level setting.
## lifecycle, life cycle, life-cycle
Use one word for **lifecycle**. Do not use **life cycle** or **life-cycle**.
([Vale](../testing/vale.md) rule: [`SubstitutionWarning.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/.vale/gitlab_base/SubstitutionWarning.yml))
## list
Do not use **list** when referring to a [**dropdown list**](#dropdown-list).
Use the full phrase **dropdown list** instead.
Also, do not use **list** when referring to a page. For example, the **Issues** page
is populated with a list of issues. However, you should call it the **Issues** page,
and not the **Issues** list.
## license
Licenses are different than subscriptions.
- A license grants users access to the subscription they purchased. The license includes information like the number of seats and subscription dates.
- A subscription is the subscription tier that the user purchases.
Avoid the terms [**cloud license** or **cloud licensing**](#cloud-licensing) if possible.
The following terms are displayed in the UI and in emails. You can use them when necessary:
- **Online license** - a license synchronized with GitLab
- **Offline license** - a license not synchronized with GitLab
- **Legacy license** - a license created before synchronization was possible
You can also use the terms **legacy license file** and **offline license file** when
describing the files that customers receive by email as part of the overall
licensing and synchronization process.
However, if you can, use the more specific description rather than relying on the term.
Use:
- Add a license to your instance.
- Purchase a subscription.
Instead of:
- Buy a license.
- Purchase a license.
## limitations
Do not use **Limitations** as a topic title. For more information,
see [reference topic titles](../topic_types/reference.md#reference-topic-titles).
If you must, you can use the title **Known issues**.
## log in, log on
Do not use:
- **log in**
- **log on**
- **login**
Use [sign in](#sign-in-sign-in) instead.
However, if the user interface has **Log in**, you should match the UI.
## limited availability
Use lowercase for **limited availability**. For example:
- This feature has limited availability.
- Hosted runners are in limited availability.
Do not use:
- This feature has reached limited availability.
Do not use **LA** to abbreviate limited availability.
## logged-in user, logged in user
Use **authenticated user** instead of **logged-in user** or **logged in user**.
## lower
Do not use **lower** when talking about version numbers.
Use:
- In GitLab 14.1 and earlier.
Instead of:
- In GitLab 14.1 and lower.
- In GitLab 14.1 and older.
## machine learning
Use lowercase for **machine learning**.
When machine learning is used as an adjective, like **a machine learning model**,
do not hyphenate. While a hyphen might be more grammatically correct, we risk
becoming inconsistent if we try to be more precise.
## Maintainer
When writing about the Maintainer role:
- Use a capital **M**.
- Write it out.
- Use: if you are assigned the Maintainer role
- Instead of: if you are a maintainer
- When the Maintainer role is the minimum required role:
- Use: at least the Maintainer role
- Instead of: the Maintainer role or higher
Do not use bold.
Do not use **Maintainer permissions**. A user who is assigned the Maintainer role has a set of associated permissions.
## mankind
Do not use **mankind**. Use **people** or **humanity** instead. ([Vale](../testing/vale.md) rule: [`InclusiveLanguage.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/.vale/gitlab_base/InclusiveLanguage.yml))
## manpower
Do not use **manpower**. Use words like **workforce** or **GitLab team members**. ([Vale](../testing/vale.md) rule: [`InclusiveLanguage.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/.vale/gitlab_base/InclusiveLanguage.yml))
## master
Do not use **master**. Use **main** when you need a sample [default branch name](#branch).
([Vale](../testing/vale.md) rule: [`InclusiveLanguage.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/.vale/gitlab_base/InclusiveLanguage.yml))
## may, might
**Might** means something has the probability of occurring. Might is often used in troubleshooting documentation.
**May** gives permission to do something. Consider **can** instead of **may**.
Consider rewording phrases that use these terms. These terms often indicate possibility and doubt, and technical writing strives to be precise.
See also [you can](#you-can).
Use:
- The `committed_date` and `authored_date` fields are generated from different sources, and might not be identical.
- A typical pipeline consists of four stages, executed in the following order:
Instead of:
- The `committed_date` and `authored_date` fields are generated from different sources, and may not be identical.
- A typical pipeline might consist of four stages, executed in the following order:
## MB, megabytes
For **MB** and **GB**, follow the [Microsoft guidance](https://learn.microsoft.com/en-us/style-guide/a-z-word-list-term-collections/term-collections/bits-bytes-terms).
## member
When you add a [user account](#user-account) to a group or project,
the user account becomes a **member**.
## Merge Commit Message Generation
Use title case for **Merge Commit Message Generation**.
On first mention on a page, use **GitLab Duo Merge Commit Message Generation**.
Thereafter, use **Merge Commit Message Generation** by itself.
## merge request branch
Do not use **merge request branch**. See [branch](#branch).
## merge requests
Use lowercase for **merge requests**. If you use **MR** as the acronym, spell it out on first use.
## Merge Request Summary
Use title case for **Merge Request Summary**.
On first mention on a page, use **GitLab Duo Merge Request Summary**.
Thereafter, use **Merge Request Summary** by itself.
## milestones
Use lowercase for **milestones**.
## Minimal Access
When writing about the Minimal Access role:
- Use a capital **M** and a capital **A**.
- Write it out:
- Use: if you are assigned the Minimal Access role
- Instead of: if you are a Minimal Access user
- When the Minimal Access role is the minimum required role:
- Use: at least the Minimal Access role
- Instead of: the Minimal Access role or higher
Do not use bold.
Do not use **Minimal Access permissions**. A user who is assigned the Minimal Access role has a set of associated permissions.
## model registry
When documenting the GitLab model registry features and functionality, use lowercase.
Use:
- The GitLab model registry supports A, B, and C.
- You can publish a model to your project's model registry.
## models
For usage, see [language models](#language-model-large-language-model).
## n/a, N/A, not applicable
When possible, use **not applicable**. Spelling out the phrase helps non-English speaking users and avoids
capitalization inconsistencies.
## navigate
Do not use **navigate**. Use **go** instead. For example:
- Go to this webpage.
- Open a terminal and go to the `runner` directory.
([Vale](../testing/vale.md) rule: [`SubstitutionWarning.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/.vale/gitlab_base/SubstitutionWarning.yml))
## need to
Try to avoid **need to**, because it's wordy.
For example, when a variable is **required**,
instead of **You need to set the variable**, use:
- Set the variable.
- You must set the variable.
When the variable is **recommended**:
- You should set the variable.
When the variable is **optional**:
- You can set the variable.
## new
Often, you can avoid the word **new**. When you create an object, it is new,
so you don't need this additional word.
See also [**create**](#create) and [**add**](#add).
## newer
Do not use **newer** when talking about version numbers.
Use:
- In GitLab 14.4 and later...
Instead of:
- In GitLab 14.4 and higher...
- In GitLab 14.4 and above...
- In GitLab 14.4 and newer...
## normal, normally
Don't use **normal** to mean the usual, typical, or standard way of doing something.
Use those terms instead.
Use:
- Typically, you specify a certificate.
- Usually, you specify a certificate.
- Follow the standard Git workflow.
Instead of:
- Normally, you specify a certificate.
- Follow the normal Git workflow.
([Vale](../testing/vale.md) rule: [`Normal.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/.vale/gitlab_base/Normal.yml))
## note that
Do not use **note that** because it's wordy.
Use:
- You can change the settings.
Instead of:
- Note that you can change the settings.
## offerings
The current product offerings are:
- [GitLab.com](#gitlabcom)
- [GitLab Self-Managed](#self-managed)
- [GitLab Dedicated](#gitlab-dedicated)
The [availability details](availability_details.md) reflect these offerings.
## older
Do not use **older** when talking about version numbers.
Use:
- In GitLab 14.1 and earlier.
Instead of:
- In GitLab 14.1 and lower.
- In GitLab 14.1 and older.
## Omnibus GitLab
When referring to the installation method that uses the Linux package, refer to it
as **Linux package**.
Use:
- For installations that use the Linux package...
Instead of:
- For installations that use Omnibus GitLab...
For more information, see the
[different installation methods](_index.md#how-to-document-different-installation-methods).
## on
When documenting high-level UI elements, use **on** as a preposition. For example:
- On the left sidebar, select **Settings** > **CI/CD**.
- On the **Grant permission** dialog, select **Group**.
Do not use **from** or **in**. For more information, see the [Microsoft Style Guide](https://learn.microsoft.com/en-us/style-guide/a-z-word-list-term-collections/f/from-vs-on).
## once
The word **once** means **one time**. Don't use it to mean **after** or **when**.
Use:
- When the process is complete...
Instead of:
- Once the process is complete...
## only
Put the word **only** next to the word it modifies.
In the following example, **only** modifies the noun **projects**.
The meaning is that you can create one type of project: a private project.
- You can create only private projects.
In the following example, **only** modifies the verb **create**.
The meaning is that you can't perform other actions,
like deleting private projects, or adding users to them.
- You can only create private projects.
## optional
If something is optional, such as a command argument, parameter value,
or a file, use `Optional` followed by a period. For optional topics,
append `(optional)` to the topic title.
For example:
```markdown
### This is a topic (optional)
- `value`: Optional. Use it to do something.
```
Follow the same guidance for [optional task steps](_index.md#optional-steps).
## override
Use **override** to indicate temporary replacement.
For example, a value might be overridden when a job runs. The
original value does not change.
## overwrite
Use **overwrite** to indicate permanent replacement.
For example, a log file might overwrite a log file of the same name.
## Owner
When writing about the Owner role:
- Use a capital **O**.
- Write it out.
- Use: if you are assigned the Owner role
- Instead of: if you are an owner
Do not use bold.
Do not use **Owner permissions**. A user who is assigned the Owner role has a set of associated permissions.
An Owner is the highest role a user can have.
## package registry
When documenting the GitLab package registry features and functionality, use lowercase.
Use:
- The GitLab package registry supports A, B, and C.
- You can publish a package to your project's package registry.
## page
If you write a phrase like, "On the **Issues** page," ensure steps for how to get to the page are nearby. Otherwise, people might not know what the **Issues** page is.
The page name should be visible in the UI at the top of the page,
or included in the breadcrumb.
The docs should match the case in the UI, and the page name should be bold. For example:
- On the **Test cases** page, ...
## parent
Always use **parent** as a compound noun.
Do not use **direct [ancestor](#ancestor)** or **ascendant**.
Examples:
- parent directory
- parent group
- parent project
- parent commit
- parent issue
- parent item
- parent epic
- parent objective
- parent pipeline
See also: [child](#child), and [subgroup](#subgroup).
## per
Do not use **per** because it can have several different meanings.
Use the specific prepositional phrase instead:
- for each
- through
- by
- every
- according to
## permissions
Do not use [**roles**](#roles) and **permissions** interchangeably. Each user is assigned a role. Each role includes a set of permissions.
Permissions are not the same as [**access levels**](#access-level).
## personal access token
Use sentence case for **personal access token**.
Capitalize the first word when you refer to the UI.
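For example, a sketch showing the lowercase term in prose and the capitalized form when quoting the UI (the UI labels shown here are assumptions, not verified strings):
```markdown
Create a personal access token with the `api` scope.
On the **Personal access tokens** page, select **Add new token**.
```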
## Planner
When writing about the Planner role:
- Use a capital **P**.
- Write it out.
- Use: if you are assigned the Planner role
- Instead of: if you are a Planner
- When the Planner role is the minimum required role:
- Use: at least the Planner role
- Instead of: the Planner role or higher
Do not use bold.
Do not use **Planner permissions**. A user who is assigned the Planner role has a set of associated permissions.
## please
Do not use **please** in the product documentation.
In UI text, use **please** when we've inconvenienced the user. For more information,
see the [Microsoft Style Guide](https://learn.microsoft.com/en-us/style-guide/a-z-word-list-term-collections/p/please).
## Premium
Use **Premium**, in uppercase, for the subscription tier. When you refer to **Premium**
in the context of other subscription tiers, follow [the subscription tier](#subscription-tier) guidance.
## preferences
Use **preferences** to describe user-specific, system-level settings like theme and layout.
## prerequisites
Use **prerequisites** when documenting the tasks that must be completed or the conditions that must be met before a user can complete a task. Do not use **requirements**.
**Prerequisites** must always be plural, even if the list includes only one item.
For more information, see [the task topic type](../topic_types/task.md).
For tutorial page types, use [**before you begin**](#before-you-begin) instead.
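For example, a task topic might open with a prerequisites list like the following sketch (the role and project are placeholders):
```markdown
Prerequisites:

- You must have at least the Maintainer role for the project.
```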
## press
Use **press** when talking about keyboard keys. For example:
- To stop the command, press <kbd>Control</kbd>+<kbd>C</kbd>.
## profanity
Do not use profanity. Doing so might negatively affect other users and contributors, which is contrary to the GitLab value of [Diversity, Inclusion, and Belonging](https://handbook.gitlab.com/handbook/values/#diversity-inclusion).
## project
See [repository, project](#repository-project).
## project access token
Use sentence case for **project access token**.
Capitalize the first word when you refer to the UI.
## provision
Use the term **provision** when referring to provisioning cloud infrastructure. You provision the infrastructure, and then deploy applications to it.
For example, you might write something like:
- Provision an AWS EKS cluster and deploy your application to it.
## push rules
Use lowercase for **push rules**.
## quite
Do not use **quite** because it's wordy.
## `README` file
Use backticks and lowercase for **the `README` file**, or **the `README.md` file**.
When possible, use the full phrase: **the `README` file**.
For plural, use **`README` files**.
## recommend, we recommend
Instead of **we recommend**, use **you should**. We want to talk to the user the way
we would talk to a colleague, and to avoid differentiation between `we` and `them`.
- You should set the variable. (It's recommended.)
- Set the variable. (It's required.)
- You can set the variable. (It's optional.)
See also [recommended steps](_index.md#recommended-steps).
## register
Use **register** instead of **sign up** when talking about creating an account.
## reindex
Use **reindex** instead of **re-index** when talking about search.
## remove
Use **remove** when an object continues to exist. For example, you can remove an issue from an epic, but the issue still exists.
When an object is completely deleted, use [**delete**](#delete) instead.
## Reporter
When writing about the Reporter role:
- Use a capital **R**.
- Write it out.
- Use: if you are assigned the Reporter role
- Instead of: if you are a reporter
- When the Reporter role is the minimum required role:
- Use: at least the Reporter role
- Instead of: the Reporter role or higher
Do not use bold.
Do not use **Reporter permissions**. A user who is assigned the Reporter role has a set of associated permissions.
## repository, project
A GitLab project contains, among other things, a Git repository. Use **repository** when referring to the
Git repository. Use **project** to refer to the GitLab user interface for managing and configuring the
Git repository, wiki, and other features.
## Repository Mirroring
Use title case for **Repository Mirroring**.
## resolution, resolve
Use **resolution** when the troubleshooting solution fixes the issue permanently.
A resolution usually involves file and code changes to correct the problem.
For example:
- To resolve this issue, edit the `.gitlab-ci.yml` file.
- One resolution is to edit the `.gitlab-ci.yml` file.
See also [workaround](#workaround).
## requirements
When documenting the tasks that must be completed or the conditions that must be met before a user can complete the steps:
- Use **prerequisites** for tasks. For more information, see [the task topic type](../topic_types/task.md).
- Use **before you begin** for tutorials. For more information, see [the tutorial page type](../topic_types/tutorial.md).
Do not use **requirements**.
## reset
Use **reset** to describe the action associated with resetting an item to a new state.
## respectively
Avoid **respectively** and be more precise instead.
Use:
- To create a user, select **Create user**. For an existing user, select **Save changes**.
Instead of:
- Select **Create user** or **Save changes** if you created a new user or
edited an existing one respectively.
## restore
See the [Microsoft Style Guide](https://learn.microsoft.com/en-us/style-guide/a-z-word-list-term-collections/r/restore) for guidance on **restore**.
## review app
Use lowercase for **review app**.
## roles
A user has a role **for** a project or group.
Use:
- You must have the Owner role for the group.
Instead of:
- You must have the Owner role of the group.
Do not use **roles** and [**permissions**](#permissions) interchangeably. Each user is assigned a role. Each role includes a set of permissions.
Two types of roles exist: [custom](#custom-role) and [default](#default-role).
Roles are not the same as [**access levels**](#access-level).
## Root Cause Analysis
Use title case for **Root Cause Analysis**.
On first mention on a page, use **GitLab Duo Root Cause Analysis**.
Thereafter, use **Root Cause Analysis** by itself.
## roll back
Use **roll back** for changing a GitLab version to an earlier one.
Do not use **roll back** for licensing or subscriptions. Use **change the subscription tier** instead.
## runner, runners
Use lowercase for **runners**. These are the agents that run CI/CD jobs. See also [GitLab Runner](#gitlab-runner) and [this issue](https://gitlab.com/gitlab-org/gitlab/-/issues/233529).
When referring to runners, if you have to specify that the runners are installed on a customer's GitLab instance,
use **self-managed** rather than **self-hosted**.
When referring to the scope of runners, use:
- **project runner**: Associated with specific projects.
- **group runner**: Available to all projects and subgroups in a group.
- **instance runner**: Available to all groups and projects in a GitLab instance.
## runner manager, runner managers
Use lowercase for **runner managers**. These are a type of runner that can create multiple runners for autoscaling. See also [GitLab Runner](#gitlab-runner).
## runner worker, runner workers
Use lowercase for **runner workers**. This is the process created by the runner on the host computing platform to run jobs. See also [GitLab Runner](#gitlab-runner).
## runner authentication token
Use **runner authentication token** instead of variations like **runner token**, **authentication token**, or **token**.
Runners are assigned runner authentication tokens when they are created, and use them to authenticate with GitLab when
they execute jobs.
## Runner SaaS, SaaS runners
Do not use **Runner SaaS** or **SaaS runners**.
Use **GitLab-hosted runners** as the main feature name that describes runners hosted on GitLab.com and GitLab Dedicated.
To specify offerings and operating systems use:
- **hosted runners for GitLab.com**
- **hosted runners for GitLab Dedicated**
- **hosted runners on Linux for GitLab.com**
- **hosted runners on Windows for GitLab.com**
Do not use **hosted runners** without the **GitLab-** prefix or without the offering or operating system.
## (s)
Do not use **(s)** to make a word optionally plural. It can slow down comprehension. For example:
Use:
- Select the jobs you want.
Instead of:
- Select the job(s) you want.
If you can select multiples of something, then write the word as plural.
## sanity check
Do not use **sanity check**. Use **check for completeness** instead. ([Vale](../testing/vale.md) rule: [`InclusiveLanguage.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/.vale/gitlab_base/InclusiveLanguage.yml))
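Vale rules like the one referenced above are defined in YAML files. The following is an illustrative sketch only of a substitution-style rule; the actual `InclusiveLanguage.yml` file in the GitLab repository might use different messages, levels, and word pairs.
```yaml
# Illustrative sketch of a Vale substitution rule.
# The actual InclusiveLanguage.yml rule might differ.
extends: substitution
message: "Use '%s' instead of '%s'."
level: warning
ignorecase: true
swap:
  sanity check: check for completeness
```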
## scalability
Do not use **scalability** when talking about increasing GitLab performance for additional users. The words scale or scaling
are sometimes acceptable, but references to increasing GitLab performance for additional users should direct readers
to the GitLab [reference architectures](../../../administration/reference_architectures/_index.md) page.
## search
When you search, you type a string in the search box on the left sidebar.
The search results are displayed on a search page.
Searching is different from [filtering](#filter).
## seats
When referring to the subscription billing model:
- For GitLab.com, use **seats**. Customers purchase seats. Users occupy seats when they are invited
to a group, with some [exceptions](../../../subscriptions/manage_users_and_seats.md#gitlabcom-billing-and-usage).
- For GitLab Self-Managed, use **users**. Customers purchase subscriptions for a specified number of **users**.
## section
Use **section** to describe an area on a page. For example, if a page has lines that separate the UI
into separate areas, refer to these areas as sections.
We often think of expandable and collapsible areas as **sections**. When you refer to expanding
or collapsing a section, don't include the word **section**.
Use:
- Expand **Auto DevOps**.
Instead of:
- Expand the **Auto DevOps** section.
## select
Use **select** with buttons, links, menu items, and lists. **Select** applies to more devices,
while **click** is more specific to a mouse.
However, you can make an exception for **right-click** and **click-through demo**.
## self-hosted model
Use **self-hosted model** (lowercase) to refer to a language model that's hosted by a customer, rather than GitLab.
The language model might be an LLM (large language model), but it might not be.
## Self-Hosted
To avoid confusion with [**GitLab Self-Managed**](#gitlab-self-managed),
when referring to the [**GitLab Duo Self-Hosted** feature](#gitlab-duo-self-hosted),
do not use **Self-Hosted** by itself.
Always write **GitLab Duo Self-Hosted** in full and in title case, unless you are
[referring to a language model that's hosted by a customer, rather than GitLab](#self-hosted-model).
## self-managed
Use **GitLab Self-Managed** to refer to a customer's installation of GitLab.
- Do not use **self-hosted**.
See [GitLab Self-Managed](#gitlab-self-managed).
## Service Desk
Use title case for **Service Desk**.
## session
When an [agent](#ai-agent) is working on a [**flow**](#flows), a **session** is running.
The session can start and stop.
Do not use **AI session** or **agent session**.
## setup, set up
Use **setup** as a noun, and **set up** as a verb. For example:
- Your remote office setup is amazing.
- To set up your remote office correctly, consider the ergonomics of your work area.
Do not confuse **set up** with [**configure**](#configure).
**Set up** implies that it's the first time you've done something. For example:
1. Set up your installation.
1. Configure your installation.
## settings
A **setting** changes the default behavior of the product. A **setting** consists of a key/value pair,
typically represented by a label with one or more options.
## sign in, sign-in
To describe the action of signing in, use:
- **sign in**.
- **sign in to** as a verb. For example: Use your password to sign in to GitLab.
You can also use:
- **sign-in** as a noun or adjective. For example: **sign-in page** or
**sign-in restrictions**.
- **single sign-on**.
Do not use:
- **sign on**.
- **sign into**.
- [**log on**, **log in**, or **log into**](#log-in-log-on).
If the user interface has different words, you can use those.
## sign up
Use **register** instead of **sign up** when talking about creating an account.
## signed-in user, signed in user
Use **authenticated user** instead of **signed-in user** or **signed in user**.
## simply, simple
Do not use **simply** or **simple**. If the user doesn't find the process to be simple, we lose their trust. ([Vale](../testing/vale.md) rule: [`Simplicity.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/.vale/gitlab_base/Simplicity.yml))
## since
The word **since** indicates a timeframe. For example, **Since 1984, Bon Jovi has existed**. Don't use **since** to mean **because**.
Use:
- Because you have the Developer role, you can delete the widget.
Instead of:
- Since you have the Developer role, you can delete the widget.
## slashes
Instead of **and/or**, use **or** or rewrite the sentence. This rule also applies to other slashes, like **follow/unfollow**. Some exceptions (like **CI/CD**) are allowed.
## slave
Do not use **slave**. Another option is **secondary**. ([Vale](../testing/vale.md) rule: [`InclusiveLanguage.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/.vale/gitlab_base/InclusiveLanguage.yml))
## storages
In the context of:
- Gitaly, storage is physical and must be called a **storage**.
- Gitaly Cluster (Praefect), storage can be either:
- Virtual and must be called a **virtual storage**.
- Physical and must be called a **physical storage**.
Gitaly storages have physical paths and virtual storages have virtual paths.
## subgroup
Use **subgroup** (no hyphen) instead of **sub-group**.
Also, avoid alternative terms for subgroups, such as **child group** or **low-level group**.
([Vale](../testing/vale.md) rule: [`SubstitutionWarning.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/.vale/gitlab_base/SubstitutionWarning.yml))
## subscription tier
Do not confuse **subscription** or **subscription tier** with **[license](#license)**.
A user purchases a **subscription**. That subscription has a **tier**.
To describe tiers:
| Instead of | Use |
|---------------------------------|----------------------------------------|
| In the Free tier or greater | In all tiers |
| In the Free tier or higher | In all tiers |
| In the Premium tier or greater | In the Premium and Ultimate tier |
| In the Premium tier or higher | In the Premium and Ultimate tier |
| In the Premium tier or lower | In the Free and Premium tier |
## Suggested Reviewers
Use title case for **Suggested Reviewers**.
**Suggested Reviewers** should always be plural and capitalized, even if it's generic.
Examples:
- Suggested Reviewers can recommend a person to review your merge request. (This phrase describes the feature.)
- As you type, Suggested Reviewers are displayed. (This phrase is generic but still uses capital letters.)
## tab
Use bold for tab names. For example:
- The **Pipelines** tab
- The **Overview** tab
## that
Do not use **that** when describing a noun. For example:
Use:
- The file you save...
Instead of:
- The file **that** you save...
See also [this, these, that, those](#this-these-that-those).
## terminal
Use lowercase for **terminal**. For example:
- Open a terminal.
- From a terminal, run the `docker login` command.
## Terraform Module Registry
Use title case for the GitLab Terraform Module Registry, but use lowercase `m` when
talking about non-specific modules. For example:
- You can publish a Terraform module to your project's Terraform Module Registry.
## Test Generation
Use title case for **Test Generation**.
On first mention on a page, use **GitLab Duo Test Generation**.
Thereafter, use **Test Generation** by itself.
## text box
Use **text box** instead of **field** or **box** when referring to the UI element.
## there is, there are
Try to avoid **there is** and **there are**. These phrases hide the subject.
Use:
- The bucket has holes.
Instead of:
- There are holes in the bucket.
## they
Avoid the use of gender-specific pronouns, unless referring to a specific person.
Use a singular [they](https://developers.google.com/style/pronouns#gender-neutral-pronouns) as
a gender-neutral pronoun.
## this, these, that, those
Always follow these words with a noun. For example:
- Use: **This setting** improves performance.
- Instead of: **This** improves performance.
- Use: **These pants** are the best.
- Instead of: **These** are the best.
- Use: **That droid** is the one you are looking for.
- Instead of: **That** is the one you are looking for.
- Use: **Those settings** must be configured. (Or even better, **Configure those settings.**)
- Instead of: **Those** need to be configured.
## to which, of which
Try to avoid **to which** and **of which**, and let the preposition dangle at the end of the sentence instead.
For examples, see [Prepositions](_index.md#prepositions).
## to-do item
Use lowercase and hyphenate **to-do** item. ([Vale](../testing/vale.md) rule: [`ToDo.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/.vale/gitlab_base/ToDo.yml))
## To-Do List
Use title case for **To-Do List**. ([Vale](../testing/vale.md) rule: [`ToDo.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/.vale/gitlab_base/ToDo.yml))
## toggle
You **turn on** or **turn off** a toggle. For example:
- Turn on the **blah** toggle.
## top-level group
Use lowercase for **top-level group** (hyphenated).
Do not use **root group**.
## TFA, two-factor authentication
Use [**2FA** and **two-factor authentication**](#2fa-two-factor-authentication) instead.
## turn on, turn off
Use **turn on** and **turn off** instead of **enable** or **disable**.
For details, see [the Microsoft style guide](https://learn.microsoft.com/en-us/style-guide/a-z-word-list-term-collections/t/turn-on-turn-off).
See also [enable](#enable) and [disable](#disable).
## type
Use **type** when the cursor remains where you're typing. For example,
in a search box, you begin typing and search results appear. You do not
click out of the search box.
For example:
- To view all users named Alex, type `Al`.
- To view all labels for the documentation team, type `doc`.
- For a list of quick actions, type `/`.
See also [**enter**](#enter).
## Ultimate
Use **Ultimate**, in uppercase, for the subscription tier. When you refer to **Ultimate**
in the context of other subscription tiers, follow [the subscription tier](#subscription-tier) guidance.
## undo
See the [Microsoft Style Guide](https://learn.microsoft.com/en-us/style-guide/a-z-word-list-term-collections/u/undo) for guidance on **undo**.
## units of measurement
Use a space between the number and the unit of measurement. For example, **128 GB**.
([Vale](../testing/vale.md) rule: [`Units.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/.vale/gitlab_base/Units.yml))
For more information, see the
[Microsoft Style Guide](https://learn.microsoft.com/en-us/style-guide/a-z-word-list-term-collections/term-collections/bits-bytes-terms).
## update
Use **update** for installing a newer **patch** version of the software,
or for documenting API and programmatic changes.
For example:
- Update GitLab from 14.9 to 14.9.1.
- Use this endpoint to update user permissions.
Do not use **update** for any other case. Instead, use **[upgrade](#upgrade)** or **[edit](#edit)**.
## upgrade
Use **upgrade** for:
- Choosing a higher subscription tier (Premium or Ultimate).
- Installing a newer **major** (13.0) or **minor** (13.2) version of GitLab.
For example:
- Upgrade to GitLab Ultimate.
- Upgrade GitLab from 14.0 to 14.1.
- Upgrade GitLab from 14.0 to 15.0.
Use caution with the phrase **Upgrade GitLab** without any other text.
Ensure the surrounding text clarifies whether
you're talking about the product version or the subscription tier.
See also [downgrade](#downgrade) and [roll back](#roll-back).
## upper left, upper right
Use **upper-left corner** and **upper-right corner** to provide direction in the UI.
If the UI element is not in a corner, use **upper left** and **upper right**.
Do not use **top left** and **top right**.
For more information, see the [Microsoft Style Guide](https://learn.microsoft.com/en-us/style-guide/a-z-word-list-term-collections/u/upper-left-upper-right).
## useful
Do not use **useful**. If the user doesn't find the process to be useful, we lose their trust. ([Vale](../testing/vale.md) rule: [`Simplicity.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/.vale/gitlab_base/Simplicity.yml))
## user account
You create a **user account**. The user account has an [access level](#access-level).
When you add a **user account** to a group or project, the user account becomes a **member**.
## using
Avoid **using** in most cases. It hides the subject and makes the phrase more difficult to translate.
Use **by using**, **that use**, or rewrite the sentence.
For example:
- Instead of: The files using storage...
- Use: The files that use storage...
- Instead of: Change directories using the command line.
- Use: Change directories by using the command line. Or even better: To change directories, use the command line.
## utilize
Do not use **utilize**. Use **use** instead. It's more succinct and easier for non-native English speakers to understand.
([Vale](../testing/vale.md) rule: [`SubstitutionWarning.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/.vale/gitlab_base/SubstitutionWarning.yml))
## version, v
To describe versions of GitLab, use **GitLab `<version number>`**. For example:
- You must have GitLab 16.0 or later.
To describe other software, use the same style as the documentation for that software.
For example:
- In Kubernetes 1.4, you can...
Pay attention to spacing by the letter **v**. In semantic versioning, no space exists after the **v**. For example:
- v1.2.3
## via
Do not use Latin abbreviations. Use **with**, **through**, or **by using** instead. ([Vale](../testing/vale.md) rule: [`LatinTerms.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/.vale/gitlab_base/LatinTerms.yml))
## virtual registry
Use lowercase for **virtual registry**.
On first mention on a page, use **GitLab virtual registry**.
Thereafter, use **virtual registry** by itself.
Use:
- The GitLab virtual registry supports A, B, and C.
- You can configure your applications to use one virtual registry instead
of multiple upstream registries.
## VS Code user interface
When describing the user interface of VS Code and the Web IDE, follow the usage and capitalization of the
[VS Code documentation](https://code.visualstudio.com/docs/getstarted/userinterface), such as Command Palette
and Primary Side Bar.
## Vulnerability Explanation
Use title case for **Vulnerability Explanation**.
On first mention on a page, use **GitLab Duo Vulnerability Explanation**.
Thereafter, use **Vulnerability Explanation** by itself.
## Vulnerability Resolution
Use title case for **Vulnerability Resolution**.
On first mention on a page, use **GitLab Duo Vulnerability Resolution**.
Thereafter, use **Vulnerability Resolution** by itself.
## we
Try to avoid **we** and focus instead on how the user can accomplish something in GitLab.
Use:
- Use widgets when you have work you want to organize.
Instead of:
- We created a feature for you to add widgets.
## Web IDE user interface
See [VS Code user interface](#vs-code-user-interface).
## workaround
Use **workaround** when the troubleshooting solution is a temporary fix.
A workaround is usually an immediate fix and might have ongoing issues.
For example:
- The workaround is to temporarily pin your template to the deprecated version.
See also [resolution](#resolution-resolve).
## while
Use **while** to refer only to something occurring in time. For example,
**Leave the window open while the process runs.**
Do not use **while** for comparison. For example, use:
- Job 1 can run quickly. However, job 2 is more precise.
Instead of:
- While job 1 can run quickly, job 2 is more precise.
For more information, see the [Microsoft Style Guide](https://learn.microsoft.com/en-us/style-guide/a-z-word-list-term-collections/w/while).
## whilst
Do not use **whilst**. Use [while](#while) instead. **While** is more succinct and easier for non-native English speakers to understand.
## whitelist
Do not use **whitelist**. Another option is **allowlist**. ([Vale](../testing/vale.md) rule: [`InclusiveLanguage.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/.vale/gitlab_base/InclusiveLanguage.yml))
## within
When possible, do not use **within**. Use **in** instead, unless you are referring to a time frame, limit, or boundary. For example:
- The upgrade occurs within the four-hour maintenance window.
- The Wi-Fi signal is accessible within a 30-foot radius.
([Vale](../testing/vale.md) rule: [`SubstitutionWarning.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/.vale/gitlab_base/SubstitutionWarning.yml))
## yet
Do not use **yet** when talking about the product or its features. The documentation describes the product as it is today.
Sometimes you might want to use **yet** when writing a task. If you use
**yet**, ensure the surrounding phrases are written
in present tense, active voice.
[View guidance about how to write about future features](_index.md#promising-features-in-future-versions).
## you, your, yours
Use **you** instead of **the user**, **the administrator**, or **the customer**.
Documentation should speak directly to the user, whether that user is someone installing the product,
configuring it, administering it, or using it.
Use:
- You can configure a pipeline.
- You can reset a user's password. (In content for an administrator)
Instead of:
- Users can configure a pipeline.
- Administrators can reset a user's password.
## you can
When possible, start sentences with an active verb instead of **you can**.
For example:
- Use code review analytics to view merge request data.
- Create a board to organize your team tasks.
- Configure variables to restrict pushes to a repository.
- Add links to external accounts you have, like Discord and Twitter.
Use **you can** for optional actions. For example:
- Use code review analytics to view metrics for each merge request. You can also use the API.
- Enter the name and value pairs. You can add up to 20 pairs for each streaming destination.
<!-- vale on -->
<!-- markdownlint-enable -->
---
stage: none
group: Documentation Guidelines
info: For assistance with this Style Guide page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments-to-other-projects-and-subjects.
description: Writing styles, markup, formatting, and other standards for GitLab Documentation.
title: Recommended word list
---
To help ensure consistency in the documentation, the Technical Writing team
recommends these word choices. In addition:
- The GitLab handbook contains a list of
[top misused terms](https://handbook.gitlab.com/handbook/communication/top-misused-terms/).
- The documentation [style guide](_index.md#language) includes details
about language and capitalization.
- The GitLab handbook provides guidance on the [use of third-party trademarks](https://handbook.gitlab.com/handbook/legal/policies/product-third-party-trademarks-guidelines/#process-for-adding-third-party-trademarks-to-gitlab).
For guidance not on this page, we defer to these style guides:
- [Microsoft Style Guide](https://learn.microsoft.com/en-us/style-guide/welcome/)
- [Google Developer Documentation Style Guide](https://developers.google.com/style)
<!-- vale off -->
<!-- Disable trailing punctuation in heading rule https://github.com/DavidAnson/markdownlint/blob/main/doc/Rules.md#md026---trailing-punctuation-in-heading -->
<!-- markdownlint-disable MD026 -->
## `.gitlab-ci.yml` file
Use backticks and lowercase for **the `.gitlab-ci.yml` file**.
When possible, use the full phrase: **the `.gitlab-ci.yml` file**.
Although users can specify another name for their CI/CD configuration file,
in most cases, use **the `.gitlab-ci.yml` file** instead.
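For context, the `.gitlab-ci.yml` file holds the project's CI/CD configuration. A minimal sketch follows; the job name and script are placeholders, not GitLab defaults.
```yaml
# Minimal illustrative .gitlab-ci.yml; the job name and script are placeholders.
test-job:
  script:
    - echo "Running tests"
```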
## `&` (ampersand)
Do not use Latin abbreviations. Use **and** instead, unless you are documenting a UI element that uses an `&`.
## `@mention`
Try to avoid **`@mention`**. Say **mention** instead, and consider linking to the
[mentions topic](../../../user/discussions/_index.md#mentions).
Don't use backticks.
## 2FA, two-factor authentication
Spell out **two-factor authentication** in sentence case for the first use and in topic titles, and **2FA**
thereafter. If it's the first word in a sentence, do not capitalize `factor` or `authentication`. For example:
- Two-factor authentication (2FA) helps secure your account. Set up 2FA when you first sign in.
## ability, able
Try to avoid using **ability** or **able** because they can be ambiguous.
The usage of these words is similar to [allow and enable](#allow-enable).
Instead of talking about the abilities of the user, or
the capabilities of the product, be direct and specific.
You can, however, use these terms when you're talking about security, or
preventing someone from being able to complete a task in the UI.
Do not confuse **ability** or **able** with [permissions](#permissions) or [roles](#roles).
Use:
- You cannot change this setting.
- To change this setting, you must have the Maintainer role.
- Confirm you can sign in.
- The external load balancer cannot connect.
- Option to delete branches introduced in GitLab 17.1.
Instead of:
- You are not able to change this setting.
- You must have the ability to change this setting.
- Verify you are able to sign in.
- The external load balancer will not be able to connect.
- Ability to delete branches introduced in GitLab 17.1.
## above
Try to avoid using **above** when referring to an example or table in a documentation page. If required, use **previous** instead. For example:
- In the previous example, the dog had fleas.
Do not use **above** when referring to versions of the product. Use [**later**](#later) instead.
Use:
- In GitLab 14.4 and later...
Instead of:
- In GitLab 14.4 and above...
- In GitLab 14.4 and higher...
- In GitLab 14.4 and newer...
## access level
Access levels are different than [roles](#roles) or [permissions](#permissions).
When you create a user, you choose an access level: **Regular**, **Auditor**, or **Administrator**.
Capitalize these words when you refer to the UI. Otherwise use lowercase.
## add
Use **add** when an object already exists. If the object does not exist yet, use [**create**](#create) instead.
**Add** is the opposite of [remove](#remove).
For example:
- Add a user to the list.
- Add an issue to the epic.
Do not confuse **add** with [**create**](#create).
Do not use **Add new**.
## Admin area
Use:
- **Admin** area, to describe this area of the UI.
- **Admin** for the UI button.
Instead of:
- **Admin area** (with both words as bold)
- **Admin Area** (with **Area** capitalized)
- **Admin** Area (with Area capitalized)
- **administrator area**
- or other variants
## Admin Mode
Use title case for **Admin Mode**. The UI uses title case.
## administrator
Use **administrator access** instead of **admin** when talking about a user's access level.

An **administrator** is not a [role](#roles) or [permission](#permissions).
Use:
- To do this thing, you must be an administrator.
- To do this thing, you must have administrator access.
Instead of:
- To do this thing, you must have the Admin role.
## advanced search
Use lowercase for **advanced search** to refer to the faster, more efficient search across the entire GitLab instance.
## agent for Kubernetes
Use lowercase to refer to the [GitLab agent for Kubernetes](https://gitlab.com/gitlab-org/cluster-integration/gitlab-agent).
For example:
- To connect your cluster to GitLab, use the GitLab agent for Kubernetes.
- Install the agent in your cluster.
- Select an agent from the list.
Do not use title case for **GitLab Agent** or **GitLab Agent for Kubernetes**.
When referring to the specific component in technical contexts, use `agentk` in backticks.
## agent for workspace
Use lowercase **agent for workspace** when referring to the component that runs
in a workspace and is used to access the workspace. Do not use title case for **Workspace**. For example:
- The agent for workspace handles GitLab integration tasks in the workspace.
- Configure the agent for workspace to connect your development environment.
When referring to the specific component in technical contexts, use `agentw` in backticks.
Do not confuse with [agent for Kubernetes](#agent-for-kubernetes).
## agent access token
The token generated when you create an agent for Kubernetes. Use **agent access token**, not:
- registration token
- secret token
- authentication token
## Agentic Chat, GitLab Duo Agentic Chat
GitLab Duo Agentic Chat is an experimental, enhanced version of [GitLab Duo Chat](#chat-gitlab-duo-chat).
Use **Agentic Chat** with a capital `a` and `c` for **Agentic Chat** or **GitLab Duo Agentic Chat**.
On first use on a page, use **GitLab Duo Agentic Chat**.
Thereafter, use **Agentic Chat** by itself.
Do not use **Duo Agentic Chat**.
## agnostic
Instead of **agnostic**, use **platform-independent** or **vendor-neutral**.
([Vale](../testing/vale.md) rule: [`SubstitutionWarning.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/.vale/gitlab_base/SubstitutionWarning.yml))
## AI, artificial intelligence
Use **AI**. Do not spell out **artificial intelligence**.
## AI agent
When writing about AI, the **agent** is an entity that performs actions for the user.
You can use **AI agent** if **agent** on its own is not clear.
When you're interacting with an AI agent, a [**session**](#session) is running.
The user can stop a session.
One or more AI agents can be part of a [**flow**](#flows), where they are orchestrated to work together on a problem.
## AI gateway
Use lowercase for **AI gateway** and do not hyphenate.
## AI Impact Dashboard
Use title case for **AI Impact Dashboard**.
On first mention on a page, use **GitLab Duo AI Impact Dashboard**.
Thereafter, use **AI Impact Dashboard** by itself.
## AI-powered, AI-native
Use **AI-native** instead of **AI-powered**. For example, **Code Suggestions is an AI-native feature**.
## air gap, air-gapped
Use **offline environment** to describe installations that have physical barriers or security policies that prevent or limit internet access. Do not use **air gap**, **air gapped**, or **air-gapped**. For example:
- The firewall policies in an offline environment prevent the computer from accessing the internet.
## allow, enable
Try to avoid **allow** and **enable**, unless you are talking about security-related features or the
state of a feature flag.
Use:
- You can add a file to your repository.
Instead of:
- This feature allows you to add a file to your repository.
- This feature enables users to add files to their repository.
This phrasing is more active and is from the user perspective, rather than the person who implemented the feature.
For more information, see the [Microsoft Style Guide](https://learn.microsoft.com/en-us/style-guide/a-z-word-list-term-collections/a/allow-allows).
## analytics
Use lowercase for **analytics** and its variations, like **contribution analytics** and **issue analytics**.
However, if the UI has different capitalization, make the documentation match the UI.
For example:
- You can view merge request analytics for a project. They are displayed on the Merge Request Analytics dashboard.
## ancestor
To refer to a [parent item](#parent) that's one or more level above in the hierarchy,
use **ancestor**.
Do not use **grandparent**.
Examples:
- An ancestor group, a group in the project's hierarchy.
- An ancestor epic, an epic in the issue's hierarchy.
- A group and all its ancestors.
See also: [child](#child), [descendant](#descendant), and [subgroup](#subgroup).
## and/or
Instead of **and/or**, use **or** or rewrite the sentence to spell out both options.
## and so on
Do not use **and so on**. Instead, be more specific. For more information, see the
[Microsoft Style Guide](https://learn.microsoft.com/en-us/style-guide/a-z-word-list-term-collections/a/and-so-on).
## area
Use [**section**](#section) instead of **area**. The only exception is [the **Admin** area](#admin-area).
## as
Do not use **as** to mean **because**.
Use:
- Because none of the endpoints return an ID...
Instead of:
- As none of the endpoints return an ID...
## as well as
Instead of **as well as**, use **and**.
## associate
Do not use **associate** when describing adding issues to epics, or users to issues, merge requests,
or epics.
Instead, use **assign**. For example:
- Assign the issue to an epic.
- Assign a user to the issue.
## authenticated user
Use **authenticated user** instead of other variations, like **signed in user** or **logged in user**.
## authenticate
Try to use the most suitable preposition when you use **authenticate** as a verb.
Use **authenticate with** when referring to a system or provider that
performs the authentication, like a token or a service like OAuth.
For example:
- Authenticate with a deploy token.
- Authenticate with your credentials.
- Authenticate with OAuth.
- The runner uses an authentication token to authenticate with GitLab.
Use **authenticate against** when referring to a resource that contains
credentials that are checked for validation.
For example:
- The client authenticates against the LDAP directory.
- The script authenticates against the local user database.
## before you begin
Use **before you begin** when documenting the tasks that must be completed or the conditions that must be met before a user can complete a tutorial. Do not use **requirements** or **prerequisites**.
For more information, see [the tutorial page type](../topic_types/tutorial.md).
For task topic types, use [**prerequisites**](#prerequisites) instead.
## below
Try to avoid **below** when referring to an example or table in a documentation page. If required, use **following** instead. For example:
- In the following example, the dog has fleas.
## beta
Use lowercase for **beta**. For example:
- The feature is in beta.
- This is a beta feature.
- This beta release is ready to test.
You might also want to link to [this topic](../../../policy/development_stages_support.md#beta)
when writing about beta features.
## blacklist
Do not use **blacklist**. Another option is **denylist**. ([Vale](../testing/vale.md) rule: [`InclusiveLanguage.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/.vale/gitlab_base/InclusiveLanguage.yml))
## board
Use lowercase for **boards**, **issue boards**, and **epic boards**.
## box
Use **text box** to refer to the UI field. Do not use **field** or **box**. For example:
- In the **Variable name** text box, enter a value.
## branch
Use **branch** by itself to describe a branch. For specific branches, use these terms only:
- **default branch**: The primary branch in the repository. Users can use the UI to set the default
branch. For examples that use the default branch, use `main` instead of [`master`](#master).
- **source branch**: The branch you're merging from.
- **target branch**: The branch you're merging to.
- **current branch**: The branch you have checked out.
This branch might be the default branch, a branch you've created, a source branch, or some other branch.
Do not use the terms **feature branch** or **merge request branch**. Be as specific as possible. For example:
- The branch you have checked out...
- The branch you added commits to...
## bullet
Don't refer to individual items in an ordered or unordered list as **bullets**. Use **list item** instead. To be less ambiguous, you can use:
- **Ordered list item** for items in an ordered list.
- **Unordered list item** for items in an unordered list.
## button
Don't use a descriptor with **button**.
Use:
- Select **Run pipelines**.
Instead of:
- Select the **Run pipelines** button.
## cannot, can not
Use **cannot** instead of **can not**.
See also [contractions](_index.md#contractions).
## card
Although the UI term might be **card**, do not use it in the documentation.
Avoid the descriptor if you can.
Use:
- By **Seat utilization**, select **Assign seats**.
Instead of:
- On the **Seat utilization** card, select **Assign seats**.
## Chat, GitLab Duo Chat
GitLab Duo Chat is an AI-native assistant that helps developers with contextual,
conversational code explanations, troubleshooting, and guidance.
It is different from [GitLab Duo Agentic Chat](#agentic-chat-gitlab-duo-agentic-chat).
Use **Chat** with a capital `c` for **Chat** or **GitLab Duo Chat**.
On first use on a page, use **GitLab Duo Chat**.
Thereafter, use **Chat** by itself.
Do not use **Duo Chat**.
## checkbox
Use one word for **checkbox**. Do not use **check box**.
You **select** (not **check** or **enable**) and **clear** (not **deselect** or **disable**) checkboxes. For example:
- Select the **Protect environment** checkbox.
- Clear the **Protect environment** checkbox.
If you must refer to the checkbox, you can say it is selected or cleared. For example:
- Ensure the **Protect environment** checkbox is cleared.
- Ensure the **Protect environment** checkbox is selected.
(For `deselect`, [Vale](../testing/vale.md) rule: [`SubstitutionWarning.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/.vale/gitlab_base/SubstitutionWarning.yml))
## checkout, check out
Use **check out** as a verb. For the Git command, use `checkout`.
- Use `git checkout` to check out a branch locally.
- Check out the files you want to edit.
## cherry-pick, cherry pick
Use the hyphenated version of **cherry-pick**. Do not use **cherry pick**.
## CI, CD
When talking about GitLab features, use **CI/CD**. Do not use **CI** or **CD** alone.
## CI/CD
**CI/CD** is always uppercase. You are not required to spell it out on first use.
You can omit **CI/CD** when the context is clear, especially after the first use. For example:
- Test your code in a **CI/CD pipeline**. Configure the **pipeline** to run for merge requests.
- Store the value in a **CI/CD variable**. Set the **variable** to masked.
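To illustrate the terms in these examples, a CI/CD variable and a pipeline job might appear in the `.gitlab-ci.yml` file like this sketch; `DEPLOY_ENV` and `build-job` are placeholder names.
```yaml
# Illustrative only: DEPLOY_ENV and build-job are placeholder names.
variables:
  DEPLOY_ENV: "staging"   # a CI/CD variable
build-job:                # a job in the CI/CD pipeline
  script:
    - echo "Deploying to $DEPLOY_ENV"
```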
## CI/CD minutes
Do not use **CI/CD minutes**. This term was renamed to [**compute minutes**](#compute-minutes).
## child
Always use **child** as a compound noun.
Examples:
- child issue
- child epic
- child objective
- child key result
- child pipeline
See also: [descendant](#descendant), [parent](#parent) and [subgroup](#subgroup).
## click
Do not use **click**. Instead, use **select** with buttons, links, menu items, and lists.
**Select** applies to more devices, while **click** is more specific to a mouse.
However, you can make an exception for **right-click** and **click-through demo**.
## cloud licensing
Avoid the phrase **cloud licensing**, except when you have to describe the process
of synchronizing an activation code over the internet.
If you can, rather focus on the fact that this subscription is synchronized with GitLab.
For example:
- Your instance must be able to synchronize your subscription data with GitLab.
## cloud-native
When you're talking about using a Kubernetes cluster to host GitLab, you're talking about a **cloud-native version of GitLab**.
This version is different than the larger, more monolithic **Linux package** that is used to deploy GitLab.
You can also use **cloud-native GitLab** for short. It should be hyphenated and lowercase.
## code completion
Code Suggestions has evolved to include two primary features:
- **code completion**
- **code generation**
Use lowercase for **code completion**. Do not use **GitLab Duo Code Completion**.
GitLab Duo is reserved for Code Suggestions only.
**Code completion** must always be singular.
Example:
- Use code completion to populate the file.
## Code Explanation
Use title case for **Code Explanation**.
On first mention on a page, use **GitLab Duo Code Explanation**.
Thereafter, use **Code Explanation** by itself.
## code generation
Code Suggestions has evolved to include two primary features:
- **code completion**
- **code generation**
Use lowercase for **code generation**. Do not use **GitLab Duo Code Generation**.
GitLab Duo is reserved for Code Suggestions only.
**Code generation** must always be singular.
Examples:
- Use code generation to create code based on your comments.
- Adjust your code generation results by adding code comments to your file.
## Code Owner, code owner, `CODEOWNERS`
Use **Code Owners** to refer to the feature name or concept. For example:
- Use the Code Owners approval rules to protect your code.
Use **code owner** or **code owners**, lowercase, to refer to a person or group with code ownership responsibilities.
For example:
- Assign a code owner to the project.
- Contact the code owner for a review.
Do not use **codeowner**, **CodeOwner**, or **code-owner**.
Use `CODEOWNERS`, uppercase and in backticks, to refer to the filename. For example:
- Edit the `CODEOWNERS` file to define the code ownership rules.
## Code Review Summary
Use title case for **Code Review Summary**.
On first mention on a page, use **GitLab Duo Code Review Summary**.
Thereafter, use **Code Review Summary** by itself.
## Code Suggestions
Use title case for **Code Suggestions**. On first mention on a page, use **GitLab Duo Code Suggestions**.
**Code Suggestions**, the feature, should always end in an `s`. However, write like it
is singular. For example:
- Code Suggestions is turned on for the instance.
When generically referring to the suggestions that the feature outputs, use lowercase.
Examples:
- Use Code Suggestions to display suggestions as you type. (This phrase describes the feature.)
- As you type, suggestions are displayed. (This phrase is generic.)
**Code Suggestions** has evolved to include two primary features:
- [**code completion**](#code-completion)
- [**code generation**](#code-generation)
## collapse
Use **collapse** instead of **close** when you are talking about expanding or collapsing a section in the UI.
## command line
Use **From the command line** to introduce commands.
Hyphenate when you use it as an adjective. For example, **a command-line tool**.
## compute
Use **compute** for the resources used by runners to run CI/CD jobs.
Related terms:
- [**compute minutes**](#compute-minutes): How compute usage is calculated. For example, `400 compute minutes`.
- [**compute quota**](../../../ci/pipelines/compute_minutes.md): The limit of compute minutes that a namespace can use each month.
- **compute usage**: The number of compute minutes that the namespace has used from the monthly quota.
## compute minutes
Use **compute minutes** instead of these (or similar) terms:
- **CI/CD minutes**
- **CI minutes**
- **pipeline minutes**
- **CI pipeline minutes**
For more information, see [epic 2150](https://gitlab.com/groups/gitlab-com/-/epics/2150).
## configuration
When you edit a collection of settings, call it a **configuration**.
## configure
Use **configure** after a feature or product has been [set up](#setup-set-up).
For example:
1. Set up your installation.
1. Configure your installation.
## confirmation dialog
Use **confirmation dialog** to describe the dialog that asks you to confirm an action. For example:
- On the confirmation dialog, select **OK**.
Do not use **confirmation box** or **confirmation dialog box**. See also [**dialog**](#dialog).
## container registry
When documenting the GitLab container registry features and functionality, use lowercase.
Use:
- The GitLab container registry supports A, B, and C.
- You can push a Docker image to your project's container registry.
## create
Use **create** when an object does not exist and you are creating it for the first time. **Create** is the opposite of [delete](#delete).
For example:
- Create an issue.
Do not confuse **create** with [**add**](#add).
Do not use **create new**. The word **create** implies that the object is new, and the extra word is not necessary.
## currently
Do not use **currently** when talking about the product or its features. The documentation describes the product as it is today.
([Vale](../testing/vale.md) rule: [`CurrentStatus.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/.vale/gitlab_base/CurrentStatus.yml))
## custom role
Use **custom role** when referring to a role created with specific customized permissions.
When referring to a non-custom role, use [**default role**](#default-role).
## data
Use **data** as a singular noun.
Use:
- Data is collected.
- The data shows a performance increase.
Instead of:
- Data are collected.
- The data show a performance increase.
## deadline
Do not use **deadline**. Use **due date** instead.
## default role
Use **default role** when referring to the following predefined roles that have
no customized permissions added:
- Guest
- Planner
- Reporter
- Developer
- Maintainer
- Owner
- Minimal Access
Do not use **static role**, **built-in role**, or **predefined role**.
## delete
Use **delete** when an object is completely deleted. **Delete** is the opposite of [create](#create).
When the object continues to exist, use [**remove**](#remove) instead.
For example, you can remove an issue from an epic, but the issue still exists.
## Dependency Proxy
Use title case for the GitLab Dependency Proxy.
## deploy board
Use lowercase for **deploy board**.
## descendant
To refer to a [child item](#child) that's one or more level below in the hierarchy,
use **descendant**.
Do not use **grandchild**.
Examples:
- A descendant project, a project in the group's hierarchy.
- A descendant issue, an issue in the epic's hierarchy.
- A group and all its descendants.
See also: [ancestor](#ancestor), [child](#child), and [subgroup](#subgroup).
## Developer
When writing about the Developer role:
- Use a capital **D**.
- Write it out.
- Use: if you are assigned the Developer role
- Instead of: if you are a Developer
- When the Developer role is the minimum required role:
- Use: at least the Developer role
- Instead of: the Developer role or higher
Do not use bold.
Do not use **Developer permissions**. A user who is assigned the Developer role has a set of associated permissions.
## dialog
Use **dialog** rather than any of these alternatives:
- **dialog box**
- **modal**
- **modal dialog**
- **modal window**
- **pop-up**
- **pop-up window**
- **window**
See also [**confirmation dialog**](#confirmation-dialog). For more information, see the [Microsoft Style Guide](https://learn.microsoft.com/en-us/style-guide/a-z-word-list-term-collections/d/dialog-box-dialog-dialogue).
Before you use this term, confirm whether **dialog** or [**drawer**](#drawer) is
the correct term for your use case.
When the dialog is the location of an action, use **on** as a preposition. For example:
- On the **Grant permission** dialog, select **Group**.
See also [**on**](#on).
## disable
Do not use **disable** to describe making a setting or feature unavailable. Use alternatives like **turn off**, **hide**,
**make unavailable**, or **remove** instead.
To describe a state, use **off**, **inactive**, or **unavailable**.
This guidance is based on the
[Microsoft Style Guide](https://learn.microsoft.com/en-us/style-guide/a-z-word-list-term-collections/d/disable-disabled).
## disallow
Use **prevent** instead of **disallow**. ([Vale](../testing/vale.md) rule: [`Substitutions.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/.vale/gitlab_base/Substitutions.yml))
## Discussion Summary
Use title case for **Discussion Summary**.
On first mention on a page, use **GitLab Duo Discussion Summary**.
Thereafter, use **Discussion Summary** by itself.
## Docker-in-Docker, `dind`
Use **Docker-in-Docker** when you are describing running a Docker container by using the Docker executor.
Use `dind` in backticks to describe the container name: `docker:dind`. Otherwise, spell it out.
## downgrade
To be more upbeat and precise, do not use **downgrade**. Focus instead on the action the user is taking.
- For changing to earlier GitLab versions, use [**roll back**](#roll-back).
- For changing to lower GitLab tiers, use **change the subscription tier**.
## download
Use **download** to describe saving data to a user's device. For details, see
[the Microsoft style guide](https://learn.microsoft.com/en-us/style-guide/a-z-word-list-term-collections/d/download).
Do not confuse download with [export](#export).
## drawer
Use **drawer** to describe a [drawer UI component](../drawers.md) that:
- Appears from the right side of the screen.
- Displays context-specific information or actions without the user having to
leave the current page.
To see examples of drawers:
- Go to the [Technical Writing Pipeline Editor](https://gitlab.com/gitlab-org/technical-writing/team-tasks/-/ci/editor?branch_name=main) and select **Help** ({{< icon name="information-o" >}}).
- Open GitLab Duo Chat.
Before you use this term, confirm whether **drawer** or [**dialog**](#dialog) is
the correct term for your use case.
## dropdown list
Use **dropdown list** to refer to the UI element. Do not use **dropdown** without **list** after it.
Do not use **drop-down** (hyphenated), **dropdown menu**, or other variants.
For example:
- From the **Visibility** dropdown list, select **Public**.
## earlier
Use **earlier** when talking about version numbers.
Use:
- In GitLab 14.1 and earlier.
Instead of:
- In GitLab 14.1 and lower.
- In GitLab 14.1 and older.
## easily
Do not use **easily**. If the user doesn't find the process to be easy, we lose their trust.
## edit
Use **edit** for UI documentation and user actions.
For example:
- To edit your profile settings, select **Edit**.
For API documentation and programmatic changes, use **[update](#update)**.
## e.g.
Do not use Latin abbreviations. Use **for example**, **such as**, **for instance**, or **like** instead. ([Vale](../testing/vale.md) rule: [`LatinTerms.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/.vale/gitlab_base/LatinTerms.yml))
## ellipsis, ellipses
Avoid ellipses when you can. If you must include them, for example as part of a code block or other CLI response,
use three periods with no space (`...`) instead of the `…` character or the `&hellip;` HTML entity.
For more information, see [code blocks](_index.md#code-blocks).
Do not include any ellipses when documenting UI text. For example, use:
- **Search or go to**
Instead of:
- **Search or go to...**
For more information, see the [Microsoft Style Guide](https://learn.microsoft.com/en-us/style-guide/punctuation/ellipses).
## email
Do not use **e-mail** with a hyphen. When plural, use **emails** or **email messages**. ([Vale](../testing/vale.md) rule: [`SubstitutionWarning.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/.vale/gitlab_base/SubstitutionWarning.yml))
## email address
Use **email address** when referring to addresses used in emails. Do not shorten to **email**, which refers to the messages themselves.
## emoji
Use **emoji** to refer to the plural form of **emoji**.
## enable
Do not use **enable** to describe making a setting or feature available. Use **turn on** instead.
To describe a state, use **on** or **active**.
This guidance is based on the
[Microsoft Style Guide](https://learn.microsoft.com/en-us/style-guide/a-z-word-list-term-collections/d/disable-disabled).
## enter
In most cases, use **enter** rather than **type**.
- **Enter** encompasses multiple ways to enter information, including speech and keyboard.
- **Enter** assumes that the user puts a value in a field and then moves the cursor outside the field (or presses <kbd>Enter</kbd>).
**Enter** includes both the entering of the content and the action to validate the content.
For example:
- In the **Variable name** text box, enter a value.
- In the **Variable name** text box, enter `my text`.
When you use **Enter** to refer to the key on a keyboard, use the HTML `<kbd>` tag:
- To view the list of results, press <kbd>Enter</kbd>.
See also [**type**](#type).
## epic
Use lowercase for **epic**.
See also [associate](#associate).
## epic board
Use lowercase for **epic board**.
## etc.
Try to avoid **etc.**. Be as specific as you can. Do not use
[**and so on**](#and-so-on) as a replacement.
Use:
- You can edit objects, like merge requests and issues.
Instead of:
- You can edit objects, like merge requests, issues, etc.
## expand
Use **expand** instead of **open** when you are talking about expanding or collapsing a section in the UI.
## experiment
Use lowercase for **experiment**. For example:
- This feature is an experiment.
- These features are experiments.
- This experiment is ready to test.
If you must, you can use **experimental**.
You might also want to link to [this topic](../../../policy/development_stages_support.md#experiment)
when writing about experimental features.
## export
Use **export** to indicate translating raw data,
which is not represented by a file in GitLab, into a standard file format.
You can differentiate **export** from **download** because:
- Often, you can use export options to change the output.
- Exported data is not necessarily downloaded to a user's device.
For example:
- Export the contents of your report to CSV format.
Do not confuse with [download](#download).
## FAQ
We want users to find information quickly, and they rarely search for the term **FAQ**.
Information in FAQs belongs with other similar information, under a searchable topic title.
## feature
You should rarely use the word **feature**. Instead, explain what GitLab does.
For example, use:
- Use merge requests to incorporate changes into the target branch.
Instead of:
- Use the merge request feature to incorporate changes into the target branch.
## feature branch
Do not use **feature branch**. See [branch](#branch).
## field
Use **text box** instead of **field** or **box**.
Use:
- In the **Variable name** text box, enter `my text`.
Instead of:
- In the **Variable name** field, enter `my text`.
However, you can make an exception when you are writing a task and you want to refer to all
of the fields at once. For example:
1. On the left sidebar, select **Search or go to** and find your project.
1. Select **Settings** > **CI/CD**.
1. Expand **General pipelines**.
1. Complete the fields.
Learn more about [documenting multiple fields at once](_index.md#documenting-multiple-fields-at-once).
## filename
Use one word for **filename**. When you use filename as a variable, use `<filename>`.
([Vale](../testing/vale.md) rule: [`SubstitutionWarning.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/.vale/gitlab_base/SubstitutionWarning.yml))
## filter
When you are viewing a list of items, like issues or merge requests, you filter the list by
the available attributes. For example, you might filter by assignee or reviewer.
Filtering is different from [searching](#search).
## flows
GitLab provides multiple **flows** that are run by [agents](#ai-agent).
Do not use **agent flow**.
You choose a flow. You start a [**session**](#session).
## foo
Do not use **foo** in product documentation. You can use it in our API and contributor documentation, but try to use a clearer and more meaningful example instead.
## fork
A **fork** is a project that was created from an **upstream project** by using the
forking process.
The **upstream project** (also known as the **source project**) and the **fork** have a **fork relationship** and are
**linked**.
If the **fork relationship** is removed, the
**fork** is **unlinked** from the **upstream project**.
## Free
Use **Free**, in uppercase, for the subscription tier. When you refer to **Free**
in the context of other subscription tiers, follow [the subscription tier](#subscription-tier) guidance.
## full screen
Use two words for **full screen**.
([Vale](../testing/vale.md) rule: [`SubstitutionWarning.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/.vale/gitlab_base/SubstitutionWarning.yml))
## future tense
When possible, use present tense instead of future tense. For example, use **after you execute this command, GitLab displays the result** instead of **after you execute this command, GitLab will display the result**. ([Vale](../testing/vale.md) rule: [`FutureTense.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/.vale/gitlab_base/FutureTense.yml))
## GB, gigabytes
For **GB** and **MB**, follow the [Microsoft guidance](https://learn.microsoft.com/en-us/style-guide/a-z-word-list-term-collections/term-collections/bits-bytes-terms).
## Geo
Use title case for **Geo**.
## generally available, general availability
Use lowercase for **generally available** and **general availability**.
For example:
- This feature is generally available.
Prefer **generally available** over **general availability**. For example, do not say:
- This feature has reached general availability.
Do not use **GA** to abbreviate general availability.
## GitLab
Do not make **GitLab** possessive (GitLab's). This guidance follows [GitLab Trademark Guidelines](https://handbook.gitlab.com/handbook/marketing/brand-and-product-marketing/brand/brand-activation/trademark-guidelines/).
Do not put **GitLab** next to the name of another third-party tool or brand.
For example, do not use:
- GitLab Chrome extension
- GitLab Kubernetes agent
Instead, use:
- GitLab extension for Chrome
- GitLab agent for Kubernetes
Putting the brand names next to each other can imply ownership or partnership, which we don't want to do,
unless we've gone through a legal review and have been told to promote the partnership.
This guidance follows the [Use of Third-party Trademarks](https://handbook.gitlab.com/handbook/legal/policies/product-third-party-trademarks-guidelines/#dos--donts-for-use-of-third-party-trademarks-in-gitlab).
## GitLab AI vendor model
Use **GitLab AI vendor model** to refer to a [language model](#language-model-large-language-model)
that is hosted by a third-party provider, and that customers access by using the GitLab
[AI gateway](#ai-gateway) through the [Cloud Connector](../../cloud_connector/architecture.md).
Do not use this term when the [language model is hosted by a customer](#self-hosted-model),
or when the customer uses the [GitLab Duo Self-Hosted](#gitlab-duo-self-hosted)
feature.
## GitLab Dedicated
Use **GitLab Dedicated** to refer to the product offering. It refers to a GitLab instance that's hosted and managed by GitLab for customers.
GitLab Dedicated can be referred to as a single-tenant SaaS service.
Do not use **Dedicated** by itself. Always use **GitLab Dedicated**.
## GitLab Duo
Do not use **Duo** by itself. Always use **GitLab Duo**.
On first use on a page, use **GitLab Duo `<featurename>`**. As of August 2024,
the following are the names of GitLab Duo features:
- GitLab Duo AI Impact Dashboard
- GitLab Duo Chat
- GitLab Duo Code Explanation
- GitLab Duo Code Review
- GitLab Duo Code Review Summary
- GitLab Duo Code Suggestions
- GitLab Duo for the CLI
- GitLab Duo Issue Description Generation
- GitLab Duo Issue Discussion Summary
- GitLab Duo Merge Commit Message Generation
- GitLab Duo Merge Request Summary
- GitLab Duo Product Analytics
- GitLab Duo Root Cause Analysis
- GitLab Duo Self-Hosted
- GitLab Duo Test Generation
- GitLab Duo Vulnerability Explanation
- GitLab Duo Vulnerability Resolution
Excluding GitLab Duo Self-Hosted, after the first use, use the feature name
without **GitLab Duo**.
## GitLab Duo Agent Platform
Use **GitLab Duo Agent Platform**. After first use, use **Agent Platform**.
Do not use **Duo Agent Platform** by itself.
## GitLab Duo Core
Use **GitLab Duo Core** for the add-on. Do not use **Duo Core** by itself.
You can also use **the GitLab Duo Core add-on** but omit **add-on** when you can.
In marketing materials, like release posts or blogs, use
**Premium and Ultimate with GitLab Duo** instead of **GitLab Duo Core**.
For example:
- [Blog: Unlocking AI for every GitLab Premium and Ultimate customer](https://about.gitlab.com/blog/gitlab-premium-with-duo/)
- [Release post: Group and project controls for Premium and Ultimate with GitLab Duo](https://about.gitlab.com/releases/2025/07/17/gitlab-18-2-released/#group-and-project-controls-for-premium-and-ultimate-with-gitlab-duo)
## GitLab Duo Enterprise
Always use **GitLab Duo Enterprise** for the add-on. Do not use **Duo Enterprise** unless approved by legal.
You can use **the GitLab Duo Enterprise add-on** (with this capitalization) but you are not required to use **add-on**
and should leave it off when you can.
## GitLab Duo Pro
Always use **GitLab Duo Pro** for the add-on. Do not use **Duo Pro** unless approved by legal.
You can use **the GitLab Duo Pro add-on** (with this capitalization) but you are not required to use **add-on**
and should leave it off when you can.
## GitLab Duo Self-Hosted
When referring to the feature, always write **GitLab Duo Self-Hosted** in full
and in title case, unless you are
[referring to a language model that's hosted by a customer, rather than GitLab](#self-hosted-model).
Do not use **Self-Hosted** by itself.
## GitLab Flavored Markdown
When possible, spell out [**GitLab Flavored Markdown**](../../../user/markdown.md).
If you must abbreviate, do not use **GFM**. Use **GLFM** instead.
## GitLab for Eclipse plugin, Eclipse
Use **GitLab for Eclipse plugin** to refer to the editor extension.
Use **Eclipse** to refer to the IDE.
## GitLab Helm chart, GitLab chart
To deploy a cloud-native version of GitLab, use:
- The GitLab Helm chart (long version)
- The GitLab chart (short version)
Do not use **the `gitlab` chart**, **the GitLab Chart**, or **the cloud-native chart**.
You use the **GitLab Helm chart** to deploy **cloud-native GitLab** in a Kubernetes cluster.
If you use it in the context of describing the
[different installation methods](_index.md#how-to-document-different-installation-methods),
use `Helm chart (Kubernetes)`.
## GitLab Pages
For consistency and branding, use **GitLab Pages** rather than **Pages**.
However, if you use **GitLab Pages** for the first mention on a page or in the UI,
you can use **Pages** thereafter.
## GitLab Runner
Use title case for **GitLab Runner**. This is the product you install. For more information about the decision for this usage,
see [this issue](https://gitlab.com/gitlab-org/gitlab/-/issues/233529).
See also:
- [runners](#runner-runners)
- [runner managers](#runner-manager-runner-managers)
- [runner workers](#runner-worker-runner-workers)
## GitLab SaaS
**GitLab SaaS** refers to both [GitLab.com](#gitlabcom) (multi-tenant SaaS) and [GitLab Dedicated](#gitlab-dedicated) (single-tenant SaaS).
Try to avoid **GitLab SaaS**. Instead, refer to the [specific offering](#offerings).
## GitLab Self-Managed
Use **GitLab Self-Managed** to refer to an installation of GitLab that customers manage.
Use the descriptor of **instance** as needed. Do not use **installation**.
Use:
- GitLab Self-Managed
- a GitLab Self-Managed instance
Instead of:
- A GitLab Self-Managed installation
- A Self-Managed GitLab installation
- A self-managed GitLab installation
- A GitLab instance that is GitLab Self-Managed
You can use **instance** on its own to describe GitLab Self-Managed. For example:
- On your instance, ensure the port is open.
- Verify that the instance is publicly accessible.
See also [self-managed](#self-managed).
## GitLab.com
Use **GitLab.com** to refer to the URL or product offering. GitLab.com is the instance that's managed by GitLab.
## GitLab Workflow extension for VS Code
Use **GitLab Workflow extension for VS Code** to refer to the extension.
You can also use **GitLab Workflow for VS Code** or **GitLab Workflow**.
For terms in VS Code, see [VS Code user interface](#vs-code-user-interface).
## GraphiQL
Use **GraphiQL** or **GraphQL explorer** to refer to this tool.
In most cases, you should use **GraphiQL** on its own with no descriptor.
Do not use:
- GraphiQL explorer tool
- GraphiQL explorer
## group access token
Use sentence case for **group access token**.
Capitalize the first word when you refer to the UI.
## guide
We want to speak directly to users. On `docs.gitlab.com`, do not use **guide** as part of a page title.
For example, **Snowplow Guide**. Instead, speak about the feature itself, and how to use it. For example, **Use Snowplow to do xyz**.
## Guest
When writing about the Guest role:
- Use a capital **G**.
- Write it out:
- Use: if you are assigned the Guest role
- Instead of: if you are a guest
- When the Guest role is the minimum required role:
- Use: at least the Guest role
- Instead of: the Guest role or higher
Do not use bold.
Do not use **Guest permissions**. A user who is assigned the Guest role has a set of associated permissions.
## handy
Do not use **handy**. If the user doesn't find the feature or process to be handy, we lose their trust. ([Vale](../testing/vale.md) rule: [`Simplicity.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/.vale/gitlab_base/Simplicity.yml))
## high availability, HA
Do not use **high availability** or **HA**, except in the GitLab [reference architectures](../../../administration/reference_architectures/_index.md#high-availability-ha). Instead, direct readers to the reference architectures for more information about configuring GitLab for handling greater amounts of users.
Do not use phrases like **high availability setup** to mean a multiple node environment. Instead, use **multi-node setup** or similar.
## higher
Do not use **higher** when talking about version numbers.
Use:
- In GitLab 14.4 and later...
Instead of:
- In GitLab 14.4 and higher...
- In GitLab 14.4 and above...
## hit
Don't use **hit** to mean **press**.
Use:
- Press **ENTER**.
Instead of:
- Hit the **ENTER** button.
## I
Do not use first-person singular. Use **you** or rewrite the phrase instead.
## i.e.
Do not use Latin abbreviations. Use **that is** instead. ([Vale](../testing/vale.md) rule: [`LatinTerms.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/.vale/gitlab_base/LatinTerms.yml))
## in order to
Do not use **in order to**. Use **to** instead. ([Vale](../testing/vale.md) rule: [`Wordy.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/.vale/gitlab_base/Wordy.yml))
## indexes, indices
For the plural of **index**, use **indexes**.
However, for Elasticsearch, use [**indices**](https://www.elastic.co/blog/what-is-an-elasticsearch-index).
## Installation from source
To refer to the installation method that uses the self-compiled code, use **self-compiled**.
Use:
- For self-compiled installations...
Instead of:
- For installations from source...
For more information, see the
[different installation methods](_index.md#how-to-document-different-installation-methods).
## -ing words
Remove **-ing** words whenever possible. They can be difficult to translate,
and more precise terms are usually available. For example:
- Instead of **The files using storage are deleted**, use **The files that use storage are deleted**.
- Instead of **Delete files using the Edit button**, use **Use the Edit button to delete files**.
- Instead of **Replicating your server is required**, use **You must replicate your server**.
## issue
Use lowercase for **issue**.
## issue board
Use lowercase for **issue board**.
## Issue Description Generation
Use title case for **Issue Description Generation**.
On first mention on a page, use **GitLab Duo Issue Description Generation**.
Thereafter, use **Issue Description Generation** by itself.
## Issue Discussion Summary
Use title case for **Issue Discussion Summary**.
On first mention on a page, use **GitLab Duo Issue Discussion Summary**.
Thereafter, use **Issue Discussion Summary** by itself.
## issue weights
Use lowercase for **issue weights**.
## IP address
Use **IP address** when referring to addresses used with Internet Protocol (IP). Do not refer to an IP address as an
**IP**.
## it
When you use the word **it**, ensure the word it refers to is obvious.
If it's not obvious, repeat the word rather than using **it**.
Use:
- The field returns a connection. The field accepts four arguments.
Instead of:
- The field returns a connection. It accepts four arguments.
See also [this, these, that, those](#this-these-that-those).
## job
Do not use **build** to be synonymous with **job**. A job is defined in the `.gitlab-ci.yml` file and runs as part of a pipeline.
If you want to use **CI** with the word **job**, use **CI/CD job** rather than **CI job**.
## Kubernetes executor
GitLab Runner can run jobs on a Kubernetes cluster. To do this, GitLab Runner uses the Kubernetes executor.
When referring to this feature, use:
- Kubernetes executor for GitLab Runner
- Kubernetes executor
Do not use:
- GitLab Runner Kubernetes executor, because this can infringe on the Kubernetes trademark.
## language model, large language model
When referring to language models, be precise. Not all language models are large,
and not all models are language models. When in doubt, ask a developer or PM for confirmation.
You can use LLM to refer to a large language model if you spell it out on first use.
## later
Use **later** when talking about version numbers.
Use:
- In GitLab 14.1 and later...
Instead of:
- In GitLab 14.1 and higher...
- In GitLab 14.1 and above...
- In GitLab 14.1 and newer...
## level
If you can, avoid `level` in the context of an instance, project, or group.
Use:
- This setting is turned on for the instance.
- This setting is turned on for the group and its subgroups.
- This setting is turned on for projects.
Instead of:
- This setting is turned on at the instance level.
- This setting is turned on at the group level.
- This is a project-level setting.
## lifecycle, life cycle, life-cycle
Use one word for **lifecycle**. Do not use **life cycle** or **life-cycle**.
([Vale](../testing/vale.md) rule: [`SubstitutionWarning.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/.vale/gitlab_base/SubstitutionWarning.yml))
## list
Do not use **list** when referring to a [**dropdown list**](#dropdown-list).
Use the full phrase **dropdown list** instead.
Also, do not use **list** when referring to a page. For example, the **Issues** page
is populated with a list of issues. However, you should call it the **Issues** page,
and not the **Issues** list.
## license
Licenses are different from subscriptions.
- A license grants users access to the subscription they purchased. The license includes information like the number of seats and subscription dates.
- A subscription is the subscription tier that the user purchases.
Avoid the terms [**cloud license** or **cloud licensing**](#cloud-licensing) if possible.
The following terms are displayed in the UI and in emails. You can use them when necessary:
- **Online license** - a license synchronized with GitLab
- **Offline license** - a license not synchronized with GitLab
- **Legacy license** - a license created before synchronization was possible
You can also use the terms **legacy license file** and **offline license file** when
describing the files that customers receive by email as part of the overall
licensing and synchronization process.
However, if you can, use a more specific description rather than relying on the term.
Use:
- Add a license to your instance.
- Purchase a subscription.
Instead of:
- Buy a license.
- Purchase a license.
## limitations
Do not use **Limitations** as a topic title. For more information,
see [reference topic titles](../topic_types/reference.md#reference-topic-titles).
If you must, you can use the title **Known issues**.
## log in, log on
Do not use:
- **log in**.
- **log on**.
- **login**.
Use [sign in](#sign-in-sign-in) instead.
However, if the user interface has **Log in**, you should match the UI.
## limited availability
Use lowercase for **limited availability**. For example:
- This feature has limited availability.
- Hosted runners are in limited availability.
Do not use:
- This feature has reached limited availability.
Do not use **LA** to abbreviate limited availability.
## logged-in user, logged in user
Use **authenticated user** instead of **logged-in user** or **logged in user**.
## lower
Do not use **lower** when talking about version numbers.
Use:
- In GitLab 14.1 and earlier.
Instead of:
- In GitLab 14.1 and lower.
- In GitLab 14.1 and older.
## machine learning
Use lowercase for **machine learning**.
When machine learning is used as an adjective, like **a machine learning model**,
do not hyphenate. While a hyphen might be more grammatically correct, we risk
becoming inconsistent if we try to be more precise.
## Maintainer
When writing about the Maintainer role:
- Use a capital **M**.
- Write it out.
- Use: if you are assigned the Maintainer role
- Instead of: if you are a maintainer
- When the Maintainer role is the minimum required role:
- Use: at least the Maintainer role
- Instead of: the Maintainer role or higher
Do not use bold.
Do not use **Maintainer permissions**. A user who is assigned the Maintainer role has a set of associated permissions.
## mankind
Do not use **mankind**. Use **people** or **humanity** instead. ([Vale](../testing/vale.md) rule: [`InclusiveLanguage.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/.vale/gitlab_base/InclusiveLanguage.yml))
## manpower
Do not use **manpower**. Use words like **workforce** or **GitLab team members**. ([Vale](../testing/vale.md) rule: [`InclusiveLanguage.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/.vale/gitlab_base/InclusiveLanguage.yml))
## master
Do not use **master**. Use **main** when you need a sample [default branch name](#branch).
([Vale](../testing/vale.md) rule: [`InclusiveLanguage.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/.vale/gitlab_base/InclusiveLanguage.yml))
## may, might
**Might** means something has the probability of occurring. Might is often used in troubleshooting documentation.
**May** gives permission to do something. Consider **can** instead of **may**.
Consider rewording phrases that use these terms. These terms often indicate possibility and doubt, and technical writing strives to be precise.
See also [you can](#you-can).
Use:
- The `committed_date` and `authored_date` fields are generated from different sources, and might not be identical.
- A typical pipeline consists of four stages, executed in the following order:
Instead of:
- The `committed_date` and `authored_date` fields are generated from different sources, and may not be identical.
- A typical pipeline might consist of four stages, executed in the following order:
## MB, megabytes
For **MB** and **GB**, follow the [Microsoft guidance](https://learn.microsoft.com/en-us/style-guide/a-z-word-list-term-collections/term-collections/bits-bytes-terms).
## member
When you add a [user account](#user-account) to a group or project,
the user account becomes a **member**.
## Merge Commit Message Generation
Use title case for **Merge Commit Message Generation**.
On first mention on a page, use **GitLab Duo Merge Commit Message Generation**.
Thereafter, use **Merge Commit Message Generation** by itself.
## merge request branch
Do not use **merge request branch**. See [branch](#branch).
## merge requests
Use lowercase for **merge requests**. If you use **MR** as the acronym, spell it out on first use.
## Merge Request Summary
Use title case for **Merge Request Summary**.
On first mention on a page, use **GitLab Duo Merge Request Summary**.
Thereafter, use **Merge Request Summary** by itself.
## milestones
Use lowercase for **milestones**.
## Minimal Access
When writing about the Minimal Access role:
- Use a capital **M** and a capital **A**.
- Write it out:
- Use: if you are assigned the Minimal Access role
- Instead of: if you are a Minimal Access user
- When the Minimal Access role is the minimum required role:
- Use: at least the Minimal Access role
- Instead of: the Minimal Access role or higher
Do not use bold.
Do not use **Minimal Access permissions**. A user who is assigned the Minimal Access role has a set of associated permissions.
## model registry
When documenting the GitLab model registry features and functionality, use lowercase.
Use:
- The GitLab model registry supports A, B, and C.
- You can publish a model to your project's model registry.
## models
For usage, see [language models](#language-model-large-language-model).
## n/a, N/A, not applicable
When possible, use **not applicable**. Spelling out the phrase helps non-English speaking users and avoids
capitalization inconsistencies.
## navigate
Do not use **navigate**. Use **go** instead. For example:
- Go to this webpage.
- Open a terminal and go to the `runner` directory.
([Vale](../testing/vale.md) rule: [`SubstitutionWarning.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/.vale/gitlab_base/SubstitutionWarning.yml))
## need to
Try to avoid **need to**, because it's wordy.
For example, when a variable is **required**,
instead of **You need to set the variable**, use:
- Set the variable.
- You must set the variable.
When the variable is **recommended**:
- You should set the variable.
When the variable is **optional**:
- You can set the variable.
## new
Often, you can avoid the word **new**. When you create an object, it is new,
so you don't need this additional word.
See also [**create**](#create) and [**add**](#add).
## newer
Do not use **newer** when talking about version numbers.
Use:
- In GitLab 14.4 and later...
Instead of:
- In GitLab 14.4 and higher...
- In GitLab 14.4 and above...
- In GitLab 14.4 and newer...
## normal, normally
Don't use **normal** to mean the usual, typical, or standard way of doing something.
Use those terms instead.
Use:
- Typically, you specify a certificate.
- Usually, you specify a certificate.
- Follow the standard Git workflow.
Instead of:
- Normally, you specify a certificate.
- Follow the normal Git workflow.
([Vale](../testing/vale.md) rule: [`Normal.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/.vale/gitlab_base/Normal.yml))
## note that
Do not use **note that** because it's wordy.
Use:
- You can change the settings.
Instead of:
- Note that you can change the settings.
## offerings
The current product offerings are:
- [GitLab.com](#gitlabcom)
- [GitLab Self-Managed](#self-managed)
- [GitLab Dedicated](#gitlab-dedicated)
The [availability details](availability_details.md) reflect these offerings.
## older
Do not use **older** when talking about version numbers.
Use:
- In GitLab 14.1 and earlier.
Instead of:
- In GitLab 14.1 and lower.
- In GitLab 14.1 and older.
## Omnibus GitLab
When referring to the installation method that uses the Linux package, refer to it
as **Linux package**.
Use:
- For installations that use the Linux package...
Instead of:
- For installations that use Omnibus GitLab...
For more information, see the
[different installation methods](_index.md#how-to-document-different-installation-methods).
## on
When documenting high-level UI elements, use **on** as a preposition. For example:
- On the left sidebar, select **Settings** > **CI/CD**.
- On the **Grant permission** dialog, select **Group**.
Do not use **from** or **in**. For more information, see the [Microsoft Style Guide](https://learn.microsoft.com/en-us/style-guide/a-z-word-list-term-collections/f/from-vs-on).
## once
The word **once** means **one time**. Don't use it to mean **after** or **when**.
Use:
- When the process is complete...
Instead of:
- Once the process is complete...
## only
Put the word **only** next to the word it modifies.
In the following example, **only** modifies the noun **projects**.
The meaning is that you can create one type of project: a private project.
- You can create only private projects.
In the following example, **only** modifies the verb **create**.
The meaning is that you can't perform other actions,
like deleting private projects, or adding users to them.
- You can only create private projects.
## optional
If something is optional, such as a command argument, parameter value,
or a file, use `Optional` followed by a period. For optional topics,
append `(optional)` to the topic title.
For example:
```markdown
### This is a topic (optional)
- `value`: Optional. Use it to do something.
```
Follow the same guidance for [optional task steps](_index.md#optional-steps).
## override
Use **override** to indicate temporary replacement.
For example, a value might be overridden when a job runs. The
original value does not change.
## overwrite
Use **overwrite** to indicate permanent replacement.
For example, a log file might overwrite a log file of the same name.
## Owner
When writing about the Owner role:
- Use a capital **O**.
- Write it out.
- Use: if you are assigned the Owner role
- Instead of: if you are an owner
Do not use bold.
Do not use **Owner permissions**. A user who is assigned the Owner role has a set of associated permissions.
An Owner is the highest role a user can have.
## package registry
When documenting the GitLab package registry features and functionality, use lowercase.
Use:
- The GitLab package registry supports A, B, and C.
- You can publish a package to your project's package registry.
## page
If you write a phrase like, "On the **Issues** page," ensure steps for how to get to the page are nearby. Otherwise, people might not know what the **Issues** page is.
The page name should be visible in the UI at the top of the page,
or included in the breadcrumb.
The docs should match the case in the UI, and the page name should be bold. For example:
- On the **Test cases** page, ...
## parent
Always use **parent** as a compound noun.
Do not use **direct [ancestor](#ancestor)** or **ascendant**.
Examples:
- parent directory
- parent group
- parent project
- parent commit
- parent issue
- parent item
- parent epic
- parent objective
- parent pipeline
See also: [child](#child), and [subgroup](#subgroup).
## per
Do not use **per** because it can have several different meanings.
Use the specific prepositional phrase instead:
- for each
- through
- by
- every
- according to
## permissions
Do not use [**roles**](#roles) and **permissions** interchangeably. Each user is assigned a role. Each role includes a set of permissions.
Permissions are not the same as [**access levels**](#access-level).
## personal access token
Use sentence case for **personal access token**.
Capitalize the first word when you refer to the UI.
## Planner
When writing about the Planner role:
- Use a capital **P**.
- Write it out.
- Use: if you are assigned the Planner role
- Instead of: if you are a Planner
- When the Planner role is the minimum required role:
- Use: at least the Planner role
- Instead of: the Planner role or higher
Do not use bold.
Do not use **Planner permissions**. A user who is assigned the Planner role has a set of associated permissions.
## please
Do not use **please** in the product documentation.
In UI text, use **please** when we've inconvenienced the user. For more information,
see the [Microsoft Style Guide](https://learn.microsoft.com/en-us/style-guide/a-z-word-list-term-collections/p/please).
## Premium
Use **Premium**, in uppercase, for the subscription tier. When you refer to **Premium**
in the context of other subscription tiers, follow [the subscription tier](#subscription-tier) guidance.
## preferences
Use **preferences** to describe user-specific, system-level settings like theme and layout.
## prerequisites
Use **prerequisites** when documenting the tasks that must be completed or the conditions that must be met before a user can complete a task. Do not use **requirements**.
**Prerequisites** must always be plural, even if the list includes only one item.
For more information, see [the task topic type](../topic_types/task.md).
For tutorial page types, use [**before you begin**](#before-you-begin) instead.
## press
Use **press** when talking about keyboard keys. For example:
- To stop the command, press <kbd>Control</kbd>+<kbd>C</kbd>.
## profanity
Do not use profanity. Doing so might negatively affect other users and contributors, which is contrary to the GitLab value of [Diversity, Inclusion, and Belonging](https://handbook.gitlab.com/handbook/values/#diversity-inclusion).
## project
See [repository, project](#repository-project).
## project access token
Use sentence case for **project access token**.
Capitalize the first word when you refer to the UI.
## provision
Use the term **provision** when referring to provisioning cloud infrastructure. You provision the infrastructure, and then deploy applications to it.
For example, you might write something like:
- Provision an AWS EKS cluster and deploy your application to it.
## push rules
Use lowercase for **push rules**.
## quite
Do not use **quite** because it's wordy.
## `README` file
Use backticks and lowercase for **the `README` file**, or **the `README.md` file**.
When possible, use the full phrase: **the `README` file**.
For plural, use **`README` files**.
## recommend, we recommend
Instead of **we recommend**, use **you should**. We want to talk to the user the way
we would talk to a colleague, and to avoid differentiation between `we` and `them`.
- You should set the variable. (It's recommended.)
- Set the variable. (It's required.)
- You can set the variable. (It's optional.)
See also [recommended steps](_index.md#recommended-steps).
## register
Use **register** instead of **sign up** when talking about creating an account.
## reindex
Use **reindex** instead of **re-index** when talking about search.
## remove
Use **remove** when an object continues to exist. For example, you can remove an issue from an epic, but the issue still exists.
When an object is completely deleted, use [**delete**](#delete) instead.
## Reporter
When writing about the Reporter role:
- Use a capital **R**.
- Write it out.
- Use: if you are assigned the Reporter role
- Instead of: if you are a reporter
- When the Reporter role is the minimum required role:
- Use: at least the Reporter role
- Instead of: the Reporter role or higher
Do not use bold.
Do not use **Reporter permissions**. A user who is assigned the Reporter role has a set of associated permissions.
## repository, project
A GitLab project contains, among other things, a Git repository. Use **repository** when referring to the
Git repository. Use **project** to refer to the GitLab user interface for managing and configuring the
Git repository, wiki, and other features.
## Repository Mirroring
Use title case for **Repository Mirroring**.
## resolution, resolve
Use **resolution** when the troubleshooting solution fixes the issue permanently.
A resolution usually involves file and code changes to correct the problem.
For example:
- To resolve this issue, edit the `.gitlab-ci.yml` file.
- One resolution is to edit the `.gitlab-ci.yml` file.
See also [workaround](#workaround).
## requirements
When documenting the tasks that must be completed or the conditions that must be met before a user can complete the steps:
- Use **prerequisites** for tasks. For more information, see [the task topic type](../topic_types/task.md).
- Use **before you begin** for tutorials. For more information, see [the tutorial page type](../topic_types/tutorial.md).
Do not use **requirements**.
## reset
Use **reset** to describe the action associated with resetting an item to a new state.
## respectively
Avoid **respectively** and be more precise instead.
Use:
- To create a user, select **Create user**. For an existing user, select **Save changes**.
Instead of:
- Select **Create user** or **Save changes** if you created a new user or
edited an existing one respectively.
## restore
See the [Microsoft Style Guide](https://learn.microsoft.com/en-us/style-guide/a-z-word-list-term-collections/r/restore) for guidance on **restore**.
## review app
Use lowercase for **review app**.
## roles
A user has a role **for** a project or group.
Use:
- You must have the Owner role for the group.
Instead of:
- You must have the Owner role of the group.
Do not use **roles** and [**permissions**](#permissions) interchangeably. Each user is assigned a role. Each role includes a set of permissions.
Two types of roles exist: [custom](#custom-role) and [default](#default-role).
Roles are not the same as [**access levels**](#access-level).
## Root Cause Analysis
Use title case for **Root Cause Analysis**.
On first mention on a page, use **GitLab Duo Root Cause Analysis**.
Thereafter, use **Root Cause Analysis** by itself.
## roll back
Use **roll back** for changing a GitLab version to an earlier one.
Do not use **roll back** for licensing or subscriptions. Use **change the subscription tier** instead.
## runner, runners
Use lowercase for **runners**. These are the agents that run CI/CD jobs. See also [GitLab Runner](#gitlab-runner) and [this issue](https://gitlab.com/gitlab-org/gitlab/-/issues/233529).
When referring to runners, if you have to specify that the runners are installed on a customer's GitLab instance,
use **self-managed** rather than **self-hosted**.
When referring to the scope of runners, use:
- **project runner**: Associated with specific projects.
- **group runner**: Available to all projects and subgroups in a group.
- **instance runner**: Available to all groups and projects in a GitLab instance.
## runner manager, runner managers
Use lowercase for **runner managers**. These are a type of runner that can create multiple runners for autoscaling. See also [GitLab Runner](#gitlab-runner).
## runner worker, runner workers
Use lowercase for **runner workers**. This is the process created by the runner on the host computing platform to run jobs. See also [GitLab Runner](#gitlab-runner).
## runner authentication token
Use **runner authentication token** instead of variations like **runner token**, **authentication token**, or **token**.
Runners are assigned runner authentication tokens when they are created, and use them to authenticate with GitLab when
they execute jobs.
## Runner SaaS, SaaS runners
Do not use **Runner SaaS** or **SaaS runners**.
Use **GitLab-hosted runners** as the main feature name that describes runners hosted on GitLab.com and GitLab Dedicated.
To specify offerings and operating systems use:
- **hosted runners for GitLab.com**
- **hosted runners for GitLab Dedicated**
- **hosted runners on Linux for GitLab.com**
- **hosted runners on Windows for GitLab.com**
Do not use **hosted runners** without the **GitLab-** prefix or without the offering or operating system.
## (s)
Do not use **(s)** to make a word optionally plural. It can slow down comprehension. For example:
Use:
- Select the jobs you want.
Instead of:
- Select the job(s) you want.
If you can select multiples of something, then write the word as plural.
## sanity check
Do not use **sanity check**. Use **check for completeness** instead. ([Vale](../testing/vale.md) rule: [`InclusiveLanguage.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/.vale/gitlab_base/InclusiveLanguage.yml))
## scalability
Do not use **scalability** when talking about increasing GitLab performance for additional users. The words scale or scaling
are sometimes acceptable, but references to increasing GitLab performance for additional users should direct readers
to the GitLab [reference architectures](../../../administration/reference_architectures/_index.md) page.
## search
When you search, you type a string in the search box on the left sidebar.
The search results are displayed on a search page.
Searching is different from [filtering](#filter).
## seats
When referring to the subscription billing model:
- For GitLab.com, use **seats**. Customers purchase seats. Users occupy seats when they are invited
to a group, with some [exceptions](../../../subscriptions/manage_users_and_seats.md#gitlabcom-billing-and-usage).
- For GitLab Self-Managed, use **users**. Customers purchase subscriptions for a specified number of **users**.
## section
Use **section** to describe an area on a page. For example, if a page has lines that separate the UI
into separate areas, refer to these areas as sections.
We often think of expandable/collapsible areas as **sections**. When you refer to expanding
or collapsing a section, don't include the word **section**.
Use:
- Expand **Auto DevOps**.
Instead of:
- Expand the **Auto DevOps** section.
## select
Use **select** with buttons, links, menu items, and lists. **Select** applies to more devices,
while **click** is more specific to a mouse.
However, you can make an exception for **right-click** and **click-through demo**.
## self-hosted model
Use **self-hosted model** (lowercase) to refer to a language model that's hosted by a customer, rather than GitLab.
The language model might be an LLM (large language model), but it might not be.
## Self-Hosted
To avoid confusion with [**GitLab Self-Managed**](#gitlab-self-managed),
when referring to the [**GitLab Duo Self-Hosted** feature](#gitlab-duo-self-hosted),
do not use **Self-Hosted** by itself.
Always write **GitLab Duo Self-Hosted** in full and in title case, unless you are
[referring to a language model that's hosted by a customer, rather than GitLab](#self-hosted-model).
## self-managed
Use **GitLab Self-Managed** to refer to a customer's installation of GitLab.
- Do not use **self-hosted**.
See [GitLab Self-Managed](#gitlab-self-managed).
## Service Desk
Use title case for **Service Desk**.
## session
When an [agent](#ai-agent) is working on a [**flow**](#flows), a **session** is running.
The session can start and stop.
Do not use **AI session** or **agent session**.
## setup, set up
Use **setup** as a noun, and **set up** as a verb. For example:
- Your remote office setup is amazing.
- To set up your remote office correctly, consider the ergonomics of your work area.
Do not confuse **set up** with [**configure**](#configure).
**Set up** implies that it's the first time you've done something. For example:
1. Set up your installation.
1. Configure your installation.
## settings
A **setting** changes the default behavior of the product. A **setting** consists of a key/value pair,
typically represented by a label with one or more options.
## sign in, sign-in
To describe the action of signing in, use:
- **sign in**.
- **sign in to** as a verb. For example: Use your password to sign in to GitLab.
You can also use:
- **sign-in** as a noun or adjective. For example: **sign-in page** or
**sign-in restrictions**.
- **single sign-on**.
Do not use:
- **sign on**.
- **sign into**.
- [**log on**, **log in**, or **log into**](#log-in-log-on).
If the user interface has different words, you can use those.
## sign up
Use **register** instead of **sign up** when talking about creating an account.
## signed-in user, signed in user
Use **authenticated user** instead of **signed-in user** or **signed in user**.
## simply, simple
Do not use **simply** or **simple**. If the user doesn't find the process to be simple, we lose their trust. ([Vale](../testing/vale.md) rule: [`Simplicity.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/.vale/gitlab_base/Simplicity.yml))
## since
The word **since** indicates a timeframe. For example, **Since 1984, Bon Jovi has existed**. Don't use **since** to mean **because**.
Use:
- Because you have the Developer role, you can delete the widget.
Instead of:
- Since you have the Developer role, you can delete the widget.
## slashes
Instead of **and/or**, use **or** or re-write the sentence. This rule also applies to other slashes, like **follow/unfollow**. Some exceptions (like **CI/CD**) are allowed.
## slave
Do not use **slave**. Another option is **secondary**. ([Vale](../testing/vale.md) rule: [`InclusiveLanguage.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/.vale/gitlab_base/InclusiveLanguage.yml))
## storages
In the context of:
- Gitaly, storage is physical and must be called a **storage**.
- Gitaly Cluster (Praefect), storage can be either:
- Virtual and must be called a **virtual storage**.
- Physical and must be called a **physical storage**.
Gitaly storages have physical paths and virtual storages have virtual paths.
## subgroup
Use **subgroup** (no hyphen) instead of **sub-group**.
Also, avoid alternative terms for subgroups, such as **child group** or **low-level group**.
([Vale](../testing/vale.md) rule: [`SubstitutionWarning.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/.vale/gitlab_base/SubstitutionWarning.yml))
## subscription tier
Do not confuse **subscription** or **subscription tier** with **[license](#license)**.
A user purchases a **subscription**. That subscription has a **tier**.
To describe tiers:
| Instead of | Use |
|---------------------------------|----------------------------------------|
| In the Free tier or greater | In all tiers |
| In the Free tier or higher | In all tiers |
| In the Premium tier or greater | In the Premium and Ultimate tier |
| In the Premium tier or higher | In the Premium and Ultimate tier |
| In the Premium tier or lower | In the Free and Premium tier |
## Suggested Reviewers
Use title case for **Suggested Reviewers**.
**Suggested Reviewers** is always plural, and is capitalized even when the use is generic.
Examples:
- Suggested Reviewers can recommend a person to review your merge request. (This phrase describes the feature.)
- As you type, Suggested Reviewers are displayed. (This phrase is generic but still uses capital letters.)
## tab
Use bold for tab names. For example:
- The **Pipelines** tab
- The **Overview** tab
## that
Do not use **that** when describing a noun. For example:
Use:
- The file you save...
Instead of:
- The file **that** you save...
See also [this, these, that, those](#this-these-that-those).
## terminal
Use lowercase for **terminal**. For example:
- Open a terminal.
- From a terminal, run the `docker login` command.
## Terraform Module Registry
Use title case for the GitLab Terraform Module Registry, but use lowercase `m` when
talking about non-specific modules. For example:
- You can publish a Terraform module to your project's Terraform Module Registry.
## Test Generation
Use title case for **Test Generation**.
On first mention on a page, use **GitLab Duo Test Generation**.
Thereafter, use **Test Generation** by itself.
## text box
Use **text box** instead of **field** or **box** when referring to the UI element.
## there is, there are
Try to avoid **there is** and **there are**. These phrases hide the subject.
Use:
- The bucket has holes.
Instead of:
- There are holes in the bucket.
## they
Avoid the use of gender-specific pronouns, unless referring to a specific person.
Use a singular [they](https://developers.google.com/style/pronouns#gender-neutral-pronouns) as
a gender-neutral pronoun.
## this, these, that, those
Always follow these words with a noun. For example:
- Use: **This setting** improves performance.
- Instead of: **This** improves performance.
- Use: **These pants** are the best.
- Instead of: **These** are the best.
- Use: **That droid** is the one you are looking for.
- Instead of: **That** is the one you are looking for.
- Use: **Those settings** must be configured. (Or even better, **Configure those settings.**)
- Instead of: **Those** need to be configured.
## to which, of which
Try to avoid **to which** and **of which**, and let the preposition dangle at the end of the sentence instead.
For examples, see [Prepositions](_index.md#prepositions).
## to-do item
Use lowercase and hyphenate **to-do** item. ([Vale](../testing/vale.md) rule: [`ToDo.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/.vale/gitlab_base/ToDo.yml))
## To-Do List
Use title case for **To-Do List**. ([Vale](../testing/vale.md) rule: [`ToDo.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/.vale/gitlab_base/ToDo.yml))
## toggle
You **turn on** or **turn off** a toggle. For example:
- Turn on the **blah** toggle.
## top-level group
Use lowercase for **top-level group** (hyphenated).
Do not use **root group**.
## TFA, two-factor authentication
Use [**2FA** and **two-factor authentication**](#2fa-two-factor-authentication) instead.
## turn on, turn off
Use **turn on** and **turn off** instead of **enable** or **disable**.
For details, see [the Microsoft style guide](https://learn.microsoft.com/en-us/style-guide/a-z-word-list-term-collections/t/turn-on-turn-off).
See also [enable](#enable) and [disable](#disable).
## type
Use **type** when the cursor remains where you're typing. For example,
in a search box, you begin typing and search results appear. You do not
click out of the search box.
For example:
- To view all users named Alex, type `Al`.
- To view all labels for the documentation team, type `doc`.
- For a list of quick actions, type `/`.
See also [**enter**](#enter).
## Ultimate
Use **Ultimate**, in uppercase, for the subscription tier. When you refer to **Ultimate**
in the context of other subscription tiers, follow [the subscription tier](#subscription-tier) guidance.
## undo
See the [Microsoft Style Guide](https://learn.microsoft.com/en-us/style-guide/a-z-word-list-term-collections/u/undo) for guidance on **undo**.
## units of measurement
Use a space between the number and the unit of measurement. For example, **128 GB**.
([Vale](../testing/vale.md) rule: [`Units.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/.vale/gitlab_base/Units.yml))
For more information, see the
[Microsoft Style Guide](https://learn.microsoft.com/en-us/style-guide/a-z-word-list-term-collections/term-collections/bits-bytes-terms).
## update
Use **update** for installing a newer **patch** version of the software,
or for documenting API and programmatic changes.
For example:
- Update GitLab from 14.9 to 14.9.1.
- Use this endpoint to update user permissions.
Do not use **update** for any other case. Instead, use **[upgrade](#upgrade)** or **[edit](#edit)**.
## upgrade
Use **upgrade** for:
- Choosing a higher subscription tier (Premium or Ultimate).
- Installing a newer **major** (13.0) or **minor** (13.2) version of GitLab.
For example:
- Upgrade to GitLab Ultimate.
- Upgrade GitLab from 14.0 to 14.1.
- Upgrade GitLab from 14.0 to 15.0.
Use caution with the phrase **Upgrade GitLab** without any other text.
Ensure the surrounding text clarifies whether
you're talking about the product version or the subscription tier.
See also [downgrade](#downgrade) and [roll back](#roll-back).
## upper left, upper right
Use **upper-left corner** and **upper-right corner** to provide direction in the UI.
If the UI element is not in a corner, use **upper left** and **upper right**.
Do not use **top left** and **top right**.
For more information, see the [Microsoft Style Guide](https://learn.microsoft.com/en-us/style-guide/a-z-word-list-term-collections/u/upper-left-upper-right).
## useful
Do not use **useful**. If the user doesn't find the process to be useful, we lose their trust. ([Vale](../testing/vale.md) rule: [`Simplicity.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/.vale/gitlab_base/Simplicity.yml))
## user account
You create a **user account**. The user account has an [access level](#access-level).
When you add a **user account** to a group or project, the user account becomes a **member**.
## using
Avoid **using** in most cases. It hides the subject and makes the phrase more difficult to translate.
Use **by using**, **that use**, or re-write the sentence.
For example:
- Instead of: The files using storage...
- Use: The files that use storage...
- Instead of: Change directories using the command line.
- Use: Change directories by using the command line. Or even better: To change directories, use the command line.
## utilize
Do not use **utilize**. Use **use** instead. It's more succinct and easier for non-native English speakers to understand.
([Vale](../testing/vale.md) rule: [`SubstitutionWarning.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/.vale/gitlab_base/SubstitutionWarning.yml))
## version, v
To describe versions of GitLab, use **GitLab `<version number>`**. For example:
- You must have GitLab 16.0 or later.
To describe other software, use the same style as the documentation for that software.
For example:
- In Kubernetes 1.4, you can...
Pay attention to spacing around the letter **v**. In semantic versioning, no space exists after the **v**. For example:
- v1.2.3
## via
Do not use Latin abbreviations. Use **with**, **through**, or **by using** instead. ([Vale](../testing/vale.md) rule: [`LatinTerms.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/.vale/gitlab_base/LatinTerms.yml))
## virtual registry
Use lowercase for **virtual registry**.
On first mention on a page, use **GitLab virtual registry**.
Thereafter, use **virtual registry** by itself.
Use:
- The GitLab virtual registry supports A, B, and C.
- You can configure your applications to use one virtual registry instead
of multiple upstream registries.
## VS Code user interface
When describing the user interface of VS Code and the Web IDE, follow the usage and capitalization of the
[VS Code documentation](https://code.visualstudio.com/docs/getstarted/userinterface), such as Command Palette
and Primary Side Bar.
## Vulnerability Explanation
Use title case for **Vulnerability Explanation**.
On first mention on a page, use **GitLab Duo Vulnerability Explanation**.
Thereafter, use **Vulnerability Explanation** by itself.
## Vulnerability Resolution
Use title case for **Vulnerability Resolution**.
On first mention on a page, use **GitLab Duo Vulnerability Resolution**.
Thereafter, use **Vulnerability Resolution** by itself.
## we
Try to avoid **we** and focus instead on how the user can accomplish something in GitLab.
Use:
- Use widgets when you have work you want to organize.
Instead of:
- We created a feature for you to add widgets.
## Web IDE user interface
See [VS Code user interface](#vs-code-user-interface).
## workaround
Use **workaround** when the troubleshooting solution is a temporary fix.
A workaround is usually an immediate fix and might have ongoing issues.
For example:
- The workaround is to temporarily pin your template to the deprecated version.
See also [resolution](#resolution-resolve).
## while
Use **while** to refer only to something occurring in time. For example,
**Leave the window open while the process runs.**
Do not use **while** for comparison. For example, use:
- Job 1 can run quickly. However, job 2 is more precise.
Instead of:
- While job 1 can run quickly, job 2 is more precise.
For more information, see the [Microsoft Style Guide](https://learn.microsoft.com/en-us/style-guide/a-z-word-list-term-collections/w/while).
## whilst
Do not use **whilst**. Use [while](#while) instead. **While** is more succinct and easier for non-native English speakers to understand.
## whitelist
Do not use **whitelist**. Another option is **allowlist**. ([Vale](../testing/vale.md) rule: [`InclusiveLanguage.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/.vale/gitlab_base/InclusiveLanguage.yml))
## within
When possible, do not use **within**. Use **in** instead, unless you are referring to a time frame, limit, or boundary. For example:
- The upgrade occurs within the four-hour maintenance window.
- The Wi-Fi signal is accessible within a 30-foot radius.
([Vale](../testing/vale.md) rule: [`SubstitutionWarning.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/.vale/gitlab_base/SubstitutionWarning.yml))
## yet
Do not use **yet** when talking about the product or its features. The documentation describes the product as it is today.
Sometimes you might want to use **yet** when writing a task. If you use
**yet**, ensure the surrounding phrases are written
in present tense, active voice.
[View guidance about how to write about future features](_index.md#promising-features-in-future-versions).
## you, your, yours
Use **you** instead of **the user**, **the administrator**, or **the customer**.
Documentation should speak directly to the user, whether that user is someone installing the product,
configuring it, administering it, or using it.
Use:
- You can configure a pipeline.
- You can reset a user's password. (In content for an administrator)
Instead of:
- Users can configure a pipeline.
- Administrators can reset a user's password.
## you can
When possible, start sentences with an active verb instead of **you can**.
For example:
- Use code review analytics to view merge request data.
- Create a board to organize your team tasks.
- Configure variables to restrict pushes to a repository.
- Add links to external accounts you have, like Discord and Twitter.
Use **you can** for optional actions. For example:
- Use code review analytics to view metrics for each merge request. You can also use the API.
- Enter the name and value pairs. You can add up to 20 pairs for each streaming destination.
<!-- vale on -->
<!-- markdownlint-enable -->
# Deprecations and removals

Guidelines for deprecations and page removals.
When GitLab deprecates or removes a feature, use the following process to update the documentation.
This process requires temporarily changing content to be "deprecated" or "removed" before it's deleted.
If a feature is not generally available, you can delete the content outright instead of following these instructions.
{{< alert type="note" >}}
In the following cases, a separate process applies:
- [Documentation redirects](../redirects.md) to move, rename, or delete pages not related to feature deprecation.
- [REST API deprecations](../restful_api_styleguide.md#deprecations).
- [GraphQL API deprecation process](../../../api/graphql/_index.md#deprecation-and-removal-process) and [deprecation reasons](../../api_graphql_styleguide.md#deprecation-reason-style-guide).
{{< /alert >}}
## Features not actively being developed
When a feature is no longer actively developed, but not deprecated, add the following note under
the topic title and version history:
```markdown
{{</* alert type="note" */>}}
This feature is not under active development, but
[community contributions](https://about.gitlab.com/community/contribute/) are welcome.
{{</* /alert */>}}
```
## Deprecate a page or topic
To deprecate a page or topic:
1. Add `(deprecated)` after the title. Use a warning `alert` to explain when it was deprecated,
when it will be removed, and the replacement feature.
```markdown
title: Title (deprecated)
---
{{</* details */>}}
- Tier: Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated
{{</* /details */>}}
{{</* alert type="warning" */>}}
This feature was [deprecated](https://issue-link) in GitLab 14.8
and is planned for removal in 15.4. Use [feature X](link-to-docs.md) instead.
{{</* /alert */>}}
```
If you're not sure when the feature will be removed or no
replacement feature exists, you don't need to add this information.
1. If the deprecation is a [breaking change](../../../update/terminology.md#breaking-change), add this text:
```markdown
This change is a breaking change.
```
You can add any additional context-specific details that might help users.
1. Add the following HTML comments above and below the content. For `remove_date`,
set a date three months after the [release where it will be removed](https://about.gitlab.com/releases/).
```markdown
title: Title (deprecated)
---
<!--- start_remove The following content will be removed on remove_date: 'YYYY-MM-DD' -->
{{</* details */>}}
- Tier: Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated
{{</* /details */>}}
{{</* alert type="warning" */>}}
This feature was [deprecated](https://issue-link) in GitLab 14.8
and is planned for removal in 15.4. Use [feature X](link-to-docs.md) instead.
{{</* /alert */>}}
<!--- end_remove -->
```
1. Open a merge request to add the word `(deprecated)` to the left nav, after the page title.
## Remove a page
Mark content as removed during the release in which the feature was removed.
The title and a removed indicator remain until three months after the removal.
To remove a page:
1. Leave the page title. Remove all other content, including the history items and the `details` and `alert` shortcodes.
1. After the `title`, change `(deprecated)` to `(removed)`.
1. Update the YAML metadata:
- For `remove_date`, set the value to a date three months after
the release when the feature was removed.
- For the `redirect_to`, set a path to a file that makes sense. If no obvious
page exists, use the docs home page.
```markdown
---
stage: AI-powered
group: Global Search
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
remove_date: '2022-08-02'
redirect_to: '../newpath/to/file/_index.md'
title: Title (removed)
---
{{</* details */>}}
- Tier: Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated
{{</* /details */>}}
This feature was [deprecated](https://issue-link) in GitLab X.Y
and [removed](https://issue-link) in X.Y.
Use [feature X](link-to-docs.md) instead.
```
1. Edit the [`navigation.yaml`](https://gitlab.com/gitlab-org/technical-writing/docs-gitlab-com/-/blob/main/data/en-us/navigation.yaml) in `docs-gitlab-com`
to remove the page's entry from the global navigation.
1. Search the [Deprecations and Removals](../../../update/deprecations.md) page for
links to the removed page. The links use full URLs like: `https://docs.gitlab.com/user/deprecated_page/`.
If you find any links, update the relevant [YAML files](https://gitlab.com/gitlab-org/gitlab/-/tree/master/data/deprecations) (see the example after this list):
- In the `body:` section, remove links to the removed page.
- In the `documentation_url:` section, if the entry links to the page, delete the link.
- Run the Rake task to update the documentation:
```shell
bin/rake gitlab:docs:compile_deprecations
```
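The exact schema of those YAML entries can change, so treat the following as a rough, hypothetical sketch only. It shows where the `body` text and the `documentation_url` field sit; all other field names and values are assumed for illustration.

```yaml
# Hypothetical file: data/deprecations/17-0-example-feature.yml
# Only `body` and `documentation_url` are referenced in the steps above;
# the other fields and all values are placeholders.
- title: "Example feature"
  announcement_milestone: "17.0"
  removal_milestone: "18.0"
  breaking_change: true
  body: |
    Example feature is deprecated. Remove any links to the removed
    documentation page from this text.
  documentation_url: https://docs.gitlab.com/user/deprecated_page/
```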
This content is removed from the documentation as part of the Technical Writing team's
[regularly scheduled tasks](https://handbook.gitlab.com/handbook/product/ux/technical-writing/#regularly-scheduled-tasks).
## Remove a topic
To remove a topic:
1. Leave the title and the details of the deprecation and removal. Remove all other content,
including the history items and the `details` and `alert` shortcodes.
1. Add `(removed)` after the title.
1. Add the following HTML comments above and below the topic.
For `remove_date`, set a date three months after the release where it was removed.
```markdown
title: Title (removed)
---
<!--- start_remove The following content will be removed on remove_date: 'YYYY-MM-DD' -->
{{</* details */>}}
- Tier: Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated
{{</* /details */>}}
This feature was [deprecated](https://issue-link) in GitLab X.Y
and [removed](https://issue-link) in X.Y.
Use [feature X](link-to-docs.md) instead.
<!--- end_remove -->
```
1. Search the [Deprecations and Removals](../../../update/deprecations.md) page for
links to the removed page. The links use full URLs like: `https://docs.gitlab.com/user/deprecated_page/`.
If you find any links, update the relevant [YAML files](https://gitlab.com/gitlab-org/gitlab/-/tree/master/data/deprecations):
- In the `body:` section, remove links to the removed page.
- In the `documentation_url:` section, if the entry links to the page, delete the link.
- Run the Rake task to update the documentation:
```shell
bin/rake gitlab:docs:compile_deprecations
```
This content is removed from the documentation as part of the Technical Writing team's
[regularly scheduled tasks](https://handbook.gitlab.com/handbook/product/ux/technical-writing/#regularly-scheduled-tasks).
## Removing version-specific upgrade pages
Version-specific upgrade pages are in the `doc/update/versions/` directory.
We don't remove version-specific upgrade pages immediately for a major milestone. This gives
users time to upgrade from older versions.
For example, `doc/update/versions/14_changes.md` should
be removed during the `.3` milestone. Therefore, `14_changes.md` is
removed in GitLab 17.3.
Instead of removing the unsupported page:
- [Add a note](#remove-a-topic) with a date three months
in the future to ensure the page is removed during the
[monthly maintenance task](https://handbook.gitlab.com/handbook/product/ux/technical-writing/#regularly-scheduled-tasks).
- Do not add `Removed` to the title.
If the `X_changes.md` page contains relative links to other sections
that are removed as part of the versions cleanup, the `docs-lint links`
job might fail. You can replace those relative links with an [archived version](https://archives.docs.gitlab.com).
When choosing which archived version to link to, use the latest minor release of the unsupported version that is being removed.
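For example, a minimal sketch of finding the relative links that might break (the `grep` pattern is a rough approximation, not an official check):

```shell
# Hypothetical example: list relative Markdown links in an old upgrade-notes
# page so you can decide which ones to replace with archived URLs.
grep -nE '\]\([^)#]*\.md' doc/update/versions/14_changes.md
```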
---
title: Documentation site architecture
---
The [`docs-gitlab-com`](https://gitlab.com/gitlab-org/technical-writing/docs-gitlab-com) project hosts
the repository used to generate the GitLab documentation website, which
is deployed to <https://docs.gitlab.com>. The site is built with the [Hugo](https://gohugo.io/)
static site generator.
For more information, see the [Docs site architecture](https://gitlab.com/gitlab-org/technical-writing/docs-gitlab-com/-/blob/main/doc/architecture.md)
page.
## Source files
The documentation source files are in the same repositories as the product code.
| Project | Path |
| --- | --- |
| [GitLab](https://gitlab.com/gitlab-org/gitlab/) | [`/doc`](https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc) |
| [GitLab Runner](https://gitlab.com/gitlab-org/gitlab-runner/) | [`/docs`](https://gitlab.com/gitlab-org/gitlab-runner/-/tree/main/docs) |
| [Omnibus GitLab](https://gitlab.com/gitlab-org/omnibus-gitlab/) | [`/doc`](https://gitlab.com/gitlab-org/omnibus-gitlab/tree/master/doc) |
| [Charts](https://gitlab.com/gitlab-org/charts/gitlab) | [`/doc`](https://gitlab.com/gitlab-org/charts/gitlab/tree/master/doc) |
| [GitLab Operator](https://gitlab.com/gitlab-org/cloud-native/gitlab-operator) | [`/doc`](https://gitlab.com/gitlab-org/cloud-native/gitlab-operator/-/tree/master/doc) |
Documentation issues and merge requests are part of their respective repositories and all have the label `Documentation`.
## Publication
Documentation for GitLab, GitLab Runner, GitLab Operator, Omnibus GitLab, and Charts is published to <https://docs.gitlab.com>.
The same documentation is included in the application. To view the in-product help,
go to your GitLab instance's URL and add `/help` at the end.
Only help for your current edition and version is included.
Help for other versions is available at <https://docs.gitlab.com/archives/>.
## Updating older versions
If you need to add or edit documentation for a GitLab version that has already been
released, follow the [patch release runbook](https://gitlab.com/gitlab-org/release/docs/-/blob/master/general/patch/engineers.md).
## Documentation in other repositories
If you have code and documentation in a repository other than the [primary repositories](https://gitlab.com/gitlab-org/technical-writing/docs-gitlab-com/-/blob/main/doc/architecture.md),
you should keep the documentation with the code in that repository.
Then you can use one of these approaches:
- Recommended. [Add the repository to the list of products](https://gitlab.com/gitlab-org/technical-writing/docs-gitlab-com/-/blob/main/doc/development.md#add-a-new-product)
published at <https://docs.gitlab.com>. The source of the documentation pages remains
in the external repository, but the resulting pages are indexed and searchable on <https://docs.gitlab.com>.
- Recommended. [Add an entry in the global navigation](global_nav.md#add-a-navigation-entry) for
<https://docs.gitlab.com> that links directly to the documentation in that external repository.
The documentation pages are not indexed or searchable on <https://docs.gitlab.com>.
- Create a landing page for the product in the `gitlab` repository, and add the landing page
[to the global navigation](global_nav.md#add-a-navigation-entry), but keep the rest
of the documentation in the external repository. The landing page is indexed and
searchable on <https://docs.gitlab.com>, but the rest of the documentation is not.
For example, the [GitLab Workflow extension for VS Code](../../../editor_extensions/visual_studio_code/_index.md).
We do not encourage the use of [pages with lists of links](../topic_types/_index.md#pages-and-topics-to-avoid),
so only use this option if the recommended options are not feasible.
## Documentation in other languages
Translations of GitLab documentation are done through a semi-autonomous process.
The [English files](#source-files) are the canonical source files, and the translations
are in language-specific subdirectories under `doc-locale` or similar. For example, Japanese translations
are in `/doc-locale/ja-jp/`.
| Project | Path |
|-----------------|------|
| GitLab | [`/doc-locale`](https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc-locale) |
| GitLab Runner | [`/docs-locale`](https://gitlab.com/gitlab-org/gitlab-runner/-/tree/main/docs-locale) |
| Omnibus GitLab | [`/doc-locale`](https://gitlab.com/gitlab-org/omnibus-gitlab/tree/master/doc-locale) |
| Charts | [`/doc-locale`](https://gitlab.com/gitlab-org/charts/gitlab/tree/master/doc-locale) |
| GitLab Operator | [`/doc-locale`](https://gitlab.com/gitlab-org/cloud-native/gitlab-operator/-/tree/master/doc-locale) |
Development documentation under `doc/development` or similar is not translated.
You can contribute to the English source files only. The translated files are updated by automation.
## Monthly release process (versions)
The docs website supports versions and each month we add the latest one to the list.
For more information, read about the [monthly release process](https://gitlab.com/gitlab-org/technical-writing/docs-gitlab-com/-/blob/main/doc/releases.md).
## Danger Bot
GitLab uses [Danger](https://github.com/danger/danger) to automate code review processes.
When documentation files in `/doc` are modified in a merge request,
Danger Bot automatically comments with documentation-related guidelines.
This automation is configured in the [`Dangerfile`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/danger/documentation/Dangerfile).
## Request a documentation survey banner
To reach a wider audience, you can request
[a survey banner](https://gitlab.com/gitlab-org/technical-writing/docs-gitlab-com/-/blob/main/doc/maintenance.md#survey-banner).
Only one banner can exist at any given time. Priority is given based on who
asked for the banner first.
To request a survey banner:
1. [Open an issue](https://gitlab.com/gitlab-org/technical-writing/docs-gitlab-com/-/issues/new?issue[title]=Survey%20banner%20request&issuable_template=Survey%20banner%20request)
in the `docs-gitlab-com` project and use the "Survey banner request" template.
1. Fill in the details in the issue description.
1. Create the issue and someone from the Technical Writing team will handle your request.
1. When you no longer need the banner, ping the person assigned to the issue and ask them to remove it.
---
title: Folder structure for documentation
---
The documentation is separated into the top-level audience folders [`user`](https://gitlab.com/gitlab-org/gitlab-foss/tree/master/doc/user),
[`administration`](https://gitlab.com/gitlab-org/gitlab-foss/tree/master/doc/administration),
and [`development`](https://gitlab.com/gitlab-org/gitlab-foss/tree/master/doc/development)
(contributing).
Beyond that, we primarily follow the structure of the GitLab user interface or
API.
Our goal is to have a clear hierarchical structure with meaningful URLs like
`docs.gitlab.com/user/project/merge_requests/`. With this pattern, you can
immediately tell that you are navigating to user-related documentation about
project features; specifically about merge requests. Our site's paths match
those of our repository, so the clear structure also makes documentation easier
to update.
Put files for a specific product area into the related folder:
| Directory | Contents |
|:----------------------|:------------------|
| `doc/user/` | Documentation for users. Anything that can be done in the GitLab user interface goes here, including usage of the `/admin` interface. |
| `doc/administration/` | Documentation that requires the user to have access to the server where GitLab is installed. Administrator settings in the GitLab user interface are under `doc/administration/`. |
| `doc/api/` | Documentation for the API. |
| `doc/development/` | Documentation related to the development of GitLab, whether contributing code or documentation. Related process and style guides should go here. |
| `doc/legal/` | Legal documents about contributing to GitLab. |
| `doc/install/` | Instructions for installing GitLab. |
| `doc/update/` | Instructions for updating GitLab. |
| `doc/tutorials/` | Tutorials for how to use GitLab. |
The following are legacy or deprecated folders.
Do not add new content to these folders:
- `/gitlab-basics/`
- `/topics/`
- `/university/`
## Work with directories and files
When working with directories and files:
1. When you create a new directory, always start with an `_index.md` file.
Don't use another filename and do not create `README.md` files. For an
example of these rules, see the sketch after this list.
1. Do not use special characters, spaces, or capital letters in file
names, directory names, branch names, and anything that generates a path.
1. When creating or renaming a file or directory and it has more than one word
in its name, use underscores (`_`) instead of spaces or dashes. For example,
proper naming would be `import_project/import_from_github.md`. This applies
to both [image files](../styleguide/_index.md#illustrations) and Markdown files.
1. Do not upload video files to the product repositories.
[Link or embed videos](../styleguide/_index.md#videos) instead.
1. In the `doc/user/` directory:
- `doc/user/project/` should contain all project related documentation.
- `doc/user/group/` should contain all group related documentation.
- `doc/user/profile/` should contain all profile related documentation.
Every page you would navigate under `/profile` should have its own document,
for example, `account.md`, `applications.md`, or `emails.md`.
1. In the `doc/administration/` directory: all administrator-related
documentation, including administration tasks done both in
the UI and on the backend servers.
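For example, a minimal sketch of creating a new documentation directory that follows these rules (the feature name `new_feature` is hypothetical):

```shell
# Hypothetical example: a new user-facing docs directory starts with an
# _index.md file and uses lowercase names with underscores, not spaces.
mkdir -p doc/user/project/new_feature
touch doc/user/project/new_feature/_index.md
```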
If you're unsure where to place a document or a content addition, this shouldn't
stop you from authoring and contributing. Use your best judgment, and then ask
the reviewer of your MR to confirm your decision. You can also ask a technical writer at
any stage in the process. The technical writing team reviews all
documentation changes, regardless, and can move content if there is a better
place for it.
## Avoid duplication when possible
When possible, do not include the same information in multiple places.
Link to a single source of truth instead.
For example, if you have code in a repository other than the [primary repositories](https://gitlab.com/gitlab-org/technical-writing/docs-gitlab-com/-/blob/main/doc/architecture.md),
and documentation in the same repository, you can keep the documentation in that repository.
Then you can either:
- Publish it to <https://docs.gitlab.com>.
- Link to it from <https://docs.gitlab.com> by adding an entry in the global navigation.
## References across documents
- Give each folder an `_index.md` page that introduces the topic, and both introduces
and links to the child pages, including to the index pages of
any next-level sub-paths.
- To ensure discoverability, ensure each new or renamed doc is linked from its
higher-level index page and other related pages.
- When making reference to other GitLab products and features, link to their
respective documentation, at least on first mention.
- When making reference to third-party products or technologies, link out to
their external sites, documentation, and resources.
---
title: Global navigation
description: Learn how GitLab docs' global navigation works and how to add new items.
---
Global navigation (global nav) is the left-most pane in the documentation. You can use the
global nav to browse the content.
Research shows that people use Google to search for GitLab product documentation. When they land on a result,
we want them to find topics nearby that are related to the content they're reading. The global nav provides this information.
At the highest level, our global nav is **workflow-based**. Navigation needs to help users build a mental model of how to use GitLab.
The levels under each of the higher workflow-based topics are the names of features. For example:
**Use GitLab** (_workflow_) **> Build your application** (_workflow_) **> Get started** (_feature_) **> CI/CD** (_feature_) **> Pipelines** (_feature_)
While some older sections of the nav are alphabetical, the nav should primarily be workflow-based.
Without a navigation entry:
- The navigation closes when the page is opened, and the reader loses their place.
- The page isn't visible in a group with other pages.
## Choose the right words for your navigation entry
Before you add an item to the left nav, choose the parts of speech you want to use.
The nav entry should match the page title. However, if the title is too long,
when you shorten the phrase, use either:
- A noun, like **Merge requests**.
- An active verb, like **Install GitLab** or **Get started with runners**.
Use a phrase that clearly indicates what the page is for. For example, **Get started** is not
as helpful as **Get started with runners**.
## Add a navigation entry
The global nav is stored in the `gitlab-org/technical-writing/docs-gitlab-com` project, in the
`data/en-us/navigation.yaml` file. The documentation website at `docs.gitlab.com` is built using Hugo and assembles documentation
content from several projects (including `charts`, `gitlab`, `gitlab-runner`, and `omnibus-gitlab`).
**Do not** add items to the global nav without
the consent of one of the technical writers.
To add a topic to the global navigation:
1. Check that the topic is published on <https://docs.gitlab.com>.
1. In the [`navigation.yaml`](https://gitlab.com/gitlab-org/technical-writing/docs-gitlab-com/-/blob/main/data/en-us/navigation.yaml)
file, add the item.
1. Assign the MR to a technical writer for review and merge.
### Where to add
Documentation pages can be said to belong in the following groups:
- GitLab users. This documentation is for day-to-day use of GitLab for users with any level
of permissions, from Reporter to Owner.
- GitLab administrators. This tends to be documentation for GitLab Self-Managed instances that requires
access to the underlying infrastructure hosting GitLab.
- Other documentation. This includes documentation for customers outside their day-to-day use of
GitLab and for contributors. Documentation that doesn't fit in the other groups belongs here.
With these groups in mind, the following are general rules for where new items should be added.
- User documentation belongs in **Use GitLab**.
- Administration documentation belongs under **Administer**.
- Other documentation belongs at the top level, but take care not to create an enormously
long top-level navigation, which defeats its purpose.
Making all documentation and navigation items adhere to these principles is being progressively
rolled out.
### What to add
Having decided where to add a navigation element, the next step is deciding what to add. The
mechanics of what is required are [documented below](#data-file) but, in principle:
- Navigation item text (that which the reader sees) should:
- Be as short as possible.
- Be contextual. It's rare to need to repeat text from a parent item.
- Avoid jargon or terms of art, unless ubiquitous. For example, **CI** is an acceptable
substitution for **Continuous Integration**.
- Navigation links must follow the rules documented in the [data file](#data-file).
### Pages you don't need to add
Exclude these pages from the global nav:
- Legal notices.
- Pages in the `user/application_security/dast/checks/` directory.
The following pages should probably be in the global nav, but the technical writers
do not actively work to add them:
- Pages in the `/development` directory.
- Pages authored by the support team, which are under the `doc/administration/troubleshooting` directory.
Sometimes a feature page must be excluded from the global navigation. For example,
pages for deprecated features might not be in the global nav, depending on how long ago the feature was deprecated.
To make it clear these pages are excluded from the global navigation on purpose,
add the following code to the page's front matter:
```yaml
ignore_in_report: true
```
All other pages should be in the global nav.
The technical writing team runs a report to determine which pages are not in the nav.
This report skips pages with `ignore_in_report: true` in the front matter.
The team reviews this list each month.
### Use GitLab section
In addition to feature documentation, each category in the **Use GitLab** section should contain:
- A [top-level page](../topic_types/top_level_page.md).
- A [Get started page](../topic_types/get_started.md).
This ensures a repeatable pattern that familiarizes users with how to navigate the documentation.
The structure for the **Use GitLab** section is:
- Use GitLab
- Top-level page
- Get started page
- Feature
- Feature
## Composition
The global nav is built from two files:
- [Data](#data-file)
- [Layout](#layout-file-logic)
The data file feeds the layout with the links to the documentation.
The layout organizes the data into properly [styled](#css-classes) containers in the nav.
### Data file
The data file describes the structure of the navigation for the applicable project.
It is stored at <https://gitlab.com/gitlab-org/technical-writing/docs-gitlab-com/-/blob/main/data/en-us/navigation.yaml>.
Each entry comprises three main components:
- `title`
- `url`
- `submenu` (optional)
For example:
```yaml
- title: Getting started
url: 'user/get_started/'
- title: Tutorials
url: 'tutorials/'
submenu:
- title: Find your way around GitLab
url: 'tutorials/gitlab_navigation/'
submenu:
- title: 'Tutorial: Use the left sidebar to navigate GitLab'
url: 'tutorials/left_sidebar/'
```
Each entry can stand alone or contain nested pages, under `submenu`.
New components are indented two spaces.
All nav links:
- Are selectable.
- Must refer to unique pages.
- Must not point to an anchor in a page, for example: `path/to/page/#anchor-link`.
This must be followed so that we don't have duplicated links or two `.active` links
at the same time.
#### Syntax
For all components, **respect the indentation** and the following syntax rules.
##### Titles
- Use sentence case, capitalizing feature names.
- There's no need to wrap the titles, unless there's a special character in them. For example,
in `GitLab CI/CD`, there's a `/` present, so it must be wrapped in quotes.
As a convention, wrap the titles in double quotes: `title: "GitLab CI/CD"`.
##### URLs
URLs must be relative. In addition:
- End each URL with a trailing `/` (not `.html` or `.md`).
- Do not start any relative link with a forward slash `/`.
- Match the path you see on the website.
- As a convention, always wrap URLs in single quotes `'url'`.
To find the global nav link, from the full URL remove `https://docs.gitlab.com/`.
- Do not link to external URLs. Leaving the documentation site by clicking the left navigation is a confusing user experience.
Examples of relative URLs:
| Full URL | Global nav URL |
| --------------------------------------------------------- | -------------- |
| `https://docs.gitlab.com/api/avatar/` | `api/avatar/` |
| `https://docs.gitlab.com/charts/installation/deployment/` | `charts/installation/deployment/` |
| `https://docs.gitlab.com/install/` | `install/` |
| `https://docs.gitlab.com/omnibus/settings/database/` | `omnibus/settings/database/` |
| `https://docs.gitlab.com/operator/installation/` | `operator/installation/` |
| `https://docs.gitlab.com/runner/install/docker/` | `runner/install/docker/` |
### Layout file (logic)
The navigation Vue.js component [`sidebar_menu.vue`](https://gitlab.com/gitlab-org/technical-writing/docs-gitlab-com/-/blob/main/themes/gitlab-docs/src/components/sidebar_menu.vue)
is fed by the [data file](#data-file) and builds the global nav.
The global nav contains links from all [five upstream projects](https://gitlab.com/gitlab-org/technical-writing/docs-gitlab-com/-/blob/main/doc/architecture.md).
The [global nav URL](#urls) has a different prefix depending on the documentation file you change.
| Repository | Link prefix | Final URL |
| ------------------------------------------------------------------------------ | ----------- | --------- |
| <https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc> | None | `https://docs.gitlab.com/` |
| <https://gitlab.com/charts/gitlab/tree/master/doc> | `charts/` | `https://docs.gitlab.com/charts/` |
| <https://gitlab.com/gitlab-org/omnibus-gitlab/tree/master/doc> | `omnibus/` | `https://docs.gitlab.com/omnibus/` |
| <https://gitlab.com/gitlab-org/cloud-native/gitlab-operator/-/tree/master/doc> | `operator/` | `https://docs.gitlab.com/operator/` |
| <https://gitlab.com/gitlab-org/gitlab-runner/-/tree/main/docs> | `runner/` | `https://docs.gitlab.com/runner/` |
### CSS classes
The nav is styled in the general `main.css` file. To change
its styles, keep them grouped together so they're easier for the team to maintain.
## Testing
We run various checks on `navigation.yaml` in
[`check-navigation.sh`](https://gitlab.com/gitlab-org/technical-writing/docs-gitlab-com/-/blob/main/scripts/check-navigation.sh),
which runs as a pipeline job when the YAML file is updated.
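If you want a rough local check of the [URL rules](#urls) before the pipeline runs, the following is a minimal sketch using `grep`. It is not the official `check-navigation.sh` script, and it assumes you run it from the root of a `docs-gitlab-com` checkout:

```shell
# Hypothetical example: flag navigation.yaml URLs that break the rules above.
NAV=data/en-us/navigation.yaml

grep -n "url:" "$NAV" | grep -v "/'$" && echo "Found URLs without a trailing slash"
grep -n "url: '/" "$NAV" && echo "Found URLs that start with a forward slash"
grep -n "url:.*#" "$NAV" && echo "Found URLs that point to anchors"
grep -n "url:.*http" "$NAV" && echo "Found URLs that link to external sites"
```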
|
---
stage: none
group: unassigned
info: For assistance with this Style Guide page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments-to-other-projects-and-subjects.
description: Learn how GitLab docs' global navigation works and how to add new items.
title: Global navigation
breadcrumbs:
- doc
- development
- documentation
- site_architecture
---
Global navigation (global nav) is the left-most pane in the documentation. You can use the
global nav to browse the content.
Research shows that people use Google to search for GitLab product documentation. When they land on a result,
we want them to find topics nearby that are related to the content they're reading. The global nav provides this information.
At the highest level, our global nav is **workflow-based**. Navigation needs to help users build a mental model of how to use GitLab.
The levels under each of the higher workflow-based topics are the names of features. For example:
**Use GitLab** (_workflow_) **> Build your application** (_workflow_) **> Get started** (_feature_)**> CI/CD** (_feature_) **> Pipelines** (_feature_)
While some older sections of the nav are alphabetical, the nav should primarily be workflow-based.
Without a navigation entry:
- The navigation closes when the page is opened, and the reader loses their place.
- The page isn't visible in a group with other pages.
## Choose the right words for your navigation entry
Before you add an item to the left nav, choose the parts of speech you want to use.
The nav entry should match the page title. However, if the title is too long,
when you shorten the phrase, use either:
- A noun, like **Merge requests**.
- An active verb, like **Install GitLab** or **Get started with runners**.
Use a phrase that clearly indicates what the page is for. For example, **Get started** is not
as helpful as **Get started with runners**.
## Add a navigation entry
The global nav is stored in the `gitlab-org/technical-writing/docs-gitlab-com` project, in the
`data/en-us/navigation.yaml` file. The documentation website at `docs.gitlab.com` is built using Hugo and assembles documentation
content from several projects (including `charts`, `gitlab`, `gitlab-runner`, and `omnibus-gitlab`).
**Do not** add items to the global nav without
the consent of one of the technical writers.
To add a topic to the global navigation:
1. Check that the topic is published on <https://docs.gitlab.com>.
1. In the [`navigation.yaml`](https://gitlab.com/gitlab-org/technical-writing/docs-gitlab-com/-/blob/main/data/en-us/navigation.yaml)
file, add the item.
1. Assign the MR to a technical writer for review and merge.
### Where to add
Documentation pages can be said to belong in the following groups:
- GitLab users. This documentation is for day-to-day use of GitLab for users with any level
of permissions, from Reporter to Owner.
- GitLab administrators. This tends to be documentation for GitLab Self-Managed instances that requires
access to the underlying infrastructure hosting GitLab.
- Other documentation. This includes documentation for customers outside their day-to-day use of
GitLab and for contributors. Documentation that doesn't fit in the other groups belongs here.
With these groups in mind, the following are general rules for where new items should be added.
- User documentation belongs in **Use GitLab**.
- Administration documentation belongs under **Administer**.
- Other documentation belongs at the top-level, but care must be taken to not create an enormously
long top-level navigation, which defeats the purpose of it.
Making all documentation and navigation items adhere to these principles is being progressively
rolled out.
### What to add
Having decided where to add a navigation element, the next step is deciding what to add. The
mechanics of what is required is [documented below](#data-file) but, in principle:
- Navigation item text (that which the reader sees) should:
- Be as short as possible.
- Be contextual. It's rare to need to repeat text from a parent item.
- Avoid jargon or terms of art, unless ubiquitous. For example, **CI** is an acceptable
substitution for **Continuous Integration**.
- Navigation links must follow the rules documented in the [data file](#data-file).
### Pages you don't need to add
Exclude these pages from the global nav:
- Legal notices.
- Pages in the `user/application_security/dast/checks/` directory.
The following pages should probably be in the global nav, but the technical writers
do not actively work to add them:
- Pages in the `/development` directory.
- Pages authored by the support team, which are under the `doc/administration/troubleshooting` directory.
Sometimes a feature page must be excluded from the global navigation. For example,
pages for deprecated features might not be in the global nav, depending on how long ago the feature was deprecated.
To make it clear these pages are excluded from the global navigation on purpose,
add the following code to the page's front matter:
```yaml
ignore_in_report: true
```
All other pages should be in the global nav.
The technical writing team runs a report to determine which pages are not in the nav.
This report skips pages with `ignore_in_report: true` in the front matter.
The team reviews this list each month.
### Use GitLab section
In addition to feature documentation, each category in the **Use GitLab** section should contain:
- A [top-level page](../topic_types/top_level_page.md).
- A [Get started page](../topic_types/get_started.md).
This ensures a repeatable pattern that familiarizes users with how to navigate the documentation.
The structure for the **Use GitLab** section is:
- Use GitLab
- Top-level page
- Get started page
- Feature
- Feature
## Composition
The global nav is built from two files:
- [Data](#data-file)
- [Layout](#layout-file-logic)
The data file feeds the layout with the links to the documentation.
The layout organizes the data among the nav in containers properly [styled](#css-classes).
### Data file
The data file describes the structure of the navigation for the applicable project.
It is stored at <https://gitlab.com/gitlab-org/technical-writing/docs-gitlab-com/-/blob/main/data/en-us/navigation.yaml>.
Each entry comprises of three main components:
- `title`
- `url`
- `submenu` (optional)
For example:
```yaml
- title: Getting started
url: 'user/get_started/'
- title: Tutorials
url: 'tutorials/'
submenu:
- title: Find your way around GitLab
url: 'tutorials/gitlab_navigation/'
submenu:
- title: 'Tutorial: Use the left sidebar to navigate GitLab'
url: 'tutorials/left_sidebar/'
```
Each entry can stand alone or contain nested pages, under `submenu`.
New components are indented two spaces.
All nav links:
- Are selectable.
- Must refer to unique pages.
- Must not point to an anchor in a page, for example: `path/to/page/#anchor-link`.
This must be followed so that we don't have duplicated links nor two `.active` links
at the same time.
#### Syntax
For all components, **respect the indentation** and the following syntax rules.
##### Titles
- Use sentence case, capitalizing feature names.
- There's no need to wrap the titles, unless there's a special character in it. For example,
in `GitLab CI/CD`, there's a `/` present, therefore, it must be wrapped in quotes.
As convention, wrap the titles in double quotes: `title: "GitLab CI/CD"`.
##### URLs
URLs must be relative. In addition:
- End each URL with a trailing `/` (not `.html` or `.md`).
- Do not start any relative link with a forward slash `/`.
- Match the path you see on the website.
- As convention, always wrap URLs in single quotes `'url'`.
To find the global nav link, from the full URL remove `https://docs.gitlab.com/`.
- Do not link to external URLs. Leaving the documentation site by clicking the left navigation is a confusing user experience.
Examples of relative URLs:
| Full URL | Global nav URL |
| --------------------------------------------------------- | -------------- |
| `https://docs.gitlab.com/api/avatar/` | `api/avatar/` |
| `https://docs.gitlab.com/charts/installation/deployment/` | `charts/installation/deployment/` |
| `https://docs.gitlab.com/install/` | `install/` |
| `https://docs.gitlab.com/omnibus/settings/database/` | `omnibus/settings/database/` |
| `https://docs.gitlab.com/operator/installation/` | `operator/installation/` |
| `https://docs.gitlab.com/runner/install/docker/` | `runner/install/docker/` |
### Layout file (logic)
The navigation Vue.js component [`sidebar_menu.vue`](https://gitlab.com/gitlab-org/technical-writing/docs-gitlab-com/-/blob/main/themes/gitlab-docs/src/components/sidebar_menu.vue)
is fed by the [data file](#data-file) and builds the global nav.
The global nav contains links from all [five upstream projects](https://gitlab.com/gitlab-org/technical-writing/docs-gitlab-com/-/blob/main/doc/architecture.md).
The [global nav URL](#urls) has a different prefix depending on the documentation file you change.
| Repository | Link prefix | Final URL |
| ------------------------------------------------------------------------------ | ----------- | --------- |
| <https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc> | None | `https://docs.gitlab.com/` |
| <https://gitlab.com/charts/gitlab/tree/master/doc> | `charts/` | `https://docs.gitlab.com/charts/` |
| <https://gitlab.com/gitlab-org/omnibus-gitlab/tree/master/doc> | `omnibus/` | `https://docs.gitlab.com/omnibus/` |
| <https://gitlab.com/gitlab-org/cloud-native/gitlab-operator/-/tree/master/doc> | `operator` | `https://docs.gitlab.com/operator/` |
| <https://gitlab.com/gitlab-org/gitlab-runner/-/tree/main/docs> | `runner/` | `https://docs.gitlab.com/runner/` |
### CSS classes
The nav is styled in the general `main.css` file. To change
its styles, keep them grouped for better development among the team.
## Testing
We run various checks on `navigation.yaml` in
[`check-navigation.sh`](https://gitlab.com/gitlab-org/technical-writing/docs-gitlab-com/-/blob/main/scripts/check-navigation.sh),
which runs as a pipeline job when the YAML file is updated.
|
https://docs.gitlab.com/development/documentation/automation
|
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/development/documentation/automation.md
|
2025-08-13
|
doc/development/documentation/site_architecture
|
[
"doc",
"development",
"documentation",
"site_architecture"
] |
automation.md
|
none
|
unassigned
|
For assistance with this Style Guide page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments-to-other-projects-and-subjects.
|
Automated pages
| null |
Most pages in the GitLab documentation are written manually in Markdown.
However, some pages are created by automated processes.
Two primary categories of automation exist in the GitLab documentation:
- Content that is generated by using a standard process and structured data (for example, YAML or JSON files).
- Content that is generated by any other means.
Automation helps with consistency and speed. But content that is automated in a
non-standard way causes difficulty with:
- Frontend changes.
- Site troubleshooting and maintenance.
- The contributor experience.
Ideally, any automation should be done in a standard way, which helps alleviate some of the downsides.
## Pages generated from structured data
Some functionality on the docs site uses structured data:
- Hierarchical global navigation (YAML)
- Survey banner (YAML)
- Badges (YAML)
- Homepage content lists (YAML)
- Redirects (YAML)
- Versions menu (JSON)
## Pages generated otherwise
Other pages are generated by using non-standard processes. These pages often use solutions
that are coded across multiple repositories.
| Page | Details | Owner |
|------|---------|-------|
| [All feature flags in GitLab](../../../administration/feature_flags/list.md) | [Generated during docs build](https://gitlab.com/gitlab-org/gitlab-docs/-/blob/main/doc/raketasks.md#generate-the-feature-flag-tables) | [Technical Writing](https://handbook.gitlab.com/handbook/product/ux/technical-writing/) |
| [GitLab Runner feature flags](https://docs.gitlab.com/runner/configuration/feature-flags.html) | [Page source](https://gitlab.com/gitlab-org/gitlab-runner/-/blob/ec6e1797d2173a95c8ac7f726bd62f6f110b7211/docs/configuration/feature-flags.md?plain=1#L39) | [Runner](https://handbook.gitlab.com/handbook/engineering/development/ops/verify/runner/) |
| [GitLab Runner Kubernetes API settings](https://docs.gitlab.com/runner/executors/kubernetes/) | Generated with [mage](https://gitlab.com/gitlab-org/gitlab-runner/-/blob/main/.gitlab/ci/qa.gitlab-ci.yml#L133) | [Runner](https://handbook.gitlab.com/handbook/engineering/development/ops/verify/runner/) |
| [Deprecations and removals by version](../../../update/deprecations.md) | [Update the deprecations and removals documentation](../../deprecation_guidelines/_index.md#update-the-deprecations-and-removals-documentation) | |
| [Breaking change windows](../../../update/breaking_windows.md) | [Update the breaking change windows documentation](../../deprecation_guidelines/_index.md#update-the-breaking-change-windows-documentation) | |
| [GraphQL API resources](../../../api/graphql/reference/_index.md) | [GraphQL API style guide](../../api_graphql_styleguide.md#documentation-and-schema) | [Import and Integrate](https://handbook.gitlab.com/handbook/engineering/development/dev/foundations/import-and-integrate/) |
| [REST API OpenAPI V2 documentation](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/api/openapi/openapi_v2.yaml) | [Documenting REST API resources](../restful_api_styleguide.md) | [Import and Integrate](https://handbook.gitlab.com/handbook/engineering/development/dev/foundations/import-and-integrate/) |
| [Audit event types](../../../user/compliance/audit_event_types.md) | [Audit event development guidelines](../../audit_event_guide/_index.md) | [Compliance](https://handbook.gitlab.com/handbook/engineering/development/sec/govern/compliance/) |
| [Available custom role permissions](../../../user/custom_roles/abilities.md) | [Generated by Rake task](https://gitlab.com/gitlab-org/gitlab/-/blob/master/tooling/custom_roles/docs/templates/custom_abilities.md.erb) | [Authorization](https://handbook.gitlab.com/handbook/product/categories/#authorization-group)|
| [CI/CD Job token fine-grained permissions](../../../ci/jobs/fine_grained_permissions.md) | [Generated by Rake task](https://gitlab.com/gitlab-org/gitlab/-/blob/master/tooling/ci/job_tokens/docs/templates/fine_grained_permissions.md.erb) | [Authorization](https://handbook.gitlab.com/handbook/product/categories/#authorization-group)|
| [Application settings analysis](../../cells/application_settings_analysis.md) | [Generated by Ruby script](https://gitlab.com/gitlab-org/gitlab/-/blob/2bb2910c84fad965bde473aa2881d88358b6e96e/scripts/cells/application-settings-analysis.rb#L353) | |
| DAST vulnerability check documentation ([Example](../../../user/application_security/dast/browser/checks/798.140.md)) | [How to generate the Markdown](https://gitlab.com/gitlab-org/security-products/dast-cwe-checks/-/blob/main/doc/how-to-generate-the-markdown-documentation.md) | [Dynamic Analysis](https://handbook.gitlab.com/handbook/product/categories/#dynamic-analysis-group) |
| [The docs homepage](../../../_index.md) | | [Technical Writing](https://handbook.gitlab.com/handbook/product/ux/technical-writing/) |
## Make an automation request
If you want to automate a page on the docs site:
1. Review [issue 246](https://gitlab.com/gitlab-org/technical-writing/docs-gitlab-com/-/issues/246)
and consider adding feedback there.
1. If that issue does not describe what you need, contact
[the DRI for the docs site backend](https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments-to-other-projects-and-subjects).
Because automation adds extra complexity and a support burden, we
review it on a case-by-case basis.
## Document the automation
If you do add automation, you must document:
- The list of files that are included.
- The `.gitlab-ci.yml` updates and any pipeline requirements.
- The steps needed to troubleshoot.
Other GitLab team members should be able to easily find information about how to maintain the automation.
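For example, the `.gitlab-ci.yml` updates you document might be as small as a scheduled job that regenerates a single page. A minimal, hypothetical sketch (the job name, Rake task, and output path are placeholders):

```yaml
# Hypothetical example: regenerate an automated page in scheduled pipelines only.
regenerate-automated-page:
  stage: build
  rules:
    - if: $CI_PIPELINE_SOURCE == "schedule"
  script:
    - bundle exec rake docs:generate_example_page  # placeholder task name
  artifacts:
    paths:
      - doc/example/generated_page.md
```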
You should announce the change widely, including, at a minimum:
- In Slack, in `#whats-happening-at-gitlab`.
- In the Technical Writer team meeting agenda.
---
stage: none
group: unassigned
info: For assistance with this Style Guide page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments-to-other-projects-and-subjects.
title: Documentation deployments
---
## Deployment environments
The [GitLab documentation site](https://docs.gitlab.com/) is a static site hosted by [GitLab Pages](../../../user/project/pages/_index.md).
The deployment is done by the [Pages deploy jobs](#pages-deploy-jobs).
The website hosts documentation only for the [supported](https://about.gitlab.com/support/statement-of-support/#version-support) GitLab versions.
Documentation for older versions is available:
- Online at the [GitLab Docs Archives](https://archives.docs.gitlab.com).
- Offline, or for self-hosted use, as downloadable Docker packages from the [GitLab Docs Archives](https://docs.gitlab.com/archives/).
## Parts of the release process
The documentation [release process](https://gitlab.com/gitlab-org/technical-writing/docs-gitlab-com/-/blob/main/doc/releases.md)
involves:
- Merge requests, to make changes to the `main` and relevant stable branches.
- Pipelines, which:
- Build the documentation using Hugo.
- Deploy to GitLab Pages.
- Build Docker images used for testing and building.
- Docker images in the [`docs-gitlab-com` container registry](https://gitlab.com/gitlab-org/technical-writing/docs-gitlab-com/container_registry) used for the build environment.
## Stable branches
A stable branch (for example, `17.2`) is created in the documentation project for each GitLab release.
This branch pulls content from the corresponding stable branches of included projects:
- The stable branch from the `gitlab` project (for example, `17-2-stable-ee`).
- The stable branch from the `gitlab-runner` project (for example, `17-2-stable`).
- The stable branch from the `omnibus-gitlab` project (for example, `17-2-stable`).
- The stable branch from the `charts/gitlab` project (for example, `7-2-stable`).
`charts/gitlab` versions are [mapped](https://docs.gitlab.com/charts/installation/version_mappings.html) to GitLab versions.
- The default branch of the `gitlab-org/cloud-native/gitlab-operator` project.
The Technical Writing team [creates the stable branch](https://gitlab.com/gitlab-org/technical-writing/docs-gitlab-com/-/blob/main/.gitlab/issue_templates/release.md?ref_type=heads#create-a-stable-branch-and-docker-image-for-the-release) for the `docs-gitlab-com` project, which makes use of the stable branches created by other teams.
## Stable documentation
When merge requests that target stable branches of `docs-gitlab-com` are merged,
a pipeline builds the documentation using Hugo and deploys it as a [parallel deployment](../../../user/project/pages/_index.md#parallel-deployments).
Documentation is hosted at the following locations:
- The current stable version and two previous minor versions at `docs.gitlab.com/VERSION/`.
- Earlier versions at `archives.docs.gitlab.com/VERSION/`.
When a new minor version is released, the oldest version on `docs.gitlab.com` is moved to `archives.docs.gitlab.com`.
The [`image:docs-single`](https://gitlab.com/gitlab-org/technical-writing/docs-gitlab-com/-/blob/main/.gitlab/ci/docker-images.gitlab-ci.yml#L72)
job in each pipeline runs automatically. It takes the built site and pushes it to the
[archives container registry](https://gitlab.com/gitlab-org/technical-writing/docs-gitlab-com/container_registry/8244403) for use in build and test environments.
### Rebuild stable documentation images
To rebuild any of the stable documentation images, create a [new pipeline](https://gitlab.com/gitlab-org/technical-writing/docs-gitlab-com/-/pipelines/new)
for the stable branch to rebuild. You might do this:
- To include new documentation changes from an upstream stable branch. For example,
rebuild the `17.9` documentation to include changes subsequently merged in the `gitlab` project's
[`17-9-stable-ee`](https://gitlab.com/gitlab-org/gitlab/-/tree/17-9-stable-ee) branch.
- To incorporate changes made to the `docs-gitlab-com` project itself into a stable branch. For example, CSS style changes.
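If you prefer the API to the UI, you can create the pipeline with a call like the following. This is an illustrative sketch that uses the standard create-pipeline endpoint; it assumes a personal access token with API scope and sufficient permissions in the project.

```shell
# Illustrative only: trigger a rebuild of the 17.9 stable documentation.
curl --request POST \
  --header "PRIVATE-TOKEN: <your_access_token>" \
  "https://gitlab.com/api/v4/projects/gitlab-org%2Ftechnical-writing%2Fdocs-gitlab-com/pipeline?ref=17.9"
```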
## Latest documentation
The latest (upcoming and unreleased) documentation is built from the default branch (`main`) of `docs-gitlab-com` and deployed to `docs.gitlab.com`.
The process involves:
- Building the site (`build:compile_site` job):
- Pulls content from default branches of upstream projects (`gitlab`, `gitlab-runner`, `omnibus-gitlab`, `gitlab-operator` and `charts`).
- Compiles the site using Hugo.
- Deploying the site (`pages` job):
- Takes the compiled site.
- Deploys it to `docs.gitlab.com` with GitLab Pages.
```mermaid
graph LR
A["Default branches<br>of upstream projects"]
B["build:compile_site job"]
C["pages job"]
D([docs.gitlab.com])
A--"Content pulled"-->B
B--"Compiled site"-->C
C--"Deployed with<br>GitLab Pages"-->D
```
The process runs automatically when changes are merged to the default branch.
This ensures `docs.gitlab.com` always shows the latest documentation for the upcoming release.
Docker images are used in the build process, but only as part of the build environment, not for serving the documentation.
## Pages deploy jobs
The deployment of all documentation versions is handled by two GitLab Pages jobs:
- [`pages`](https://gitlab.com/gitlab-org/technical-writing/docs-gitlab-com/-/blob/main/.gitlab/ci/deploy.gitlab-ci.yml#L5) job:
- Deploys the upcoming unreleased version to `docs.gitlab.com`.
- Triggered by pipelines on the default branch (`main`).
- Takes compiled site from `build:compile_site` job.
- [`pages-archives`](https://gitlab.com/gitlab-org/technical-writing/docs-gitlab-com/-/blob/main/.gitlab/ci/deploy.gitlab-ci.yml#L38) job:
- Deploys stable versions:
- The current stable version and two previous minor versions at `docs.gitlab.com/VERSION/`.
- Earlier versions at `archives.docs.gitlab.com/VERSION/` using the [`gitlab-docs-archives`](https://gitlab.com/gitlab-org/gitlab-docs-archives/-/branches) project.
- Takes compiled site from `build:compile_archive` job.
```mermaid
graph LR
A["build:compile_site job"]
B["build:compile_archive job"]
C["pages job"]
D["pages-archives job"]
E([docs.gitlab.com])
F([docs.gitlab.com/VERSION/])
G([archives.docs.gitlab.com/VERSION/])
A--"Compiled site"-->C
B--"Compiled site"-->D
C--"Deploys upcoming version"-->E
D--"Deploys current stable and two previous versions"-->F
D--"Deploys earlier versions"-->G
```
For example, see this [pipeline](https://gitlab.com/gitlab-org/technical-writing/docs-gitlab-com/-/pipelines/1681025501) that contains the
[`pages`](https://gitlab.com/gitlab-org/technical-writing/docs-gitlab-com/-/jobs/9199739351) job.
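For orientation, a GitLab Pages deployment job conventionally publishes whatever is in the `public/` directory as an artifact. A simplified sketch of such a job follows; the real job definitions in `deploy.gitlab-ci.yml` are more involved, so treat this only as an outline of the mechanism.

```yaml
# Simplified sketch of a Pages deployment job, not the actual job definition.
pages:
  stage: deploy
  script:
    - mv compiled-site public  # assumes an earlier job passed the compiled site as an artifact
  artifacts:
    paths:
      - public
  rules:
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
```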
### Manually deploy to production
Documentation is deployed to production automatically when changes are merged to the appropriate branches.
However, maintainers can [manually](../../../ci/pipelines/schedules.md#run-manually) trigger a deployment if needed:
1. Go to the [Pipeline schedules](https://gitlab.com/gitlab-org/technical-writing/docs-gitlab-com/-/pipeline_schedules) page.
1. Next to `Build docs.gitlab.com every hour`, select **Run schedule pipeline** ({{< icon name="play" >}}).
The updated documentation is available in production after the `pages` and `pages:deploy` jobs
complete in the new pipeline.
If you do not have the Maintainer role to perform this task, ask for help in the
`#docs` Slack channel.
## Docker files
The [`dockerfiles`](https://gitlab.com/gitlab-org/technical-writing/docs-gitlab-com/-/tree/main/dockerfiles?ref_type=heads) directory
contains Dockerfiles needed to build, test, and deploy <https://docs.gitlab.com>.
---
stage: none
group: Documentation Guidelines
info: For assistance with this Style Guide page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments-to-other-projects-and-subjects.
title: Documentation topic types (CTRT)
---
Each topic on a page should be one of the following topic types:
- [Concept](concept.md)
- [Task](task.md)
- [Reference](reference.md)
- [Troubleshooting](troubleshooting.md)
Even if a page is short, the page usually starts with a concept and then
includes a task or reference topic.
The tech writing team sometimes uses the acronym `CTRT` to refer to the topic types.
The acronym refers to the first letter of each topic type.
<i class="fa-youtube-play" aria-hidden="true"></i>
For an overview, see [Editing for style and topic type](https://youtu.be/HehnjPgPWb0).
<!-- Video published on 2021-06-06 -->
## Other page and topic types
In addition to the four primary topic types, you can use the following:
- Page type: [Tutorial](tutorial.md)
- Page type: [Get started](get_started.md)
- Page type: [Top-level](top_level_page.md)
- Page type: [Prompt example](prompt_example.md)
- Topic type: [Related topics](#related-topics)
- Page or topic type: [Glossaries](glossary.md)
## Pages and topics to avoid
You should avoid:
- Pages that are exclusively links to other pages. The only exceptions are
top-level pages that aid with navigation.
- Topics that have one or two sentences only. In these cases:
- Incorporate the information in another topic.
- If the sentence links to another page, use a [Related topics](#related-topics) link instead.
## Topic title guidelines
In general, for topic titles:
- Be clear and direct. Make every word count.
- Use fewer than 70 characters when possible. See the [markdownlint](../testing/markdownlint.md)
  rule [`line-length` (MD013)](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.markdownlint-cli2.yaml).
- Use articles and prepositions.
- Follow [capitalization](../styleguide/_index.md#topic-titles) guidelines.
- Do not repeat text from earlier topic titles. For example, if the page is about merge requests,
instead of `Troubleshooting merge requests`, use only `Troubleshooting`.
- Avoid using hyphens to separate information.
For example, instead of `Internal analytics - Architecture`, use `Internal analytics architecture` or `Architecture of internal analytics`.
See also [guidelines for heading levels in Markdown](../styleguide/_index.md#heading-levels-in-markdown).
## Related topics
If inline links are not sufficient, you can create a section called **Related topics**
and include an unordered list of related topics. This topic should be above the Troubleshooting section.
Links in this section should be brief and scannable. They are usually not
full sentences, and so should not end in a period.
```markdown
## Related topics
- [CI/CD variables](link-to-topic.md)
- [Environment variables](link-to-topic.md)
```
---
stage: none
group: Documentation Guidelines
info: For assistance with this Style Guide page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments-to-other-projects-and-subjects.
title: Tutorial page type
---
A tutorial is a page that contains an end-to-end walkthrough of a complex workflow or scenario.
In general, you might consider using a tutorial when:
- The workflow requires sequential steps where each step consists
of sub-steps.
- The steps cover a variety of GitLab features or third-party tools.
## Tutorial guidance
- Tutorials are not [tasks](task.md). A task gives instructions for one procedure.
A tutorial combines multiple tasks to achieve a specific goal.
- Tutorials provide a working example. Ideally the reader can create the example the
tutorial describes. If they can't replicate it exactly, they should be able
to replicate something similar.
- Tutorials do not introduce new features.
- Tutorials can include information that's also available elsewhere on the docs site.
## Tutorial filename and location
For tutorial Markdown files, you can either:
- Save the file in a directory with the product documentation.
- Create a subfolder under `doc/tutorials` and name the file `_index.md`.
In the left nav, add the tutorial near the relevant feature documentation.
Add a link to the tutorial on one of the [tutorial pages](../../../tutorials/_index.md).
## Tutorial format
Tutorials should be in this format:
```markdown
---
title: Title (starts with "Tutorial:" followed by an active verb, like "Tutorial: Create a website")
---
<!-- vale gitlab_base.FutureTense = NO -->
A paragraph that explains what the tutorial does, and the expected outcome.
To create a website:
1. [Do the first task](#do-the-first-task)
1. [Do the second task](#do-the-second-task)
## Before you begin
This section is optional.
- Thing 1
- Thing 2
- Thing 3
## Do the first task
To do step 1:
1. First step.
1. Another step.
1. Another step.
## Do the second task
Before you begin, make sure you have [done the first task](#do-the-first-task).
To do step 2:
1. First step.
1. Another step.
1. Another step.
```
An example of a tutorial that follows this format is
[Tutorial: Make your first Git commit](../../../tutorials/make_first_git_commit/_index.md).
## Tutorial page title
Start the page title with `Tutorial:` followed by an active verb, like `Tutorial: Create a website`.
In the left nav, use the full page title. Do not abbreviate it.
Put the text in quotes so the pipeline succeeds. For example,
`"Tutorial: Make your first Git commit"`.
On [the **Learn GitLab with tutorials** page](../../../tutorials/_index.md),
do not use `Tutorial` in the title.
## Screenshots
You can include screenshots in a tutorial to illustrate important steps in the process.
In the core product documentation, you should [use illustrations sparingly](../styleguide/_index.md#illustrations).
However, in tutorials, screenshots can help users understand where they are in a complex process.
Try to balance the number of screenshots in the tutorial so they don't disrupt
the narrative flow. For example, do not put one large screenshot in the middle of the tutorial.
Instead, put multiple, smaller screenshots throughout.
## Tutorial voice
Use a friendlier tone than you would for other topic types. For example,
you can:
- Add encouraging or congratulatory phrases after tasks.
- Use future tense from time to time, especially when you're introducing
steps. For example, `Next, you will associate your issues with your epics`.
Disable the Vale rule `gitlab_base.FutureTense` to avoid false positives.
- Be more conversational. For example, `This task might take a while to complete`.
## Metadata
On pages that are tutorials, add the most appropriate `stage:` and `group:` metadata at the top of the file.
If the majority of the content does not align with a single group, specify `none` for the stage
and `Tutorials` for the group:
```plaintext
stage: none
group: Tutorials
```
---
stage: none
group: Documentation Guidelines
info: For assistance with this Style Guide page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments-to-other-projects-and-subjects.
title: Top-level page type
---
The top-level page is at the highest level of each section in **Use GitLab** in the global navigation.
This page type:
- Introduces the workflow briefly.
- Lists features in the workflow, in the order they appear in the global navigation.
## Format
The top-level page should be in this format.
```markdown
---
title: Title (The name of the top-level page, like "Manage your organization")
---
Briefly describe the workflow's key features. Use the active voice, for example, "Manage projects to track issues, plan work, and collaborate on code."
| | | |
|---|---|---|
| [**Getting started**](../../user/get_started/get_started_projects.md)<br>Overview of how features fit together. | [**Page name**](file.md)<br>Keyword, keyword, keyword, keyword. | [**Page name**](file.md)<br>Keyword, keyword, keyword, keyword. |
| [**Page name**](file.md)<br>Keyword, keyword, keyword, keyword. | [**Page name**](file.md)<br>Keyword, keyword, keyword, keyword. | [**Page name**](file.md)<br>Keyword, keyword, keyword, keyword. |
```
- For each page, use three to four keywords to describe the page contents.
- For **Getting started** pages, use `Overview of how features fit together`.
- List only the pages that are one level below the top-level page.
Update the table when a new page is added, or if the pages are reordered.
## Top-level page titles
The title must start with an active verb that describes the workflow, like **Manage your infrastructure** or **Organize work with projects**.
## Metadata
The `description` metadata on the top-level page determines the text that appears on the
GitLab documentation home page.
Use the following metadata format:
```plaintext
stage: Name
group: Name
description: List 3 to 4 features linked from the page.
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
```
---
stage: none
group: Style Guide
info: For assistance with this Style Guide page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments-to-other-projects-and-subjects.
title: Get started page type
---
A **Get started** page introduces high-level concepts for a broad feature area.
While a specific feature might be defined in the feature documentation,
a **Get started** page gives an introduction to a set of concepts.
The content should help the user understand how multiple features fit together
as part of the larger GitLab workflow.
## When to use a Get started page
For now, a **Get started** page should be used only at the highest level of the left navigation.
For example, you might have a **Get started** page under **Manage your organization** or **Extend GitLab**.
A **Get started** page is different from a tutorial. A **Get started** page focuses on high-level
concepts that are part of a workflow, while a tutorial helps the user achieve a task.
A **Get started** page should point to tutorials, however, because tutorials are a great way for a user to get started.
## Format
Get started pages should be in this format:
```markdown
---
title: Get started with abc
---
These features work together in this way. You can use them to achieve these goals.
Include a paragraph that ties together the features without describing what
each individual feature does.
Then add this sentence and a diagram. Details about the diagram
file are below.
The process of <abc> is part of a larger workflow:

## Step 1: Do this thing
Each step should group features by workflow. For example, step 1 might be:
`## Step 1: Determine your release cadence`
Then the content can explain milestones, iterations, labels, etc.
The terms can exist elsewhere in the docs, but the descriptions
on this page should be relatively brief.
Finally, add links, in this format:
For more information, see:
- [Create your first abc](link.md).
- [Learn more about abc](link.md).
## Step 2: The next thing
Don't link in the body content. Save links for the `for more information` area.
For more information, see:
- [Create your first abc](link.md).
- [Learn more about abc](link.md).
```
## Get started page titles
For the title, use `Get started with topic_name`.
For the left nav, use `Getting started`.
## Get started file location
All **Getting started** files should be in the folder `doc/user/get_started/`.
You do not need to create a subfolder for each file.
## Diagram files
The diagram files are [in this Google Slides doc](https://docs.google.com/presentation/d/19spBwRAb4QNoTdZofR37TkBBFBPcmh4196ae3lX1ngQ/edit?usp=sharing).
## Example
For an example of the Get started page type,
see [Get started learning Git](../../../topics/git/get_started.md).
---
stage: none
group: Documentation Guidelines
info: For assistance with this Style Guide page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments-to-other-projects-and-subjects.
title: Glossary topic type
---
A glossary provides a list of unfamiliar terms and their definitions to help users understand a specific
GitLab feature.
Each glossary item provides a single term and its associated definition. The definition should answer the questions:
- **What** is this?
- **Why** would you use it?
For glossary terms:
- Do not use jargon.
- Do not use internal terminology or acronyms.
- Ensure the correct usage is defined in the [word list](../styleguide/word_list.md).
## Alternatives to glossaries
Glossaries should provide short, concise term-definition pairs.
- If a definition requires more than a brief explanation, use a concept topic instead.
- If you find yourself explaining how to use the feature, use a task topic instead.
## Glossary example
Glossary topics should be in this format. Use a [description list](../styleguide/_index.md#description-lists-in-markdown) primarily. You can use a table if you need to apply
additional categorization.
Try to include glossary topics on pages that explain the feature, rather than as a standalone page.
```markdown
## FeatureName glossary
This glossary provides definitions for terms related to FeatureName.
Term A
: Term A does this thing.
Term B
: Term B does this thing.
Term C
: Term C does this thing.
```
If you use the table format:
```markdown
## FeatureName glossary
This glossary provides definitions for terms related to FeatureName.
| Term | Definition | Additional category |
|--------|-------------------------|---------------------|
| Term A | Term A does this thing. | |
| Term B | Term B does this thing. | |
| Term C | Term C does this thing. | |
```
## Glossary topic titles
Use `FeatureName glossary`.
Don't use alternatives to `glossary`. For example:
- `Terminology`
- `Glossary of terms`
- `Glossary of common terms`
- `Definitions`
---
stage: none
group: Documentation Guidelines
info: For assistance with this Style Guide page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments-to-other-projects-and-subjects.
title: Concept topic type
---
A concept introduces a single feature or concept.
A concept should answer the questions:
- **What** is this?
- **Why** would you use it?
Think of everything someone might want to know if they've never heard of this concept before.
Don't tell them **how** to do this thing. Tell them **what it is**.
If you start describing another concept, start a new concept and link to it.
## Format
Concepts should be in this format:
```markdown
---
title: Title (a noun, like "Widgets")
---
A paragraph or two that explains what this thing is and why you would use it.
If you start to describe another concept, stop yourself.
Each concept should be about **one concept only**.
If you start to describe **how to use the thing**, stop yourself.
Task topics explain how to use something, not concept topics.
Do not include links to related tasks. The navigation provides links to tasks.
```
## Concept topic titles
For the title text, use a noun. For example:
- `Widgets`
- `GDK dependency management`
If you need more descriptive words, use the `ion` version of the word, rather than `ing`. For example:
- `Object migration` instead of `Migrating objects` or `Migrate objects`
Words that end in `ing` are hard to translate and take up more space, and active verbs are used for task topics.
For details, see [the Google style guide](https://developers.google.com/style/headings#heading-and-title-text).
### Titles to avoid
Avoid these topic titles:
- `Overview` or `Introduction`. Instead, use a more specific
noun or phrase that someone would search for.
- `Use cases`. Instead, incorporate the information as part of the concept.
- `How it works`. Instead, use a noun followed by `workflow`. For example, `Merge request workflow`.
## Example
### Before
The following topic was trying to be all things to all people. It provided information about groups
and where to find them. It reiterated what was visible in the UI.

### After
The information is easier to scan if you move it into concepts and [tasks](task.md).
#### Concept

#### Task

|
https://docs.gitlab.com/development/documentation/troubleshooting
|
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/development/documentation/troubleshooting.md
|
2025-08-13
|
doc/development/documentation/topic_types
|
[
"doc",
"development",
"documentation",
"topic_types"
] |
troubleshooting.md
|
none
|
Documentation Guidelines
|
For assistance with this Style Guide page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments-to-other-projects-and-subjects.
|
# Troubleshooting topic type
Troubleshooting topics should be the final topics on a page.
If a page has five or more troubleshooting topics, put those topics on a [separate page](#troubleshooting-page-type).
## What type of troubleshooting information to include
Troubleshooting information includes:
- Problem-solving information that might be considered risky.
- Information about rare cases. All troubleshooting information
is included, no matter how unlikely a user is to encounter a situation.
This kind of content can be helpful to others, and the benefits outweigh the risks.
If you think you have an exception to this rule, contact the Technical Writing team.
GitLab Support maintains their own
[troubleshooting content](../../../administration/troubleshooting/_index.md).
## Format
Troubleshooting can be one of three types: introductory, task, or reference.
### An introductory topic
This topic introduces the troubleshooting section of a page.
For example:
```markdown
## Troubleshooting
When working with <x feature>, you might encounter the following issues.
```
### Troubleshooting task
The title should be similar to a [standard task](task.md).
For example, "Run debug tools" or "Verify syntax."
### Troubleshooting reference
This topic includes the message. To be consistent, use **workaround** for temporary solutions and **resolution** and **resolve** for permanent solutions. For example:
```markdown
### The message or a description of it
You might get an error that states <error message>.
This issue occurs when...
The workaround is...
```
If multiple causes or solutions exist, consider putting them into a table format.
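For example, a hypothetical causes-and-workarounds table might look like this:
```markdown
| Cause | Workaround |
|-------|------------|
| The runner is offline. | Restart the runner. |
| The access token has expired. | Generate a new token and update the configuration. |
```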
If you use the exact error message, surround it in backticks so it's styled as code.
For more guidance on solution types, see [workaround](../styleguide/word_list.md#workaround) and [resolution, resolve](../styleguide/word_list.md#resolution-resolve).
## Troubleshooting topic titles
For the title of a **Troubleshooting reference** topic:
- Consider including at least a partial output message.
If the message is more than 70 characters, include the text that's most important, or describe the message instead.
- State the type of message at the start of the title. This indicates the severity. For example, `Error:`, `Warning:`.
- Do not use links in the title.
If you do not put the full message in the title, include it in the body text. For example:
````markdown
## Error: `unexpected disconnect while reading sideband packet`
Unstable networking conditions can cause Gitaly to fail when trying to fetch large repository
data from the primary site. Those conditions can result in this error:
```plaintext
curl 18 transfer closed with outstanding read data remaining & fetch-pack:
unexpected disconnect while reading sideband packet
```
To resolve this issue...
````
## Rails console write functions
If the troubleshooting suggestion includes a function that changes data on the GitLab instance,
add the following warning:
```markdown
{{</* alert type="warning" */>}}
Commands that change data can cause damage if not run correctly or under the right conditions. Always run commands in a test environment first and have a backup instance ready to restore.
{{</* /alert */>}}
```
## Troubleshooting page type
When there are five troubleshooting topics or more on a page, create a separate Troubleshooting page.
Follow these conventions:
- Name the page `Troubleshooting <feature>`.
- In the left nav, use the word `Troubleshooting` only.
- In the navigation file, nest the new page under the feature it belongs to.
- Name the file `<feature>_troubleshooting.md`.
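For example, for a hypothetical feature named Widgets, you would create `widgets_troubleshooting.md`, nest it under the Widgets page in the navigation, and start it like this:
```markdown
title: Troubleshooting Widgets
---
When working with widgets, you might encounter the following issues.
```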
# Reference topic type
Reference information should be in an easily-scannable format,
like a table or list. It's similar to a dictionary or encyclopedia entry.
## Format
Reference topics should be in this format:
```markdown
title: Title (a noun, like "Pipeline settings" or "Administrator options")
---
Introductory sentence.
| Setting | Description |
|---------|-------------|
| **Name** | Descriptive sentence about the setting. |
```
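For example, a short hypothetical reference topic (the setting names and descriptions are placeholders) might look like this:
```markdown
title: Pipeline settings
---
Use these settings to control how pipelines run for your project.
| Setting | Description |
|---------|-------------|
| **Timeout** | Maximum time a job can run before it is canceled. |
| **Auto-cancel redundant pipelines** | Cancels pending pipelines when a newer pipeline starts on the same branch. |
```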
## Reference topic titles
Reference topic titles are usually nouns.
Avoid these topic titles:
- `Important notes`. Instead, incorporate this information
closer to where it belongs. This information might be a prerequisite
for a task, or information about a concept.
- `Limitations`. Instead, move the content near other similar information.
Content listed as limitations can often be considered prerequisite
information about how a feature works.
If you must, you can use the title `Known issues`.
## Example
### Before
This topic was a compilation of a variety of information and was difficult to scan.

### After
The information in the **Overview** topic is now organized in a table
that's easy to scan. It also has a more searchable title.

# Prompt example page type
A prompt example provides step-by-step instructions for using GitLab Duo to accomplish a specific development or business task.
A prompt example should answer the questions:
- What development challenge does this solve?
- How do you use GitLab Duo to solve it?
These pages should be precise and easy to scan. They do not replace
other documentation types on the site, but instead complement them.
They should not be full of links or related conceptual or task information.
## Format
Prompt examples should be in this format:
````markdown
title: Title (active verb + object, like "Refactor legacy code")
---
One-sentence description of when to use this approach.
- Time estimate: X-Y minutes
- Level: Beginner/Intermediate/Advanced
- Prerequisites: What users need before starting
(To populate these items, see the guidance that follows this example.)
## The challenge
1-2 sentence description of the specific problem this solves.
## The approach
Brief description of the overall strategy and which GitLab Duo tools to use (usually 2-4 key phrases).
### Step 1: [Action verb]
[Specify which GitLab Duo tool to use] Brief description of what this step accomplishes.
```plaintext
Prompt template with placeholders in [brackets]
```
Expected outcome: What should happen when this prompt is used.
### Step 2: [Action verb]
[Specify which GitLab Duo tool to use] Brief description of what this step accomplishes.
```plaintext
Next prompt template with placeholders in [brackets]
```
Expected outcome: What should happen when this prompt is used.
## Tips
- Specific actionable advice for better results
- Common pitfalls to avoid
- How to iterate if first attempt doesn't work
## Verify
Ensure that:
- Quality check 1 - specific and measurable
- Quality check 2 - specific and measurable
- Quality check 3 - specific and measurable
````
## Prompt example topic titles
For the title text, use the structure `active verb` + `noun`.
For example:
- `Refactor legacy code`
- `Debug failing tests`
- `Generate API documentation`
### Titles to avoid
Avoid these topic titles:
- `How to [do something]`. Instead, use the active verb structure.
- `Using GitLab Duo for [task]`. Instead, focus on the task itself.
- `Tips and tricks`. Instead, incorporate advice into specific examples.
- Generic titles like `Code generation` when you mean something specific like `Generate REST API endpoints`.
## Level guidelines
Use these guidelines to assign difficulty levels:
- **Beginner**: Copy-paste prompts with minimal customization needed. Users follow exact steps.
- **Intermediate**: Template prompts that require adaptation. Users need to understand context and modify placeholders.
- **Advanced**: Complex multi-step workflows requiring prompt iteration and refinement. Users create custom approaches.
## Prerequisites format
Be specific about which GitLab Duo tools are needed. Common prerequisites include:
- Code file open in IDE, GitLab Duo Chat available
- Development environment set up, project requirements defined
- Existing codebase with [specific technology or framework]
- At least the Developer role for the project
- GitLab Duo Code Suggestions enabled (if using auto-completion features)
## Time estimates
Provide realistic time ranges based on complexity:
- **Simple tasks**: 5-15 minutes
- **Moderate tasks**: 15-30 minutes
- **Complex tasks**: 30-60 minutes
- **Multi-session work**: 1-2 hours (split across sessions)
## Expected outcomes format
Expected outcomes should be specific and measurable. For example:
- Do: `Detailed analysis identifying 3-5 specific improvement areas with code examples`
- Do not: `Analysis of the code`
- Do: `Complete refactored class with improved method names and added tests`
- Do not: `Better code`
## Prompt template guidelines
### Placeholder format
Always use `[descriptive_name]` format for placeholders. Make placeholders specific:
- Do: `[ClassName]` or `[file_path]` or `[specific_framework]`
- Do not: `[name]` or `[thing]` or `[item]`
### Template structure
Structure prompts with the following elements, combined in the sketch after this list:
1. **Clear instruction**: What you want GitLab Duo to do
1. **Specific context**: What to focus on or reference
1. **Expected format**: How to structure the response
1. **Success criteria**: What good output looks like
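For example, a hypothetical prompt that combines all four elements (the class and file names are placeholders):
```plaintext
Review the [ClassName] class in [file_path] for missing test coverage.
Focus on public methods that change state or perform I/O.
Respond with a bulleted list of missing test cases, grouped by method.
A good answer names at least three specific, testable scenarios.
```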
## Tips guidelines
Tips should provide:
- **Practical advice**: Techniques that improve results
- **Common pitfalls**: Mistakes to avoid based on user experience
- **Iteration strategies**: How to refine prompts that don't work initially
- **Context tips**: How to provide better information to GitLab Duo
- **Tool combination tips**: How to use Chat and Code Suggestions together effectively
Avoid generic advice. Be specific about what works for this particular use case.
## Verification checklist
Create 3-5 specific, measurable checks that users can perform to validate success. Focus on:
- **Quality indicators**: Does the output meet standards?
- **Functionality checks**: Does the solution work as intended?
- **Completeness validation**: Are all requirements addressed?
- **Integration verification**: Does it work with existing code/systems?
## Example
### Before
The following topic tried to cover too many different scenarios in one example. It was unclear when to use each approach and the prompts were too generic.
```markdown
title: Using GitLab Duo for Development Tasks
---
You can use GitLab Duo to help with coding. Here are some ways:
- Generate code
- Fix bugs
- Write tests
- Refactor code
Ask GitLab Duo to help you with your task.
```
### After
The information is clearer when split into a focused prompt example:
````markdown
title: Refactor legacy code
---
Follow these guidelines when you need to improve performance, readability,
or maintainability of existing code.
- Time estimate: 15-30 minutes
- Level: Intermediate
- Prerequisites: Code file open in IDE, GitLab Duo Chat available
## The challenge
Transform complex, hard-to-maintain code into clean, testable components
without breaking functionality.
## The approach
Analyze, plan, and implement using GitLab Duo Chat and Code Suggestions.
### Step 1: Analyze
Use GitLab Duo Chat to understand the current state. Select the code you want to refactor, then ask:
```plaintext
Analyze the [ClassName] in [file_path]. Focus on:
1. Current methods and their complexity
2. Performance bottlenecks
3. Areas where readability can be improved
4. Potential design patterns that could be applied
Provide specific examples from the code and suggest applicable refactoring patterns.
```
Expected outcome: Detailed analysis with specific improvement suggestions.
## Tips
- Start with analysis before jumping to implementation.
- Select specific code sections when asking Chat for analysis.
- Ask Chat for specific examples from your actual code.
- Reference your existing codebase patterns for consistency.
- Let Code Suggestions help with syntax as you implement Chat's recommendations.
## Verify
Ensure that:
- Generated code follows your team's style guide.
- New structure actually improves the identified issues.
- Tests cover the refactored functionality.
````
# Task topic type
A task gives instructions for how to complete a procedure.
## Format
Tasks should be in this format:
```markdown
title: Title (starts with an active verb, like "Create a widget" or "Delete a widget")
---
Do this task when you want to...
Prerequisites (optional):
- Thing 1
- Thing 2
- Thing 3
To do this task:
1. Location then action. (Go to this menu, then select this item.)
1. Another step.
1. Another step.
Task result (optional). Next steps (optional).
```
Here is an example.
```markdown
title: Create an issue
---
Create an issue when you want to track bugs or future work.
Prerequisites:
- You must have at least the Developer role for the project.
To create an issue:
1. On the left sidebar, select **Search or go to** and find your project.
1. Select **Plan** > **Issues**.
1. In the upper-right corner, select **New issue**.
1. Complete the fields. (If you have reference content that lists each field, link to it here.)
1. Select **Create issue**.
The issue is created. You can view it by going to **Plan** > **Issues**.
```
## Task topic titles
For the title text, use the structure `active verb` + `noun`.
For example, `Create an issue`.
If several tasks on a page share prerequisites, you can create a separate
topic with the title `Prerequisites`.
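For example, a page with several tasks that share the same prerequisites might be structured like this (the task names are illustrative):
```markdown
## Prerequisites
- You must have at least the Developer role for the project.
## Create a widget
1. Step.
## Delete a widget
1. Step.
```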
## When a task has only one step
If you need to write a task that has only one step, make that step an unordered list item.
This format helps the step stand out, while keeping it consistent with the rules
for lists.
For example:
```markdown
title: Create a merge request
---
To create a merge request:
- In the upper-right corner, select **New merge request**.
```
### When more than one way exists to perform a task
If more than one way exists to perform a task in the UI, you should
document the primary way only.
However, sometimes you must document multiple ways to perform a task.
When this situation occurs:
- Introduce the task as usual. Then, for each way of performing the task, add a topic title.
- Nest the topic titles one level below the task topic title.
- List the tasks in descending order, with the most likely method first.
- Make the task titles as brief as possible. When possible,
use `infinitive` + `noun`.
Here is an example.
```markdown
title: Change the default branch name
---
You can change the default branch name for the instance or group.
If the name is set for the instance, you can override it for a group.
## For the instance
Prerequisites:
- You must have at least the Maintainer role for the instance.
To change the default branch name for an instance:
1. Step.
1. Step.
## For the group
Prerequisites:
- You must have at least the Developer role for the group.
To change the default branch name for a group:
1. Step.
1. Step.
```
### To perform the task in the UI and API
Usually an API exists to perform the same task that you perform in the UI.
When this situation occurs:
- Do not use a separate heading for a one-sentence link to the API.
- Do not include API examples in the **Use GitLab** documentation. API examples
belong in the API documentation. If you have GraphQL examples, put them on
their own page, because the API documentation might move some day.
- Do not mention the API if you do not need to. Users can search for
the API documentation, and extra linking adds clutter.
- If someone feels strongly that you mention the API, at the end
of the UI task, add this sentence:
`To create an issue, you can also [use the API](link.md).`
## Task introductions
To start the task topic, use the structure `active verb` + `noun`, and
provide context about the action.
For example, `Create an issue when you want to track bugs or future work`.
To start the task steps, use a succinct action followed by a colon.
For example, `To create an issue:`
## Task prerequisites
As a best practice, if the task requires the user to have a role other than Guest,
put the minimum role in the prerequisites. See [the Word list](../styleguide/word_list.md) for
how to write the phrase for each role.
`Prerequisites` must always be plural, even if the list includes only one item.
## Related topics
- [How to write task steps](../styleguide/_index.md#navigation)
- [Before and after example](concept.md#example)
# Version-specific changes
A version-specific page contains upgrade notes a GitLab administrator
should follow when upgrading their GitLab Self-Managed instance.
It contains information like:
- Important bugs, bug fixes, and workarounds from one version to another.
- Long-running database migrations administrators should be aware of.
- Breaking changes in configuration files.
## Major version
For each major version of GitLab, create a page in `doc/update/versions/gitlab_X_changes.md`.
The version-specific upgrade notes page should use the following format:
```markdown
title: GitLab X upgrade notes
---
{{</* details */>}}
- Tier: Free, Premium, Ultimate
- Offering: GitLab Self-Managed
{{</* /details */>}}
This page contains upgrade information for minor and patch versions of GitLab X.
Ensure you review these instructions for:
- Your installation type.
- All versions between your current version and your target version.
For additional information for Helm chart installations, see
[the Helm chart x.0 upgrade notes](https://docs.gitlab.com/charts/releases/x_0.html).
## Issues to be aware of when upgrading from <last minor version of last major>
- General upgrade notes and issues.
## X.Y.1 (add the latest version at the top of the page)
- General upgrade notes and issues.
- ...
### Linux package installations X.Y.1
- Information specific to Linux package installations.
- ...
### Self-compiled installations X.Y.1
- Information specific to self-compiled installations.
- ...
### Geo installations X.Y.1
- Information specific to Geo.
- ...
## X.Y.0
...
```
# Vale documentation tests
[Vale](https://vale.sh/) is a grammar, style, and word usage linter for the
English language. Vale's configuration is stored in the [`.vale.ini`](https://vale.sh/docs/topics/config/) file located
in the root directory of projects. For example, the [`.vale.ini`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.vale.ini)
of the `gitlab` project.
Vale supports creating [custom rules](https://vale.sh/docs/topics/styles/) that extend any of
several types of checks, which we store in the documentation directory of projects. For example,
the [`doc/.vale` directory](https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/.vale) of the `gitlab` project.
This configuration is also used in build pipelines, where [error-level rules](#result-types) are enforced.
You can use Vale:
- [On the command line](https://vale.sh/docs/vale-cli/structure/).
- [In a code editor](#configure-vale-in-your-editor).
- [In a Git hook](_index.md#configure-pre-push-hooks). Vale only reports errors in the Git hook (the same
configuration as the CI/CD pipelines), and does not report suggestions or warnings.
## Install Vale
Install [`vale`](https://github.com/errata-ai/vale/releases) using either:
- If using [`asdf`](https://asdf-vm.com), the [`asdf-vale` plugin](https://github.com/pdemagny/asdf-vale). In a checkout
of a GitLab project with a `.tool-versions` file ([example](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.tool-versions)),
run:
```shell
asdf plugin add vale && asdf install vale
```
- A package manager:
- macOS using `brew`, run: `brew install vale`.
- Linux, use your distribution's package manager or a [released binary](https://github.com/errata-ai/vale/releases).
## Configure Vale in your editor
Using linters in your editor is more convenient than having to run the commands from the
command line.
To configure Vale in your editor, install one of the following as appropriate:
- Visual Studio Code [`ChrisChinchilla.vale-vscode` extension](https://marketplace.visualstudio.com/items?itemName=ChrisChinchilla.vale-vscode).
You can configure the plugin to [display only a subset of alerts](#limit-which-tests-are-run).
- Sublime Text [`SublimeLinter-vale` package](https://packagecontrol.io/packages/SublimeLinter-vale). To have Vale
suggestions appear as blue instead of red (which is how errors appear), add `vale` configuration to your
[SublimeLinter](https://www.sublimelinter.com/en/master/) configuration:
```json
"vale": {
"styles": [{
"mark_style": "outline",
"scope": "region.bluish",
"types": ["suggestion"]
}]
}
```
- [LSP for Sublime Text](https://lsp.sublimetext.io) package [`LSP-vale-ls`](https://packagecontrol.io/packages/LSP-vale-ls).
- Vim [ALE plugin](https://github.com/dense-analysis/ale).
- JetBrains IDEs - No plugin exists, but
[this issue comment](https://github.com/errata-ai/vale-server/issues/39#issuecomment-751714451)
contains tips for configuring an external tool.
- Emacs [Flycheck extension](https://github.com/flycheck/flycheck). A minimal configuration
for Flycheck to work with Vale could look like:
```lisp
(flycheck-define-checker vale
"A checker for prose"
:command ("vale" "--output" "line" "--no-wrap"
source)
:standard-input nil
:error-patterns
((error line-start (file-name) ":" line ":" column ":" (id (one-or-more (not (any ":")))) ":" (message) line-end))
:modes (markdown-mode org-mode text-mode)
:next-checkers ((t . markdown-markdownlint-cli))
)
(add-to-list 'flycheck-checkers 'vale)
```
In this setup the `markdownlint` checker is set as a "next" checker from the defined `vale` checker.
Enabling this custom Vale checker provides error linting from both Vale and markdownlint.
## Result types
Vale returns three types of results:
- **Error** - For branding guidelines, trademark guidelines, and anything that causes content on
the documentation site to render incorrectly.
- **Warning** - For general style guide rules, tenets, and best practices.
- **Suggestion** - For technical writing style preferences that may require refactoring of documentation or updates to an exceptions list.
The result types have these attributes:
| Result type | Displays in CI/CD job output | Displays in MR diff | Causes CI/CD jobs to fail | Vale rule link |
|--------------|------------------------------|---------------------|---------------------------|----------------|
| `error` | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | [Error-level Vale rules](https://gitlab.com/search?group_id=9970&project_id=278964&repository_ref=master&scope=blobs&search=level%3A+error+file%3A%5Edoc&snippets=false&utf8=✓) |
| `warning` | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | [Warning-level Vale rules](https://gitlab.com/search?group_id=9970&project_id=278964&repository_ref=master&scope=blobs&search=level%3A+warning+file%3A%5Edoc&snippets=false&utf8=✓) |
| `suggestion` | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | [Suggestion-level Vale rules](https://gitlab.com/search?group_id=9970&project_id=278964&repository_ref=master&scope=blobs&search=level%3A+suggestion+file%3A%5Edoc&snippets=false&utf8=✓) |
## When to add a new Vale rule
It's tempting to add a Vale rule for every style guide rule. However, we should be
mindful of the effort to create and enforce a Vale rule, and the noise it creates.
In general, follow these guidelines:
- If you add an [error-level Vale rule](#result-types), you must fix
the existing occurrences of the issue in the documentation before you can add the rule.
If there are too many issues to fix in a single merge request, add the rule at a
`warning` level. Then, fix the existing issues in follow-up merge requests.
When the issues are fixed, promote the rule to an `error`.
- If you add a warning-level or suggestion-level rule, consider:
- How many more warnings or suggestions it creates in the Vale output. If the
number of additional warnings is significant, the rule might be too broad.
- How often an author might ignore it because it's acceptable in the context.
If the rule is too subjective, it cannot be adequately enforced and creates
unnecessary additional warnings.
- Whether it's appropriate to display in the merge request diff in the GitLab UI.
If the rule is difficult to implement directly in the merge request (for example,
it requires page refactoring), set it to suggestion-level so it displays in local editors only.
## Where to add a new Vale rule
New Vale rules belong in one of two categories (known in Vale as [styles](https://vale.sh/docs/topics/styles/)). These
rules are stored separately in specific styles directories specified in a project's `.vale.ini` file. For example,
[`.vale.ini` for the `gitlab` project](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.vale.ini).
Where to add your new rules depends on the type of rule you're proposing:
- `gitlab_base`: base rules that are applicable to any GitLab documentation.
- `gitlab_docs`: rules that are only applicable to documentation that is published to <https://docs.gitlab.com>.
Most new rules belong in [`gitlab_base`](https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/.vale/gitlab_base).
## Limit which tests are run
You can set Visual Studio Code to display only a subset of Vale alerts when viewing files:
1. Go to **Preferences > Settings > Extensions > Vale**.
1. In **Vale CLI: Min Alert Level**, select the minimum alert level you want displayed in files.
To display only a subset of Vale alerts when running Vale from the command line, use
the `--minAlertLevel` flag, which accepts `error`, `warning`, or `suggestion`. Combine it with `--config`
to point to the configuration file in the project, if needed:
```shell
vale --config .vale.ini --minAlertLevel error doc/**/*.md
```
Omit the flag to display all alerts, including `suggestion` level alerts.
### Test one rule at a time
To test only a single rule when running Vale from the command line, modify this
command, replacing `OutdatedVersions` with the name of the rule:
```shell
vale --no-wrap --filter='.Name=="gitlab_base.OutdatedVersions"' doc/**/*.md
```
## Disable Vale tests
You can disable a specific Vale linting rule or all Vale linting rules for any portion of a
document:
- To disable a specific rule, add a `<!-- vale gitlab_<type>.rulename = NO -->` tag before the text, and a
`<!-- vale gitlab_<type>.rulename = YES -->` tag after the text, replacing `rulename` with the filename of a test in the
directory of one of the [GitLab styles](https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/.vale).
- To disable all Vale linting rules, add a `<!-- vale off -->` tag before the text, and a
`<!-- vale on -->` tag after the text.
Whenever possible, exclude only the problematic rule and lines.
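For example, to exclude one sentence from a single rule (the rule name is shown for illustration only), you might write:
```markdown
<!-- vale gitlab_base.FutureTense = NO -->
This sentence will deliberately use future tense.
<!-- vale gitlab_base.FutureTense = YES -->
```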
Ignore statements do not work for Vale rules with the `raw` scope. For more information, see this [issue](https://github.com/errata-ai/vale/issues/194).
For more information on Vale scoping rules, see
[Vale's documentation](https://vale.sh/docs/topics/scoping/).
## Show Vale warnings on commit or push
By default, the Vale check in Lefthook only shows error-level issues. The default branches
have no Vale errors, so any errors listed here are introduced by the commit to the branch.
To also see the Vale warnings, set a local environment variable: `VALE_WARNINGS=true`.
Enable Vale warnings on commit or push to improve the documentation suite by:
- Detecting warnings you might be introducing with your commits.
- Identifying warnings that already exist in the page, which you can resolve to reduce technical debt.
These warnings:
- Don't stop the commit from working.
- Don't result in a broken pipeline.
- Include all warnings for a file, not just warnings that are introduced by the commits.
To enable Vale warnings with Lefthook:
- Automatically, add `VALE_WARNINGS=true` to your shell configuration.
- Manually, prepend `VALE_WARNINGS=true` to invocations of `lefthook`. For example:
```shell
VALE_WARNINGS=true bundle exec lefthook run pre-commit
```
You can also [configure your editor](#configure-vale-in-your-editor) to show Vale warnings.
## Resolve problems Vale identifies
### Spelling test
When Vale flags a valid word as a spelling mistake, you can fix it following these
guidelines:
| Flagged word | Guideline |
|------------------------------------------------------|-----------|
| jargon | Rewrite the sentence to avoid it. |
| *correctly-capitalized* name of a product or service | Add the word to the [Vale spelling exceptions list](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/.vale/gitlab_base/spelling-exceptions.txt). |
| name of a person | Remove the name if it's not needed, or [add the Vale exception code inline](#disable-vale-tests). |
| a command, variable, code, or similar | Put it in backticks or a code block. For example: ``The git clone command can be used with the CI_COMMIT_BRANCH variable.`` -> ``The `git clone` command can be used with the `CI_COMMIT_BRANCH` variable.`` |
| UI text from GitLab | Verify it correctly matches the UI, then: If it does not match the UI, update it. If it matches the UI, but the UI seems incorrect, create an issue to see if the UI needs to be fixed. If it matches the UI and seems correct, add it to the [Vale spelling exceptions list](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/.vale/gitlab_base/spelling-exceptions.txt). |
| UI text from a third-party product | Rewrite the sentence to avoid it, or [add the Vale exception code in-line](#disable-vale-tests). |
#### Uppercase (acronym) test
The [`Uppercase.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/.vale/gitlab_base/Uppercase.yml)
test checks for incorrect usage of words in all capitals. For example, avoid usage
like `This is NOT important`.
If the word must be in all capitals, follow these guidelines:
| Flagged word | Guideline |
|----------------------------------------------------------------|-----------|
| Acronym (likely known by the average visitor to that page) | Add the acronym to the list of words and acronyms in `Uppercase.yml`. |
| Acronym (likely not known by the average visitor to that page) | The first time the acronym is used, write it out fully followed by the acronym in parentheses. In later uses, use just the acronym by itself. For example: `This feature uses the File Transfer Protocol (FTP). FTP is...`. |
| Correctly capitalized name of a product or service | Add the name to the list of words and acronyms in `Uppercase.yml`. |
| Command, variable, code, or similar | Put it in backticks or a code block. For example: ``Use `FALSE` as the variable value.`` |
| UI text from a third-party product | Rewrite the sentence to avoid it, or [add the vale exception code in-line](#disable-vale-tests). |
### Readability score
In [`ReadingLevel.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/.vale/gitlab_base/ReadingLevel.yml),
we have implemented
[the Flesch-Kincaid grade level test](https://readable.com/readability/flesch-reading-ease-flesch-kincaid-grade-level/)
to determine the readability of our documentation.
As a general guideline, the lower the score, the more readable the documentation.
For example, a page that scores `12` before a set of changes, and `9` after, indicates an iterative improvement to readability. The score is not an exact science, but is meant to help indicate the
general complexity level of the page.
The readability score is calculated based on the number of words per sentence, and the number
of syllables per word. For more information, see [the Vale documentation](https://vale.sh/docs/topics/styles/#metric).
## Export Vale results to a file
To export all (or filtered) Vale results to a file, modify this command:
```shell
# Returns results of types suggestion, warning, and error
find . -name '*.md' | sort | xargs vale --minAlertLevel suggestion --output line > ../../results.txt
# Returns only warnings and errors
find . -name '*.md' | sort | xargs vale --minAlertLevel warning --output line > ../../results.txt
# Returns only errors
find . -name '*.md' | sort | xargs vale --minAlertLevel error --output line > ../../results.txt
```
These results can be used to generate [documentation-related issues for Hackathons](../workflow.md#create-issues-for-a-hackathon).
## Enable custom rules locally
Vale 3.0 and later supports using two locations for rules. This change enables you
to create and use your own custom rules alongside the rules included in a project.
To create and use custom rules locally on macOS:
1. Create a local file in the Application Support folder for Vale:
```shell
touch ~/Library/Application\ Support/vale/.vale.ini
```
1. Add these lines to the `.vale.ini` file you just created:
```yaml
[*.md]
BasedOnStyles = local
```
1. If the folder `~/Library/Application Support/vale/styles/local` does not exist,
create it:
```shell
mkdir ~/Library/Application\ Support/vale/styles/local
```
1. Add your desired rules to `~/Library/Application Support/vale/styles/local`.
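For example, a minimal sketch of a custom rule, saved as `new-rule.yml` in that directory, could be an existence check (the message and token are placeholders chosen to match the `local.new-rule` output shown below):
```yaml
# Flags any use of the word "documentation" and reports it at error level.
extends: existence
message: "Remove '%s'"
level: error
ignorecase: true
tokens:
  - documentation
```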
Rules in your `local` style directory are prefixed with `local` instead of `gitlab`
in Vale results, like this:
```shell
$ vale --minAlertLevel warning doc/ci/yaml/index.md
doc/ci/yaml/index.md
...[snip]...
3876:17 warning Instead of future tense 'will gitlab.FutureTense
be', use present tense.
3897:26 error Remove 'documentation' local.new-rule
✖ 1 error, 5 warnings and 0 suggestions in 1 file.
```
## Related topics
- [Styles in Vale](https://vale.sh/docs/topics/styles/)
- [Example styles](https://github.com/errata-ai/vale/tree/master/testdata/styles) containing rules you can adapt
|
https://docs.gitlab.com/development/documentation/markdownlint
|
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/development/documentation/markdownlint.md
|
2025-08-13
|
doc/development/documentation/testing
|
[
"doc",
"development",
"documentation",
"testing"
] |
markdownlint.md
|
none
|
Documentation Guidelines
|
For assistance with this Style Guide page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments-to-other-projects-and-subjects.
|
markdownlint documentation tests
|
Learn how to contribute to GitLab Documentation.
|
[markdownlint](https://github.com/DavidAnson/markdownlint) checks that Markdown syntax follows
[certain rules](https://github.com/DavidAnson/markdownlint/blob/master/doc/Rules.md#rules), and is
used by the `docs-lint` test.
Our [Documentation Style Guide](../styleguide/_index.md#markdown) and
[Markdown Guide](https://handbook.gitlab.com/docs/markdown-guide/) elaborate on which choices must
be made when selecting Markdown syntax for GitLab documentation. This tool helps catch deviations
from those guidelines.
markdownlint configuration is found in the following projects:
- [`gitlab`](https://gitlab.com/gitlab-org/gitlab)
- [`gitlab-runner`](https://gitlab.com/gitlab-org/gitlab-runner)
- [`omnibus-gitlab`](https://gitlab.com/gitlab-org/omnibus-gitlab)
- [`charts`](https://gitlab.com/gitlab-org/charts/gitlab)
- [`gitlab-development-kit`](https://gitlab.com/gitlab-org/gitlab-development-kit)
- [`gitlab-operator`](https://gitlab.com/gitlab-org/cloud-native/gitlab-operator)
This configuration is also used in build pipelines.
You can use markdownlint:
- On the command line, with either:
- [`markdownlint-cli`](https://github.com/igorshubovych/markdownlint-cli#markdownlint-cli).
- [`markdownlint-cli2`](https://github.com/DavidAnson/markdownlint-cli2#markdownlint-cli2).
- [In a code editor](#configure-markdownlint-in-your-editor).
- [In a `pre-push` hook](_index.md#configure-pre-push-hooks).
## Install markdownlint
You can install either `markdownlint-cli` or `markdownlint-cli2` to run `markdownlint`.
To install `markdownlint-cli`, run:
```shell
yarn global add markdownlint-cli
```
To install `markdownlint-cli2`, run:
```shell
yarn global add markdownlint-cli2
```
You should install the version of `markdownlint-cli` or `markdownlint-cli2` that matches the version used in the GitLab Docs project.
You can find the correct version in the [`variables:` section](https://gitlab.com/gitlab-org/technical-writing/docs-gitlab-com/-/blob/main/.gitlab-ci.yml?ref_type=heads#L16).
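For example, to pin a specific version, append `@` and the version number. The version shown here is illustrative; use the version listed in that `variables:` section:
```shell
yarn global add [email protected]
```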
## Configure markdownlint in your editor
Using markdownlint in your editor is more convenient than having to run the commands from the
command line.
To configure markdownlint in your editor, install one of the following as appropriate:
- Visual Studio Code [`DavidAnson.vscode-markdownlint` extension](https://marketplace.visualstudio.com/items?itemName=DavidAnson.vscode-markdownlint).
- Sublime Text [`SublimeLinter-contrib-markdownlint` package](https://packagecontrol.io/packages/SublimeLinter-contrib-markdownlint).
This package uses `markdownlint-cli` by default, but can be configured to use `markdownlint-cli2` with this
SublimeLinter configuration:
```json
"markdownlint": {
"executable": [ "markdownlint-cli2" ]
}
```
- Vim [ALE plugin](https://github.com/dense-analysis/ale).
- Emacs [Flycheck extension](https://github.com/flycheck/flycheck). `Flycheck` supports
`markdownlint-cli` out of the box, but you must add a `.dir-locals.el` file to
point it to the `.markdownlint.yml` at the base of the project directory:
```lisp
;; Place this code in a file called `.dir-locals.el` at the root of the gitlab project.
((markdown-mode . ((flycheck-markdown-markdownlint-cli-config . ".markdownlint.yml"))))
```
## Run `markdownlint-cli2` locally
You can run `markdownlint-cli2` from anywhere in your repository. From the root of your repository,
you don't need to specify the location of the configuration file. If you run it from elsewhere
in your repository, you must specify the configuration file's location. In these commands,
replace `doc/**/*.md` with the path to the Markdown files in your repository:
```shell
# From the root directory, you don't need to specify the configuration file
$ markdownlint-cli2 'doc/**/*.md'
# From elsewhere in the repository, specify the configuration file
$ markdownlint-cli2 --config .markdownlint-cli2.yaml 'doc/**/*.md'
```
For a full list of command-line options, see [Command Line](https://github.com/DavidAnson/markdownlint-cli2?tab=readme-ov-file#command-line)
in the `markdownlint-cli2` documentation.
## Disable markdownlint tests
To disable all markdownlint rules, add a `<!-- markdownlint-disable -->` tag before the text, and a
`<!-- markdownlint-enable -->` tag after the text.
To disable only a [specific rule](https://github.com/DavidAnson/markdownlint/blob/main/doc/Rules.md#rules),
add the rule number to the tag, for example `<!-- markdownlint-disable MD044 -->`
and `<!-- markdownlint-enable MD044 -->`.
Whenever possible, exclude only the problematic lines.
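For example, to disable only rule `MD044` around a short passage (a minimal illustration):
```markdown
<!-- markdownlint-disable MD044 -->
This passage can use capitalization that MD044 would otherwise flag.
<!-- markdownlint-enable MD044 -->
```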
## Troubleshooting
### Markdown rule `MD044/proper-names` (capitalization)
A rule that can cause confusion is `MD044/proper-names`. The failure, or
how to correct it, might not be immediately clear.
This rule checks a list of known words, listed in the `.markdownlint.yml`
file in each project, to verify proper use of capitalization and backticks.
Words in backticks are ignored by markdownlint.
In general, product names should follow the exact capitalization of the official
names of the products, protocols, and so on.
Some examples fail if incorrect capitalization is used:
- MinIO (needs capital `IO`)
- NGINX (needs all capitals)
- runit (needs lowercase `r`)
Additionally, commands, parameters, values, filenames, and so on must be
included in backticks. For example:
- "Change the `needs` keyword in your `.gitlab-ci.yml`..."
- `needs` is a parameter, and `.gitlab-ci.yml` is a file, so both need backticks.
Additionally, `.gitlab-ci.yml` without backticks fails markdownlint because it
does not have capital G or L.
- "Run `git clone` to clone a Git repository..."
- `git clone` is a command, so it must be lowercase, while Git is the product,
so it must have a capital G.
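The known words come from the `.markdownlint.yml` list mentioned above. As a sketch of how that list is expressed (the entries and exact keys here are illustrative; see the project's `.markdownlint.yml` for the real configuration):
```yaml
# Illustrative excerpt only.
MD044:
  names:
    - GitLab
    - Git
    - MinIO
    - NGINX
    - runit
```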
|
https://docs.gitlab.com/development/documentation/testing
|
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/development/documentation/_index.md
|
2025-08-13
|
doc/development/documentation/testing
|
[
"doc",
"development",
"documentation",
"testing"
] |
_index.md
|
none
|
Documentation Guidelines
|
For assistance with this Style Guide page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments-to-other-projects-and-subjects.
|
Documentation testing
|
Learn how to contribute to GitLab Documentation.
|
GitLab documentation is stored in projects with code, and treated like code.
To maintain standards and quality of documentation, we use processes similar to
those used for code.
Merge requests containing changes to Markdown (`.md`) files run these CI/CD jobs:
- `docs-lint markdown`: Runs several types of tests, including:
- [Vale](vale.md): Checks documentation content.
- [markdownlint](markdownlint.md): Checks Markdown structure.
- [`lint-doc.sh`](#tests-in-lint-docsh) script: Miscellaneous tests.
- `docs-lint links`: Checks the validity of [relative links](links.md#run-the-relative-link-test-locally) in the documentation suite.
- `docs-lint mermaid`: Runs [`mermaidlint`](#mermaid-chart-linting) to check for invalid Mermaid charts.
- `rubocop-docs`: Checks links to documentation [from `.rb` files](links.md#run-rubocop-tests).
- `eslint-docs`: Checks links to documentation [from `.js` and `.vue` files](links.md#run-eslint-tests).
- `docs-lint redirects`: Checks for deleted or renamed documentation files without [redirects](../redirects.md).
- `docs code_quality` and `code_quality cache`: Runs [code quality](../../../ci/testing/code_quality.md)
to add Vale [warnings and errors into the MR changes tab (diff view)](../../../ci/testing/code_quality.md#merge-request-changes-view).
A few files are generated from scripts. A CI/CD job fails when either the source code files
or the documentation files are updated without following the correct process:
- `graphql-verify`: Fails when `doc/api/graphql/reference/_index.md` is not updated
with the [update process](../../rake_tasks.md#update-graphql-documentation-and-schema-definitions).
- `docs-lint deprecations-and-removals`: Fails when `doc/update/deprecations.md` is
not updated with the [update process](../../deprecation_guidelines/_index.md#update-the-deprecations-and-removals-documentation).
For a full list of automated files, see [Automated pages](../site_architecture/automation.md).
## Tests in `lint-doc.sh`
The tests in
[`/scripts/lint-doc.sh`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/scripts/lint-doc.sh)
look for page content problems that Vale and markdownlint cannot test for.
The `docs-lint markdown` job fails if any of these `lint-doc.sh` tests fail:
- Curl (`curl`) commands [must use long-form options (`--header`)](../restful_api_styleguide.md#curl-commands)
instead of short options, like `-h` (see the example after this list).
- Documentation pages [must contain front matter](../metadata.md#stage-and-group-metadata)
indicating ownership of the page.
- `CHANGELOG.md` must not contain duplicate versions.
- Files in the `doc/` directory must not be executable.
- [Filenames and directories must](../site_architecture/folder_structure.md#work-with-directories-and-files):
- Use `_index.md` instead of `README.md`
- Use underscores instead of dashes.
- Be lowercase.
- Image filenames must [specify the version they were added in](../styleguide/_index.md#image-requirements).
- Mermaid charts must render without errors.
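For the curl rule in this list, the short and long forms look like this. The API call is illustrative; the URL and token are placeholders:
```shell
# Short option (flagged by the test)
curl -H "PRIVATE-TOKEN: <your_access_token>" "https://gitlab.example.com/api/v4/projects"
# Long-form option (preferred)
curl --header "PRIVATE-TOKEN: <your_access_token>" "https://gitlab.example.com/api/v4/projects"
```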
### Mermaid chart linting
{{< history >}}
- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/144328) in GitLab 16.10.
{{< /history >}}
[Mermaid](https://mermaid.js.org/) builds charts and diagrams from code.
The script (`scripts/lint/check_mermaid.mjs`) runs in the `docs-lint mermaid` job for all merge requests that contain changes to Markdown files. The script returns an error if any Markdown file contains a Mermaid chart with a syntax error.
To help debug your Mermaid charts, use the
[Mermaid Live Editor](https://mermaid-js.github.io/mermaid-live-editor/edit).
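For example, a minimal chart like this one is the kind of content the job validates (illustrative only):
```mermaid
graph TD
  A[Write documentation] --> B[docs-lint mermaid job]
  B --> C{Charts render?}
  C -->|Yes| D[Job passes]
  C -->|No| E[Job fails]
```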
## Tests in `docs-lint links` and other jobs
To check for broken links, merge requests containing changes to Markdown (`.md`) files run these jobs in their
pipelines:
- `docs-lint links` job in the `gitlab` project. For example: <https://gitlab.com/gitlab-org/gitlab/-/jobs/7065686331>.
- `docs-lint links` job in the `omnibus-gitlab` project. For example: <https://gitlab.com/gitlab-org/omnibus-gitlab/-/jobs/7065337075>.
- `docs-lint links` job in the `gitlab-operator` project.
- `docs:lint markdown` job in the `gitlab-runner` project, which includes link checking. For example:
<https://gitlab.com/gitlab-org/gitlab-runner/-/jobs/7056674997>.
- `check_docs_links` job in the `charts/gitlab` project. For example:
<https://gitlab.com/gitlab-org/charts/gitlab/-/jobs/7066011619>.
These jobs check links, including anchor links, and report any problems. Any link that requires a network
connection is skipped.
## Tests for translated documentation
To ensure quality across all our translated content, we've implemented testing for our documentation in
multiple languages. These tests mirror those used for the English version, but run on internationalized
content in the `/doc-locale/` or `/docs-locale/` directories.
| Project | English directory | Translation directory | Linting jobs |
| ----- | ----- | ----- | ----- |
| GitLab | [`/doc`](https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc) | [`/doc-locale`](https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc-locale) | `docs-i18n-lint markdown` |
| GitLab Runner | [`/docs`](https://gitlab.com/gitlab-org/gitlab-runner/-/tree/main/docs) | [`/docs-locale`](https://gitlab.com/gitlab-org/gitlab-runner/-/tree/main/docs-locale?ref_type=heads) | `docs:lint i18n markdown` |
| Linux package | [`/doc`](https://gitlab.com/gitlab-org/omnibus-gitlab/-/tree/master/doc) | [`/doc-locale`](https://gitlab.com/gitlab-org/omnibus-gitlab/-/tree/master/doc-locale) | `docs-lint-i18n markdown` <br/> `docs-lint-i18n content` |
| Charts | [`/doc`](https://gitlab.com/gitlab-org/charts/gitlab/-/tree/master/doc) | [`/doc-locale`](https://gitlab.com/gitlab-org/charts/gitlab/-/tree/master/doc-locale) | `check_docs_i18n_content` <br/> `check_docs_i18n_markdown` |
| Operator | [`/doc`](https://gitlab.com/gitlab-org/cloud-native/gitlab-operator/-/tree/master/doc) | [`/doc-locale`](https://gitlab.com/gitlab-org/cloud-native/gitlab-operator/-/tree/master/doc-locale) | `docs-i18n-lint content` <br/> `docs-i18n-lint markdown` |
### Path verification of orphaned translation files
The `docs-i18n-lint paths` job fails if translated files in `/doc-locale` have no corresponding English source files. The job runs when:
- Files in `/doc-locale` are modified.
- The path verification script changes.
When orphaned translation files are detected, localization team members handle the necessary deletions. English fallback content provides coverage until new translations are available.
## Install documentation linters
To help adhere to the [documentation style guidelines](../styleguide/_index.md), and
improve the content added to documentation, install documentation linters and
integrate them with your code editor. At a minimum, install [markdownlint](markdownlint.md)
and [Vale](vale.md) to match the checks run in build pipelines. Both tools can
integrate with your code editor.
## Run documentation tests locally
Similar to [previewing your changes locally](../review_apps.md), you can also run
documentation tests on your local computer. This has the advantage of:
- Speeding up the feedback loop. You learn about any problems with the changes in your branch without waiting for a CI/CD pipeline to run.
- Lowering costs. Running tests locally is cheaper than running tests on the cloud
infrastructure GitLab uses.
It's important to:
- Keep the tools up-to-date, and [match the versions used](#tool-versions-used-in-cicd-pipelines) in our CI/CD pipelines.
- Run linters, documentation link tests, and UI link tests the same way they are run in CI/CD pipelines. It's important to use the same configuration we use in CI/CD pipelines, which can be different from the default configuration of the tool.
### Run Vale, markdownlint, or link checks locally
Installation and configuration instructions are available for:
- [markdownlint](markdownlint.md).
- [Vale](vale.md).
- [Lychee](links.md) and UI link checkers.
### Run `lint-doc.sh` locally
Use a Rake task to run the `lint-doc.sh` tests locally.
Prerequisites:
- You have either:
- The [required lint tools installed](#install-documentation-linters) on your computer.
- A working Docker or `containerd` installation, to use an image with these tools pre-installed.
1. Go to your `gitlab` directory.
1. Run:
```shell
rake lint:markdown
```
To specify a single file or directory you would like to run lint checks for, run:
```shell
MD_DOC_PATH=path/to/my_doc.md rake lint:markdown
```
The output should be similar to:
```plaintext
=> Linting documents at path /path/to/gitlab as <user>...
=> Checking for cURL short options...
=> Checking for CHANGELOG.md duplicate entries...
=> Checking /path/to/gitlab/doc for executable permissions...
=> Checking for new README.md files...
=> Linting markdown style...
=> Linting prose...
✔ 0 errors, 0 warnings and 0 suggestions in 1 file.
✔ Linting passed
```
## Update linter configuration
Vale and markdownlint configurations are under source control in each
project, so updates must be committed to each project individually.
The configuration in the `gitlab` project should be treated as the source of truth,
and all updates should first be made there.
On a regular basis, the changes made in the `gitlab` project to the Vale and markdownlint configuration should be synchronized to the other projects. In each of the [supported projects](#supported-projects):
1. Create a new branch. Add `docs-` to the beginning or `-docs` to the end of the branch name. Some projects use this
convention to limit the jobs that run.
1. Copy the configuration files from the `gitlab` project. For example, in the root directory of the project, run:
```shell
# Copy markdownlint configuration file
cp ../gitlab/.markdownlint-cli2.yaml .
# Remove existing Vale configuration in case some rules have been removed from the GitLab project
rm -r docs/.vale/gitlab
# Copy gitlab_base Vale configuration files for a project with documentation stored in 'docs' directory
cp -r ../gitlab/doc/.vale/gitlab_base docs/.vale
```
1. If updating `gitlab-runner`, `omnibus-gitlab`, `charts/gitlab`, or `gitlab-operator`, also copy the `gitlab_docs`
Vale configuration from the `gitlab` project. For example, in the root directory of the project, run:
```shell
# Copy gitlab_docs Vale configuration files for a project with documentation stored in 'docs' directory
cp -r ../gitlab/doc/.vale/gitlab_docs docs/.vale
```
1. Review the diff created for `.markdownlint-cli2.yaml`. For example, run:
```shell
git diff .markdownlint-cli2.yaml
```
1. Remove any changes that aren't required. For example, `customRules` is only used in the `gitlab` project.
1. Review the diffs created for the Vale configuration. For example, run:
```shell
git diff docs
```
1. Remove unneeded changes to `RelativeLinks.yml`. This rule is specific to each project.
1. Remove any `.tmpl` files. These files are only used in the `gitlab` project.
1. Run `markdownlint-cli2` to check for any violations of the new rules. For example:
```shell
markdownlint-cli2 docs/**/*.md
```
1. Run Vale to check for any violations of the new rules. For example:
```shell
vale --minAlertLevel error docs
```
1. Commit the changes to the new branch. Some projects require
[conventional commits](https://www.conventionalcommits.org/en/v1.0.0/), so check the contributing information for the
project before committing.
1. Submit a merge request for review.
## Update linting images
Lint tests run in CI/CD pipelines using images from the
`docs-gitlab-com` [container registry](https://gitlab.com/gitlab-org/technical-writing/docs-gitlab-com/container_registry).
If a new version of a dependency is released (like a new version of Vale), we
should update the images to use the newer version. Then, we can update the configuration
files in each of our documentation projects to point to the new image.
To update the linting images:
1. In `docs-gitlab-com`, open a merge request to update `.gitlab-ci.yml` to use the new tooling
version. ([Example MR](https://gitlab.com/gitlab-org/technical-writing/docs-gitlab-com/-/merge_requests/341))
1. When the merge request is merged, start a `Build docker images pipeline (Manual)` [scheduled pipeline](https://gitlab.com/gitlab-org/technical-writing/docs-gitlab-com/-/pipeline_schedules).
1. Go to the pipeline you started, and wait for the relevant `test:image` job to complete,
for example `test:image:docs-lint-markdown`. If the job:
- Passes, start the relevant `image:` job, for example, `image:docs-lint-markdown`.
- Fails, review the test job log and start troubleshooting the issue. The image configuration
likely needs some manual tweaks to work with the updated dependency.
1. After the `image:` job passes, check the job's log for the name of the new image.
([Example job output](https://gitlab.com/gitlab-org/gitlab-docs/-/jobs/2335033884#L334))
1. Verify that the new image was added to the container registry.
1. Open merge requests to update each of these configuration files to point to the new image.
For jobs that use `markdownlint`, `vale`, or `lychee`:
- `gitlab`:
- [`.gitlab/ci/docs.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.gitlab/ci/docs.gitlab-ci.yml),
update the `image` in the `.docs-markdown-lint-image:` section.
- [`scripts/lint-doc.sh`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/scripts/lint-doc.sh),
update the `registry_url` value in the `run_locally_or_in_container()` section.
- `gitlab-runner`: [`.gitlab/ci/_common.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab-runner/-/blob/main/.gitlab/ci/_common.gitlab-ci.yml),
update the value of the `DOCS_LINT_IMAGE` variable.
- `omnibus-gitlab`: [`gitlab-ci-config/variables.yml`](https://gitlab.com/gitlab-org/omnibus-gitlab/-/blob/master/gitlab-ci-config/variables.yml),
update the value of the `DOCS_LINT_IMAGE` variable.
- `charts/gitlab`: [`.gitlab-ci.yml`](https://gitlab.com/gitlab-org/charts/gitlab/-/blob/master/.gitlab-ci.yml),
update the value of the `DOCS_LINT_IMAGE` variable.
- `cloud-native/gitlab-operator`: [`.gitlab-ci.yml`](https://gitlab.com/gitlab-org/cloud-native/gitlab-operator/-/blob/master/.gitlab-ci.yml),
update the value of the `DOCS_LINT_IMAGE` variable.
- `gitlab-development-kit`: [`.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab-development-kit/-/blob/main/.gitlab-ci.yml),
update the value of the `DOCS_LINT_IMAGE` variable.
1. In each merge request:
1. Include a small doc update to trigger the job that uses the image.
1. Check the relevant job output to confirm the updated image was used for the test.
1. Assign the merge requests to any technical writer to review and merge.
## Configure pre-push hooks
Git [pre-push hooks](https://git-scm.com/book/en/v2/Customizing-Git-Git-Hooks) allow Git users to:
- Run tests or other processes before pushing a branch.
- Avoid pushing a branch if failures occur with these tests.
[Lefthook](https://github.com/Arkweid/lefthook) is a Git hooks manager. It makes configuring,
installing, and removing Git hooks simpler. Configuration for it is available in the
[`lefthook.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lefthook.yml)
file for the [`gitlab`](https://gitlab.com/gitlab-org/gitlab) project.
To set up Lefthook for documentation linting, see
[Pre-commit and pre-push static analysis with Lefthook](../../contributing/style_guides.md#pre-commit-and-pre-push-static-analysis-with-lefthook).
To show Vale errors on commit or push, see [Show Vale warnings on commit or push](vale.md#show-vale-warnings-on-commit-or-push).
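As a quick sketch of that workflow, assuming Lefthook is available through Bundler as in the `gitlab` project and that a `pre-push` hook group is defined in `lefthook.yml`:
```shell
# Install the Git hooks defined in lefthook.yml
bundle exec lefthook install
# Run the pre-push checks manually, without pushing
bundle exec lefthook run pre-push
```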
## Disable linting on documentation
Some, but not all, linting can be disabled on documentation files:
- [Vale tests can be disabled](vale.md#disable-vale-tests) for all or part of a file.
- [`markdownlint` tests can be disabled](markdownlint.md#disable-markdownlint-tests) for all or part of a file.
## Tool versions used in CI/CD pipelines
You should use linter versions that are the same as those used in our CI/CD pipelines for maximum compatibility
with the linting rules we use.
To match the versions of `markdownlint-cli2` and `vale` used in the GitLab projects, refer to:
- For projects managed with `asdf`, the `.tool-versions` file in the project. For example, the
[`.tool-versions` file in the `gitlab` project](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.tool-versions).
- The [versions used (see `variables:` section)](https://gitlab.com/gitlab-org/technical-writing/docs-gitlab-com/-/blob/main/.gitlab-ci.yml)
when building the `image:docs-lint-markdown` Docker image containing these tools for CI/CD.
Versions set in these two locations should be the same.
| Tool | Version | Command | Additional information |
|---------------------|----------|-------------------------------------------|------------------------|
| `markdownlint-cli2` | Latest | `yarn global add markdownlint-cli2` | None. |
| `markdownlint-cli2` | Specific | `yarn global add markdownlint-cli2@0.8.1` | The `@` indicates a specific version, and this example updates the tool to version `0.8.1`. |
| Vale (using `asdf`) | Specific | `asdf install` | Installs the version of Vale set in `.tool-versions` file in a project. |
| Vale (other) | Specific | Not applicable. | Binaries can be [directly downloaded](https://github.com/errata-ai/vale/releases). |
| Vale (using `brew`) | Latest | `brew update && brew upgrade vale` | This command is for macOS only. |
## Supported projects
For the specifics of each test run in our CI/CD pipelines, see the configuration for those tests
in the relevant projects:
- <https://gitlab.com/gitlab-org/gitlab/-/blob/master/.gitlab/ci/docs.gitlab-ci.yml>
- <https://gitlab.com/gitlab-org/gitlab-runner/-/blob/main/.gitlab/ci/docs.gitlab-ci.yml>
- <https://gitlab.com/gitlab-org/omnibus-gitlab/-/blob/master/gitlab-ci-config/gitlab-com.yml>
- <https://gitlab.com/gitlab-org/charts/gitlab/-/blob/master/.gitlab-ci.yml>
- <https://gitlab.com/gitlab-org/cloud-native/gitlab-operator/-/blob/master/.gitlab-ci.yml>
We also run some documentation tests in these projects:
- GitLab CLI: <https://gitlab.com/gitlab-org/cli/-/blob/main/.gitlab-ci.yml>
- GitLab Development Kit:
<https://gitlab.com/gitlab-org/gitlab-development-kit/-/blob/main/.gitlab/ci/test.gitlab-ci.yml>
- Gitaly: <https://gitlab.com/gitlab-org/gitaly/-/blob/master/.gitlab-ci.yml>
- GitLab Duo Plugin for JetBrains: <https://gitlab.com/gitlab-org/editor-extensions/gitlab-jetbrains-plugin/-/blob/main/.gitlab-ci.yml>
- GitLab Workflow extension for VS Code: <https://gitlab.com/gitlab-org/gitlab-vscode-extension/-/blob/main/.gitlab-ci.yml>
- GitLab Plugin for Neovim: <https://gitlab.com/gitlab-org/editor-extensions/gitlab.vim/-/blob/main/.gitlab-ci.yml>
- GitLab Language Server: <https://gitlab.com/gitlab-org/editor-extensions/gitlab-lsp/-/blob/main/.gitlab-ci.yml>
- GitLab Extension for Visual Studio: <https://gitlab.com/gitlab-org/editor-extensions/gitlab-visual-studio-extension/-/blob/main/.gitlab-ci.yml>
- AI gateway: <https://gitlab.com/gitlab-org/modelops/applied-ml/code-suggestions/ai-assist/-/blob/main/.gitlab/ci/lint.gitlab-ci.yml>
- Prompt Library: <https://gitlab.com/gitlab-org/modelops/ai-model-validation-and-research/ai-evaluation/prompt-library/-/blob/main/.gitlab-ci.yml>
- GitLab Container Registry: <https://gitlab.com/gitlab-org/container-registry/-/blob/master/.gitlab/ci/validate.yml>
|
---
stage: none
group: Documentation Guidelines
info: For assistance with this Style Guide page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments-to-other-projects-and-subjects.
description: Learn how to contribute to GitLab Documentation.
title: Documentation testing
breadcrumbs:
- doc
- development
- documentation
- testing
---
GitLab documentation is stored in projects with code, and treated like code.
To maintain standards and quality of documentation, we use processes similar to
those used for code.
Merge requests containing changes to Markdown (`.md`) files run these CI/CD jobs:
- `docs-lint markdown`: Runs several types of tests, including:
- [Vale](vale.md): Checks documentation content.
- [markdownlint](markdownlint.md): Checks Markdown structure.
- [`lint-docs.sh`](#tests-in-lint-docsh) script: Miscellaneous tests
- `docs-lint links`: Checks the validity of [relative links](links.md#run-the-relative-link-test-locally) in the documentation suite.
- `docs-lint mermaid`: Runs [`mermaidlint`](#mermaid-chart-linting) to check for invalid Mermaid charts.
- `rubocop-docs`: Checks links to documentation [from `.rb` files](links.md#run-rubocop-tests).
- `eslint-docs`: Checks links to documentation [from `.js` and `.vue` files](links.md#run-eslint-tests).
- `docs-lint redirects`: Checks for deleted or renamed documentation files without [redirects](../redirects.md).
- `docs code_quality` and `code_quality cache`: Runs [code quality](../../../ci/testing/code_quality.md)
to add Vale [warnings and errors into the MR changes tab (diff view)](../../../ci/testing/code_quality.md#merge-request-changes-view).
A few files are generated from scripts. A CI/CD job fails when either the source code files
or the documentation files are updated without following the correct process:
- `graphql-verify`: Fails when `doc/api/graphql/reference/_index.md` is not updated
with the [update process](../../rake_tasks.md#update-graphql-documentation-and-schema-definitions).
- `docs-lint deprecations-and-removals`: Fails when `doc/update/deprecations.md` is
not updated with the [update process](../../deprecation_guidelines/_index.md#update-the-deprecations-and-removals-documentation).
For a full list of automated files, see [Automated pages](../site_architecture/automation.md).
## Tests in `lint-doc.sh`
The tests in
[`/scripts/lint-doc.sh`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/scripts/lint-doc.sh)
look for page content problems that Vale and markdownlint cannot test for.
The `docs-lint markdown` job fails if any of these `lint-doc.sh` tests fail:
- Curl (`curl`) commands [must use long-form options (`--header`)](../restful_api_styleguide.md#curl-commands)
instead of short options, like `-h`.
- Documentation pages [must contain front matter](../metadata.md#stage-and-group-metadata)
indicating ownership of the page.
- `CHANGELOG.md` must not contain duplicate versions.
- Files in the `doc/` directory must not be executable.
- [Filenames and directories must](../site_architecture/folder_structure.md#work-with-directories-and-files):
- Use `_index.md` instead of `README.md`
- Use underscores instead of dashes.
- Be lowercase.
- Image filenames must [specify the version they were added in](../styleguide/_index.md#image-requirements).
- Mermaid charts must render without errors.
### Mermaid chart linting
{{< history >}}
- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/144328) in GitLab 16.10.
{{< /history >}}
[Mermaid](https://mermaid.js.org/) builds charts and diagrams from code.
The script (`scripts/lint/check_mermaid.mjs`) runs in the `docs-lint mermaid` job for
all merge requests that contain changes to Markdown files. The script returns an
error if any Markdown files return a Mermaid syntax error.
To help debug your Mermaid charts, use the
[Mermaid Live Editor](https://mermaid-js.github.io/mermaid-live-editor/edit).
## Tests in `docs-lint links` and other jobs
To check for broken links, merge requests containing changes to Markdown (`.md`) files run these jobs in their
pipelines:
- `docs-lint links` job in the `gitlab` project. For example: <https://gitlab.com/gitlab-org/gitlab/-/jobs/7065686331>.
- `docs-lint links` job in the `omnibus-gitlab` project. For example: <https://gitlab.com/gitlab-org/omnibus-gitlab/-/jobs/7065337075>.
- `docs-lint links` job in the `gitlab-operator` project.
- `docs:lint markdown` job in the `gitlab-runner` project, which includes link checking. For example:
<https://gitlab.com/gitlab-org/gitlab-runner/-/jobs/7056674997>.
- `check_docs_links` job in the `charts/gitlab` project. For example:
<https://gitlab.com/gitlab-org/charts/gitlab/-/jobs/7066011619>.
These jobs check links, including anchor links, and report any problems. Any link that requires a network
connection is skipped.
## Tests for translated documentation
To ensure quality across all our translated content, we've implemented testing for our documentation in
multiple languages. These tests mirror those used for the English version, but run on internationalized
content in the `/doc-locale/` or `/docs-locale/` directories.
| Project | English Dir | Translation Dir | Linting Jobs |
| ----- | ----- | ----- | ----- |
| GitLab | [`/doc`](https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc) | [`/doc-locale`](https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc-locale) | `docs-i18n-lint markdown` |
| GitLab Runner | [`/docs`](https://gitlab.com/gitlab-org/gitlab-runner/-/tree/main/docs) | [`/docs-locale`](https://gitlab.com/gitlab-org/gitlab-runner/-/tree/main/docs-locale?ref_type=heads) | `docs:lint i18n markdown` |
| Linux package | [`/doc`](https://gitlab.com/gitlab-org/omnibus-gitlab/-/tree/master/doc) | [`/doc-locale`](https://gitlab.com/gitlab-org/omnibus-gitlab/-/tree/master/doc-locale) | `docs-lint-i18n markdown` <br/> `docs-lint-i18n content` |
| Charts | [`/doc`](https://gitlab.com/gitlab-org/charts/gitlab/-/tree/master/doc) | [`/doc-locale`](https://gitlab.com/gitlab-org/charts/gitlab/-/tree/master/doc-locale) | `check_docs_i18n_content` <br/> `check_docs_i18n_markdown` |
| Operator | [`/doc`](https://gitlab.com/gitlab-org/cloud-native/gitlab-operator/-/tree/master/doc) | [`/doc-locale`](https://gitlab.com/gitlab-org/cloud-native/gitlab-operator/-/tree/master/doc-locale) | `docs-i18n-lint content` <br/> `docs-i18n-lint markdown` |
### Path verification of orphaned translation Files
The `docs-i18n-lint paths` job fails if translated files in `/doc-locale` have no corresponding English source files. The job runs when:
- Files in `/doc-locale` are modified
- The path verification script changes
When orphaned translation files are detected, localization team members handle the necessary deletions. English fallback content provides coverage until new translations are available.
## Install documentation linters
To help adhere to the [documentation style guidelines](../styleguide/_index.md), and
improve the content added to documentation, install documentation linters and
integrate them with your code editor. At a minimum, install [markdownlint](markdownlint.md)
and [Vale](vale.md) to match the checks run in build pipelines. Both tools can
integrate with your code editor.
## Run documentation tests locally
Similar to [previewing your changes locally](../review_apps.md), you can also run
documentation tests on your local computer. This has the advantage of:
- Speeding up the feedback loop. You can know of any problems with the changes in your branch
without waiting for a CI/CD pipeline to run.
- Lowering costs. Running tests locally is cheaper than running tests on the cloud
infrastructure GitLab uses.
It's important to:
- Keep the tools up-to-date, and [match the versions used](#tool-versions-used-in-cicd-pipelines) in our CI/CD pipelines.
- Run linters, documentation link tests, and UI link tests the same way they are
run in CI/CD pipelines. It's important to use same configuration we use in
CI/CD pipelines, which can be different than the default configuration of the tool.
### Run Vale, markdownlint, or link checks locally
Installation and configuration instructions are available for:
- [markdownlint](markdownlint.md).
- [Vale](vale.md).
- [Lychee](links.md) and UI link checkers.
### Run `lint-doc.sh` locally
Use a Rake task to run the `lint-doc.sh` tests locally.
Prerequisites:
- You have either:
- The [required lint tools installed](#install-documentation-linters) on your computer.
- A working Docker or `containerd` installation, to use an image with these tools pre-installed.
1. Go to your `gitlab` directory.
1. Run:
```shell
rake lint:markdown
```
To specify a single file or directory you would like to run lint checks for, run:
```shell
MD_DOC_PATH=path/to/my_doc.md rake lint:markdown
```
The output should be similar to:
```plaintext
=> Linting documents at path /path/to/gitlab as <user>...
=> Checking for cURL short options...
=> Checking for CHANGELOG.md duplicate entries...
=> Checking /path/to/gitlab/doc for executable permissions...
=> Checking for new README.md files...
=> Linting markdown style...
=> Linting prose...
✔ 0 errors, 0 warnings and 0 suggestions in 1 file.
✔ Linting passed
```
## Update linter configuration
Vale and markdownlint configurations are under source control in each
project, so updates must be committed to each project individually.
The configuration in the `gitlab` project should be treated as the source of truth,
and all updates should first be made there.
On a regular basis, the changes made in `gitlab` project to the Vale and markdownlint configuration should be
synchronized to the other projects. In each of the [supported projects](#supported-projects):
1. Create a new branch. Add `docs-` to the beginning or `-docs` to the end of the branch name. Some projects use this
convention to limit the jobs that run.
1. Copy the configuration files from the `gitlab` project. For example, in the root directory of the project, run:
```shell
# Copy markdownlint configuration file
cp ../gitlab/.markdownlint-cli2.yaml .
# Remove existing Vale configuration in case some rules have been removed from the GitLab project
rm -r docs/.vale/gitlab
# Copy gitlab_base Vale configuration files for a project with documentation stored in 'docs' directory
cp -r ../gitlab/doc/.vale/gitlab_base docs/.vale
```
1. If updating `gitlab-runner`, `gitlab-omnibus`, `charts/gitlab`, or `gitlab-operator`, also copy the `gitlab-docs`
Vale configuration from the `gitlab` project. For example, in the root directory of the project, run:
```shell
# Copy gitlab-docs Vale configuration files for a project with documentation stored in 'docs' directory
cp -r ../gitlab/doc/.vale/gitlab_docs docs/.vale
```
1. Review the diff created for `.markdownlint-cli2.yaml`. For example, run:
```shell
git diff .markdownlint-cli2.yaml
```
1. Remove any changes that aren't required. For example, `customRules` is only used in the `gitlab` project.
1. Review the diffs created for the Vale configuration. For example, run:
```shell
git diff docs
```
1. Remove unneeded changes to `RelativeLinks.yml`. This rule is specific to each project.
1. Remove any `.tmpl` files. These files are only used in the `gitlab` project.
1. Run `markdownlint-cli2` to check for any violations of the new rules. For example:
```shell
markdownlint-cli2 docs/**/*.md
```
1. Run Vale to check for any violations of the new rules. For example:
```shell
vale --minAlertLevel error docs
```
1. Commit the changes to the new branch. Some projects require
   [conventional commits](https://www.conventionalcommits.org/en/v1.0.0/), so check the contributing information for the
   project before committing.
1. Submit a merge request for review.
## Update linting images
Lint tests run in CI/CD pipelines using images from the
`docs-gitlab-com` [container registry](https://gitlab.com/gitlab-org/technical-writing/docs-gitlab-com/container_registry).
If a new version of a dependency is released (like a new version of Vale), we
should update the images to use the newer version. Then, we can update the configuration
files in each of our documentation projects to point to the new image.
To update the linting images:
1. In `docs-gitlab-com`, open a merge request to update `.gitlab-ci.yml` to use the new tooling
version. ([Example MR](https://gitlab.com/gitlab-org/technical-writing/docs-gitlab-com/-/merge_requests/341))
1. When merged, start a `Build docker images pipeline (Manual)` [scheduled pipeline](https://gitlab.com/gitlab-org/technical-writing/docs-gitlab-com/-/pipeline_schedules).
1. Go to the pipeline you started, and wait for the relevant `test:image` job to complete,
for example `test:image:docs-lint-markdown`. If the job:
- Passes, start the relevant `image:` job, for example, `image:docs-lint-markdown`.
- Fails, review the test job log and start troubleshooting the issue. The image configuration
likely needs some manual tweaks to work with the updated dependency.
1. After the `image:` job passes, check the job's log for the name of the new image.
([Example job output](https://gitlab.com/gitlab-org/gitlab-docs/-/jobs/2335033884#L334))
1. Verify that the new image was added to the container registry.
1. Open merge requests to update each of these configuration files to point to the new image.
For jobs that use `markdownlint`, `vale`, or `lychee`:
- `gitlab`:
- [`.gitlab/ci/docs.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.gitlab/ci/docs.gitlab-ci.yml),
update the `image` in the `.docs-markdown-lint-image:` section.
- [`scripts/lint-doc.sh`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/scripts/lint-doc.sh),
update the `registry_url` value in the `run_locally_or_in_container()` section.
- `gitlab-runner`: [`.gitlab/ci/_common.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab-runner/-/blob/main/.gitlab/ci/_common.gitlab-ci.yml),
update the value of the `DOCS_LINT_IMAGE` variable.
- `omnibus-gitlab`: [`gitlab-ci-config/variables.yml`](https://gitlab.com/gitlab-org/omnibus-gitlab/-/blob/master/gitlab-ci-config/variables.yml),
update the value of the `DOCS_LINT_IMAGE` variable.
- `charts/gitlab`: [`.gitlab-ci.yml`](https://gitlab.com/gitlab-org/charts/gitlab/-/blob/master/.gitlab-ci.yml),
update the value of the `DOCS_LINT_IMAGE` variable.
- `cloud-native/gitlab-operator`: [`.gitlab-ci.yml`](https://gitlab.com/gitlab-org/cloud-native/gitlab-operator/-/blob/master/.gitlab-ci.yml),
  update the value of the `DOCS_LINT_IMAGE` variable.
- `gitlab-development-kit`: [`.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab-development-kit/-/blob/main/.gitlab-ci.yml),
  update the value of the `DOCS_LINT_IMAGE` variable.
1. In each merge request:
1. Include a small doc update to trigger the job that uses the image.
1. Check the relevant job output to confirm the updated image was used for the test.
1. Assign the merge requests to any technical writer to review and merge.
## Configure pre-push hooks
Git [pre-push hooks](https://git-scm.com/book/en/v2/Customizing-Git-Git-Hooks) allow Git users to:
- Run tests or other processes before pushing a branch.
- Avoid pushing a branch if failures occur with these tests.
[Lefthook](https://github.com/Arkweid/lefthook) is a Git hooks manager. It makes configuring,
installing, and removing Git hooks simpler. Configuration for it is available in the
[`lefthook.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lefthook.yml)
file for the [`gitlab`](https://gitlab.com/gitlab-org/gitlab) project.
To set up Lefthook for documentation linting, see
[Pre-commit and pre-push static analysis with Lefthook](../../contributing/style_guides.md#pre-commit-and-pre-push-static-analysis-with-lefthook).
To show Vale errors on commit or push, see [Show Vale warnings on commit or push](vale.md#show-vale-warnings-on-commit-or-push).
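If you want to confirm the hooks are wired up, you can also install and trigger them manually. The following assumes Lefthook is available through Bundler, as in the `gitlab` project:
```shell
# Install the Git hooks defined in lefthook.yml
bundle exec lefthook install
# Run the pre-push checks manually, without pushing
bundle exec lefthook run pre-push
```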
## Disable linting on documentation
Some, but not all, linting can be disabled on documentation files:
- [Vale tests can be disabled](vale.md#disable-vale-tests) for all or part of a file.
- [`markdownlint` tests can be disabled](markdownlint.md#disable-markdownlint-tests) for all or part of a file.
## Tool versions used in CI/CD pipelines
You should use linter versions that are the same as those used in our CI/CD pipelines for maximum compatibility
with the linting rules we use.
To match the versions of `markdownlint-cli2` and `vale` used in the GitLab projects, refer to:
- For projects managed with `asdf`, the `.tool-versions` file in the project. For example, the
[`.tool-versions` file in the `gitlab` project](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.tool-versions).
- The [versions used (see `variables:` section)](https://gitlab.com/gitlab-org/technical-writing/docs-gitlab-com/-/blob/main/.gitlab-ci.yml)
when building the `image:docs-lint-markdown` Docker image containing these tools for CI/CD.
Versions set in these two locations should be the same.
| Tool | Version | Command | Additional information |
|---------------------|----------|-------------------------------------------|------------------------|
| `markdownlint-cli2` | Latest | `yarn global add markdownlint-cli2` | None. |
| `markdownlint-cli2` | Specific | `yarn global add markdownlint-cli2@0.8.1` | The `@` indicates a specific version, and this example updates the tool to version `0.8.1`. |
| Vale (using `asdf`) | Specific | `asdf install` | Installs the version of Vale set in `.tool-versions` file in a project. |
| Vale (other) | Specific | Not applicable. | Binaries can be [directly downloaded](https://github.com/errata-ai/vale/releases). |
| Vale (using `brew`) | Latest | `brew update && brew upgrade vale` | This command is for macOS only. |
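As a quick check, you can compare the versions pinned in the project with what is installed locally. For example, in the `gitlab` repository:
```shell
# Versions pinned for asdf and mirrored in the CI/CD images
grep -E 'markdownlint-cli2|vale' .tool-versions
# Version of Vale installed locally
vale -v
```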
## Supported projects
For the specifics of each test run in our CI/CD pipelines, see the configuration for those tests
in the relevant projects:
- <https://gitlab.com/gitlab-org/gitlab/-/blob/master/.gitlab/ci/docs.gitlab-ci.yml>
- <https://gitlab.com/gitlab-org/gitlab-runner/-/blob/main/.gitlab/ci/docs.gitlab-ci.yml>
- <https://gitlab.com/gitlab-org/omnibus-gitlab/-/blob/master/gitlab-ci-config/gitlab-com.yml>
- <https://gitlab.com/gitlab-org/charts/gitlab/-/blob/master/.gitlab-ci.yml>
- <https://gitlab.com/gitlab-org/cloud-native/gitlab-operator/-/blob/master/.gitlab-ci.yml>
We also run some documentation tests in these projects:
- GitLab CLI: <https://gitlab.com/gitlab-org/cli/-/blob/main/.gitlab-ci.yml>
- GitLab Development Kit:
<https://gitlab.com/gitlab-org/gitlab-development-kit/-/blob/main/.gitlab/ci/test.gitlab-ci.yml>
- Gitaly: <https://gitlab.com/gitlab-org/gitaly/-/blob/master/.gitlab-ci.yml>
- GitLab Duo Plugin for JetBrains: <https://gitlab.com/gitlab-org/editor-extensions/gitlab-jetbrains-plugin/-/blob/main/.gitlab-ci.yml>
- GitLab Workflow extension for VS Code: <https://gitlab.com/gitlab-org/gitlab-vscode-extension/-/blob/main/.gitlab-ci.yml>
- GitLab Plugin for Neovim: <https://gitlab.com/gitlab-org/editor-extensions/gitlab.vim/-/blob/main/.gitlab-ci.yml>
- GitLab Language Server: <https://gitlab.com/gitlab-org/editor-extensions/gitlab-lsp/-/blob/main/.gitlab-ci.yml>
- GitLab Extension for Visual Studio: <https://gitlab.com/gitlab-org/editor-extensions/gitlab-visual-studio-extension/-/blob/main/.gitlab-ci.yml>
- AI gateway: <https://gitlab.com/gitlab-org/modelops/applied-ml/code-suggestions/ai-assist/-/blob/main/.gitlab/ci/lint.gitlab-ci.yml>
- Prompt Library: <https://gitlab.com/gitlab-org/modelops/ai-model-validation-and-research/ai-evaluation/prompt-library/-/blob/main/.gitlab-ci.yml>
- GitLab Container Registry: <https://gitlab.com/gitlab-org/container-registry/-/blob/master/.gitlab/ci/validate.yml>
---
stage: none
group: Documentation Guidelines
info: For assistance with this Style Guide page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments-to-other-projects-and-subjects.
description: Learn how to contribute to GitLab Documentation.
title: Documentation and UI link tests
---
For testing:
- Relative links between documentation files, we use [Lychee](https://lychee.cli.rs/installation/).
- Links to documentation from the GitLab UI, we use [`haml-lint`, `eslint`, and `rubocop`](#run-ui-link-tests-locally).
## Run the relative link test locally
To run the relative link test locally, you can either:
- Run the link check for a single project that contains documentation.
- Run the link check across the entire local copy of the [GitLab documentation site](https://docs.gitlab.com).
### Check a single project
To check the links on a single project:
1. Install [Lychee](https://lychee.cli.rs/installation/).
1. Change into the root directory of the project.
1. Run `lychee --offline --include-fragments <doc_directory>` where `<doc_directory>` is the directory that contains
documentation to check. For example: `lychee --offline --include-fragments doc`.
### Check all GitLab Docs site projects
To check links on the entire [GitLab documentation site](https://docs.gitlab.com):
1. Make sure you have all the documentation projects cloned in the same directory as your `docs-gitlab-com` clone. You can
run `make clone-docs-projects` to clone any projects you don't have in that location.
1. Go to the [`docs-gitlab-com`](https://gitlab.com/gitlab-org/technical-writing/docs-gitlab-com) directory.
1. Run `hugo`, which builds the GitLab Docs site.
1. Run `lychee --offline public` to check links.
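Put together, a typical local run looks something like this. The commands assume you run them from the root of your `docs-gitlab-com` clone, with the documentation projects cloned alongside it:
```shell
# Clone any documentation projects you are missing
make clone-docs-projects
# Build the GitLab Docs site into ./public
hugo
# Check the links in the generated site
lychee --offline public
```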
## Run UI link tests locally
To test documentation links from GitLab code files locally, you can run:
- `eslint`: For frontend (`.js` and `.vue`) files.
- `rubocop`: For `.rb` and `.haml` files.
### Run `eslint` tests
1. Open the `gitlab` directory in a terminal window.
1. Run:
```shell
scripts/frontend/lint_docs_links.mjs
```
If you receive an error the first time you run this test, run `yarn install`, which
installs the dependencies for GitLab, and try again.
### Run `rubocop` tests
1. [Install RuboCop](https://github.com/rubocop/rubocop#installation).
1. Open the `gitlab` directory in a terminal window.
1. To run the check on all Ruby files:
```shell
rubocop --only Gitlab/DocumentationLinks/Link
```
To run the check on a single Ruby file:
```shell
rubocop --only Gitlab/DocumentationLinks/Link path/to/ruby/file.rb
```
---
stage: Tenant Scale
group: Cells Infrastructure
info: Any user with at least the Maintainer role can merge updates to this content. For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: GitLab Cells Development Guidelines
---
For background of GitLab Cells, refer to the [design document](https://handbook.gitlab.com/handbook/engineering/architecture/design-documents/cells/).
## Available Cells / Organization schemas
Below are available schemas related to Cells and Organizations:
| Schema | Description |
| ------ | ----------- |
| `gitlab_main` (deprecated) | This is being replaced with `gitlab_main_cell`, for the purpose of building the [Cells](https://handbook.gitlab.com/handbook/engineering/architecture/design-documents/cells/) architecture. |
| `gitlab_main_cell` | To be renamed to `gitlab_main_org`. Use for all tables in the `main:` database that are for an Organization. For example, `projects` and `groups`. |
| `gitlab_main_cell_setting` | All tables in the `main:` database related to cell settings. For example, `application_settings`. These cell-local tables should not have any foreign key references from/to organization tables. |
| `gitlab_main_clusterwide` (deprecated) | All tables in the `main:` database where all rows, or a subset of rows needs to be present across the cluster, in the [Cells](https://handbook.gitlab.com/handbook/engineering/architecture/design-documents/cells/) architecture. For example, `plans`. For the [Cells 1.0 architecture](https://handbook.gitlab.com/handbook/engineering/architecture/design-documents/cells/iterations/cells-1.0/), there are no real clusterwide tables as each cell will have its own database. In effect, these tables will still be stored locally in each cell. |
| `gitlab_main_cell_local` | For tables in the `main:` database that are related to features that are distinct for each cell. For example, `zoekt_nodes` or `shards`. These cell-local tables should not have any foreign key references from/to organization tables. |
| `gitlab_ci` | Use for all tables in the `ci:` database that are for an Organization. For example, `ci_pipelines` and `ci_builds`. |
| `gitlab_ci_cell_local` | For tables in the `ci:` database that are related to features that are distinct for each cell. For example, `instance_type_ci_runners` or `ci_cost_settings`. These cell-local tables should not have any foreign key references from/to organization tables. |
| `gitlab_main_user` | Schema for all user-related tables, for example `users` and `emails`. Most user functionality is at the organizational level, so it should use `gitlab_main_cell` instead (for example, commenting on an issue). For user functionality that is not at the organizational level, use this schema. Tables in this schema must strictly belong to a user. |
Most tables will require a [sharding key](../organization/_index.md#defining-a-sharding-key-for-all-organizational-tables) to be defined.
To understand how existing tables are classified, you can use [this dashboard](https://cells-progress-tracker-gitlab-org-tenant-scale-g-f4ad96bf01d25f.gitlab.io/schema_migration).
After a schema has been assigned, the merge request pipeline might fail due to one or more of the following reasons, which can be rectified by following the linked guidelines:
- [Cross-database joins](../database/multiple_databases.md#suggestions-for-removing-cross-database-joins)
- [Cross-database transactions](../database/multiple_databases.md#fixing-cross-database-transactions)
- [Cross-database foreign keys](../database/multiple_databases.md#foreign-keys-that-cross-databases)
## What schema to choose if the feature can be cluster-wide?
The `gitlab_main_clusterwide` schema is now deprecated.
We will ask teams to update tables from `gitlab_main_clusterwide` to `gitlab_main_cell` as required.
This requires adding sharding keys to these tables, and may require
additional changes to related features to scope them to the Organizational level.
Clusterwide features are
[heavily discouraged](https://handbook.gitlab.com/handbook/engineering/architecture/design-documents/cells/#how-do-i-decide-whether-to-move-my-feature-to-the-cluster-cell-or-organization-level),
and there are [no plans to perform any cluster-wide synchronization](https://handbook.gitlab.com/handbook/engineering/architecture/design-documents/cells/decisions/014_clusterwide_syncing_in_cells_1_0/).
Choose a different schema from the list of available GitLab [schemas](#available-cells--organization-schemas) instead.
We expect most tables to use the `gitlab_main_cell` schema, especially if the
data in the table is related to `projects` or `namespaces`.
Another alternative is the `gitlab_main_cell_local` schema.
If you believe you require a clusterwide feature, consult the
[Tenant Scale group](https://handbook.gitlab.com/handbook/engineering/infrastructure-platforms/tenant-scale/)
for design input.
Here are some considerations to think about:
- Can the feature be scoped per Organization (or lower) instead?
- The related feature must work on multiple cells, not just the legacy cell.
- How would the related feature scale across many Organizations and Cells?
- How will data be stored?
- How will organizations reference the data consistently?
  Can you use globally unique identifiers?
- Does the data need to be consistent across different cells?
- Do not use database tables to store [static data](#static-data).
## Creating a new schema
Schemas should default to requiring a sharding key, because features should be scoped to an Organization by default.
```yaml
# db/gitlab_schemas/gitlab_ci.yaml
require_sharding_key: true
sharding_root_tables:
- projects
- namespaces
- organizations
```
Setting `require_sharding_key` to `true` means that tables assigned to that
schema will require a `sharding_key` to be set.
You will also need to configure the list of allowed `sharding_root_tables` that can be used as sharding keys for tables in this schema.
## Static data
Problem: A database table is used to store static data.
However, the primary key is not static because it uses an auto-incrementing sequence.
This means the primary key is not globally consistent.
References to this inconsistent primary key will create problems because the
reference clashes across cells / organizations.
Example: The `plans` table on a given Cell has the following data:
```shell
id | name | title
----+------------------------------+----------------------------------
1 | default | Default
2 | bronze | Bronze
3 | silver | Silver
5 | gold | Gold
7 | ultimate_trial | Ultimate Trial
8 | premium_trial | Premium Trial
9 | opensource | Opensource
4 | premium | Premium
6 | ultimate | Ultimate
10 | ultimate_trial_paid_customer | Ultimate Trial for Paid Customer
(10 rows)
```
On another cell, the `plans` table has differing ids for the same `name`:
```shell
id | name | title
----+------------------------------+------------------------------
1 | default | Default
2 | bronze | Bronze
3 | silver | Silver
4 | premium | Premium
5 | gold | Gold
6 | ultimate | Ultimate
7 | ultimate_trial | Ultimate Trial
8 | ultimate_trial_paid_customer | Ultimate Trial Paid Customer
9 | premium_trial | Premium Trial
10 | opensource | Opensource
```
This `plans.id` column is then used as a reference in the `hosted_plan_id`
column of the `gitlab_subscriptions` table.
Solution: Use globally unique references, not a database sequence.
If possible, hard-code static data in application code, instead of using the
database.
In this case, the `plans` table can be dropped, and replaced with a fixed model
(details can be found in the [configurable status design doc](https://handbook.gitlab.com/handbook/engineering/architecture/design-documents/work_items_custom_status/#fixed-items-models-and-associations)):
```ruby
class Plan
include ActiveRecord::FixedItemsModel::Model
ITEMS = [
{:id=>1, :name=>"default", :title=>"Default"},
{:id=>2, :name=>"bronze", :title=>"Bronze"},
{:id=>3, :name=>"silver", :title=>"Silver"},
{:id=>4, :name=>"premium", :title=>"Premium"},
{:id=>5, :name=>"gold", :title=>"Gold"},
{:id=>6, :name=>"ultimate", :title=>"Ultimate"},
{:id=>7, :name=>"ultimate_trial", :title=>"Ultimate Trial"},
{:id=>8, :name=>"ultimate_trial_paid_customer", :title=>"Ultimate Trial Paid Customer"},
{:id=>9, :name=>"premium_trial", :title=>"Premium Trial"},
{:id=>10, :name=>"opensource", :title=>"Opensource"}
]
attribute :name, :string
attribute :title, :string
end
```
You can use model validations, and ActiveRecord-like methods such as `all`, `where`, `find_by`, and `find`:
```ruby
Plan.find(4)
Plan.find_by(name: 'premium')
Plan.where(name: 'gold').first
```
The `hosted_plan_id` column will also be updated to refer to the fixed model's
`id` value.
You can also store associations with other models. For example:
```ruby
class CurrentStatus < ApplicationRecord
belongs_to_fixed_items :system_defined_status, fixed_items_class: WorkItems::Statuses::SystemDefined::Status
end
```
Examples of hard-coding static data include:
- [VisibilityLevel](https://gitlab.com/gitlab-org/gitlab/-/blob/5ae43dface737373c50798ccd909174bcdd9b664/lib/gitlab/visibility_level.rb#L25-27)
- [Static defaults for work item statuses](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/178180)
- [`Ai::Catalog::BuiltInTool`](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/197300)
- [`WorkItems::SystemDefined::RelatedLinkRestriction`](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/199664)
## Cells Routing
Coming soon: a guide on how to route your request to your organization's cell.
---
stage: Tenant Scale
group: Cells Infrastructure
info: Any user with at least the Maintainer role can merge updates to this content. For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Configuration
---
Find the existing Cells configuration documentation under [Cells configuration](../../administration/cells.md).
Add cells-related configuration to `config/gitlab.yml` under the `cell` key.
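For example, to check which cell settings your local instance already defines (a minimal sketch; the full list of keys is in the administration documentation linked above):
```shell
# Show the cell-related section of your local configuration
grep -n -A 5 'cell:' config/gitlab.yml
```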
## References
- [Cells design document](https://handbook.gitlab.com/handbook/engineering/architecture/design-documents/cells/)
---
stage: Tenant Scale
group: Cells Infrastructure
info: Any user with at least the Maintainer role can merge updates to this content. For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Topology Service
---
## Updating the Topology Service Gem
The Topology Service is developed in its [own repository](https://gitlab.com/gitlab-org/cells/topology-service).
We generate the Ruby Gem there, and manually copy it into the GitLab vendored gems folder, at
`vendor/gems/gitlab-topology-service-client`.
To make this easy, run this Bash script:
```shell
bash scripts/update-topology-service-gem.sh
```
This script:
1. Clones the topology service repository into a temporary folder.
1. Checks whether the Ruby Gem has newer code.
1. If so, updates the Gem in `vendor/gems/gitlab-topology-service-client` and creates a commit.
1. Cleans up the temporary repository.
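After the script finishes, one way to review the result is to inspect the commit it created for the vendored gem:
```shell
# Show the most recent commit that touched the vendored gem
git log -1 --stat -- vendor/gems/gitlab-topology-service-client
```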
---
stage: Tenant Scale
group: Cells Infrastructure
info: Analysis of Application Settings for Cells 1.0.
title: Application Settings analysis
---
<!--
This documentation is auto generated by a Ruby script.
Please do not edit this file directly. To update this file, run:
scripts/cells/application-settings-analysis.rb
-->
## Statistics
- Number of attributes: 499
- Number of encrypted attributes: 42 (8.0%)
- Number of attributes documented: 294 (59.0%)
- Number of attributes on GitLab.com different from the defaults: 223 (45.0%)
- Number of attributes with `clusterwide` set: 499 (100.0%)
- Number of attributes with `clusterwide: true` set: 132 (26.0%)
## Individual columns
| Attribute name | Encrypted | DB Type | API Type | Not Null? | Default | GitLab.com != default | Cluster-wide? | Documented? |
| -------------- | ------------- | --------- | --------- | --------- | --------- | ------------- | ----------- | ----------- |
| `abuse_notification_email` | `false` | `character` | `string` | `false` | `null` | `true` | `true`| `true` |
| `admin_mode` | `false` | `boolean` | `boolean` | `true` | `false` | `false` | `false`| `true` |
| `after_sign_out_path` | `false` | `character` | `string` | `false` | `null` | `true` | `true`| `true` |
| `after_sign_up_text` | `false` | `text` | `string` | `false` | `null` | `true` | `true`| `true` |
| `ai_action_api_rate_limit` | `false` | `integer` | `` | `true` | `160` | `false` | `false`| `false` |
| `akismet_api_key` | `true` | `text` | `string` | `false` | `null` | `true` | `false`| `true` |
| `akismet_enabled` | `false` | `boolean` | `boolean` | `false` | `false` | `false` | `false`| `true` |
| `allow_account_deletion` | `false` | `boolean` | `boolean` | `true` | `true` | `false` | `false`| `true` |
| `allow_deploy_tokens_and_keys_with_external_authn` | `false` | `boolean` | `` | `true` | `false` | `false` | `false`| `false` |
| `allow_group_owners_to_manage_ldap` | `false` | `boolean` | `boolean` | `true` | `true` | `false` | `false`| `true` |
| `allow_local_requests_from_system_hooks` | `false` | `boolean` | `boolean` | `true` | `true` | `false` | `false`| `true` |
| `allow_local_requests_from_web_hooks_and_services` | `false` | `boolean` | `boolean` | `true` | `false` | `false` | `false`| `true` |
| `allow_possible_spam` | `false` | `boolean` | `` | `true` | `false` | `false` | `false`| `false` |
| `allow_project_creation_for_guest_and_below` | `false` | `boolean` | `boolean` | `true` | `true` | `false` | `false`| `true` |
| `allow_runner_registration_token` | `false` | `boolean` | `boolean` | `true` | `true` | `false` | `false`| `true` |
| `allow_top_level_group_owners_to_create_service_accounts` | `false` | `boolean` | `` | `true` | `false` | `true` | `false`| `false` |
| `anti_abuse_settings` | `false` | `jsonb` | `` | `true` | `'{}'::jsonb` | `false` | `true`| `false` |
| `archive_builds_in_seconds` | `false` | `integer` | `` | `false` | `null` | `false` | `false`| `false` |
| `arkose_labs_client_secret` | `true` | `bytea` | `` | `false` | `null` | `true` | `true`| `false` |
| `arkose_labs_client_xid` | `true` | `bytea` | `` | `false` | `null` | `true` | `true`| `false` |
| `arkose_labs_data_exchange_key` | `true` | `bytea` | `` | `false` | `null` | `true` | `true`| `false` |
| `arkose_labs_namespace` | `false` | `text` | `` | `true` | `'client'::text` | `true` | `true`| `false` |
| `arkose_labs_private_api_key` | `true` | `bytea` | `` | `false` | `null` | `true` | `true`| `false` |
| `arkose_labs_public_api_key` | `true` | `bytea` | `` | `false` | `null` | `true` | `true`| `false` |
| `asciidoc_max_includes` | `false` | `smallint` | `integer` | `true` | `32` | `false` | `false`| `true` |
| `asset_proxy_enabled` | `false` | `boolean` | `boolean` | `true` | `false` | `true` | `true`| `true` |
| `asset_proxy_secret_key` | `true` | `text` | `string` | `false` | `null` | `true` | `true`| `true` |
| `asset_proxy_url` | `false` | `character` | `string` | `false` | `null` | `true` | `true`| `true` |
| `asset_proxy_whitelist` | `false` | `text` | `string or array of strings` | `false` | `null` | `true` | `true`| `true` |
| `authorized_keys_enabled` | `false` | `boolean` | `boolean` | `true` | `true` | `true` | `true`| `true` |
| `auto_ban_user_on_excessive_projects_download` | `false` | `boolean` | `boolean` | `true` | `false` | `false` | `false`| `true` |
| `auto_devops_domain` | `false` | `character` | `string` | `false` | `null` | `true` | `true`| `true` |
| `auto_devops_enabled` | `false` | `boolean` | `boolean` | `true` | `true` | `true` | `true`| `true` |
| `automatic_purchased_storage_allocation` | `false` | `boolean` | `boolean` | `true` | `false` | `true` | `true`| `true` |
| `bulk_import_concurrent_pipeline_batch_limit` | `false` | `smallint` | `integer` | `true` | `25` | `false` | `false`| `true` |
| `bulk_import_enabled` | `false` | `boolean` | `boolean` | `true` | `false` | `true` | `false`| `true` |
| `bulk_import_max_download_file_size` | `false` | `bigint` | `integer` | `true` | `5120` | `false` | `false`| `true` |
| `cached_markdown_version` | `false` | `integer` | `` | `false` | `null` | `false` | `false`| `false` |
| `can_create_group` | `false` | `boolean` | `boolean` | `true` | `true` | `false` | `false`| `true` |
| `can_create_organization` | `false` | `boolean` | `` | `true` | `true` | `false` | `false`| `false` |
| `check_namespace_plan` | `false` | `boolean` | `boolean` | `true` | `false` | `true` | `true`| `true` |
| `ci_cd_settings` | `false` | `jsonb` | `` | `true` | `'{}'::jsonb` | `false` | `false`| `false` |
| `ci_job_token_signing_key` | `true` | `bytea` | `` | `false` | `null` | `true` | `true`| `false` |
| `ci_jwt_signing_key` | `true` | `text` | `` | `false` | `null` | `true` | `true`| `false` |
| `ci_max_includes` | `false` | `integer` | `integer` | `true` | `150` | `false` | `false`| `true` |
| `ci_max_total_yaml_size_bytes` | `false` | `integer` | `integer` | `true` | `314572800` | `false` | `false`| `true` |
| `clickhouse` | `false` | `jsonb` | `` | `true` | `'{}'::jsonb` | `true` | `true`| `false` |
| `cloud_license_auth_token` | `true` | `text` | `` | `false` | `null` | `false` | `false`| `false` |
| `cluster_agents` | `false` | `jsonb` | `` | `true` | `'{}'::jsonb` | `true` | `true`| `false` |
| `code_creation` | `false` | `jsonb` | `` | `true` | `'{}'::jsonb` | `true` | `true`| `false` |
| `code_suggestions_api_rate_limit` | `false` | `integer` | `` | `true` | `60` | `false` | `false`| `false` |
| `commit_email_hostname` | `false` | `character` | `string` | `false` | `null` | `true` | `true`| `true` |
| `compliance_frameworks` | `false` | `smallint[]` | `` | `true` | `'{}'::smallint[]` | `false` | `false`| `false` |
| `container_expiration_policies_enable_historic_entries` | `false` | `boolean` | `boolean` | `true` | `false` | `true` | `true`| `true` |
| `container_registry_cleanup_tags_service_max_list_size` | `false` | `integer` | `integer` | `true` | `200` | `false` | `false`| `true` |
| `container_registry_data_repair_detail_worker_max_concurrency` | `false` | `integer` | `` | `true` | `2` | `true` | `false`| `false` |
| `container_registry_db_enabled` | `false` | `boolean` | `` | `true` | `false` | `true` | `true`| `false` |
| `container_registry_delete_tags_service_timeout` | `false` | `integer` | `integer` | `true` | `250` | `false` | `false`| `true` |
| `container_registry_expiration_policies_caching` | `false` | `boolean` | `boolean` | `true` | `true` | `false` | `false`| `true` |
| `container_registry_expiration_policies_worker_capacity` | `false` | `integer` | `integer` | `true` | `4` | `true` | `false`| `true` |
| `container_registry_features` | `false` | `text[]` | `` | `true` | `'{}'::text[]` | `true` | `true`| `false` |
| `container_registry_token_expire_delay` | `false` | `integer` | `integer` | `false` | `5` | `true` | `false`| `true` |
| `container_registry_vendor` | `false` | `text` | `` | `true` | `''::text` | `true` | `true`| `false` |
| `container_registry_version` | `false` | `text` | `` | `true` | `''::text` | `true` | `true`| `false` |
| `content_validation_api_key` [JIHU] | `true` | `bytea` | `` | `false` | `null` | `false` | `false`| `false` |
| `content_validation_endpoint_enabled` [JIHU] | `false` | `boolean` | `` | `true` | `false` | `false` | `false`| `false` |
| `content_validation_endpoint_url` [JIHU] | `false` | `text` | `` | `false` | `null` | `false` | `false`| `false` |
| `created_at` | `false` | `timestamp` | `` | `false` | `null` | `true` | `false`| `false` |
| `cube_api_base_url` | `false` | `text` | `` | `false` | `null` | `true` | `true`| `false` |
| `cube_api_key` | `true` | `bytea` | `` | `false` | `null` | `true` | `true`| `false` |
| `custom_http_clone_url_root` | `false` | `character` | `string` | `false` | `null` | `true` | `true`| `true` |
| `custom_project_templates_group_id` | `false` | `bigint` | `` | `false` | `null` | `false` | `false`| `false` |
| `customers_dot_jwt_signing_key` | `true` | `bytea` | `` | `false` | `null` | `true` | `true`| `false` |
| `dashboard_limit` | `false` | `integer` | `` | `true` | `0` | `true` | `true`| `false` |
| `dashboard_limit_enabled` | `false` | `boolean` | `` | `true` | `false` | `true` | `true`| `false` |
| `database_grafana_api_key` | `true` | `bytea` | `` | `false` | `null` | `true` | `true`| `false` |
| `database_grafana_api_url` | `false` | `text` | `` | `false` | `null` | `true` | `true`| `false` |
| `database_grafana_tag` | `false` | `text` | `` | `false` | `null` | `true` | `true`| `false` |
| `database_max_running_batched_background_migrations` | `false` | `integer` | `` | `true` | `2` | `true` | `true`| `false` |
| `database_reindexing` | `false` | `jsonb` | `` | `true` | `'{}'::jsonb` | `false` | `true`| `false` |
| `deactivate_dormant_users` | `false` | `boolean` | `boolean` | `true` | `false` | `false` | `false`| `true` |
| `deactivate_dormant_users_period` | `false` | `integer` | `integer` | `true` | `90` | `false` | `false`| `true` |
| `deactivation_email_additional_text` | `false` | `text` | `` | `false` | `null` | `true` | `true`| `false` |
| `decompress_archive_file_timeout` | `false` | `integer` | `integer` | `true` | `210` | `false` | `false`| `true` |
| `default_artifacts_expire_in` | `false` | `character` | `string` | `true` | `'0'::character` | `true` | `true`| `true` |
| `default_branch_name` | `false` | `text` | `string` | `false` | `null` | `true` | `false`| `true` |
| `default_branch_protection` | `false` | `integer` | `integer` | `false` | `2` | `false` | `false`| `true` |
| `default_branch_protection_defaults` | `false` | `jsonb` | `hash` | `true` | `'{}'::jsonb` | `true` | `false`| `true` |
| `default_ci_config_path` | `false` | `character` | `string` | `false` | `null` | `true` | `false`| `true` |
| `default_group_visibility` | `false` | `integer` | `string` | `false` | `null` | `true` | `false`| `true` |
| `default_preferred_language` | `false` | `text` | `string` | `true` | `'en'::text` | `false` | `false`| `true` |
| `default_profile_preferences` | `false` | `jsonb` | `` | `true` | `'{}'::jsonb` | `false` | `true`| `false` |
| `default_project_creation` | `false` | `integer` | `integer` | `true` | `2` | `false` | `false`| `true` |
| `default_project_deletion_protection` | `false` | `boolean` | `boolean` | `true` | `false` | `false` | `false`| `true` |
| `default_project_visibility` | `false` | `integer` | `string` | `true` | `0` | `false` | `false`| `true` |
| `default_projects_limit` | `false` | `integer` | `integer` | `false` | `null` | `true` | `false`| `true` |
| `default_snippet_visibility` | `false` | `integer` | `string` | `true` | `0` | `false` | `false`| `true` |
| `default_syntax_highlighting_theme` | `false` | `integer` | `integer` | `true` | `1` | `false` | `false`| `true` |
| `delete_inactive_projects` | `false` | `boolean` | `` | `true` | `false` | `false` | `false`| `false` |
| `delete_unconfirmed_users` | `false` | `boolean` | `boolean` | `true` | `false` | `true` | `false`| `true` |
| `deletion_adjourned_period` | `false` | `integer` | `integer` | `true` | `30` | `false` | `false`| `true` |
| `deny_all_requests_except_allowed` | `false` | `boolean` | `` | `true` | `false` | `false` | `false`| `false` |
| `dependency_proxy_ttl_group_policy_worker_capacity` | `false` | `smallint` | `` | `true` | `2` | `false` | `false`| `false` |
| `diagramsnet_enabled` | `false` | `boolean` | `boolean` | `true` | `true` | `false` | `false`| `true` |
| `diagramsnet_url` | `false` | `text` | `string` | `false` | `'https://embed.diagrams.net'::text` | `false` | `false`| `true` |
| `diff_max_files` | `false` | `integer` | `integer` | `true` | `1000` | `true` | `true`| `true` |
| `diff_max_lines` | `false` | `integer` | `integer` | `true` | `50000` | `true` | `true`| `true` |
| `diff_max_patch_bytes` | `false` | `integer` | `integer` | `true` | `204800` | `false` | `true`| `true` |
| `dingtalk_app_key` [JIHU] | `true` | `bytea` | `` | `false` | `null` | `false` | `false`| `false` |
| `dingtalk_app_secret` [JIHU] | `true` | `bytea` | `` | `false` | `null` | `false` | `false`| `false` |
| `dingtalk_corpid` [JIHU] | `true` | `bytea` | `` | `false` | `null` | `false` | `false`| `false` |
| `dingtalk_integration_enabled` [JIHU] | `false` | `boolean` | `` | `true` | `false` | `false` | `false`| `false` |
| `disable_admin_oauth_scopes` | `false` | `boolean` | `boolean` | `true` | `false` | `false` | `false`| `true` |
| `disable_download_button` [JIHU] | `false` | `boolean` | `` | `true` | `false` | `false` | `false`| `false` |
| `disable_feed_token` | `false` | `boolean` | `boolean` | `true` | `false` | `false` | `false`| `true` |
| `disable_overriding_approvers_per_merge_request` | `false` | `boolean` | `boolean` | `true` | `false` | `false` | `false`| `true` |
| `disable_personal_access_tokens` | `false` | `boolean` | `boolean` | `true` | `false` | `false` | `false`| `true` |
| `disabled_oauth_sign_in_sources` | `false` | `text` | `array of strings` | `false` | `null` | `false` | `false`| `true` |
| `dns_rebinding_protection_enabled` | `false` | `boolean` | `boolean` | `true` | `true` | `false` | `false`| `true` |
| `domain_allowlist` | `false` | `text` | `array of strings` | `false` | `null` | `false` | `false`| `true` |
| `domain_denylist` | `false` | `text` | `array of strings` | `false` | `null` | `true` | `true`| `true` |
| `domain_denylist_enabled` | `false` | `boolean` | `boolean` | `false` | `false` | `true` | `true`| `true` |
| `dsa_key_restriction` | `false` | `integer` | `integer` | `true` | `'-1'::integer` | `false` | `false`| `true` |
| `duo_chat` | `false` | `jsonb` | `` | `true` | `'{}'::jsonb` | `false` | `true`| `false` |
| `duo_features_enabled` | `false` | `boolean` | `boolean` | `true` | `true` | `false` | `false`| `true` |
| `duo_workflow` | `false` | `jsonb` | `` | `false` | `'{}'::jsonb` | `true` | `true`| `false` |
| `ecdsa_key_restriction` | `false` | `integer` | `integer` | `true` | `0` | `false` | `false`| `true` |
| `ecdsa_sk_key_restriction` | `false` | `integer` | `integer` | `true` | `0` | `false` | `false`| `true` |
| `ed25519_key_restriction` | `false` | `integer` | `integer` | `true` | `0` | `false` | `false`| `true` |
| `ed25519_sk_key_restriction` | `false` | `integer` | `integer` | `true` | `0` | `false` | `false`| `true` |
| `editor_extensions` | `false` | `jsonb` | `` | `true` | `'{}'::jsonb` | `false` | `true`| `false` |
| `eks_access_key_id` | `false` | `character` | `string` | `false` | `null` | `true` | `true`| `true` |
| `eks_account_id` | `false` | `character` | `string` | `false` | `null` | `true` | `true`| `true` |
| `eks_integration_enabled` | `false` | `boolean` | `boolean` | `true` | `false` | `true` | `true`| `true` |
| `eks_secret_access_key` | `true` | `text` | `string` | `false` | `null` | `true` | `true`| `true` |
| `elasticsearch` | `false` | `jsonb` | `` | `true` | `'{}'::jsonb` | `true` | `false`| `false` |
| `elasticsearch_aws_secret_access_key` | `true` | `text` | `string` | `false` | `null` | `true` | `false`| `true` |
| `elasticsearch_password` | `true` | `bytea` | `string` | `false` | `null` | `true` | `false`| `true` |
| `elasticsearch_url` | `false` | `character` | `string` | `false` | `'http://localhost:9200'::character` | `true` | `false`| `true` |
| `email_additional_text` | `false` | `character` | `string` | `false` | `null` | `true` | `true`| `true` |
| `email_author_in_body` | `false` | `boolean` | `boolean` | `false` | `false` | `false` | `true`| `true` |
| `email_confirmation_setting` | `false` | `smallint` | `string` | `false` | `0` | `true` | `true`| `true` |
| `email_restrictions` | `false` | `text` | `string` | `false` | `null` | `true` | `true`| `true` |
| `email_restrictions_enabled` | `false` | `boolean` | `boolean` | `true` | `false` | `true` | `true`| `true` |
| `enable_artifact_external_redirect_warning_page` | `false` | `boolean` | `boolean` | `true` | `true` | `false` | `false`| `true` |
| `enable_member_promotion_management` | `false` | `boolean` | `` | `true` | `false` | `false` | `false`| `false` |
| `enabled_git_access_protocol` | `false` | `character` | `string` | `false` | `null` | `true` | `true`| `true` |
| `enforce_ci_inbound_job_token_scope_enabled` | `false` | `boolean` | `` | `true` | `true` | `false` | `false`| `false` |
| `enforce_namespace_storage_limit` | `false` | `boolean` | `boolean` | `true` | `false` | `false` | `false`| `true` |
| `enforce_terms` | `false` | `boolean` | `boolean` | `false` | `false` | `true` | `true`| `true` |
| `error_tracking_access_token` | `true` | `text` | `` | `false` | `null` | `true` | `true`| `false` |
| `error_tracking_api_url` | `false` | `text` | `` | `false` | `null` | `true` | `true`| `false` |
| `error_tracking_enabled` | `false` | `boolean` | `` | `true` | `false` | `true` | `true`| `false` |
| `external_auth_client_cert` | `false` | `text` | `string` | `false` | `null` | `false` | `false`| `true` |
| `external_auth_client_key` | `true` | `text` | `string` | `false` | `null` | `false` | `false`| `true` |
| `external_auth_client_key_pass` | `true` | `character` | `string` | `false` | `null` | `false` | `false`| `true` |
| `external_authorization_service_default_label` | `false` | `character` | `string` | `false` | `null` | `true` | `false`| `true` |
| `external_authorization_service_enabled` | `false` | `boolean` | `boolean` | `true` | `false` | `false` | `false`| `true` |
| `external_authorization_service_timeout` | `false` | `double` | `float` | `false` | `0.5` | `false` | `false`| `true` |
| `external_authorization_service_url` | `false` | `character` | `string` | `false` | `null` | `true` | `false`| `true` |
| `external_pipeline_validation_service_timeout` | `false` | `integer` | `integer` | `false` | `null` | `true` | `true`| `true` |
| `external_pipeline_validation_service_token` | `true` | `text` | `string` | `false` | `null` | `true` | `true`| `true` |
| `external_pipeline_validation_service_url` | `false` | `text` | `string` | `false` | `null` | `true` | `true`| `true` |
| `failed_login_attempts_unlock_period_in_minutes` | `false` | `integer` | `integer` | `false` | `null` | `false` | `false`| `true` |
| `feishu_app_key` [JIHU] | `true` | `bytea` | `` | `false` | `null` | `false` | `false`| `false` |
| `feishu_app_secret` [JIHU] | `true` | `bytea` | `` | `false` | `null` | `false` | `false`| `false` |
| `feishu_integration_enabled` [JIHU] | `false` | `boolean` | `` | `true` | `false` | `false` | `false`| `false` |
| `file_template_project_id` | `false` | `bigint` | `integer` | `false` | `null` | `false` | `false`| `true` |
| `first_day_of_week` | `false` | `integer` | `integer` | `true` | `0` | `false` | `false`| `true` |
| `floc_enabled` | `false` | `boolean` | `` | `true` | `false` | `false` | `false`| `false` |
| `force_pages_access_control` | `false` | `boolean` | `` | `true` | `false` | `false` | `false`| `false` |
| `future_subscriptions` | `false` | `jsonb` | `` | `true` | `'[]'::jsonb` | `false` | `false`| `false` |
| `geo_node_allowed_ips` | `false` | `character` | `string` | `false` | `'0.0.0.0/0` | `false` | `false`| `true` |
| `geo_status_timeout` | `false` | `integer` | `integer` | `false` | `10` | `true` | `false`| `true` |
| `git_rate_limit_users_alertlist` | `false` | `integer[]` | `array of integers` | `true` | `'{}'::integer[]` | `false` | `false`| `true` |
| `git_rate_limit_users_allowlist` | `false` | `text[]` | `array of strings` | `true` | `'{}'::text[]` | `false` | `false`| `true` |
| `git_two_factor_session_expiry` | `false` | `integer` | `integer` | `true` | `15` | `false` | `false`| `true` |
| `gitaly_timeout_default` | `false` | `integer` | `integer` | `true` | `55` | `false` | `false`| `true` |
| `gitaly_timeout_fast` | `false` | `integer` | `integer` | `true` | `10` | `false` | `false`| `true` |
| `gitaly_timeout_medium` | `false` | `integer` | `integer` | `true` | `30` | `false` | `false`| `true` |
| `gitlab_dedicated_instance` | `false` | `boolean` | `boolean` | `true` | `false` | `false` | `false`| `true` |
| `gitlab_shell_operation_limit` | `false` | `integer` | `integer` | `false` | `600` | `false` | `false`| `true` |
| `gitpod_enabled` | `false` | `boolean` | `boolean` | `true` | `false` | `true` | `false`| `true` |
| `gitpod_url` | `false` | `text` | `string` | `false` | `'https://gitpod.io/'::text` | `false` | `false`| `true` |
| `globally_allowed_ips` | `false` | `text` | `string` | `true` | `''::text` | `true` | `true`| `true` |
| `grafana_enabled` | `false` | `boolean` | `boolean` | `true` | `false` | `false` | `false`| `true` |
| `grafana_url` | `false` | `character` | `string` | `true` | `'/-/grafana'::character` | `false` | `false`| `true` |
| `gravatar_enabled` | `false` | `boolean` | `boolean` | `false` | `null` | `true` | `true`| `true` |
| `group_download_export_limit` | `false` | `integer` | `` | `true` | `1` | `false` | `false`| `false` |
| `group_export_limit` | `false` | `integer` | `` | `true` | `6` | `false` | `false`| `false` |
| `group_import_limit` | `false` | `integer` | `` | `true` | `6` | `false` | `false`| `false` |
| `group_owners_can_manage_default_branch_protection` | `false` | `boolean` | `boolean` | `true` | `true` | `false` | `false`| `true` |
| `group_runner_token_expiration_interval` | `false` | `integer` | `integer` | `false` | `null` | `false` | `false`| `true` |
| `group_settings` | `false` | `jsonb` | `` | `true` | `'{}'::jsonb` | `true` | `false`| `false` |
| `hashed_storage_enabled` | `false` | `boolean` | `boolean` | `true` | `true` | `false` | `false`| `true` |
| `health_check_access_token` | `false` | `character` | `` | `false` | `null` | `true` | `true`| `false` |
| `help_page_documentation_base_url` | `false` | `text` | `` | `false` | `null` | `true` | `true`| `false` |
| `help_page_hide_commercial_content` | `false` | `boolean` | `boolean` | `false` | `false` | `false` | `true`| `true` |
| `help_page_support_url` | `false` | `character` | `string` | `false` | `null` | `true` | `true`| `true` |
| `help_page_text` | `false` | `text` | `string` | `false` | `null` | `true` | `true`| `true` |
| `hide_third_party_offers` | `false` | `boolean` | `boolean` | `true` | `false` | `false` | `false`| `true` |
| `home_page_url` | `false` | `character` | `string` | `false` | `null` | `true` | `true`| `true` |
| `housekeeping_bitmaps_enabled` | `false` | `boolean` | `boolean` | `true` | `true` | `false` | `false`| `true` |
| `housekeeping_enabled` | `false` | `boolean` | `boolean` | `true` | `true` | `false` | `false`| `true` |
| `housekeeping_full_repack_period` | `false` | `integer` | `integer` | `true` | `50` | `false` | `false`| `true` |
| `housekeeping_gc_period` | `false` | `integer` | `integer` | `true` | `200` | `false` | `false`| `true` |
| `housekeeping_incremental_repack_period` | `false` | `integer` | `integer` | `true` | `10` | `false` | `false`| `true` |
| `html_emails_enabled` | `false` | `boolean` | `boolean` | `false` | `true` | `false` | `false`| `true` |
| `id` | `false` | `bigint` | `` | `true` | `???` | `false` | `false`| `false` |
| `identity_verification_settings` | `false` | `jsonb` | `` | `true` | `'{}'::jsonb` | `true` | `true`| `false` |
| `import_sources` | `false` | `text` | `array of strings` | `false` | `null` | `true` | `true`| `true` |
| `importers` | `false` | `jsonb` | `` | `true` | `'{}'::jsonb` | `true` | `true`| `false` |
| `inactive_projects_delete_after_months` | `false` | `integer` | `` | `true` | `2` | `false` | `false`| `false` |
| `inactive_projects_min_size_mb` | `false` | `integer` | `` | `true` | `0` | `false` | `false`| `false` |
| `inactive_projects_send_warning_email_after_months` | `false` | `integer` | `` | `true` | `1` | `false` | `false`| `false` |
| `include_optional_metrics_in_service_ping` | `false` | `boolean` | `boolean` | `true` | `true` | `false` | `false`| `true` |
| `instance_level_ai_beta_features_enabled` | `false` | `boolean` | `` | `true` | `false` | `false` | `false`| `false` |
| `integrations` | `false` | `jsonb` | `` | `true` | `'{}'::jsonb` | `true` | `true`| `false` |
| `invisible_captcha_enabled` | `false` | `boolean` | `boolean` | `true` | `false` | `true` | `true`| `true` |
| `invitation_flow_enforcement` | `false` | `boolean` | `` | `true` | `false` | `false` | `false`| `false` |
| `issues_create_limit` | `false` | `integer` | `integer` | `true` | `0` | `true` | `true`| `true` |
| `jira_connect_application_key` | `false` | `text` | `string` | `false` | `null` | `true` | `true`| `true` |
| `jira_connect_proxy_url` | `false` | `text` | `string` | `false` | `null` | `true` | `true`| `true` |
| `jira_connect_public_key_storage_enabled` | `false` | `boolean` | `boolean` | `true` | `false` | `true` | `true`| `true` |
| `jobs_per_stage_page_size` | `false` | `integer` | `` | `true` | `200` | `false` | `false`| `false` |
| `keep_latest_artifact` | `false` | `boolean` | `boolean` | `true` | `true` | `false` | `false`| `true` |
| `kroki_enabled` | `false` | `boolean` | `boolean` | `false` | `null` | `false` | `false`| `true` |
| `kroki_formats` | `false` | `jsonb` | `object` | `true` | `'{}'::jsonb` | `false` | `false`| `true` |
| `kroki_url` | `false` | `character` | `string` | `false` | `null` | `false` | `false`| `true` |
| `lets_encrypt_notification_email` | `false` | `character` | `` | `false` | `null` | `true` | `true`| `false` |
| `lets_encrypt_private_key` | `true` | `text` | `` | `false` | `null` | `true` | `true`| `false` |
| `lets_encrypt_terms_of_service_accepted` | `false` | `boolean` | `` | `true` | `false` | `true` | `true`| `false` |
| `license_trial_ends_on` | `false` | `date` | `` | `false` | `null` | `false` | `false`| `false` |
| `license_usage_data_exported` | `false` | `boolean` | `` | `true` | `false` | `false` | `false`| `false` |
| `local_markdown_version` | `false` | `integer` | `integer` | `true` | `0` | `true` | `false`| `true` |
| `lock_duo_features_enabled` | `false` | `boolean` | `boolean` | `true` | `false` | `false` | `false`| `true` |
| `lock_math_rendering_limits_enabled` | `false` | `boolean` | `` | `true` | `false` | `false` | `false`| `false` |
| `lock_memberships_to_ldap` | `false` | `boolean` | `` | `true` | `false` | `false` | `false`| `false` |
| `lock_memberships_to_saml` | `false` | `boolean` | `boolean` | `true` | `false` | `false` | `false`| `true` |
| `lock_model_prompt_cache_enabled` | `false` | `boolean` | `` | `true` | `false` | `false` | `false`| `false` |
| `lock_spp_repository_pipeline_access` | `false` | `boolean` | `` | `true` | `false` | `false` | `false`| `false` |
| `lock_web_based_commit_signing_enabled` | `false` | `boolean` | `` | `true` | `false` | `false` | `false`| `false` |
| `login_recaptcha_protection_enabled` | `false` | `boolean` | `boolean` | `true` | `false` | `false` | `false`| `true` |
| `mailgun_events_enabled` | `false` | `boolean` | `boolean` | `true` | `false` | `true` | `true`| `true` |
| `mailgun_signing_key` | `true` | `bytea` | `string` | `false` | `null` | `true` | `true`| `true` |
| `maintenance_mode` | `false` | `boolean` | `boolean` | `true` | `false` | `false` | `false`| `true` |
| `maintenance_mode_message` | `false` | `text` | `string` | `false` | `null` | `false` | `false`| `true` |
| `make_profile_private` | `false` | `boolean` | `` | `true` | `true` | `false` | `false`| `false` |
| `math_rendering_limits_enabled` | `false` | `boolean` | `` | `true` | `true` | `false` | `false`| `false` |
| `max_artifacts_content_include_size` | `false` | `integer` | `` | `true` | `5242880` | `false` | `false`| `false` |
| `max_artifacts_size` | `false` | `integer` | `integer` | `true` | `100` | `true` | `false`| `true` |
| `max_attachment_size` | `false` | `integer` | `integer` | `true` | `100` | `false` | `false`| `true` |
| `max_decompressed_archive_size` | `false` | `integer` | `integer` | `true` | `25600` | `false` | `false`| `true` |
| `max_export_size` | `false` | `integer` | `integer` | `false` | `0` | `true` | `true`| `true` |
| `max_import_remote_file_size` | `false` | `bigint` | `integer` | `true` | `10240` | `false` | `false`| `true` |
| `max_import_size` | `false` | `integer` | `integer` | `true` | `0` | `true` | `true`| `true` |
| `max_login_attempts` | `false` | `integer` | `integer` | `false` | `null` | `false` | `false`| `true` |
| `max_number_of_repository_downloads` | `false` | `smallint` | `integer` | `true` | `0` | `false` | `false`| `true` |
| `max_number_of_repository_downloads_within_time_period` | `false` | `integer` | `integer` | `true` | `0` | `false` | `false`| `true` |
| `max_number_of_vulnerabilities_per_project` | `false` | `integer` | `` | `false` | `null` | `false` | `false`| `false` |
| `max_pages_custom_domains_per_project` | `false` | `integer` | `` | `true` | `0` | `true` | `false`| `false` |
| `max_pages_size` | `false` | `integer` | `integer` | `true` | `100` | `true` | `false`| `true` |
| `max_personal_access_token_lifetime` | `false` | `integer` | `integer` | `false` | `null` | `false` | `false`| `true` |
| `max_ssh_key_lifetime` | `false` | `integer` | `integer` | `false` | `null` | `false` | `false`| `true` |
| `max_terraform_state_size_bytes` | `false` | `integer` | `integer` | `true` | `0` | `false` | `false`| `true` |
| `max_yaml_depth` | `false` | `integer` | `integer` | `true` | `100` | `false` | `false`| `true` |
| `max_yaml_size_bytes` | `false` | `bigint` | `integer` | `true` | `2097152` | `false` | `false`| `true` |
| `metrics_enabled` | `false` | `boolean` | `` | `false` | `false` | `true` | `true`| `false` |
| `metrics_host` | `false` | `character` | `` | `false` | `'localhost'::character` | `false` | `false`| `false` |
| `metrics_method_call_threshold` | `false` | `integer` | `integer` | `false` | `10` | `true` | `true`| `true` |
| `metrics_packet_size` | `false` | `integer` | `` | `false` | `1` | `true` | `true`| `false` |
| `metrics_pool_size` | `false` | `integer` | `` | `false` | `16` | `false` | `true`| `false` |
| `metrics_port` | `false` | `integer` | `` | `false` | `8089` | `true` | `true`| `false` |
| `metrics_sample_interval` | `false` | `integer` | `` | `false` | `15` | `false` | `true`| `false` |
| `metrics_timeout` | `false` | `integer` | `` | `false` | `10` | `false` | `true`| `false` |
| `minimum_password_length` | `false` | `integer` | `integer` | `true` | `8` | `false` | `false`| `true` |
| `mirror_available` | `false` | `boolean` | `boolean` | `true` | `true` | `false` | `false`| `true` |
| `mirror_capacity_threshold` | `false` | `integer` | `integer` | `true` | `50` | `true` | `false`| `true` |
| `mirror_max_capacity` | `false` | `integer` | `integer` | `true` | `100` | `true` | `false`| `true` |
| `mirror_max_delay` | `false` | `integer` | `integer` | `true` | `300` | `true` | `false`| `true` |
| `model_prompt_cache_enabled` | `false` | `boolean` | `` | `true` | `true` | `false` | `false`| `false` |
| `namespace_aggregation_schedule_lease_duration_in_seconds` | `false` | `integer` | `` | `true` | `300` | `false` | `false`| `false` |
| `namespace_storage_forks_cost_factor` | `false` | `double` | `` | `true` | `1.0` | `true` | `false`| `false` |
| `new_user_signups_cap` | `false` | `integer` | `` | `false` | `null` | `false` | `false`| `false` |
| `notes_create_limit` | `false` | `integer` | `` | `true` | `300` | `true` | `true`| `false` |
| `notes_create_limit_allowlist` | `false` | `text[]` | `` | `true` | `'{}'::text[]` | `true` | `true`| `false` |
| `notify_on_unknown_sign_in` | `false` | `boolean` | `boolean` | `true` | `true` | `false` | `false`| `true` |
| `oauth_provider` | `false` | `jsonb` | `` | `true` | `'{}'::jsonb` | `true` | `true`| `false` |
| `observability_backend_ssl_verification_enabled` | `false` | `boolean` | `` | `true` | `true` | `false` | `false`| `false` |
| `observability_settings` | `false` | `jsonb` | `` | `true` | `'{}'::jsonb` | `true` | `false`| `false` |
| `outbound_local_requests_whitelist` | `false` | `character` | `array of strings` | `true` | `'{}'::character` | `true` | `true`| `true` |
| `package_metadata_purl_types` | `false` | `smallint[]` | `array of integers` | `false` | `'{1` | `false` | `false`| `true` |
| `package_registry` | `false` | `jsonb` | `` | `true` | `'{}'::jsonb` | `true` | `false`| `false` |
| `pages` | `false` | `jsonb` | `` | `true` | `'{}'::jsonb` | `true` | `false`| `false` |
| `pages_domain_verification_enabled` | `false` | `boolean` | `boolean` | `true` | `true` | `false` | `false`| `true` |
| `password_authentication_enabled_for_git` | `false` | `boolean` | `boolean` | `true` | `true` | `false` | `false`| `true` |
| `password_authentication_enabled_for_web` | `false` | `boolean` | `boolean` | `false` | `null` | `true` | `false`| `true` |
| `password_expiration_enabled` [JIHU] | `false` | `boolean` | `` | `true` | `false` | `false` | `false`| `false` |
| `password_expires_in_days` [JIHU] | `false` | `integer` | `` | `true` | `90` | `false` | `false`| `false` |
| `password_expires_notice_before_days` [JIHU] | `false` | `integer` | `` | `true` | `7` | `false` | `false`| `false` |
| `password_lowercase_required` | `false` | `boolean` | `boolean` | `true` | `false` | `false` | `false`| `true` |
| `password_number_required` | `false` | `boolean` | `boolean` | `true` | `false` | `false` | `false`| `true` |
| `password_symbol_required` | `false` | `boolean` | `boolean` | `true` | `false` | `false` | `false`| `true` |
| `password_uppercase_required` | `false` | `boolean` | `boolean` | `true` | `false` | `false` | `false`| `true` |
| `performance_bar_allowed_group_id` | `false` | `bigint` | `string` | `false` | `null` | `true` | `false`| `true` |
| `personal_access_token_prefix` | `false` | `text` | `string` | `false` | `'glpat-'::text` | `false` | `false`| `true` |
| `phone_verification_code_enabled` [JIHU] | `false` | `boolean` | `` | `true` | `false` | `false` | `false`| `false` |
| `pipeline_limit_per_project_user_sha` | `false` | `integer` | `integer` | `true` | `0` | `true` | `false`| `true` |
| `plantuml_enabled` | `false` | `boolean` | `boolean` | `false` | `null` | `true` | `true`| `true` |
| `plantuml_url` | `false` | `character` | `string` | `false` | `null` | `true` | `true`| `true` |
| `polling_interval_multiplier` | `false` | `numeric` | `float` | `true` | `1.0` | `false` | `false`| `true` |
| `pre_receive_secret_detection_enabled` | `false` | `boolean` | `` | `true` | `false` | `false` | `true`| `false` |
| `prevent_merge_requests_author_approval` | `false` | `boolean` | `boolean` | `true` | `false` | `false` | `false`| `true` |
| `prevent_merge_requests_committers_approval` | `false` | `boolean` | `boolean` | `true` | `false` | `false` | `false`| `true` |
| `product_analytics_configurator_connection_string` | `true` | `bytea` | `` | `false` | `null` | `true` | `false`| `false` |
| `product_analytics_data_collector_host` | `false` | `text` | `` | `false` | `null` | `true` | `false`| `false` |
| `product_analytics_enabled` | `false` | `boolean` | `` | `true` | `false` | `true` | `false`| `false` |
| `productivity_analytics_start_date` | `false` | `timestamp` | `` | `false` | `null` | `true` | `false`| `false` |
| `project_download_export_limit` | `false` | `integer` | `` | `true` | `1` | `false` | `false`| `false` |
| `project_export_enabled` | `false` | `boolean` | `boolean` | `true` | `true` | `false` | `false`| `true` |
| `project_export_limit` | `false` | `integer` | `` | `true` | `6` | `false` | `false`| `false` |
| `project_import_limit` | `false` | `integer` | `` | `true` | `6` | `false` | `false`| `false` |
| `project_jobs_api_rate_limit` | `false` | `integer` | `integer` | `true` | `600` | `false` | `false`| `true` |
| `project_runner_token_expiration_interval` | `false` | `integer` | `integer` | `false` | `null` | `false` | `false`| `true` |
| `projects_api_rate_limit_unauthenticated` | `false` | `integer` | `integer` | `true` | `400` | `false` | `false`| `true` |
| `prometheus_alert_db_indicators_settings` | `false` | `jsonb` | `` | `false` | `null` | `true` | `false`| `false` |
| `prometheus_metrics_enabled` | `false` | `boolean` | `boolean` | `true` | `true` | `false` | `false`| `true` |
| `protected_ci_variables` | `false` | `boolean` | `boolean` | `true` | `true` | `false` | `false`| `true` |
| `protected_paths` | `false` | `character` | `` | `false` | `'{/users/password` | `false` | `false`| `false` |
| `protected_paths_for_get_request` | `false` | `text[]` | `` | `true` | `'{}'::text[]` | `false` | `false`| `false` |
| `pseudonymizer_enabled` | `false` | `boolean` | `` | `true` | `false` | `false` | `false`| `false` |
| `public_runner_releases_url` | `false` | `text` | `` | `true` | `'https://gitlab.com/api/v4/projects/gitlab-org%2Fgitlab-runner/releases'::text` | `false` | `false`| `false` |
| `push_event_activities_limit` | `false` | `integer` | `integer` | `true` | `3` | `false` | `false`| `true` |
| `push_event_hooks_limit` | `false` | `integer` | `integer` | `true` | `3` | `false` | `false`| `true` |
| `push_rule_id` | `false` | `bigint` | `` | `false` | `null` | `true` | `false`| `false` |
| `rate_limiting_response_text` | `false` | `text` | `string` | `false` | `null` | `true` | `true`| `true` |
| `rate_limits` | `false` | `jsonb` | `` | `true` | `'{}'::jsonb` | `true` | `true`| `false` |
| `rate_limits_unauthenticated_git_http` | `false` | `jsonb` | `` | `true` | `'{}'::jsonb` | `true` | `true`| `false` |
| `raw_blob_request_limit` | `false` | `integer` | `integer` | `true` | `300` | `false` | `false`| `true` |
| `recaptcha_enabled` | `false` | `boolean` | `boolean` | `false` | `false` | `true` | `true`| `true` |
| `recaptcha_private_key` | `true` | `text` | `string` | `false` | `null` | `true` | `true`| `true` |
| `recaptcha_site_key` | `true` | `text` | `string` | `false` | `null` | `true` | `true`| `true` |
| `receive_max_input_size` | `false` | `integer` | `integer` | `false` | `null` | `true` | `false`| `true` |
| `remember_me_enabled` | `false` | `boolean` | `boolean` | `true` | `true` | `false` | `false`| `true` |
| `repository_checks_enabled` | `false` | `boolean` | `boolean` | `false` | `false` | `false` | `false`| `true` |
| `repository_size_limit` | `false` | `bigint` | `integer` | `false` | `0` | `true` | `true`| `true` |
| `repository_storages` | `false` | `character` | `` | `false` | `'default'::character` | `true` | `false`| `false` |
| `repository_storages_weighted` | `false` | `jsonb` | `hash of strings to integers` | `true` | `'{}'::jsonb` | `true` | `true`| `true` |
| `require_admin_approval_after_user_signup` | `false` | `boolean` | `boolean` | `true` | `true` | `true` | `false`| `true` |
| `require_admin_two_factor_authentication` | `false` | `boolean` | `boolean` | `true` | `false` | `true` | `true`| `true` |
| `require_personal_access_token_expiry` | `false` | `boolean` | `boolean` | `true` | `true` | `false` | `false`| `true` |
| `require_two_factor_authentication` | `false` | `boolean` | `boolean` | `false` | `false` | `false` | `false`| `true` |
| `required_instance_ci_template` | `false` | `text` | `` | `false` | `null` | `false` | `false`| `false` |
| `resource_access_tokens_settings` | `false` | `jsonb` | `` | `true` | `'{}'::jsonb` | `false` | `true`| `false` |
| `resource_usage_limits` | `false` | `jsonb` | `hash` | `true` | `'{}'::jsonb` | `false` | `false`| `true` |
| `response_limits` | `false` | `jsonb` | `` | `true` | `'{}'::jsonb` | `true` | `true`| `false` |
| `restricted_visibility_levels` | `false` | `text` | `array of strings` | `false` | `null` | `true` | `false`| `true` |
| `rsa_key_restriction` | `false` | `integer` | `integer` | `true` | `0` | `false` | `false`| `true` |
| `runner_token_expiration_interval` | `false` | `integer` | `integer` | `false` | `null` | `false` | `false`| `true` |
| `runners_registration_token` | `true` | `character` | `` | `false` | `null` | `true` | `false`| `false` |
| `sdrs_enabled` | `false` | `boolean` | `` | `true` | `false` | `false` | `true`| `false` |
| `sdrs_jwt_signing_key` | `true` | `jsonb` | `` | `false` | `null` | `false` | `true`| `false` |
| `sdrs_url` | `false` | `text` | `` | `false` | `null` | `false` | `true`| `false` |
| `search` | `false` | `jsonb` | `` | `true` | `'{}'::jsonb` | `true` | `false`| `false` |
| `search_max_docs_denominator` | `false` | `integer` | `` | `true` | `5000000` | `false` | `false`| `false` |
| `search_max_shard_size_gb` | `false` | `integer` | `` | `true` | `50` | `false` | `false`| `false` |
| `search_min_docs_before_rollover` | `false` | `integer` | `` | `true` | `100000` | `false` | `false`| `false` |
| `search_rate_limit` | `false` | `integer` | `integer` | `true` | `300` | `true` | `false`| `true` |
| `search_rate_limit_allowlist` | `false` | `text[]` | `` | `true` | `'{}'::text[]` | `true` | `false`| `false` |
| `search_rate_limit_unauthenticated` | `false` | `integer` | `integer` | `true` | `100` | `false` | `false`| `true` |
| `secret_detection_revocation_token_types_url` | `false` | `text` | `` | `false` | `null` | `true` | `false`| `false` |
| `secret_detection_service_auth_token` | `true` | `bytea` | `` | `false` | `null` | `true` | `false`| `false` |
| `secret_detection_service_url` | `false` | `text` | `` | `true` | `''::text` | `true` | `false`| `false` |
| `secret_detection_token_revocation_enabled` | `false` | `boolean` | `` | `true` | `false` | `true` | `false`| `false` |
| `secret_detection_token_revocation_token` | `true` | `text` | `` | `false` | `null` | `true` | `false`| `false` |
| `secret_detection_token_revocation_url` | `false` | `text` | `` | `false` | `null` | `true` | `false`| `false` |
| `secret_push_protection_available` | `false` | `boolean` | `boolean` | `false` | `false` | `true` | `true`| `true` |
| `security_and_compliance_settings` | `false` | `jsonb` | `` | `true` | `'{}'::jsonb` | `false` | `false`| `false` |
| `security_approval_policies_limit` | `false` | `integer` | `integer` | `true` | `5` | `false` | `false`| `true` |
| `security_policies` | `false` | `jsonb` | `` | `true` | `'{}'::jsonb` | `true` | `false`| `false` |
| `security_policy_global_group_approvers_enabled` | `false` | `boolean` | `boolean` | `true` | `true` | `true` | `false`| `true` |
| `security_txt_content` | `false` | `text` | `string` | `false` | `null` | `true` | `true`| `true` |
| `sentry_clientside_dsn` | `false` | `text` | `` | `false` | `null` | `true` | `false`| `false` |
| `sentry_clientside_traces_sample_rate` | `false` | `double` | `` | `true` | `0.0` | `true` | `false`| `false` |
| `sentry_dsn` | `false` | `text` | `` | `false` | `null` | `true` | `false`| `false` |
| `sentry_enabled` | `false` | `boolean` | `` | `true` | `false` | `true` | `false`| `false` |
| `sentry_environment` | `false` | `text` | `` | `false` | `null` | `true` | `false`| `false` |
| `service_access_tokens_expiration_enforced` | `false` | `boolean` | `boolean` | `true` | `true` | `false` | `false`| `true` |
| `service_ping_settings` | `false` | `jsonb` | `` | `true` | `'{}'::jsonb` | `true` | `false`| `false` |
| `session_expire_delay` | `false` | `integer` | `integer` | `true` | `10080` | `false` | `false`| `true` |
| `shared_runners_enabled` | `false` | `boolean` | `boolean` | `true` | `true` | `false` | `false`| `true` |
| `shared_runners_minutes` | `false` | `integer` | `integer` | `true` | `0` | `true` | `false`| `true` |
| `shared_runners_text` | `false` | `text` | `string` | `false` | `null` | `true` | `false`| `true` |
| `show_migrate_from_jenkins_banner` | `false` | `boolean` | `` | `true` | `true` | `false` | `false`| `false` |
| `sidekiq_job_limiter_compression_threshold_bytes` | `false` | `integer` | `integer` | `true` | `100000` | `false` | `false`| `true` |
| `sidekiq_job_limiter_limit_bytes` | `false` | `integer` | `integer` | `true` | `0` | `true` | `false`| `true` |
| `sidekiq_job_limiter_mode` | `false` | `smallint` | `string` | `true` | `1` | `false` | `false`| `true` |
| `sign_in_restrictions` | `false` | `jsonb` | `hash` | `true` | `'{}'::jsonb` | `false` | `false`| `true` |
| `signup_enabled` | `false` | `boolean` | `boolean` | `false` | `null` | `true` | `false`| `true` |
| `silent_mode_enabled` | `false` | `boolean` | `boolean` | `true` | `false` | `false` | `false`| `true` |
| `slack_app_enabled` | `false` | `boolean` | `boolean` | `false` | `false` | `true` | `false`| `true` |
| `slack_app_id` | `false` | `character` | `string` | `false` | `null` | `true` | `false`| `true` |
| `slack_app_secret` | `true` | `text` | `string` | `false` | `null` | `true` | `false`| `true` |
| `slack_app_signing_secret` | `true` | `bytea` | `string` | `false` | `null` | `true` | `false`| `true` |
| `slack_app_verification_token` | `true` | `text` | `string` | `false` | `null` | `true` | `false`| `true` |
| `snippet_size_limit` | `false` | `bigint` | `integer` | `true` | `52428800` | `false` | `false`| `true` |
| `snowplow_app_id` | `false` | `character` | `string` | `false` | `null` | `true` | `false`| `true` |
| `snowplow_collector_hostname` | `false` | `character` | `string` | `false` | `null` | `true` | `false`| `true` |
| `snowplow_cookie_domain` | `false` | `character` | `string` | `false` | `null` | `true` | `false`| `true` |
| `snowplow_database_collector_hostname` | `false` | `text` | `string` | `false` | `null` | `false` | `false`| `true` |
| `snowplow_enabled` | `false` | `boolean` | `boolean` | `true` | `false` | `true` | `false`| `true` |
| `sourcegraph_enabled` | `false` | `boolean` | `boolean` | `true` | `false` | `true` | `false`| `true` |
| `sourcegraph_public_only` | `false` | `boolean` | `boolean` | `true` | `true` | `false` | `false`| `true` |
| `sourcegraph_url` | `false` | `character` | `string` | `false` | `null` | `true` | `false`| `true` |
| `spam_check_api_key` | `true` | `bytea` | `string` | `false` | `null` | `true` | `true`| `true` |
| `spam_check_endpoint_enabled` | `false` | `boolean` | `boolean` | `true` | `false` | `true` | `true`| `true` |
| `spam_check_endpoint_url` | `false` | `text` | `string` | `false` | `null` | `true` | `true`| `true` |
| `spp_repository_pipeline_access` | `false` | `boolean` | `` | `true` | `false` | `false` | `false`| `false` |
| `static_objects_external_storage_auth_token` | `true` | `text` | `string` | `false` | `null` | `true` | `false`| `true` |
| `static_objects_external_storage_url` | `false` | `character` | `string` | `false` | `null` | `true` | `false`| `true` |
| `suggest_pipeline_enabled` | `false` | `boolean` | `boolean` | `true` | `true` | `false` | `false`| `true` |
| `telesign_api_key` | `true` | `bytea` | `` | `false` | `null` | `true` | `true`| `false` |
| `telesign_customer_xid` | `true` | `bytea` | `` | `false` | `null` | `true` | `true`| `false` |
| `terminal_max_session_time` | `false` | `integer` | `integer` | `true` | `0` | `false` | `false`| `true` |
| `throttle_authenticated_api_enabled` | `false` | `boolean` | `boolean` | `true` | `false` | `true` | `false`| `true` |
| `throttle_authenticated_api_period_in_seconds` | `false` | `integer` | `integer` | `true` | `3600` | `true` | `false`| `true` |
| `throttle_authenticated_api_requests_per_period` | `false` | `integer` | `integer` | `true` | `7200` | `true` | `false`| `true` |
| `throttle_authenticated_deprecated_api_enabled` | `false` | `boolean` | `` | `true` | `false` | `false` | `false`| `false` |
| `throttle_authenticated_deprecated_api_period_in_seconds` | `false` | `integer` | `` | `true` | `3600` | `true` | `false`| `false` |
| `throttle_authenticated_deprecated_api_requests_per_period` | `false` | `integer` | `` | `true` | `3600` | `false` | `false`| `false` |
| `throttle_authenticated_files_api_enabled` | `false` | `boolean` | `` | `true` | `false` | `false` | `false`| `false` |
| `throttle_authenticated_files_api_period_in_seconds` | `false` | `integer` | `` | `true` | `15` | `false` | `false`| `false` |
| `throttle_authenticated_files_api_requests_per_period` | `false` | `integer` | `` | `true` | `500` | `false` | `false`| `false` |
| `throttle_authenticated_git_lfs_enabled` | `false` | `boolean` | `` | `true` | `false` | `false` | `false`| `false` |
| `throttle_authenticated_git_lfs_period_in_seconds` | `false` | `integer` | `` | `true` | `60` | `false` | `false`| `false` |
| `throttle_authenticated_git_lfs_requests_per_period` | `false` | `integer` | `` | `true` | `1000` | `false` | `false`| `false` |
| `throttle_authenticated_packages_api_enabled` | `false` | `boolean` | `boolean` | `true` | `false` | `false` | `false`| `true` |
| `throttle_authenticated_packages_api_period_in_seconds` | `false` | `integer` | `integer` | `true` | `15` | `false` | `false`| `true` |
| `throttle_authenticated_packages_api_requests_per_period` | `false` | `integer` | `integer` | `true` | `1000` | `false` | `false`| `true` |
| `throttle_authenticated_web_enabled` | `false` | `boolean` | `boolean` | `true` | `false` | `true` | `false`| `true` |
| `throttle_authenticated_web_period_in_seconds` | `false` | `integer` | `integer` | `true` | `3600` | `true` | `false`| `true` |
| `throttle_authenticated_web_requests_per_period` | `false` | `integer` | `integer` | `true` | `7200` | `true` | `false`| `true` |
| `throttle_incident_management_notification_enabled` | `false` | `boolean` | `` | `true` | `false` | `true` | `false`| `false` |
| `throttle_incident_management_notification_per_period` | `false` | `integer` | `` | `false` | `3600` | `false` | `false`| `false` |
| `throttle_incident_management_notification_period_in_seconds` | `false` | `integer` | `` | `false` | `3600` | `false` | `false`| `false` |
| `throttle_protected_paths_enabled` | `false` | `boolean` | `` | `true` | `false` | `true` | `false`| `false` |
| `throttle_protected_paths_period_in_seconds` | `false` | `integer` | `` | `true` | `60` | `false` | `false`| `false` |
| `throttle_protected_paths_requests_per_period` | `false` | `integer` | `` | `true` | `10` | `false` | `false`| `false` |
| `throttle_unauthenticated_api_enabled` | `false` | `boolean` | `boolean` | `true` | `false` | `true` | `false`| `true` |
| `throttle_unauthenticated_api_period_in_seconds` | `false` | `integer` | `integer` | `true` | `3600` | `true` | `false`| `true` |
| `throttle_unauthenticated_api_requests_per_period` | `false` | `integer` | `integer` | `true` | `3600` | `true` | `false`| `true` |
| `throttle_unauthenticated_deprecated_api_enabled` | `false` | `boolean` | `` | `true` | `false` | `false` | `false`| `false` |
| `throttle_unauthenticated_deprecated_api_period_in_seconds` | `false` | `integer` | `` | `true` | `3600` | `false` | `false`| `false` |
| `throttle_unauthenticated_deprecated_api_requests_per_period` | `false` | `integer` | `` | `true` | `1800` | `true` | `false`| `false` |
| `throttle_unauthenticated_enabled` | `false` | `boolean` | `boolean` | `true` | `false` | `true` | `false`| `true` |
| `throttle_unauthenticated_files_api_enabled` | `false` | `boolean` | `` | `true` | `false` | `false` | `false`| `false` |
| `throttle_unauthenticated_files_api_period_in_seconds` | `false` | `integer` | `` | `true` | `15` | `false` | `false`| `false` |
| `throttle_unauthenticated_files_api_requests_per_period` | `false` | `integer` | `` | `true` | `125` | `false` | `false`| `false` |
| `throttle_unauthenticated_packages_api_enabled` | `false` | `boolean` | `boolean` | `true` | `false` | `false` | `false`| `true` |
| `throttle_unauthenticated_packages_api_period_in_seconds` | `false` | `integer` | `integer` | `true` | `15` | `false` | `false`| `true` |
| `throttle_unauthenticated_packages_api_requests_per_period` | `false` | `integer` | `integer` | `true` | `800` | `false` | `false`| `true` |
| `throttle_unauthenticated_period_in_seconds` | `false` | `integer` | `integer` | `true` | `3600` | `true` | `false`| `true` |
| `throttle_unauthenticated_requests_per_period` | `false` | `integer` | `integer` | `true` | `3600` | `true` | `false`| `true` |
| `time_tracking_limit_to_hours` | `false` | `boolean` | `boolean` | `true` | `false` | `true` | `false`| `true` |
| `token_prefixes` | `false` | `jsonb` | `` | `true` | `'{}'::jsonb` | `false` | `true`| `false` |
| `transactional_emails` | `false` | `jsonb` | `` | `true` | `'{}'::jsonb` | `true` | `false`| `false` |
| `two_factor_grace_period` | `false` | `integer` | `integer` | `false` | `48` | `true` | `false`| `true` |
| `unconfirmed_users_delete_after_days` | `false` | `integer` | `integer` | `true` | `7` | `true` | `true`| `true` |
| `unique_ips_limit_enabled` | `false` | `boolean` | `boolean` | `true` | `false` | `false` | `false`| `true` |
| `unique_ips_limit_per_user` | `false` | `integer` | `integer` | `false` | `null` | `true` | `false`| `true` |
| `unique_ips_limit_time_window` | `false` | `integer` | `integer` | `false` | `null` | `true` | `false`| `true` |
| `update_namespace_name_rate_limit` | `false` | `smallint` | `` | `true` | `120` | `false` | `false`| `false` |
| `update_runner_versions_enabled` | `false` | `boolean` | `boolean` | `true` | `true` | `false` | `false`| `true` |
| `updated_at` | `false` | `timestamp` | `` | `false` | `null` | `true` | `false`| `false` |
| `updating_name_disabled_for_users` | `false` | `boolean` | `boolean` | `true` | `false` | `false` | `false`| `true` |
| `usage_ping_enabled` | `false` | `boolean` | `boolean` | `true` | `true` | `false` | `false`| `true` |
| `usage_ping_features_enabled` | `false` | `boolean` | `` | `true` | `false` | `false` | `false`| `false` |
| `usage_stats_set_by_user_id` | `false` | `bigint` | `` | `false` | `null` | `true` | `false`| `false` |
| `user_deactivation_emails_enabled` | `false` | `boolean` | `boolean` | `true` | `true` | `false` | `false`| `true` |
| `user_default_external` | `false` | `boolean` | `boolean` | `true` | `false` | `false` | `false`| `true` |
| `user_default_internal_regex` | `false` | `character` | `string` | `false` | `null` | `true` | `false`| `true` |
| `user_defaults_to_private_profile` | `false` | `boolean` | `boolean` | `true` | `false` | `false` | `false`| `true` |
| `user_oauth_applications` | `false` | `boolean` | `boolean` | `false` | `true` | `false` | `false`| `true` |
| `user_seat_management` | `false` | `jsonb` | `` | `true` | `'{}'::jsonb` | `false` | `true`| `false` |
| `user_show_add_ssh_key_message` | `false` | `boolean` | `boolean` | `true` | `true` | `false` | `false`| `true` |
| `users_get_by_id_limit` | `false` | `integer` | `` | `true` | `300` | `false` | `false`| `false` |
| `users_get_by_id_limit_allowlist` | `false` | `text[]` | `` | `true` | `'{}'::text[]` | `true` | `false`| `false` |
| `uuid` | `false` | `character` | `` | `false` | `null` | `true` | `true`| `false` |
| `valid_runner_registrars` | `false` | `character` | `array of strings` | `false` | `'{project` | `false` | `false`| `true` |
| `version_check_enabled` | `false` | `boolean` | `boolean` | `false` | `true` | `false` | `false`| `true` |
| `vertex_ai_host` | `false` | `text` | `` | `false` | `null` | `false` | `false`| `false` |
| `vertex_ai_project` | `false` | `text` | `` | `false` | `null` | `true` | `false`| `false` |
| `vscode_extension_marketplace` | `false` | `jsonb` | `hash` | `true` | `'{}'::jsonb` | `false` | `false`| `true` |
| `web_based_commit_signing_enabled` | `false` | `boolean` | `` | `true` | `false` | `false` | `false`| `false` |
| `web_ide_oauth_application_id` | `false` | `bigint` | `` | `false` | `null` | `true` | `false`| `false` |
| `whats_new_variant` | `false` | `smallint` | `string` | `false` | `0` | `false` | `false`| `true` |
| `wiki_asciidoc_allow_uri_includes` | `false` | `boolean` | `` | `true` | `false` | `false` | `false`| `false` |
| `wiki_page_max_content_bytes` | `false` | `bigint` | `integer` | `true` | `52428800` | `false` | `false`| `true` |
| `zoekt_settings` | `false` | `jsonb` | `` | `true` | `'{}'::jsonb` | `true` | `false`| `false` |
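
For context on how these columns surface at runtime, the snippet below is a minimal, illustrative sketch (not part of the generated analysis), assuming the standard `ApplicationSetting` singleton model that backs this table. It reads a few of the attributes listed above from a Rails console to show how the documented DB defaults map to values on a running instance.

```ruby
# Minimal sketch, assuming the standard ApplicationSetting model that backs
# this table. Run inside a Rails console on a GitLab instance.
settings = ApplicationSetting.current

# A few attributes from the table above, with their documented defaults:
settings.gitaly_timeout_default   # integer, default 55
settings.minimum_password_length  # integer, default 8
settings.kroki_formats            # jsonb, default {}
settings.signup_enabled           # boolean, nullable
```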
| `globally_allowed_ips` | `false` | `text` | `string` | `true` | `''::text` | `true` | `true`| `true` |
| `grafana_enabled` | `false` | `boolean` | `boolean` | `true` | `false` | `false` | `false`| `true` |
| `grafana_url` | `false` | `character` | `string` | `true` | `'/-/grafana'::character` | `false` | `false`| `true` |
| `gravatar_enabled` | `false` | `boolean` | `boolean` | `false` | `null` | `true` | `true`| `true` |
| `group_download_export_limit` | `false` | `integer` | `` | `true` | `1` | `false` | `false`| `false` |
| `group_export_limit` | `false` | `integer` | `` | `true` | `6` | `false` | `false`| `false` |
| `group_import_limit` | `false` | `integer` | `` | `true` | `6` | `false` | `false`| `false` |
| `group_owners_can_manage_default_branch_protection` | `false` | `boolean` | `boolean` | `true` | `true` | `false` | `false`| `true` |
| `group_runner_token_expiration_interval` | `false` | `integer` | `integer` | `false` | `null` | `false` | `false`| `true` |
| `group_settings` | `false` | `jsonb` | `` | `true` | `'{}'::jsonb` | `true` | `false`| `false` |
| `hashed_storage_enabled` | `false` | `boolean` | `boolean` | `true` | `true` | `false` | `false`| `true` |
| `health_check_access_token` | `false` | `character` | `` | `false` | `null` | `true` | `true`| `false` |
| `help_page_documentation_base_url` | `false` | `text` | `` | `false` | `null` | `true` | `true`| `false` |
| `help_page_hide_commercial_content` | `false` | `boolean` | `boolean` | `false` | `false` | `false` | `true`| `true` |
| `help_page_support_url` | `false` | `character` | `string` | `false` | `null` | `true` | `true`| `true` |
| `help_page_text` | `false` | `text` | `string` | `false` | `null` | `true` | `true`| `true` |
| `hide_third_party_offers` | `false` | `boolean` | `boolean` | `true` | `false` | `false` | `false`| `true` |
| `home_page_url` | `false` | `character` | `string` | `false` | `null` | `true` | `true`| `true` |
| `housekeeping_bitmaps_enabled` | `false` | `boolean` | `boolean` | `true` | `true` | `false` | `false`| `true` |
| `housekeeping_enabled` | `false` | `boolean` | `boolean` | `true` | `true` | `false` | `false`| `true` |
| `housekeeping_full_repack_period` | `false` | `integer` | `integer` | `true` | `50` | `false` | `false`| `true` |
| `housekeeping_gc_period` | `false` | `integer` | `integer` | `true` | `200` | `false` | `false`| `true` |
| `housekeeping_incremental_repack_period` | `false` | `integer` | `integer` | `true` | `10` | `false` | `false`| `true` |
| `html_emails_enabled` | `false` | `boolean` | `boolean` | `false` | `true` | `false` | `false`| `true` |
| `id` | `false` | `bigint` | `` | `true` | `???` | `false` | `false`| `false` |
| `identity_verification_settings` | `false` | `jsonb` | `` | `true` | `'{}'::jsonb` | `true` | `true`| `false` |
| `import_sources` | `false` | `text` | `array of strings` | `false` | `null` | `true` | `true`| `true` |
| `importers` | `false` | `jsonb` | `` | `true` | `'{}'::jsonb` | `true` | `true`| `false` |
| `inactive_projects_delete_after_months` | `false` | `integer` | `` | `true` | `2` | `false` | `false`| `false` |
| `inactive_projects_min_size_mb` | `false` | `integer` | `` | `true` | `0` | `false` | `false`| `false` |
| `inactive_projects_send_warning_email_after_months` | `false` | `integer` | `` | `true` | `1` | `false` | `false`| `false` |
| `include_optional_metrics_in_service_ping` | `false` | `boolean` | `boolean` | `true` | `true` | `false` | `false`| `true` |
| `instance_level_ai_beta_features_enabled` | `false` | `boolean` | `` | `true` | `false` | `false` | `false`| `false` |
| `integrations` | `false` | `jsonb` | `` | `true` | `'{}'::jsonb` | `true` | `true`| `false` |
| `invisible_captcha_enabled` | `false` | `boolean` | `boolean` | `true` | `false` | `true` | `true`| `true` |
| `invitation_flow_enforcement` | `false` | `boolean` | `` | `true` | `false` | `false` | `false`| `false` |
| `issues_create_limit` | `false` | `integer` | `integer` | `true` | `0` | `true` | `true`| `true` |
| `jira_connect_application_key` | `false` | `text` | `string` | `false` | `null` | `true` | `true`| `true` |
| `jira_connect_proxy_url` | `false` | `text` | `string` | `false` | `null` | `true` | `true`| `true` |
| `jira_connect_public_key_storage_enabled` | `false` | `boolean` | `boolean` | `true` | `false` | `true` | `true`| `true` |
| `jobs_per_stage_page_size` | `false` | `integer` | `` | `true` | `200` | `false` | `false`| `false` |
| `keep_latest_artifact` | `false` | `boolean` | `boolean` | `true` | `true` | `false` | `false`| `true` |
| `kroki_enabled` | `false` | `boolean` | `boolean` | `false` | `null` | `false` | `false`| `true` |
| `kroki_formats` | `false` | `jsonb` | `object` | `true` | `'{}'::jsonb` | `false` | `false`| `true` |
| `kroki_url` | `false` | `character` | `string` | `false` | `null` | `false` | `false`| `true` |
| `lets_encrypt_notification_email` | `false` | `character` | `` | `false` | `null` | `true` | `true`| `false` |
| `lets_encrypt_private_key` | `true` | `text` | `` | `false` | `null` | `true` | `true`| `false` |
| `lets_encrypt_terms_of_service_accepted` | `false` | `boolean` | `` | `true` | `false` | `true` | `true`| `false` |
| `license_trial_ends_on` | `false` | `date` | `` | `false` | `null` | `false` | `false`| `false` |
| `license_usage_data_exported` | `false` | `boolean` | `` | `true` | `false` | `false` | `false`| `false` |
| `local_markdown_version` | `false` | `integer` | `integer` | `true` | `0` | `true` | `false`| `true` |
| `lock_duo_features_enabled` | `false` | `boolean` | `boolean` | `true` | `false` | `false` | `false`| `true` |
| `lock_math_rendering_limits_enabled` | `false` | `boolean` | `` | `true` | `false` | `false` | `false`| `false` |
| `lock_memberships_to_ldap` | `false` | `boolean` | `` | `true` | `false` | `false` | `false`| `false` |
| `lock_memberships_to_saml` | `false` | `boolean` | `boolean` | `true` | `false` | `false` | `false`| `true` |
| `lock_model_prompt_cache_enabled` | `false` | `boolean` | `` | `true` | `false` | `false` | `false`| `false` |
| `lock_spp_repository_pipeline_access` | `false` | `boolean` | `` | `true` | `false` | `false` | `false`| `false` |
| `lock_web_based_commit_signing_enabled` | `false` | `boolean` | `` | `true` | `false` | `false` | `false`| `false` |
| `login_recaptcha_protection_enabled` | `false` | `boolean` | `boolean` | `true` | `false` | `false` | `false`| `true` |
| `mailgun_events_enabled` | `false` | `boolean` | `boolean` | `true` | `false` | `true` | `true`| `true` |
| `mailgun_signing_key` | `true` | `bytea` | `string` | `false` | `null` | `true` | `true`| `true` |
| `maintenance_mode` | `false` | `boolean` | `boolean` | `true` | `false` | `false` | `false`| `true` |
| `maintenance_mode_message` | `false` | `text` | `string` | `false` | `null` | `false` | `false`| `true` |
| `make_profile_private` | `false` | `boolean` | `` | `true` | `true` | `false` | `false`| `false` |
| `math_rendering_limits_enabled` | `false` | `boolean` | `` | `true` | `true` | `false` | `false`| `false` |
| `max_artifacts_content_include_size` | `false` | `integer` | `` | `true` | `5242880` | `false` | `false`| `false` |
| `max_artifacts_size` | `false` | `integer` | `integer` | `true` | `100` | `true` | `false`| `true` |
| `max_attachment_size` | `false` | `integer` | `integer` | `true` | `100` | `false` | `false`| `true` |
| `max_decompressed_archive_size` | `false` | `integer` | `integer` | `true` | `25600` | `false` | `false`| `true` |
| `max_export_size` | `false` | `integer` | `integer` | `false` | `0` | `true` | `true`| `true` |
| `max_import_remote_file_size` | `false` | `bigint` | `integer` | `true` | `10240` | `false` | `false`| `true` |
| `max_import_size` | `false` | `integer` | `integer` | `true` | `0` | `true` | `true`| `true` |
| `max_login_attempts` | `false` | `integer` | `integer` | `false` | `null` | `false` | `false`| `true` |
| `max_number_of_repository_downloads` | `false` | `smallint` | `integer` | `true` | `0` | `false` | `false`| `true` |
| `max_number_of_repository_downloads_within_time_period` | `false` | `integer` | `integer` | `true` | `0` | `false` | `false`| `true` |
| `max_number_of_vulnerabilities_per_project` | `false` | `integer` | `` | `false` | `null` | `false` | `false`| `false` |
| `max_pages_custom_domains_per_project` | `false` | `integer` | `` | `true` | `0` | `true` | `false`| `false` |
| `max_pages_size` | `false` | `integer` | `integer` | `true` | `100` | `true` | `false`| `true` |
| `max_personal_access_token_lifetime` | `false` | `integer` | `integer` | `false` | `null` | `false` | `false`| `true` |
| `max_ssh_key_lifetime` | `false` | `integer` | `integer` | `false` | `null` | `false` | `false`| `true` |
| `max_terraform_state_size_bytes` | `false` | `integer` | `integer` | `true` | `0` | `false` | `false`| `true` |
| `max_yaml_depth` | `false` | `integer` | `integer` | `true` | `100` | `false` | `false`| `true` |
| `max_yaml_size_bytes` | `false` | `bigint` | `integer` | `true` | `2097152` | `false` | `false`| `true` |
| `metrics_enabled` | `false` | `boolean` | `` | `false` | `false` | `true` | `true`| `false` |
| `metrics_host` | `false` | `character` | `` | `false` | `'localhost'::character` | `false` | `false`| `false` |
| `metrics_method_call_threshold` | `false` | `integer` | `integer` | `false` | `10` | `true` | `true`| `true` |
| `metrics_packet_size` | `false` | `integer` | `` | `false` | `1` | `true` | `true`| `false` |
| `metrics_pool_size` | `false` | `integer` | `` | `false` | `16` | `false` | `true`| `false` |
| `metrics_port` | `false` | `integer` | `` | `false` | `8089` | `true` | `true`| `false` |
| `metrics_sample_interval` | `false` | `integer` | `` | `false` | `15` | `false` | `true`| `false` |
| `metrics_timeout` | `false` | `integer` | `` | `false` | `10` | `false` | `true`| `false` |
| `minimum_password_length` | `false` | `integer` | `integer` | `true` | `8` | `false` | `false`| `true` |
| `mirror_available` | `false` | `boolean` | `boolean` | `true` | `true` | `false` | `false`| `true` |
| `mirror_capacity_threshold` | `false` | `integer` | `integer` | `true` | `50` | `true` | `false`| `true` |
| `mirror_max_capacity` | `false` | `integer` | `integer` | `true` | `100` | `true` | `false`| `true` |
| `mirror_max_delay` | `false` | `integer` | `integer` | `true` | `300` | `true` | `false`| `true` |
| `model_prompt_cache_enabled` | `false` | `boolean` | `` | `true` | `true` | `false` | `false`| `false` |
| `namespace_aggregation_schedule_lease_duration_in_seconds` | `false` | `integer` | `` | `true` | `300` | `false` | `false`| `false` |
| `namespace_storage_forks_cost_factor` | `false` | `double` | `` | `true` | `1.0` | `true` | `false`| `false` |
| `new_user_signups_cap` | `false` | `integer` | `` | `false` | `null` | `false` | `false`| `false` |
| `notes_create_limit` | `false` | `integer` | `` | `true` | `300` | `true` | `true`| `false` |
| `notes_create_limit_allowlist` | `false` | `text[]` | `` | `true` | `'{}'::text[]` | `true` | `true`| `false` |
| `notify_on_unknown_sign_in` | `false` | `boolean` | `boolean` | `true` | `true` | `false` | `false`| `true` |
| `oauth_provider` | `false` | `jsonb` | `` | `true` | `'{}'::jsonb` | `true` | `true`| `false` |
| `observability_backend_ssl_verification_enabled` | `false` | `boolean` | `` | `true` | `true` | `false` | `false`| `false` |
| `observability_settings` | `false` | `jsonb` | `` | `true` | `'{}'::jsonb` | `true` | `false`| `false` |
| `outbound_local_requests_whitelist` | `false` | `character` | `array of strings` | `true` | `'{}'::character` | `true` | `true`| `true` |
| `package_metadata_purl_types` | `false` | `smallint[]` | `array of integers` | `false` | `'{1` | `false` | `false`| `true` |
| `package_registry` | `false` | `jsonb` | `` | `true` | `'{}'::jsonb` | `true` | `false`| `false` |
| `pages` | `false` | `jsonb` | `` | `true` | `'{}'::jsonb` | `true` | `false`| `false` |
| `pages_domain_verification_enabled` | `false` | `boolean` | `boolean` | `true` | `true` | `false` | `false`| `true` |
| `password_authentication_enabled_for_git` | `false` | `boolean` | `boolean` | `true` | `true` | `false` | `false`| `true` |
| `password_authentication_enabled_for_web` | `false` | `boolean` | `boolean` | `false` | `null` | `true` | `false`| `true` |
| `password_expiration_enabled` [JIHU] | `false` | `boolean` | `` | `true` | `false` | `false` | `false`| `false` |
| `password_expires_in_days` [JIHU] | `false` | `integer` | `` | `true` | `90` | `false` | `false`| `false` |
| `password_expires_notice_before_days` [JIHU] | `false` | `integer` | `` | `true` | `7` | `false` | `false`| `false` |
| `password_lowercase_required` | `false` | `boolean` | `boolean` | `true` | `false` | `false` | `false`| `true` |
| `password_number_required` | `false` | `boolean` | `boolean` | `true` | `false` | `false` | `false`| `true` |
| `password_symbol_required` | `false` | `boolean` | `boolean` | `true` | `false` | `false` | `false`| `true` |
| `password_uppercase_required` | `false` | `boolean` | `boolean` | `true` | `false` | `false` | `false`| `true` |
| `performance_bar_allowed_group_id` | `false` | `bigint` | `string` | `false` | `null` | `true` | `false`| `true` |
| `personal_access_token_prefix` | `false` | `text` | `string` | `false` | `'glpat-'::text` | `false` | `false`| `true` |
| `phone_verification_code_enabled` [JIHU] | `false` | `boolean` | `` | `true` | `false` | `false` | `false`| `false` |
| `pipeline_limit_per_project_user_sha` | `false` | `integer` | `integer` | `true` | `0` | `true` | `false`| `true` |
| `plantuml_enabled` | `false` | `boolean` | `boolean` | `false` | `null` | `true` | `true`| `true` |
| `plantuml_url` | `false` | `character` | `string` | `false` | `null` | `true` | `true`| `true` |
| `polling_interval_multiplier` | `false` | `numeric` | `float` | `true` | `1.0` | `false` | `false`| `true` |
| `pre_receive_secret_detection_enabled` | `false` | `boolean` | `` | `true` | `false` | `false` | `true`| `false` |
| `prevent_merge_requests_author_approval` | `false` | `boolean` | `boolean` | `true` | `false` | `false` | `false`| `true` |
| `prevent_merge_requests_committers_approval` | `false` | `boolean` | `boolean` | `true` | `false` | `false` | `false`| `true` |
| `product_analytics_configurator_connection_string` | `true` | `bytea` | `` | `false` | `null` | `true` | `false`| `false` |
| `product_analytics_data_collector_host` | `false` | `text` | `` | `false` | `null` | `true` | `false`| `false` |
| `product_analytics_enabled` | `false` | `boolean` | `` | `true` | `false` | `true` | `false`| `false` |
| `productivity_analytics_start_date` | `false` | `timestamp` | `` | `false` | `null` | `true` | `false`| `false` |
| `project_download_export_limit` | `false` | `integer` | `` | `true` | `1` | `false` | `false`| `false` |
| `project_export_enabled` | `false` | `boolean` | `boolean` | `true` | `true` | `false` | `false`| `true` |
| `project_export_limit` | `false` | `integer` | `` | `true` | `6` | `false` | `false`| `false` |
| `project_import_limit` | `false` | `integer` | `` | `true` | `6` | `false` | `false`| `false` |
| `project_jobs_api_rate_limit` | `false` | `integer` | `integer` | `true` | `600` | `false` | `false`| `true` |
| `project_runner_token_expiration_interval` | `false` | `integer` | `integer` | `false` | `null` | `false` | `false`| `true` |
| `projects_api_rate_limit_unauthenticated` | `false` | `integer` | `integer` | `true` | `400` | `false` | `false`| `true` |
| `prometheus_alert_db_indicators_settings` | `false` | `jsonb` | `` | `false` | `null` | `true` | `false`| `false` |
| `prometheus_metrics_enabled` | `false` | `boolean` | `boolean` | `true` | `true` | `false` | `false`| `true` |
| `protected_ci_variables` | `false` | `boolean` | `boolean` | `true` | `true` | `false` | `false`| `true` |
| `protected_paths` | `false` | `character` | `` | `false` | `'{/users/password` | `false` | `false`| `false` |
| `protected_paths_for_get_request` | `false` | `text[]` | `` | `true` | `'{}'::text[]` | `false` | `false`| `false` |
| `pseudonymizer_enabled` | `false` | `boolean` | `` | `true` | `false` | `false` | `false`| `false` |
| `public_runner_releases_url` | `false` | `text` | `` | `true` | `'https://gitlab.com/api/v4/projects/gitlab-org%2Fgitlab-runner/releases'::text` | `false` | `false`| `false` |
| `push_event_activities_limit` | `false` | `integer` | `integer` | `true` | `3` | `false` | `false`| `true` |
| `push_event_hooks_limit` | `false` | `integer` | `integer` | `true` | `3` | `false` | `false`| `true` |
| `push_rule_id` | `false` | `bigint` | `` | `false` | `null` | `true` | `false`| `false` |
| `rate_limiting_response_text` | `false` | `text` | `string` | `false` | `null` | `true` | `true`| `true` |
| `rate_limits` | `false` | `jsonb` | `` | `true` | `'{}'::jsonb` | `true` | `true`| `false` |
| `rate_limits_unauthenticated_git_http` | `false` | `jsonb` | `` | `true` | `'{}'::jsonb` | `true` | `true`| `false` |
| `raw_blob_request_limit` | `false` | `integer` | `integer` | `true` | `300` | `false` | `false`| `true` |
| `recaptcha_enabled` | `false` | `boolean` | `boolean` | `false` | `false` | `true` | `true`| `true` |
| `recaptcha_private_key` | `true` | `text` | `string` | `false` | `null` | `true` | `true`| `true` |
| `recaptcha_site_key` | `true` | `text` | `string` | `false` | `null` | `true` | `true`| `true` |
| `receive_max_input_size` | `false` | `integer` | `integer` | `false` | `null` | `true` | `false`| `true` |
| `remember_me_enabled` | `false` | `boolean` | `boolean` | `true` | `true` | `false` | `false`| `true` |
| `repository_checks_enabled` | `false` | `boolean` | `boolean` | `false` | `false` | `false` | `false`| `true` |
| `repository_size_limit` | `false` | `bigint` | `integer` | `false` | `0` | `true` | `true`| `true` |
| `repository_storages` | `false` | `character` | `` | `false` | `'default'::character` | `true` | `false`| `false` |
| `repository_storages_weighted` | `false` | `jsonb` | `hash of strings to integers` | `true` | `'{}'::jsonb` | `true` | `true`| `true` |
| `require_admin_approval_after_user_signup` | `false` | `boolean` | `boolean` | `true` | `true` | `true` | `false`| `true` |
| `require_admin_two_factor_authentication` | `false` | `boolean` | `boolean` | `true` | `false` | `true` | `true`| `true` |
| `require_personal_access_token_expiry` | `false` | `boolean` | `boolean` | `true` | `true` | `false` | `false`| `true` |
| `require_two_factor_authentication` | `false` | `boolean` | `boolean` | `false` | `false` | `false` | `false`| `true` |
| `required_instance_ci_template` | `false` | `text` | `` | `false` | `null` | `false` | `false`| `false` |
| `resource_access_tokens_settings` | `false` | `jsonb` | `` | `true` | `'{}'::jsonb` | `false` | `true`| `false` |
| `resource_usage_limits` | `false` | `jsonb` | `hash` | `true` | `'{}'::jsonb` | `false` | `false`| `true` |
| `response_limits` | `false` | `jsonb` | `` | `true` | `'{}'::jsonb` | `true` | `true`| `false` |
| `restricted_visibility_levels` | `false` | `text` | `array of strings` | `false` | `null` | `true` | `false`| `true` |
| `rsa_key_restriction` | `false` | `integer` | `integer` | `true` | `0` | `false` | `false`| `true` |
| `runner_token_expiration_interval` | `false` | `integer` | `integer` | `false` | `null` | `false` | `false`| `true` |
| `runners_registration_token` | `true` | `character` | `` | `false` | `null` | `true` | `false`| `false` |
| `sdrs_enabled` | `false` | `boolean` | `` | `true` | `false` | `false` | `true`| `false` |
| `sdrs_jwt_signing_key` | `true` | `jsonb` | `` | `false` | `null` | `false` | `true`| `false` |
| `sdrs_url` | `false` | `text` | `` | `false` | `null` | `false` | `true`| `false` |
| `search` | `false` | `jsonb` | `` | `true` | `'{}'::jsonb` | `true` | `false`| `false` |
| `search_max_docs_denominator` | `false` | `integer` | `` | `true` | `5000000` | `false` | `false`| `false` |
| `search_max_shard_size_gb` | `false` | `integer` | `` | `true` | `50` | `false` | `false`| `false` |
| `search_min_docs_before_rollover` | `false` | `integer` | `` | `true` | `100000` | `false` | `false`| `false` |
| `search_rate_limit` | `false` | `integer` | `integer` | `true` | `300` | `true` | `false`| `true` |
| `search_rate_limit_allowlist` | `false` | `text[]` | `` | `true` | `'{}'::text[]` | `true` | `false`| `false` |
| `search_rate_limit_unauthenticated` | `false` | `integer` | `integer` | `true` | `100` | `false` | `false`| `true` |
| `secret_detection_revocation_token_types_url` | `false` | `text` | `` | `false` | `null` | `true` | `false`| `false` |
| `secret_detection_service_auth_token` | `true` | `bytea` | `` | `false` | `null` | `true` | `false`| `false` |
| `secret_detection_service_url` | `false` | `text` | `` | `true` | `''::text` | `true` | `false`| `false` |
| `secret_detection_token_revocation_enabled` | `false` | `boolean` | `` | `true` | `false` | `true` | `false`| `false` |
| `secret_detection_token_revocation_token` | `true` | `text` | `` | `false` | `null` | `true` | `false`| `false` |
| `secret_detection_token_revocation_url` | `false` | `text` | `` | `false` | `null` | `true` | `false`| `false` |
| `secret_push_protection_available` | `false` | `boolean` | `boolean` | `false` | `false` | `true` | `true`| `true` |
| `security_and_compliance_settings` | `false` | `jsonb` | `` | `true` | `'{}'::jsonb` | `false` | `false`| `false` |
| `security_approval_policies_limit` | `false` | `integer` | `integer` | `true` | `5` | `false` | `false`| `true` |
| `security_policies` | `false` | `jsonb` | `` | `true` | `'{}'::jsonb` | `true` | `false`| `false` |
| `security_policy_global_group_approvers_enabled` | `false` | `boolean` | `boolean` | `true` | `true` | `true` | `false`| `true` |
| `security_txt_content` | `false` | `text` | `string` | `false` | `null` | `true` | `true`| `true` |
| `sentry_clientside_dsn` | `false` | `text` | `` | `false` | `null` | `true` | `false`| `false` |
| `sentry_clientside_traces_sample_rate` | `false` | `double` | `` | `true` | `0.0` | `true` | `false`| `false` |
| `sentry_dsn` | `false` | `text` | `` | `false` | `null` | `true` | `false`| `false` |
| `sentry_enabled` | `false` | `boolean` | `` | `true` | `false` | `true` | `false`| `false` |
| `sentry_environment` | `false` | `text` | `` | `false` | `null` | `true` | `false`| `false` |
| `service_access_tokens_expiration_enforced` | `false` | `boolean` | `boolean` | `true` | `true` | `false` | `false`| `true` |
| `service_ping_settings` | `false` | `jsonb` | `` | `true` | `'{}'::jsonb` | `true` | `false`| `false` |
| `session_expire_delay` | `false` | `integer` | `integer` | `true` | `10080` | `false` | `false`| `true` |
| `shared_runners_enabled` | `false` | `boolean` | `boolean` | `true` | `true` | `false` | `false`| `true` |
| `shared_runners_minutes` | `false` | `integer` | `integer` | `true` | `0` | `true` | `false`| `true` |
| `shared_runners_text` | `false` | `text` | `string` | `false` | `null` | `true` | `false`| `true` |
| `show_migrate_from_jenkins_banner` | `false` | `boolean` | `` | `true` | `true` | `false` | `false`| `false` |
| `sidekiq_job_limiter_compression_threshold_bytes` | `false` | `integer` | `integer` | `true` | `100000` | `false` | `false`| `true` |
| `sidekiq_job_limiter_limit_bytes` | `false` | `integer` | `integer` | `true` | `0` | `true` | `false`| `true` |
| `sidekiq_job_limiter_mode` | `false` | `smallint` | `string` | `true` | `1` | `false` | `false`| `true` |
| `sign_in_restrictions` | `false` | `jsonb` | `hash` | `true` | `'{}'::jsonb` | `false` | `false`| `true` |
| `signup_enabled` | `false` | `boolean` | `boolean` | `false` | `null` | `true` | `false`| `true` |
| `silent_mode_enabled` | `false` | `boolean` | `boolean` | `true` | `false` | `false` | `false`| `true` |
| `slack_app_enabled` | `false` | `boolean` | `boolean` | `false` | `false` | `true` | `false`| `true` |
| `slack_app_id` | `false` | `character` | `string` | `false` | `null` | `true` | `false`| `true` |
| `slack_app_secret` | `true` | `text` | `string` | `false` | `null` | `true` | `false`| `true` |
| `slack_app_signing_secret` | `true` | `bytea` | `string` | `false` | `null` | `true` | `false`| `true` |
| `slack_app_verification_token` | `true` | `text` | `string` | `false` | `null` | `true` | `false`| `true` |
| `snippet_size_limit` | `false` | `bigint` | `integer` | `true` | `52428800` | `false` | `false`| `true` |
| `snowplow_app_id` | `false` | `character` | `string` | `false` | `null` | `true` | `false`| `true` |
| `snowplow_collector_hostname` | `false` | `character` | `string` | `false` | `null` | `true` | `false`| `true` |
| `snowplow_cookie_domain` | `false` | `character` | `string` | `false` | `null` | `true` | `false`| `true` |
| `snowplow_database_collector_hostname` | `false` | `text` | `string` | `false` | `null` | `false` | `false`| `true` |
| `snowplow_enabled` | `false` | `boolean` | `boolean` | `true` | `false` | `true` | `false`| `true` |
| `sourcegraph_enabled` | `false` | `boolean` | `boolean` | `true` | `false` | `true` | `false`| `true` |
| `sourcegraph_public_only` | `false` | `boolean` | `boolean` | `true` | `true` | `false` | `false`| `true` |
| `sourcegraph_url` | `false` | `character` | `string` | `false` | `null` | `true` | `false`| `true` |
| `spam_check_api_key` | `true` | `bytea` | `string` | `false` | `null` | `true` | `true`| `true` |
| `spam_check_endpoint_enabled` | `false` | `boolean` | `boolean` | `true` | `false` | `true` | `true`| `true` |
| `spam_check_endpoint_url` | `false` | `text` | `string` | `false` | `null` | `true` | `true`| `true` |
| `spp_repository_pipeline_access` | `false` | `boolean` | `` | `true` | `false` | `false` | `false`| `false` |
| `static_objects_external_storage_auth_token` | `true` | `text` | `string` | `false` | `null` | `true` | `false`| `true` |
| `static_objects_external_storage_url` | `false` | `character` | `string` | `false` | `null` | `true` | `false`| `true` |
| `suggest_pipeline_enabled` | `false` | `boolean` | `boolean` | `true` | `true` | `false` | `false`| `true` |
| `telesign_api_key` | `true` | `bytea` | `` | `false` | `null` | `true` | `true`| `false` |
| `telesign_customer_xid` | `true` | `bytea` | `` | `false` | `null` | `true` | `true`| `false` |
| `terminal_max_session_time` | `false` | `integer` | `integer` | `true` | `0` | `false` | `false`| `true` |
| `throttle_authenticated_api_enabled` | `false` | `boolean` | `boolean` | `true` | `false` | `true` | `false`| `true` |
| `throttle_authenticated_api_period_in_seconds` | `false` | `integer` | `integer` | `true` | `3600` | `true` | `false`| `true` |
| `throttle_authenticated_api_requests_per_period` | `false` | `integer` | `integer` | `true` | `7200` | `true` | `false`| `true` |
| `throttle_authenticated_deprecated_api_enabled` | `false` | `boolean` | `` | `true` | `false` | `false` | `false`| `false` |
| `throttle_authenticated_deprecated_api_period_in_seconds` | `false` | `integer` | `` | `true` | `3600` | `true` | `false`| `false` |
| `throttle_authenticated_deprecated_api_requests_per_period` | `false` | `integer` | `` | `true` | `3600` | `false` | `false`| `false` |
| `throttle_authenticated_files_api_enabled` | `false` | `boolean` | `` | `true` | `false` | `false` | `false`| `false` |
| `throttle_authenticated_files_api_period_in_seconds` | `false` | `integer` | `` | `true` | `15` | `false` | `false`| `false` |
| `throttle_authenticated_files_api_requests_per_period` | `false` | `integer` | `` | `true` | `500` | `false` | `false`| `false` |
| `throttle_authenticated_git_lfs_enabled` | `false` | `boolean` | `` | `true` | `false` | `false` | `false`| `false` |
| `throttle_authenticated_git_lfs_period_in_seconds` | `false` | `integer` | `` | `true` | `60` | `false` | `false`| `false` |
| `throttle_authenticated_git_lfs_requests_per_period` | `false` | `integer` | `` | `true` | `1000` | `false` | `false`| `false` |
| `throttle_authenticated_packages_api_enabled` | `false` | `boolean` | `boolean` | `true` | `false` | `false` | `false`| `true` |
| `throttle_authenticated_packages_api_period_in_seconds` | `false` | `integer` | `integer` | `true` | `15` | `false` | `false`| `true` |
| `throttle_authenticated_packages_api_requests_per_period` | `false` | `integer` | `integer` | `true` | `1000` | `false` | `false`| `true` |
| `throttle_authenticated_web_enabled` | `false` | `boolean` | `boolean` | `true` | `false` | `true` | `false`| `true` |
| `throttle_authenticated_web_period_in_seconds` | `false` | `integer` | `integer` | `true` | `3600` | `true` | `false`| `true` |
| `throttle_authenticated_web_requests_per_period` | `false` | `integer` | `integer` | `true` | `7200` | `true` | `false`| `true` |
| `throttle_incident_management_notification_enabled` | `false` | `boolean` | `` | `true` | `false` | `true` | `false`| `false` |
| `throttle_incident_management_notification_per_period` | `false` | `integer` | `` | `false` | `3600` | `false` | `false`| `false` |
| `throttle_incident_management_notification_period_in_seconds` | `false` | `integer` | `` | `false` | `3600` | `false` | `false`| `false` |
| `throttle_protected_paths_enabled` | `false` | `boolean` | `` | `true` | `false` | `true` | `false`| `false` |
| `throttle_protected_paths_period_in_seconds` | `false` | `integer` | `` | `true` | `60` | `false` | `false`| `false` |
| `throttle_protected_paths_requests_per_period` | `false` | `integer` | `` | `true` | `10` | `false` | `false`| `false` |
| `throttle_unauthenticated_api_enabled` | `false` | `boolean` | `boolean` | `true` | `false` | `true` | `false`| `true` |
| `throttle_unauthenticated_api_period_in_seconds` | `false` | `integer` | `integer` | `true` | `3600` | `true` | `false`| `true` |
| `throttle_unauthenticated_api_requests_per_period` | `false` | `integer` | `integer` | `true` | `3600` | `true` | `false`| `true` |
| `throttle_unauthenticated_deprecated_api_enabled` | `false` | `boolean` | `` | `true` | `false` | `false` | `false`| `false` |
| `throttle_unauthenticated_deprecated_api_period_in_seconds` | `false` | `integer` | `` | `true` | `3600` | `false` | `false`| `false` |
| `throttle_unauthenticated_deprecated_api_requests_per_period` | `false` | `integer` | `` | `true` | `1800` | `true` | `false`| `false` |
| `throttle_unauthenticated_enabled` | `false` | `boolean` | `boolean` | `true` | `false` | `true` | `false`| `true` |
| `throttle_unauthenticated_files_api_enabled` | `false` | `boolean` | `` | `true` | `false` | `false` | `false`| `false` |
| `throttle_unauthenticated_files_api_period_in_seconds` | `false` | `integer` | `` | `true` | `15` | `false` | `false`| `false` |
| `throttle_unauthenticated_files_api_requests_per_period` | `false` | `integer` | `` | `true` | `125` | `false` | `false`| `false` |
| `throttle_unauthenticated_packages_api_enabled` | `false` | `boolean` | `boolean` | `true` | `false` | `false` | `false`| `true` |
| `throttle_unauthenticated_packages_api_period_in_seconds` | `false` | `integer` | `integer` | `true` | `15` | `false` | `false`| `true` |
| `throttle_unauthenticated_packages_api_requests_per_period` | `false` | `integer` | `integer` | `true` | `800` | `false` | `false`| `true` |
| `throttle_unauthenticated_period_in_seconds` | `false` | `integer` | `integer` | `true` | `3600` | `true` | `false`| `true` |
| `throttle_unauthenticated_requests_per_period` | `false` | `integer` | `integer` | `true` | `3600` | `true` | `false`| `true` |
| `time_tracking_limit_to_hours` | `false` | `boolean` | `boolean` | `true` | `false` | `true` | `false`| `true` |
| `token_prefixes` | `false` | `jsonb` | `` | `true` | `'{}'::jsonb` | `false` | `true`| `false` |
| `transactional_emails` | `false` | `jsonb` | `` | `true` | `'{}'::jsonb` | `true` | `false`| `false` |
| `two_factor_grace_period` | `false` | `integer` | `integer` | `false` | `48` | `true` | `false`| `true` |
| `unconfirmed_users_delete_after_days` | `false` | `integer` | `integer` | `true` | `7` | `true` | `true`| `true` |
| `unique_ips_limit_enabled` | `false` | `boolean` | `boolean` | `true` | `false` | `false` | `false`| `true` |
| `unique_ips_limit_per_user` | `false` | `integer` | `integer` | `false` | `null` | `true` | `false`| `true` |
| `unique_ips_limit_time_window` | `false` | `integer` | `integer` | `false` | `null` | `true` | `false`| `true` |
| `update_namespace_name_rate_limit` | `false` | `smallint` | `` | `true` | `120` | `false` | `false`| `false` |
| `update_runner_versions_enabled` | `false` | `boolean` | `boolean` | `true` | `true` | `false` | `false`| `true` |
| `updated_at` | `false` | `timestamp` | `` | `false` | `null` | `true` | `false`| `false` |
| `updating_name_disabled_for_users` | `false` | `boolean` | `boolean` | `true` | `false` | `false` | `false`| `true` |
| `usage_ping_enabled` | `false` | `boolean` | `boolean` | `true` | `true` | `false` | `false`| `true` |
| `usage_ping_features_enabled` | `false` | `boolean` | `` | `true` | `false` | `false` | `false`| `false` |
| `usage_stats_set_by_user_id` | `false` | `bigint` | `` | `false` | `null` | `true` | `false`| `false` |
| `user_deactivation_emails_enabled` | `false` | `boolean` | `boolean` | `true` | `true` | `false` | `false`| `true` |
| `user_default_external` | `false` | `boolean` | `boolean` | `true` | `false` | `false` | `false`| `true` |
| `user_default_internal_regex` | `false` | `character` | `string` | `false` | `null` | `true` | `false`| `true` |
| `user_defaults_to_private_profile` | `false` | `boolean` | `boolean` | `true` | `false` | `false` | `false`| `true` |
| `user_oauth_applications` | `false` | `boolean` | `boolean` | `false` | `true` | `false` | `false`| `true` |
| `user_seat_management` | `false` | `jsonb` | `` | `true` | `'{}'::jsonb` | `false` | `true`| `false` |
| `user_show_add_ssh_key_message` | `false` | `boolean` | `boolean` | `true` | `true` | `false` | `false`| `true` |
| `users_get_by_id_limit` | `false` | `integer` | `` | `true` | `300` | `false` | `false`| `false` |
| `users_get_by_id_limit_allowlist` | `false` | `text[]` | `` | `true` | `'{}'::text[]` | `true` | `false`| `false` |
| `uuid` | `false` | `character` | `` | `false` | `null` | `true` | `true`| `false` |
| `valid_runner_registrars` | `false` | `character` | `array of strings` | `false` | `'{project` | `false` | `false`| `true` |
| `version_check_enabled` | `false` | `boolean` | `boolean` | `false` | `true` | `false` | `false`| `true` |
| `vertex_ai_host` | `false` | `text` | `` | `false` | `null` | `false` | `false`| `false` |
| `vertex_ai_project` | `false` | `text` | `` | `false` | `null` | `true` | `false`| `false` |
| `vscode_extension_marketplace` | `false` | `jsonb` | `hash` | `true` | `'{}'::jsonb` | `false` | `false`| `true` |
| `web_based_commit_signing_enabled` | `false` | `boolean` | `` | `true` | `false` | `false` | `false`| `false` |
| `web_ide_oauth_application_id` | `false` | `bigint` | `` | `false` | `null` | `true` | `false`| `false` |
| `whats_new_variant` | `false` | `smallint` | `string` | `false` | `0` | `false` | `false`| `true` |
| `wiki_asciidoc_allow_uri_includes` | `false` | `boolean` | `` | `true` | `false` | `false` | `false`| `false` |
| `wiki_page_max_content_bytes` | `false` | `bigint` | `integer` | `true` | `52428800` | `false` | `false`| `true` |
| `zoekt_settings` | `false` | `jsonb` | `` | `true` | `'{}'::jsonb` | `true` | `false`| `false` |
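The attributes above are columns on the `application_settings` table. If you want to
check a value on a local instance, reading it in a GDK Rails console through
`Gitlab::CurrentSettings` is usually the quickest way; the attribute used below is
only an example:

```ruby
# Run inside `gdk rails c` (or `rails console` on another instance).
Gitlab::CurrentSettings.signup_enabled                      # read a single attribute
ApplicationSetting.current.update!(signup_enabled: false)   # update it (use with care)
```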
# Secure Partner Integration - Onboarding Process
If you want to integrate your product with the [Secure Stage](https://about.gitlab.com/direction/secure/),
this page describes the developer workflow GitLab intends for
our users to follow with regard to security results. Use these as
guidelines to build an integration that fits the workflow GitLab
users are already familiar with.
This page also provides resources for the technical work associated
with [onboarding as a partner](https://about.gitlab.com/partners/technology-partners/integrate/).
The steps below are a high-level view of what needs to be done to complete an
integration, as well as links to more detailed resources for how to do so.
## Integration Tiers
The security offerings in GitLab are designed for GitLab Ultimate users, and the
[DevSecOps](https://handbook.gitlab.com/handbook/use-cases/#3-continuous-software-security-assurancehandbookmarketingbrand-and-product-marketingproduct-and-solution-marketingusecase-gtmdevsecops)
use case. All the features are in those tiers. This includes the APIs and standard reporting
framework needed to provide a consistent experience for users to easily bring their preferred
security tools into GitLab. We ask that our integration partners focus their work on those license
tiers so that we can provide the most value to our mutual customers.
## What is the GitLab Developer Workflow?
This workflow is how GitLab users interact with our product and expect it to
function. Understanding how users use GitLab today helps you choose the
best place to integrate your own product and its results into GitLab.
- Developers want to write code without using a new tool to consume results
  or address feedback about the item they are working on. Staying inside a
  single tool, GitLab, helps them stay focused on finishing their code and
  projects.
- Developers commit code to a Git branch. The developer creates a merge request (MR)
inside GitLab where these changes can be reviewed. The MR triggers a GitLab
pipeline to run associated jobs, including security checks, on the code.
- Pipeline jobs serve a variety of purposes. Jobs can scan for issues that have
  implications for app security, corporate policy, or compliance. When complete,
  the job reports back on its status and creates a
  [job artifact](../../ci/jobs/job_artifacts.md) as a result.
- The [Merge Request Security Widget](../../ci/testing/_index.md#security-reports)
displays the results of the pipeline's security checks and the developer can
review them. The developer can review both a summary and a detailed version
of the results.
- If certain policies (such as [merge request approvals](../../user/project/merge_requests/approvals/_index.md))
are in place for a project, developers must resolve specific findings or get
an approval from a specific list of people.
- The [security dashboard](../../user/application_security/security_dashboard/_index.md)
  also shows results, which developers can use to quickly see all the
  vulnerabilities that need to be addressed in the code.
- When the developer reads the details about a vulnerability, they are
presented with additional information and choices on next steps:
1. Create Issue (Confirm finding): Creates a new issue to be prioritized.
1. Add Comment and Dismiss Vulnerability: When dismissing a finding, users
can comment to note items that they
have mitigated, that they accept the vulnerability, or that the
vulnerability is a false positive.
   1. Auto-Remediation / Create Merge Request: A fix for the vulnerability can
      be offered, allowing an easy solution that does not require extra effort
      from users. This should be offered whenever possible (a sketch of how a
      remediation appears in the report artifact follows this list).
   1. Links: Vulnerabilities can link out to external sites or sources for users
      to get more data about the vulnerability.
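All of these results come from a JSON report artifact produced by a pipeline job.
As a rough, hypothetical sketch only (field names here are illustrative; the
authoritative format is the report documentation linked in the onboarding steps
below), a minimal report with one finding and one proposed remediation could look
something like this:

```json
{
  "version": "15.0.0",
  "vulnerabilities": [
    {
      "id": "9b6a3b2e-example",
      "name": "Hard-coded credential",
      "description": "A credential is committed to the repository.",
      "severity": "High",
      "scanner": { "id": "example_scanner", "name": "Example Scanner" },
      "location": { "file": "app/config.rb", "start_line": 12 },
      "identifiers": [
        { "type": "example_rule", "name": "EXAMPLE-001", "value": "EXAMPLE-001" }
      ]
    }
  ],
  "remediations": [
    {
      "fixes": [ { "id": "9b6a3b2e-example" } ],
      "summary": "Remove the hard-coded credential",
      "diff": "<base64-encoded patch>"
    }
  ]
}
```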
## How to onboard
This section describes the steps you need to complete to onboard as a partner
and complete an integration with the Secure stage.
1. Read about our [partnerships](https://about.gitlab.com/partners/technology-partners/integrate/).
1. [Create an issue](https://gitlab.com/gitlab-com/alliances/alliances/-/issues/new?issuable_template=new_partner)
using our new partner issue template to begin the discussion.
1. Get a test account to begin developing your integration. You can
request a [GitLab.com Subscription Sandbox](https://about.gitlab.com/partners/technology-partners/integrate/#gitlabcom-subscription-sandbox-request)
or an [EE Developer License](https://about.gitlab.com/partners/technology-partners/integrate/#requesting-ultimate-dev-license-for-rd).
1. Provide a [pipeline job](../pipelines/_index.md)
   template that users could integrate into their own GitLab pipelines
   (a minimal job sketch follows this list).
1. Create a report artifact with your pipeline jobs.
1. Ensure your pipeline jobs create a report artifact that GitLab can process
to successfully display your own product's results with the rest of GitLab.
- See detailed [technical directions](secure.md) for this step.
- Read more about [job report artifacts](../../ci/yaml/_index.md#artifactsreports).
- Read about [job artifacts](../../ci/jobs/job_artifacts.md).
- Your report artifact must be in one of our supported formats.
For more information, see the [documentation on reports](secure.md#report).
- Documentation for [SAST output](../../user/application_security/sast/_index.md#download-a-sast-report).
- Documentation for [Dependency Scanning reports](../../user/application_security/dependency_scanning/_index.md#understanding-the-results).
- Documentation for [Container Scanning reports](../../user/application_security/container_scanning/_index.md#reports-json-format).
- See this [example secure job definition that also defines the artifact created](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/Jobs/Container-Scanning.gitlab-ci.yml).
- If you need a new kind of scan or report, [create an issue](https://gitlab.com/gitlab-org/gitlab/-/issues/new#)
and add the label `devops::secure`.
- Once the job is completed, the data can be seen:
- In the [Merge Request Security Report](../../ci/testing/_index.md#security-reports) ([MR Security Report data flow](https://gitlab.com/snippets/1910005#merge-request-view)).
- While [browsing a Job Artifact](../../ci/jobs/job_artifacts.md).
- In the [Security Dashboard](../../user/application_security/security_dashboard/_index.md) ([Dashboard data flow](https://gitlab.com/snippets/1910005#project-and-group-dashboards)).
1. Optional: Provide a way to interact with results as Vulnerabilities:
- Users can interact with the findings from your artifact within their workflow. They can dismiss the findings or accept them and create a backlog issue.
- To automatically create issues without user interaction, use the [issue API](../../api/issues.md).
1. Optional: Provide auto-remediation steps:
- If you specified `remediations` in your artifact, it is proposed through our [remediation](../../user/application_security/vulnerabilities/_index.md#resolve-a-vulnerability)
interface.
1. Demo the integration to GitLab:
- After you have tested and are ready to demo your integration,
[reach out](https://about.gitlab.com/partners/technology-partners/integrate/) to us. If you
     skip this step, you can't do supported marketing.
1. Begin doing supported marketing of your GitLab integration.
- Work with our [partner team](https://about.gitlab.com/partners/technology-partners/integrate/)
to support your go-to-market as appropriate.
- Examples of supported marketing could include being listed on our [Security Partner page](https://about.gitlab.com/partners/#security),
doing a [blog post](https://handbook.gitlab.com/handbook/marketing/blog/),
doing a co-branded webinar, or producing a co-branded white paper.
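As a concrete starting point for the pipeline job and report artifact steps above,
the following is a minimal, hypothetical job definition. The job name, image, and
scanner command are placeholders; the `artifacts:reports` declaration is what lets
GitLab pick up the results:

```yaml
# Hypothetical scanner job: names, image, and command are placeholders.
example_scanner_sast:
  stage: test
  image: registry.example.com/example-scanner:latest
  script:
    # The scanner must write its findings in one of the supported report formats.
    - example-scanner --target "$CI_PROJECT_DIR" --output gl-sast-report.json
  artifacts:
    reports:
      sast: gl-sast-report.json
  allow_failure: true
```

Users would then copy this job into their own `.gitlab-ci.yml`, or `include` a
template that you publish.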
We have a <i class="fa fa-youtube-play youtube" aria-hidden="true"></i> [video playlist](https://www.youtube.com/playlist?list=PL05JrBw4t0KpMqYxJiOLz-uBIr5w-yP4A)
that may be helpful as part of this process. This covers various topics related to integrating your
tool.
If you have any issues while working through your integration or the steps
above, create an issue to discuss with us further.
# GitLab for Jira Cloud app development
Developers have several options for how to set up a development environment for the GitLab for Jira Cloud app:
1. A full environment [with Jira](#set-up-with-jira). Use this when you need to test interactions with Jira.
1. A full environment [with a Jira Connect proxy](#setting-up-a-jira-connect-proxy). Use this when you need to test multiple GitLab instances connecting to Jira through a Jira Connect proxy, or when testing changes to the Jira Connect proxy itself.
1. A local environment [without Jira](#setup-without-jira). You can use this quicker setup if you do not require Jira, for example when testing the GitLab frontend.
## Set up with Jira
The following are required to install the app:
- A Jira Cloud instance. Atlassian provides [free instances for development and testing](https://developer.atlassian.com/platform/marketplace/getting-started/#free-developer-instances-to-build-and-test-your-app).
- A GitLab instance available over the internet. For the app to work, Jira Cloud should
be able to connect to the GitLab instance through the internet. For this we
recommend using Gitpod or a similar cloud development environment. For more
information on using Gitpod with GDK, see the:
- [GDK with Gitpod](https://gitlab.com/gitlab-org/gitlab-development-kit/-/blob/main/doc/howto/gitpod.md)
documentation.
- [GDK in Gitpod](https://www.loom.com/share/9c9711d4876a40869b9294eecb24c54d)
video.
<!-- vale gitlab_base.Spelling = NO -->
GitLab team members **must not** use tunneling tools such as Serveo or `ngrok`. These are
security risks, and must not be run on GitLab developer laptops.
<!-- vale gitlab_base.Spelling = YES -->
Jira requires all connections to the app host to be over SSL. If you set up
your own environment, remember to enable SSL and an appropriate certificate.
### Setting up Gitpod
If you are using [Gitpod](https://gitlab.com/gitlab-org/gitlab-development-kit/-/blob/main/doc/howto/gitpod.md)
you must [make port `3000` public](https://gitlab.com/gitlab-org/gitlab-development-kit/-/blob/main/doc/howto/gitpod.md#make-the-rails-web-server-publicly-accessible).
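Once the port is public, you can optionally confirm that the workspace is reachable from outside by fetching the app descriptor, which Jira must also be able to download. A quick check, where `xxxx.gitpod.io` is a placeholder for your workspace hostname:

```shell
# The descriptor must be served over HTTPS on the public Gitpod URL.
curl --fail --silent https://xxxx.gitpod.io/-/jira_connect/app_descriptor.json | head
```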
### Install the app in Jira
To install the app in Jira:
1. Enable Jira development mode to install apps that are not from the Atlassian
Marketplace:
1. In Jira, go to **Jira settings > Apps > Manage apps**.
1. Scroll to the bottom of the **Manage apps** page and select **Settings**.
1. Select **Enable development mode** and select **Apply**.
1. Install the app:
1. In Jira, go to **Jira settings > Apps > Manage apps**.
1. Select **Upload app**.
1. In the **From this URL** field, provide a link to the app descriptor. The host and port must point to your GitLab instance.
For example:
```plaintext
https://xxxx.gitpod.io/-/jira_connect/app_descriptor.json
```
1. Select **Upload**.
If the install was successful, you should see the **GitLab for Jira Cloud** app under **Manage apps**.
You can also select **Getting Started** to open the configuration page rendered from your GitLab instance.
_Note that any change to the app descriptor requires you to uninstall and then reinstall the app._
1. If the _Installed and ready to go!_ dialog opens asking you to **Get started**, do not get started yet
and instead select **Close**.
1. You must now [set up the OAuth authentication flow](#set-up-the-gitlab-oauth-authentication-flow).
### Set up the GitLab OAuth authentication flow
GitLab for Jira users authenticate with GitLab using GitLab OAuth.
Ensure you have [installed the app in Jira](#install-the-app-in-jira) before doing these steps,
otherwise the app installation in Jira fails.
The following steps describe setting up an environment to test the GitLab OAuth flow:
1. Start a [Gitpod session](#setting-up-gitpod).
1. On your GitLab instance, go to **Admin > Applications**.
1. Create a new application with the following settings:
- Name: `GitLab for Jira`
- Redirect URI: `YOUR_GITPOD_INSTANCE/-/jira_connect/oauth_callbacks`
- Trusted: **No**
- Confidential: **No**
- Scopes: `api`
1. Copy the **Application ID** value.
1. Go to **Admin > Settings > General**.
1. Expand **GitLab for Jira App**.
1. Paste the **Application ID** value into **Jira Connect Application ID**.
1. In **Jira Connect Proxy URL**, enter `YOUR_GITPOD_INSTANCE` (for example, `https://xxxx.gitpod.io`).
1. Leave **Enable public key storage** unchecked.
1. Select **Save changes**.
### Set up the app in Jira
Ensure you have [set up OAuth](#set-up-the-gitlab-oauth-authentication-flow) before doing these steps,
otherwise these steps fail.
1. In Jira, go to **Jira settings > Apps > Manage apps**.
1. Scroll to **User-installed apps**, find your GitLab for Jira Cloud app and expand it.
1. Select **Get started**.
You should be able to authenticate with your GitLab instance and begin linking groups.
### Troubleshooting
#### App installation fails
If the app installation fails, you might need to delete the `jira_connect_installations` records from your database. A command sketch follows the steps below.
1. Open the [database console](https://gitlab.com/gitlab-org/gitlab-development-kit/-/blob/main/doc/howto/postgresql.md#access-postgresql).
1. Run `TRUNCATE TABLE jira_connect_installations CASCADE;`.
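A minimal sketch of these steps, assuming you use the GDK `psql` wrapper (see the linked database console documentation for other ways to connect):

```shell
# Open a psql session against the GDK database.
gdk psql

# Then, at the psql prompt, remove the stale installation records:
# TRUNCATE TABLE jira_connect_installations CASCADE;
```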
#### Not authorized to access the file
If you use Gitpod and you get an error about Jira not being able to access the descriptor file, you must [make the Gitpod port public](#setting-up-gitpod).
## Setting up a Jira Connect Proxy
When a GitLab Self-Managed instance [installs the GitLab for Jira app from the Atlassian Marketplace](../../administration/settings/jira_cloud_app.md#install-the-gitlab-for-jira-cloud-app-from-the-atlassian-marketplace), the instance must use GitLab.com as a Jira Connect proxy. You can emulate this setup if you need to develop or test features such as the [handling of Jira lifecycle events](../../administration/settings/jira_cloud_app.md#gitlabcom-handling-of-app-lifecycle-events) and [branch creation](../../administration/settings/jira_cloud_app.md#gitlabcom-handling-of-branch-creation).
To set up a development Jira Connect proxy, the following are required:
- A Jira Cloud instance. Atlassian provides [free instances for development and testing](https://developer.atlassian.com/platform/marketplace/getting-started/#free-developer-instances-to-build-and-test-your-app).
- Two GitLab instances available over the internet.
- One to serve as the **Jira Connect proxy** (simulating GitLab.com)
- One to serve as the **GitLab instance** that will connect to Jira through the Jira Connect proxy
- For the app to work, Jira Cloud should
be able to connect to the **Jira Connect proxy** instance through the internet. For this we
recommend using Gitpod or a similar cloud development environment. For more
information on using Gitpod with GDK, see the:
- [GDK with Gitpod](https://gitlab.com/gitlab-org/gitlab-development-kit/-/blob/main/doc/howto/gitpod.md)
documentation.
- [GDK in Gitpod](https://www.loom.com/share/9c9711d4876a40869b9294eecb24c54d)
video.
<!-- vale gitlab_base.Spelling = NO -->
GitLab team members **must not** use tunneling tools such as Serveo or `ngrok`. These are
security risks, and must not be run on GitLab developer laptops.
<!-- vale gitlab_base.Spelling = YES -->
Jira requires all connections to the app host to be over SSL. If you set up
your own environment, remember to enable SSL and an appropriate certificate.
### Setting up Gitpod
If you are using [Gitpod](https://gitlab.com/gitlab-org/gitlab-development-kit/-/blob/main/doc/howto/gitpod.md)
you must [make port `3000` public](https://gitlab.com/gitlab-org/gitlab-development-kit/-/blob/main/doc/howto/gitpod.md#make-the-rails-web-server-publicly-accessible).
### Set up the Jira Connect proxy instance
1. For the **Jira Connect proxy** instance, follow the [GDK with Gitpod](https://gitlab.com/gitlab-org/gitlab-development-kit/-/blob/main/doc/howto/gitpod.md) instructions to start a new Gitpod workspace.
1. Set up OAuth authentication on the **Jira Connect proxy** by following the [Set up the GitLab OAuth authentication flow](#set-up-the-gitlab-oauth-authentication-flow) section.
1. Configure the **Jira Connect proxy** [to serve as a proxy](../../administration/settings/jira_cloud_app.md#configure-your-gitlab-instance-to-serve-as-a-proxy).
### Install the GitLab for Jira Cloud app in Jira
Follow the [Install the app in Jira](#install-the-app-in-jira) section, but use the URL of your **Jira Connect proxy** instance for the app descriptor:
```plaintext
https://JIRA_CONNECT_PROXY_INSTANCE/-/jira_connect/app_descriptor.json
```
If the _Installed and ready to go!_ dialog opens, select **Close** (don't select **Get started** yet).
### Set up the secondary GitLab instance
1. Set up a second GitLab instance using Gitpod, following the same [GDK with Gitpod](https://gitlab.com/gitlab-org/gitlab-development-kit/-/blob/main/doc/howto/gitpod.md) instructions as for the proxy instance.
1. Set up OAuth authentication on this instance following the same steps as in [Set up the GitLab OAuth authentication flow](#set-up-the-gitlab-oauth-authentication-flow), but with a crucial difference:
- When setting the **Redirect URI**, use the URL of your **Jira Connect proxy** instance, not this secondary instance:
```plaintext
https://JIRA_CONNECT_PROXY_INSTANCE/-/jira_connect/oauth_callbacks
```
1. Configure this GitLab instance to use the proxy:
1. Go to **Admin > Settings > General**
1. Expand **GitLab for Jira App**
1. Paste the **Application ID** value into **Jira Connect Application ID**
1. In **Jira Connect Proxy URL**, enter `JIRA_CONNECT_PROXY_INSTANCE` (for example, `https://xxxx.gitpod.io`)
1. Select **Save changes**
### Complete the setup in Jira
1. In Jira, go to **Jira settings > Apps > Manage apps**.
1. Scroll to **User-installed apps**, find your GitLab for Jira Cloud app and expand it.
1. Select **Get started**.
1. To link the app to the secondary GitLab instance, select **Change GitLab version**.
1. Select all checkboxes, then select **Next**.
1. In **GitLab instance URL**, enter `GITLAB_INSTANCE` (for example, `https://xxxx.gitpod.io`), then select **Save**.
1. Select **Sign in to GitLab**.
1. Select **Authorize**. A list of groups is now visible.
1. Select **Link groups**.
1. To link to a group, select **Link**.
## Setup without Jira
If you do not require Jira to test with, you can use the [Jira connect test tool](https://gitlab.com/gitlab-org/foundations/import-and-integrate/jira-connect-test-tool) and your local GDK.
1. Clone the [**Jira-connect-test-tool**](https://gitlab.com/gitlab-org/foundations/import-and-integrate/jira-connect-test-tool): `git clone git@gitlab.com:gitlab-org/manage/integrations/jira-connect-test-tool.git`.
1. Start the app with `bundle exec rackup`. The app requires your GDK GitLab instance to be available at `http://127.0.0.1:3000`.
1. Open `config/gitlab.yml` and uncomment the `jira_connect` config.
1. If running GDK on a domain other than `localhost`, you must add the domain to `additional_iframe_ancestors`. For example:
```yaml
additional_iframe_ancestors: ['localhost:*', '127.0.0.1:*', 'gdk.test:*']
```
1. Restart GDK.
1. Go to `http://127.0.0.1:3000/-/user_settings/personal_access_tokens`.
1. Create a new token with the `api` scope and copy the token.
1. Go to `http://localhost:9292`.
1. Paste the token and select **Install GitLab.com Jira Cloud app**.
# Security scanner integration
Integrating a security scanner into GitLab consists of providing end users
with a [CI/CD job definition](../../ci/jobs/_index.md)
they can add to their CI/CD configuration files to scan their GitLab projects.
This job should then output its results in a GitLab-specified format. These results are then
automatically presented in various places in GitLab, such as the Pipeline view, merge request
widget, and Security Dashboard.
The scanning job is usually based on a [Docker image](https://docs.docker.com/)
that contains the scanner and all its dependencies in a self-contained environment.
This page documents requirements and guidelines for writing CI/CD jobs that implement a security
scanner, as well as requirements and guidelines for the Docker image.
## Job definition
This section describes several important fields to add to the security scanner's job
definition file. Full documentation on these and other available fields can be viewed
in the [CI documentation](../../ci/yaml/_index.md#image).
### Name
For consistency, scanning jobs should be named after the scanner, in lowercase.
The job name is suffixed with the type of scanning:
- `_dependency_scanning`
- `_container_scanning`
- `_dast`
- `_sast`
For instance, the dependency scanning job based on the "MySec" scanner would be named `mysec_dependency_scanning`.
### Image
The [`image`](../../ci/yaml/_index.md#image) keyword is used to specify
the [Docker image](../../ci/docker/using_docker_images.md#what-is-an-image)
containing the security scanner.
### Script
The [`script`](../../ci/yaml/_index.md#script) keyword
is used to specify the commands to run the scanner.
Because the `script` entry can't be left empty, it must be set to the command that performs the scan.
It is not possible to rely on the predefined `ENTRYPOINT` and `CMD` of the Docker image
to perform the scan automatically, without passing any command.
The [`before_script`](../../ci/yaml/_index.md#before_script)
should not be used in the job definition because users may rely on this to prepare their projects before performing the scan.
For instance, it is common practice to use `before_script` to install system libraries
a particular project needs before performing SAST or Dependency Scanning.
Similarly, [`after_script`](../../ci/yaml/_index.md#after_script)
should not be used in the job definition, because it may be overridden by users.
### Stage
For consistency, scanning jobs should belong to the `test` stage when possible.
The [`stage`](../../ci/yaml/_index.md#stage) keyword can be omitted because `test` is the default value.
### Fail-safe
By default, scanning jobs do not block the pipeline when they fail,
so the [`allow_failure`](../../ci/yaml/_index.md#allow_failure) parameter should be set to `true`.
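Putting these fields together, a minimal job definition for a hypothetical "MySec" SAST scanner could look like the following sketch. The image path and scan command are placeholders; the report declaration for this job is covered in the next section:

```yaml
mysec_sast:
  image: registry.example.com/mysec/analyzer:1  # placeholder image
  stage: test          # optional, because test is the default stage
  allow_failure: true  # do not block the pipeline if the scan fails
  script:
    - /analyzer run    # placeholder command that performs the scan
```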
### Artifacts
Scanning jobs must declare a report that corresponds to the type of scanning they perform,
using the [`artifacts:reports`](../../ci/yaml/_index.md#artifactsreports) keyword.
Valid reports are:
- `dependency_scanning`
- `container_scanning`
- `dast`
- `api_fuzzing`
- `coverage_fuzzing`
- `sast`
- `secret_detection`
For example, here is the definition of a SAST job that generates a file named `gl-sast-report.json`,
and uploads it as a SAST report:
```yaml
mysec_sast:
image: registry.gitlab.com/secure/mysec
artifacts:
reports:
sast: gl-sast-report.json
```
`gl-sast-report.json` is an example file path but any other filename can be used. See
[the Output file section](#output-file) for more details. It's processed as a SAST report because
it's declared under the `reports:sast` key in the job definition, not because of the filename.
### Policies
Certain GitLab workflows, such as [AutoDevOps](../../topics/autodevops/cicd_variables.md#job-skipping-variables),
define CI/CD variables to indicate that given scans should be skipped. You can check for this by looking
for variables such as:
- `DEPENDENCY_SCANNING_DISABLED`
- `CONTAINER_SCANNING_DISABLED`
- `SAST_DISABLED`
- `DAST_DISABLED`
If appropriate based on the scanner type, you should then skip running the custom scanner.
GitLab also defines a `CI_PROJECT_REPOSITORY_LANGUAGES` variable, which provides the list of
languages in the repository. Your scanner can use this value to decide whether it needs to run, or to adjust its behavior for the languages detected.
Language detection currently relies on the [`linguist`](https://github.com/github/linguist) Ruby gem.
See the [predefined CI/CD variables](../../ci/variables/predefined_variables.md).
#### Policy checking example
This example shows how to skip a custom Dependency Scanning job, `mysec_dependency_scanning`, unless
the project repository contains Java source code and the `dependency_scanning` feature is enabled:
```yaml
mysec_dependency_scanning:
rules:
- if: $DEPENDENCY_SCANNING_DISABLED == 'true'
when: never
- if: $GITLAB_FEATURES =~ /\bdependency_scanning\b/
exists:
- '**/*.java'
```
Any additional job policy should only be configured by users based on their needs.
For instance, predefined policies should not trigger the scanning job
for a particular branch or when a particular set of files changes.
## Docker image
The Docker image is a self-contained environment that combines
the scanner with all the libraries and tools it depends on.
Packaging your scanner into a Docker image makes its dependencies and configuration always present,
regardless of the individual machine the scanner runs on.
### Image size
Depending on the CI infrastructure,
the CI may have to fetch the Docker image every time the job runs.
For the scanning job to run fast and avoid wasting bandwidth, Docker images should be as small as
possible. You should aim for 50 MB or smaller. If that isn't possible, try to keep it below 1.46 GB,
which is the size of a DVD-ROM.
If the scanner requires a fully functional Linux environment,
it is recommended to use a [Debian](https://www.debian.org/intro/about) "slim" distribution or [Alpine Linux](https://www.alpinelinux.org/).
If possible, it is recommended to build the image from scratch, using the `FROM scratch` instruction,
and to compile the scanner with all the libraries it needs.
[Multi-stage builds](https://docs.docker.com/build/building/multi-stage/)
might also help with keeping the image small.
To keep an image size small, consider using [dive](https://github.com/wagoodman/dive#dive) to analyze layers in a Docker image to
identify where additional bloat might be originating from.
In some cases, it might be difficult to remove files from an image. When this occurs, consider using
[Zstandard](https://github.com/facebook/zstd)
to compress files or large directories. Zstandard offers many different compression levels that can
decrease the size of your image with very little impact to decompression speed. It may be helpful to
automatically decompress any compressed directories as soon as an image launches. You can accomplish
this by adding a step to the Docker image's `/etc/bashrc` or to a specific user's `$HOME/.bashrc`.
Remember to change the entry point to launch a bash login shell if you chose the latter option.
Here are some examples to get you started:
- <https://gitlab.com/gitlab-org/security-products/license-management/-/blob/0b976fcffe0a9b8e80587adb076bcdf279c9331c/config/install.sh#L168-170>
- <https://gitlab.com/gitlab-org/security-products/license-management/-/blob/0b976fcffe0a9b8e80587adb076bcdf279c9331c/config/.bashrc#L49>
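For illustration, a minimal sketch of this approach, assuming `zstd` is installed in the image and `/opt/scanner/rules` is a placeholder for a large data directory:

```shell
# At image build time: compress every file under the data directory,
# removing the originals to shrink the layer.
zstd -19 -r --rm /opt/scanner/rules

# Appended to /etc/bashrc (or a user's ~/.bashrc): decompress on shell startup.
# Remember to launch a bash login shell from the entry point if you use ~/.bashrc.
zstd -d -r --rm -q /opt/scanner/rules 2> /dev/null || true
```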
### Image tag
As documented in the [Docker Official Images](https://github.com/docker-library/official-images#tags-and-aliases) project,
it is strongly encouraged that version number tags be given aliases that allow users to easily refer to the "most recent" release of a particular series.
See also [Docker Tagging: Best practices for tagging and versioning Docker images](https://learn.microsoft.com/en-us/archive/blogs/stevelasker/docker-tagging-best-practices-for-tagging-and-versioning-docker-images).
### Permissions
To run a Docker container with non-root privileges, the following user and group must be present in the container:
- User `gitlab` with user ID `1000`
- Group `gitlab` with group ID `1000`
## Command line
A scanner is a command-line tool that takes environment variables as inputs,
and generates a file that is uploaded as a report (based on the job definition).
It also generates text output on the standard output and standard error streams, and exits with a status code.
### Variables
All CI/CD variables are passed to the scanner as environment variables.
The scanned project is described by the [predefined CI/CD variables](../../ci/variables/_index.md).
#### SAST and Dependency Scanning
SAST and Dependency Scanning scanners must scan the files in the project directory, given by the `CI_PROJECT_DIR` CI/CD variable.
#### Container Scanning
To be consistent with the official Container Scanning for GitLab,
scanners must scan the Docker image whose name and tag are given by
`CI_APPLICATION_REPOSITORY` and `CI_APPLICATION_TAG`. If the `DOCKER_IMAGE`
CI/CD variable is provided, then the `CI_APPLICATION_REPOSITORY` and `CI_APPLICATION_TAG` variables
are ignored, and the image specified in the `DOCKER_IMAGE` variable is scanned instead.
If not provided, `CI_APPLICATION_REPOSITORY` should default to
`$CI_REGISTRY_IMAGE/$CI_COMMIT_REF_SLUG`, which is a combination of predefined CI/CD variables.
`CI_APPLICATION_TAG` should default to `CI_COMMIT_SHA`.
The scanner should sign in to the Docker registry
using the variables `DOCKER_USER` and `DOCKER_PASSWORD`.
If these are not defined, then the scanner should use
`CI_REGISTRY_USER` and `CI_REGISTRY_PASSWORD` as default values.
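For illustration, a container scanning job's script might resolve the image reference and registry credentials along these lines. This is only a sketch based on the conventions above; it assumes the image lives in the GitLab container registry (`$CI_REGISTRY`):

```shell
# Resolve the image to scan, falling back to the documented defaults.
if [ -n "$DOCKER_IMAGE" ]; then
  IMAGE="$DOCKER_IMAGE"
else
  IMAGE="${CI_APPLICATION_REPOSITORY:-$CI_REGISTRY_IMAGE/$CI_COMMIT_REF_SLUG}:${CI_APPLICATION_TAG:-$CI_COMMIT_SHA}"
fi

# Sign in, preferring DOCKER_USER/DOCKER_PASSWORD and falling back to the
# predefined registry credentials.
echo "${DOCKER_PASSWORD:-$CI_REGISTRY_PASSWORD}" |
  docker login --username "${DOCKER_USER:-$CI_REGISTRY_USER}" --password-stdin "$CI_REGISTRY"

echo "Scanning $IMAGE"
```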
#### Configuration files
While scanners may use `CI_PROJECT_DIR` to load specific configuration files,
it is recommended to expose configuration as CI/CD variables, not files.
### Output file
Like any artifact uploaded to GitLab CI/CD,
the Secure report generated by the scanner must be written in the project directory,
given by the `CI_PROJECT_DIR` CI/CD variable.
It is recommended to name the output file after the type of scanning, and to use `gl-` as a prefix.
Since all Secure reports are JSON files, it is recommended to use `.json` as a file extension.
For instance, a suggested filename for a Dependency Scanning report is `gl-dependency-scanning.json`.
The [`artifacts:reports`](../../ci/yaml/_index.md#artifactsreports) keyword
of the job definition must be consistent with the file path where the Security report is written.
For instance, if a Dependency Scanning analyzer writes its report to the CI project directory,
and if this report filename is `depscan.json`,
then `artifacts:reports:dependency_scanning` must be set to `depscan.json`.
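For example, a hypothetical Dependency Scanning job that writes its report into the project directory and declares the same path could look like this sketch (the scan command and its `--output` option are placeholders):

```yaml
mysec_dependency_scanning:
  script:
    - /analyzer run --output "$CI_PROJECT_DIR/gl-dependency-scanning.json"  # placeholder
  artifacts:
    reports:
      dependency_scanning: gl-dependency-scanning.json
```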
### Exit code
Following the POSIX exit code standard, the scanner exits with either `0` for success or `1` for failure.
Success also includes the case when vulnerabilities are found.
When a CI job fails, security report results are not ingested by GitLab, even if the job
[allows failure](../../ci/yaml/_index.md#allow_failure). However, the report artifacts are still uploaded to GitLab and available
for [download in the pipeline security tab](../../user/application_security/detect/security_scanning_results.md#download-a-security-report).
### Logging
The scanner should log error messages and warnings so that users can easily investigate
misconfiguration and integration issues by looking at the log of the CI scanning job.
Scanners may use [ANSI escape codes](https://en.wikipedia.org/wiki/ANSI_escape_code#Colors)
to colorize the messages they write to the Unix standard output and standard error streams.
We recommend using red to report errors, yellow for warnings, and green for notices.
Also, we recommend prefixing error messages with `[ERRO]`, warnings with `[WARN]`, and notices with `[INFO]`.
#### Logging level
The scanner should filter out a log message if its log level is lower than the
one set in the `SECURE_LOG_LEVEL` CI/CD variable. For instance, `info` and `warn`
messages should be skipped when `SECURE_LOG_LEVEL` is set to `error`. Accepted
values are as follows, listed from highest to lowest:
- `fatal`
- `error`
- `warn`
- `info`
- `debug`
It is recommended to use the `debug` level for verbose logging that could be
useful when debugging. The default value for `SECURE_LOG_LEVEL` should be set
to `info`.
When executing command lines, scanners should use the `debug` level to log the command line and its output.
If the command line fails, then it should be logged with the `error` log level;
this makes it possible to debug the problem without having to change the log level to `debug` and rerun the scanning job.
#### common `logutil` package
If you are using [go](https://go.dev/) and
[common](https://gitlab.com/gitlab-org/security-products/analyzers/common),
then it is suggested that you use [Logrus](https://github.com/Sirupsen/logrus)
and [common's `logutil` package](https://gitlab.com/gitlab-org/security-products/analyzers/common/-/tree/master/logutil)
to configure the formatter for [Logrus](https://github.com/Sirupsen/logrus).
See the [`logutil` README](https://gitlab.com/gitlab-org/security-products/analyzers/common/-/tree/master/logutil/README.md).
## Report
The report is a JSON document that combines vulnerabilities with possible remediations.
This documentation gives an overview of the report JSON format, recommendations, and examples to
help integrators set its fields.
The format is extensively described in the documentation of
[SAST](../../user/application_security/sast/_index.md#download-a-sast-report),
[DAST](../../user/application_security/dast/browser/_index.md),
[Dependency Scanning](../../user/application_security/dependency_scanning/_index.md#understanding-the-results),
and [Container Scanning](../../user/application_security/container_scanning/_index.md#reports-json-format).
You can find the schemas for these scanners here:
- [Container Scanning](https://gitlab.com/gitlab-org/security-products/security-report-schemas/-/blob/master/dist/container-scanning-report-format.json)
- [Coverage Fuzzing](https://gitlab.com/gitlab-org/security-products/security-report-schemas/-/blob/master/dist/coverage-fuzzing-report-format.json)
- [DAST](https://gitlab.com/gitlab-org/security-products/security-report-schemas/-/blob/master/dist/dast-report-format.json)
- [Dependency Scanning](https://gitlab.com/gitlab-org/security-products/security-report-schemas/-/blob/master/dist/dependency-scanning-report-format.json)
- [SAST](https://gitlab.com/gitlab-org/security-products/security-report-schemas/-/blob/master/dist/sast-report-format.json)
- [Secret Detection](https://gitlab.com/gitlab-org/security-products/security-report-schemas/-/blob/master/dist/secret-detection-report-format.json)
### Report validation
{{< history >}}
- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/351000) in GitLab 15.0.
{{< /history >}}
You must ensure that reports generated by the scanner pass validation against the schema version
declared in your reports. Reports that don't pass validation are not ingested by GitLab, and an
error message displays on the corresponding pipeline.
Reports that use a deprecated version of the secure report schema are ingested but cause a warning
message to display on the corresponding pipeline. If you see this warning, update your
analyzer to use the latest available schemas.
After the deprecation period for a schema version, the file is removed from GitLab. Reports that
declare removed versions are rejected, and an error message displays on the corresponding pipeline.
If a report uses a `PATCH` version that doesn't match any vendored schema version, it is validated against
the latest vendored `PATCH` version. For example, if a report version is 15.0.23 and the latest vendored
version is 15.0.6, the report is validated against version 15.0.6.
GitLab validates reports against security report JSON schemas
it reads from the [`gitlab-security_report_schemas`](https://rubygems.org/gems/gitlab-security_report_schemas)
gem. You can see which schema versions are supported in your GitLab version
by looking at the version of the gem in your GitLab installation. For example,
[GitLab 15.4](https://gitlab.com/gitlab-org/gitlab/-/blob/93a2a651a48bd03d9d84847e1cade19962ab4292/Gemfile#L431)
uses version `0.1.2.min15.0.0.max15.2.0`, which means it supports schema versions in the range `15.0.0` to `15.2.0`.
To see the exact versions, read the [validate locally](#validate-locally) section.
#### Validate locally
Before running your analyzer in GitLab, you should validate the report produced by your analyzer to
ensure it complies with the declared schema version.
1. Install [`gitlab-security_report_schemas`](https://rubygems.org/gems/gitlab-security_report_schemas).
1. Run `security-report-schemas` to see what schema versions are supported.
1. Run `security-report-schemas <report.json>` to validate a report.
```shell
$ gem install gitlab-security_report_schemas -v 0.1.2.min15.0.0.max15.2.1
Successfully installed gitlab-security_report_schemas-0.1.2.min15.0.0.max15.2.1
Parsing documentation for gitlab-security_report_schemas-0.1.2.min15.0.0.max15.2.1
Done installing documentation for gitlab-security_report_schemas after 0 seconds
1 gem installed
$ security-report-schemas
SecurityReportSchemas 0.1.2.min15.0.0.max15.2.1.
Supported schema versions: ["15.0.0", "15.0.1", "15.0.2", "15.0.4", "15.0.5", "15.0.6", "15.0.7", "15.1.0", "15.1.1", "15.1.2", "15.1.3", "15.1.4", "15.2.0", "15.2.1"]
Usage: security-report-schemas REPORT_FILE_PATH [options]
-r, --report_type=REPORT_TYPE Override the report type
-w, --warnings Prints the warning messages
$ security-report-schemas ~/Downloads/gl-dependency-scanning-report.json
Validating dependency_scanning v15.0.0 against schema v15.0.0
Content is invalid
* root is missing required keys: dependency_files
```
### Report Fields
#### Version
This field specifies which [Security Report Schemas](https://gitlab.com/gitlab-org/security-products/security-report-schemas) version you are using. For information about the versions to use, see [releases](https://gitlab.com/gitlab-org/security-products/security-report-schemas/-/releases).
GitLab validates your report against the version of the schema specified by this value.
#### Vulnerabilities
The `vulnerabilities` field of the report is an array of vulnerability objects.
##### ID
The `id` field is the unique identifier of the vulnerability.
It is used to reference a fixed vulnerability from a [remediation object](#remediations).
We recommend that you generate a UUID and use it as the `id` field's value.
##### Category
The value of the `category` field matches the report type:
- `dependency_scanning`
- `container_scanning`
- `sast`
- `dast`
##### Scan
The `scan` field is an object that embeds meta information about the scan itself: the `analyzer`
and `scanner` that performed the scan, the `start_time` and `end_time` of the scan,
and the `status` of the scan (either "success" or "failure").
Both the `analyzer` and `scanner` fields are objects that embed a human-readable `name` and a technical `id`.
The `id` should not collide with the `id` of any other analyzer or scanner provided by another integrator.
##### Scan Primary Identifiers
The `scan.primary_identifiers` field is an optional field containing an array of
[primary identifiers](../../user/application_security/terminology/_index.md#primary-identifier).
This is an exhaustive list of all rulesets for which the analyzer performed the scan.
Even when the [`Vulnerabilities`](#vulnerabilities) array for a given scan is empty, this optional field
should contain the complete list of potential identifiers to inform the Rails application of which
rules were executed.
When populated, the Rails application [may automatically resolve previously detected vulnerabilities](../../user/application_security/iac_scanning/_index.md#automatic-vulnerability-resolution) as no
longer relevant when their primary identifier is not included.
##### Name, message, and description
The `name` and `message` fields contain a short description of the vulnerability.
The `description` field provides more details.
The `name` field is context-free and contains no information on where the vulnerability has been found,
whereas the `message` may repeat the location.
As a visual example, this screenshot highlights where these fields are used when viewing a
vulnerability as part of a pipeline view.

For instance, a `message` for a vulnerability
reported by Dependency Scanning gives information on the vulnerable dependency,
which is redundant with the `location` field of the vulnerability.
The `name` field is preferred but the `message` field is used
when the context/location cannot be removed from the title of the vulnerability.
To illustrate, here is an example vulnerability object reported by a Dependency Scanning scanner,
and where the `message` repeats the `location` field:
```json
{
"location": {
"dependency": {
"package": {
"name": "debug"
}
}
},
"name": "Regular Expression Denial of Service",
"message": "Regular Expression Denial of Service in debug",
"description": "The debug module is vulnerable to regular expression denial of service
when untrusted user input is passed into the `o` formatter.
It takes around 50k characters to block for 2 seconds making this a low severity issue."
}
```
The `description` might explain how the vulnerability works or give context about the exploit.
It should not repeat the other fields of the vulnerability object.
In particular, the `description` should not repeat the `location` (what is affected)
or the `solution` (how to mitigate the risk).
##### Solution
You can use the `solution` field to instruct users how to fix the identified vulnerability or to mitigate
the risk. End-users interact with this field, whereas GitLab automatically processes the
`remediations` objects.
##### Identifiers
The `identifiers` array describes the detected vulnerability. An identifier object's `type` and
`value` fields are used to [tell if two identifiers are the same](../../user/application_security/detect/vulnerability_deduplication.md).
The user interface uses the object's `name` and `url` fields to display the identifier.
We recommend that you use the identifiers the GitLab scanners already [define](https://gitlab.com/gitlab-org/security-products/analyzers/report/-/blob/main/identifier.go):
| Identifier | Type | Example value | Example name |
|------------|------|---------------|--------------|
| [CVE](https://cve.mitre.org/cve/) | `cve` | CVE-2019-10086 | CVE-2019-10086 |
| [CWE](https://cwe.mitre.org/data/index.html) | `cwe` | 1026 | CWE-1026 |
| [ELSA](https://linux.oracle.com/security/) | `elsa` | ELSA-2020-0085 | ELSA-2020-0085 |
| [OSVD](https://cve.mitre.org/data/refs/refmap/source-OSVDB.html) | `osvdb` | OSVDB-113928 | OSVDB-113928 |
| [OWASP](https://owasp.org/Top10/) | `owasp` | A01:2021 | A01:2021 - Broken Access Control |
| [RHSA](https://access.redhat.com/errata-search/#/) | `rhsa` | RHSA-2020:0111 | RHSA-2020:0111 |
| [USN](https://ubuntu.com/security/notices) | `usn` | USN-4234-1 | USN-4234-1 |
| [GHSA](https://github.com/advisories) | `ghsa` | GHSA-38jh-8h67-m7mj | GHSA-38jh-8h67-m7mj |
| [HACKERONE](https://hackerone.com/hacktivity/overview) | `hackerone` | 698789 | HACKERONE-698789 |
The generic identifiers listed above are defined in the [common library](https://gitlab.com/gitlab-org/security-products/analyzers/common),
which is shared by some of the analyzers that GitLab maintains. You can [contribute](https://gitlab.com/gitlab-org/security-products/analyzers/common/blob/master/issue/identifier.go)
new generic identifiers to it if needed. Analyzers may also produce vendor-specific or product-specific
identifiers, which don't belong in the [common library](https://gitlab.com/gitlab-org/security-products/analyzers/common).
The first item of the `identifiers` array is called the
[primary identifier](../../user/application_security/terminology/_index.md#primary-identifier), and
it is used to
[track vulnerabilities](#tracking-and-merging-vulnerabilities) as new commits are pushed to the repository.
Not all vulnerabilities have CVEs, and a CVE can be identified multiple times. As a result, a CVE
isn't a stable identifier and you shouldn't assume it as such when tracking vulnerabilities.
The maximum number of identifiers for a vulnerability is set to 20. If a vulnerability has more than 20 identifiers,
the system saves only the first 20 of them. The vulnerabilities in the [Pipeline Security](../../user/application_security/detect/security_scanning_results.md)
tab do not enforce this limit and all identifiers present in the report artifact are displayed.
#### Details
The `details` field is an object that supports many different content elements that are displayed when viewing vulnerability information. An example of the various data elements can be seen in the [security-reports repository](https://gitlab.com/gitlab-examples/security/security-reports/-/tree/master/samples/details-example).
#### Location
The `location` indicates where the vulnerability has been detected.
The format of the location depends on the type of scanning.
Internally GitLab extracts some attributes of the `location` to generate the **location fingerprint**,
which is used to track vulnerabilities
as new commits are pushed to the repository.
The attributes used to generate the location fingerprint also depend on the type of scanning.
##### Dependency Scanning
The `location` of a Dependency Scanning vulnerability is composed of a `dependency` and a `file`.
The `dependency` object describes the affected `package` and the dependency `version`.
`package` embeds the `name` of the affected library/module.
`file` is the path of the dependency file that declares the affected dependency.
For instance, here is the `location` object for a vulnerability affecting
version `4.0.11` of npm package [`handlebars`](https://www.npmjs.com/package/handlebars):
```json
{
"file": "client/package.json",
"dependency": {
"package": {
"name": "handlebars"
},
"version": "4.0.11"
}
}
```
This affected dependency is listed in `client/package.json`,
a dependency file processed by npm or yarn.
The location fingerprint of a Dependency Scanning vulnerability
combines the `file` and the package `name`,
so these attributes are mandatory.
All other attributes are optional.
##### Container Scanning
Similar to Dependency Scanning,
the `location` of a Container Scanning vulnerability has a `dependency` and a `file`.
It also has an `operating_system` field.
For instance, here is the `location` object for a vulnerability affecting
version `2.50.3-2+deb9u1` of Debian package `glib2.0`:
```json
{
  "dependency": {
    "package": {
      "name": "glib2.0"
    },
    "version": "2.50.3-2+deb9u1"
  },
  "operating_system": "debian:9",
  "image": "registry.gitlab.com/example/app:latest"
}
```
The affected package is found when scanning the Docker image `registry.gitlab.com/example/app:latest`.
The Docker image is based on `debian:9` (Debian Stretch).
The location fingerprint of a Container Scanning vulnerability
combines the `operating_system` and the package `name`,
so these attributes are mandatory.
The `image` is also mandatory.
All other attributes are optional.
##### SAST
The `location` of a SAST vulnerability must have a `file` that gives the path of the affected file and
a `start_line` field with the affected line number.
It may also have an `end_line`, a `class`, and a `method`.
For instance, here is the `location` object for a security flaw found
at line `41` of `src/main/java/com/gitlab/example/App.java`,
in the `generateSecretToken` method of the `com.gitlab.security_products.tests.App` Java class:
```json
{
"file": "src/main/java/com/gitlab/example/App.java",
"start_line": 41,
"end_line": 41,
"class": "com.gitlab.security_products.tests.App",
"method": "generateSecretToken1"
}
```
The location fingerprint of a SAST vulnerability
combines `file`, `start_line`, and `end_line`,
so these attributes are mandatory.
All other attributes are optional.
#### Tracking and merging vulnerabilities
Users may give feedback on a vulnerability:
- They may dismiss a vulnerability if it doesn't apply to their projects
- They may create an issue for a vulnerability if there's a possible threat
GitLab tracks vulnerabilities so that user feedback is not lost
when new Git commits are pushed to the repository.
Vulnerabilities are tracked using a
[`UUIDv5`](https://gitlab.com/gitlab-org/gitlab/-/blob/1272957c4a55e616569721febccb685c056ca1e4/ee/app/models/vulnerabilities/finding.rb#L364-368)
digest, which is generated by a `SHA-1` hash of four attributes:
- [Report type](#category)
- [Primary identifier](#identifiers)
- [Location fingerprint](#location)
- Project ID
Right now, GitLab cannot track a vulnerability if its location changes
as new Git commits are pushed, and this results in user feedback being lost.
For instance, user feedback on a SAST vulnerability is lost
if the affected file is renamed or the affected line moves down.
This is addressed in [issue #7586](https://gitlab.com/gitlab-org/gitlab/-/issues/7586).
See also [deduplication process](../../user/application_security/detect/vulnerability_deduplication.md).
##### Severity
The `severity` field describes how badly the vulnerability impacts the software.
The severity is used to sort the vulnerabilities in the security dashboard.
The severity ranges from `Info` to `Critical`, but it can also be `Unknown`.
Valid values are: `Unknown`, `Info`, `Low`, `Medium`, `High`, or `Critical`.
An `Unknown` value means that data is unavailable to determine the actual severity. It may be `High`, `Medium`, or `Low`,
and needs to be investigated.
#### Remediations
The `remediations` field of the report is an array of remediation objects.
Each remediation describes a patch that can be applied to
[resolve](../../user/application_security/vulnerabilities/_index.md#resolve-a-vulnerability)
a set of vulnerabilities.
Here is an example of a report that contains remediations.
```json
{
"vulnerabilities": [
{
"category": "dependency_scanning",
"name": "Regular Expression Denial of Service",
"id": "123e4567-e89b-12d3-a456-426655440000",
"solution": "Upgrade to new versions.",
"scanner": {
"id": "gemnasium",
"name": "Gemnasium"
},
"identifiers": [
{
"type": "gemnasium",
"name": "Gemnasium-642735a5-1425-428d-8d4e-3c854885a3c9",
"value": "642735a5-1425-428d-8d4e-3c854885a3c9"
}
]
}
],
"remediations": [
{
"fixes": [
{
"id": "123e4567-e89b-12d3-a456-426655440000"
}
],
"summary": "Upgrade to new version",
"diff": "ZGlmZiAtLWdpdCBhL3lhcm4ubG9jayBiL3lhcm4ubG9jawppbmRleCAwZWNjOTJmLi43ZmE0NTU0IDEwMDY0NAotLS0gYS95Y=="
}
]
}
```
##### Summary
The `summary` field is an overview of how the vulnerabilities can be fixed. This field is required.
##### Fixed vulnerabilities
The `fixes` field is an array of objects that reference the vulnerabilities fixed by the
remediation. `fixes[].id` contains a fixed vulnerability's [unique identifier](#id). This field is required.
##### Diff
The `diff` field is a base64-encoded remediation code diff, compatible with
[`git apply`](https://git-scm.com/docs/git-format-patch#_discussion). This field is required.
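For illustration, a remediation diff could be produced and encoded like this. This is a sketch: the file names are placeholders, and the `-w0` (no line wrapping) option is specific to GNU `base64`:

```shell
# Create a patch that fixes the vulnerable dependency, then base64-encode it
# for the "diff" field of the remediation object.
git diff -- yarn.lock > remediation.patch
base64 -w0 remediation.patch > remediation.b64

# GitLab (or a user) can apply the decoded patch with git:
base64 -d remediation.b64 | git apply
```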
|
---
stage: Application Security Testing
group: Static Analysis
info: Any user with at least the Maintainer role can merge updates to this content.
For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Security scanner integration
breadcrumbs:
- doc
- development
- integrations
---
Integrating a security scanner into GitLab consists of providing end users
with a [CI/CD job definition](../../ci/jobs/_index.md)
they can add to their CI/CD configuration files to scan their GitLab projects.
This job should then output its results in a GitLab-specified format. These results are then
automatically presented in various places in GitLab, such as the Pipeline view, merge request
widget, and Security Dashboard.
The scanning job is usually based on a [Docker image](https://docs.docker.com/)
that contains the scanner and all its dependencies in a self-contained environment.
This page documents requirements and guidelines for writing CI/CD jobs that implement a security
scanner, as well as requirements and guidelines for the Docker image.
## Job definition
This section describes several important fields to add to the security scanner's job
definition file. Full documentation on these and other available fields can be viewed
in the [CI documentation](../../ci/yaml/_index.md#image).
### Name
For consistency, scanning jobs should be named after the scanner, in lowercase.
The job name is suffixed after the type of scanning:
- `_dependency_scanning`
- `_container_scanning`
- `_dast`
- `_sast`
For instance, the dependency scanning job based on the "MySec" scanner would be named `mysec_dependency_scanning`.
### Image
The [`image`](../../ci/yaml/_index.md#image) keyword is used to specify
the [Docker image](../../ci/docker/using_docker_images.md#what-is-an-image)
containing the security scanner.
### Script
The [`script`](../../ci/yaml/_index.md#script) keyword
is used to specify the commands to run the scanner.
Because the `script` entry can't be left empty, it must be set to the command that performs the scan.
It is not possible to rely on the predefined `ENTRYPOINT` and `CMD` of the Docker image
to perform the scan automatically, without passing any command.
The [`before_script`](../../ci/yaml/_index.md#before_script)
should not be used in the job definition because users may rely on this to prepare their projects before performing the scan.
For instance, it is common practice to use `before_script` to install system libraries
a particular project needs before performing SAST or Dependency Scanning.
Similarly, [`after_script`](../../ci/yaml/_index.md#after_script)
should not be used in the job definition, because it may be overridden by users.
### Stage
For consistency, scanning jobs should belong to the `test` stage when possible.
The [`stage`](../../ci/yaml/_index.md#stage) keyword can be omitted because `test` is the default value.
### Fail-safe
Scanning jobs should not block the pipeline when they fail,
so the [`allow_failure`](../../ci/yaml/_index.md#allow_failure) parameter should be set to `true`.
### Artifacts
Scanning jobs must declare a report that corresponds to the type of scanning they perform,
using the [`artifacts:reports`](../../ci/yaml/_index.md#artifactsreports) keyword.
Valid reports are:
- `dependency_scanning`
- `container_scanning`
- `dast`
- `api_fuzzing`
- `coverage_fuzzing`
- `sast`
- `secret_detection`
For example, here is the definition of a SAST job that generates a file named `gl-sast-report.json`,
and uploads it as a SAST report:
```yaml
mysec_sast:
image: registry.gitlab.com/secure/mysec
artifacts:
reports:
sast: gl-sast-report.json
```
`gl-sast-report.json` is an example file path but any other filename can be used. See
[the Output file section](#output-file) for more details. It's processed as a SAST report because
it's declared under the `reports:sast` key in the job definition, not because of the filename.
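Putting the job definition requirements together, a complete job for a hypothetical "MySec" SAST scanner might look like the following. The image path and the scanner command are illustrative placeholders, not a published image or CLI:

```yaml
mysec_sast:
  image: registry.gitlab.com/secure/mysec   # illustrative image path
  stage: test
  allow_failure: true
  script:
    - /analyzer run                          # illustrative scanner command
  artifacts:
    reports:
      sast: gl-sast-report.json
```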
### Policies
Certain GitLab workflows, such as [AutoDevOps](../../topics/autodevops/cicd_variables.md#job-skipping-variables),
define CI/CD variables to indicate that given scans should be skipped. You can check for this by looking
for variables such as:
- `DEPENDENCY_SCANNING_DISABLED`
- `CONTAINER_SCANNING_DISABLED`
- `SAST_DISABLED`
- `DAST_DISABLED`
If appropriate based on the scanner type, you should then skip running the custom scanner.
GitLab also defines a `CI_PROJECT_REPOSITORY_LANGUAGES` variable, which provides the list of
languages in the repository. Your scanner can use this value to decide whether it needs to run at all, or which checks to enable.
Language detection currently relies on the [`linguist`](https://github.com/github/linguist) Ruby gem.
See the [predefined CI/CD variables](../../ci/variables/predefined_variables.md).
#### Policy checking example
This example shows how to skip a custom Dependency Scanning job, `mysec_dependency_scanning`, unless
the project repository contains Java source code and the `dependency_scanning` feature is enabled:
```yaml
mysec_dependency_scanning:
rules:
- if: $DEPENDENCY_SCANNING_DISABLED == 'true'
when: never
- if: $GITLAB_FEATURES =~ /\bdependency_scanning\b/
exists:
- '**/*.java'
```
Any additional job policy should only be configured by users based on their needs.
For instance, predefined policies should not trigger the scanning job
for a particular branch or when a particular set of files changes.
## Docker image
The Docker image is a self-contained environment that combines
the scanner with all the libraries and tools it depends on.
Packaging your scanner into a Docker image makes its dependencies and configuration always present,
regardless of the individual machine the scanner runs on.
### Image size
Depending on the CI infrastructure,
the CI may have to fetch the Docker image every time the job runs.
For the scanning job to run fast and avoid wasting bandwidth, Docker images should be as small as
possible. You should aim for 50 MB or smaller. If that isn't possible, try to keep it below 1.46 GB,
which is the size of a DVD-ROM.
If the scanner requires a fully functional Linux environment,
it is recommended to use a [Debian](https://www.debian.org/intro/about) "slim" distribution or [Alpine Linux](https://www.alpinelinux.org/).
If possible, it is recommended to build the image from scratch, using the `FROM scratch` instruction,
and to compile the scanner with all the libraries it needs.
[Multi-stage builds](https://docs.docker.com/build/building/multi-stage/)
might also help with keeping the image small.
To keep an image size small, consider using [dive](https://github.com/wagoodman/dive#dive) to analyze layers in a Docker image to
identify where additional bloat might be originating from.
In some cases, it might be difficult to remove files from an image. When this occurs, consider using
[Zstandard](https://github.com/facebook/zstd)
to compress files or large directories. Zstandard offers many different compression levels that can
decrease the size of your image with very little impact to decompression speed. It may be helpful to
automatically decompress any compressed directories as soon as an image launches. You can accomplish
this by adding a step to the Docker image's `/etc/bashrc` or to a specific user's `$HOME/.bashrc`.
Remember to change the entry point to launch a bash login shell if you chose the latter option.
Here are some examples to get you started:
- <https://gitlab.com/gitlab-org/security-products/license-management/-/blob/0b976fcffe0a9b8e80587adb076bcdf279c9331c/config/install.sh#L168-170>
- <https://gitlab.com/gitlab-org/security-products/license-management/-/blob/0b976fcffe0a9b8e80587adb076bcdf279c9331c/config/.bashrc#L49>
### Image tag
As documented in the [Docker Official Images](https://github.com/docker-library/official-images#tags-and-aliases) project,
it is strongly encouraged that version number tags be given aliases that allow users to easily refer to the "most recent" release of a particular series.
See also [Docker Tagging: Best practices for tagging and versioning Docker images](https://learn.microsoft.com/en-us/archive/blogs/stevelasker/docker-tagging-best-practices-for-tagging-and-versioning-docker-images).
### Permissions
To run a Docker container with non-root privileges, the following user and group must be present in the container (a sketch follows this list):
- User `gitlab` with user ID `1000`
- Group `gitlab` with group ID `1000`
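For example, on a Debian-based image, the expected user and group could be created during the image build with commands like these (Alpine-based images use `addgroup` and `adduser` instead):

```shell
# Typically placed in a RUN instruction of the Dockerfile.
groupadd --gid 1000 gitlab
useradd --uid 1000 --gid 1000 --create-home gitlab
```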
## Command line
A scanner is a command-line tool that takes environment variables as inputs,
and generates a file that is uploaded as a report (based on the job definition).
It also generates text output on the standard output and standard error streams, and exits with a status code.
### Variables
All CI/CD variables are passed to the scanner as environment variables.
The scanned project is described by the [predefined CI/CD variables](../../ci/variables/_index.md).
#### SAST and Dependency Scanning
SAST and Dependency Scanning scanners must scan the files in the project directory, given by the `CI_PROJECT_DIR` CI/CD variable.
#### Container Scanning
To be consistent with the official Container Scanning for GitLab,
scanners must scan the Docker image whose name and tag are given by
`CI_APPLICATION_REPOSITORY` and `CI_APPLICATION_TAG`. If the `DOCKER_IMAGE`
CI/CD variable is provided, then the `CI_APPLICATION_REPOSITORY` and `CI_APPLICATION_TAG` variables
are ignored, and the image specified in the `DOCKER_IMAGE` variable is scanned instead.
If not provided, `CI_APPLICATION_REPOSITORY` should default to
`$CI_REGISTRY_IMAGE/$CI_COMMIT_REF_SLUG`, which is a combination of predefined CI/CD variables.
`CI_APPLICATION_TAG` should default to `CI_COMMIT_SHA`.
The scanner should sign in the Docker registry
using the variables `DOCKER_USER` and `DOCKER_PASSWORD`.
If these are not defined, then the scanner should use
`CI_REGISTRY_USER` and `CI_REGISTRY_PASSWORD` as default values.
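A minimal sketch of this defaulting logic, as it might appear in a scanner's wrapper script (`TARGET_IMAGE` is a local variable of this sketch, not a predefined CI/CD variable):

```shell
# Fall back to the predefined CI/CD variables when the dedicated ones are not set.
CI_APPLICATION_REPOSITORY=${CI_APPLICATION_REPOSITORY:-"$CI_REGISTRY_IMAGE/$CI_COMMIT_REF_SLUG"}
CI_APPLICATION_TAG=${CI_APPLICATION_TAG:-"$CI_COMMIT_SHA"}
DOCKER_USER=${DOCKER_USER:-"$CI_REGISTRY_USER"}
DOCKER_PASSWORD=${DOCKER_PASSWORD:-"$CI_REGISTRY_PASSWORD"}

# DOCKER_IMAGE, if provided, takes precedence over the repository/tag pair.
TARGET_IMAGE=${DOCKER_IMAGE:-"$CI_APPLICATION_REPOSITORY:$CI_APPLICATION_TAG"}
```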
#### Configuration files
While scanners may use `CI_PROJECT_DIR` to load specific configuration files,
it is recommended to expose configuration as CI/CD variables, not files.
### Output file
Like any artifact uploaded to GitLab CI/CD,
the Secure report generated by the scanner must be written in the project directory,
given by the `CI_PROJECT_DIR` CI/CD variable.
It is recommended to name the output file after the type of scanning, and to use `gl-` as a prefix.
Since all Secure reports are JSON files, it is recommended to use `.json` as a file extension.
For instance, a suggested filename for a Dependency Scanning report is `gl-dependency-scanning.json`.
The [`artifacts:reports`](../../ci/yaml/_index.md#artifactsreports) keyword
of the job definition must be consistent with the file path where the Security report is written.
For instance, if a Dependency Scanning analyzer writes its report to the CI project directory,
and if this report filename is `depscan.json`,
then `artifacts:reports:dependency_scanning` must be set to `depscan.json`.
### Exit code
Following the POSIX exit code standard, the scanner exits with either `0` for success or `1` for failure.
Success also includes the case when vulnerabilities are found.
When a CI job fails, security report results are not ingested by GitLab, even if the job
[allows failure](../../ci/yaml/_index.md#allow_failure). However, the report artifacts are still uploaded to GitLab and available
for [download in the pipeline security tab](../../user/application_security/detect/security_scanning_results.md#download-a-security-report).
### Logging
The scanner should log error messages and warnings so that users can easily investigate
misconfiguration and integration issues by looking at the log of the CI scanning job.
Scanners may use [ANSI escape codes](https://en.wikipedia.org/wiki/ANSI_escape_code#Colors)
to colorize the messages they write to the Unix standard output and standard error streams.
We recommend using red to report errors, yellow for warnings, and green for notices.
Also, we recommend prefixing error messages with `[ERRO]`, warnings with `[WARN]`, and notices with `[INFO]`.
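For example, log lines following these conventions might look like this (the overall layout is illustrative; only the prefixes are recommended):

```plaintext
[INFO] starting scan of /builds/group/project
[WARN] configuration file .mysec.yml not found, using defaults
[ERRO] could not reach https://update.example.com: connection timed out
```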
#### Logging level
The scanner should filter out a log message if its log level is lower than the
one set in the `SECURE_LOG_LEVEL` CI/CD variable. For instance, `info` and `warn`
messages should be skipped when `SECURE_LOG_LEVEL` is set to `error`. Accepted
values are as follows, listed from highest to lowest:
- `fatal`
- `error`
- `warn`
- `info`
- `debug`
It is recommended to use the `debug` level for verbose logging that could be
useful when debugging. The default value for `SECURE_LOG_LEVEL` should be set
to `info`.
When executing command lines, scanners should use the `debug` level to log the command line and its output.
If the command line fails, then it should be logged with the `error` log level;
this makes it possible to debug the problem without having to change the log level to `debug` and rerun the scanning job.
#### Common `logutil` package
If you are using [go](https://go.dev/) and
[common](https://gitlab.com/gitlab-org/security-products/analyzers/common),
then it is suggested that you use [Logrus](https://github.com/Sirupsen/logrus)
and [common's `logutil` package](https://gitlab.com/gitlab-org/security-products/analyzers/common/-/tree/master/logutil)
to configure the formatter for [Logrus](https://github.com/Sirupsen/logrus).
See the [`logutil` README](https://gitlab.com/gitlab-org/security-products/analyzers/common/-/tree/master/logutil/README.md) for more information.
## Report
The report is a JSON document that combines vulnerabilities with possible remediations.
This documentation gives an overview of the report JSON format, recommendations, and examples to
help integrators set its fields.
The format is extensively described in the documentation of
[SAST](../../user/application_security/sast/_index.md#download-a-sast-report),
[DAST](../../user/application_security/dast/browser/_index.md),
[Dependency Scanning](../../user/application_security/dependency_scanning/_index.md#understanding-the-results),
and [Container Scanning](../../user/application_security/container_scanning/_index.md#reports-json-format).
You can find the schemas for these scanners here:
- [Container Scanning](https://gitlab.com/gitlab-org/security-products/security-report-schemas/-/blob/master/dist/container-scanning-report-format.json)
- [Coverage Fuzzing](https://gitlab.com/gitlab-org/security-products/security-report-schemas/-/blob/master/dist/coverage-fuzzing-report-format.json)
- [DAST](https://gitlab.com/gitlab-org/security-products/security-report-schemas/-/blob/master/dist/dast-report-format.json)
- [Dependency Scanning](https://gitlab.com/gitlab-org/security-products/security-report-schemas/-/blob/master/dist/dependency-scanning-report-format.json)
- [SAST](https://gitlab.com/gitlab-org/security-products/security-report-schemas/-/blob/master/dist/sast-report-format.json)
- [Secret Detection](https://gitlab.com/gitlab-org/security-products/security-report-schemas/-/blob/master/dist/secret-detection-report-format.json)
### Report validation
{{< history >}}
- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/351000) in GitLab 15.0.
{{< /history >}}
You must ensure that reports generated by the scanner pass validation against the schema version
declared in your reports. Reports that don't pass validation are not ingested by GitLab, and an
error message displays on the corresponding pipeline.
Reports that use a deprecated version of the secure report schema are ingested but cause a warning
message to display on the corresponding pipeline. If you see this warning, update your
analyzer to use the latest available schemas.
After the deprecation period for a schema version, the file is removed from GitLab. Reports that
declare removed versions are rejected, and an error message displays on the corresponding pipeline.
If a report uses a `PATCH` version that doesn't match any vendored schema version, it is validated against
the latest vendored `PATCH` version. For example, if a report version is 15.0.23 and the latest vendored
version is 15.0.6, the report is validated against version 15.0.6.
GitLab validates reports against security report JSON schemas
it reads from the [`gitlab-security_report_schemas`](https://rubygems.org/gems/gitlab-security_report_schemas)
gem. You can see which schema versions are supported in your GitLab version
by looking at the version of the gem in your GitLab installation. For example,
[GitLab 15.4](https://gitlab.com/gitlab-org/gitlab/-/blob/93a2a651a48bd03d9d84847e1cade19962ab4292/Gemfile#L431)
uses version `0.1.2.min15.0.0.max15.2.0`, which means it supports schema versions from `15.0.0` to `15.2.0`.
To see the exact versions, read the [validate locally](#validate-locally) section.
#### Validate locally
Before running your analyzer in GitLab, you should validate the report produced by your analyzer to
ensure it complies with the declared schema version.
1. Install [`gitlab-security_report_schemas`](https://rubygems.org/gems/gitlab-security_report_schemas).
1. Run `security-report-schemas` to see what schema versions are supported.
1. Run `security-report-schemas <report.json>` to validate a report.
```shell
$ gem install gitlab-security_report_schemas -v 0.1.2.min15.0.0.max15.2.1
Successfully installed gitlab-security_report_schemas-0.1.2.min15.0.0.max15.2.1
Parsing documentation for gitlab-security_report_schemas-0.1.2.min15.0.0.max15.2.1
Done installing documentation for gitlab-security_report_schemas after 0 seconds
1 gem installed
$ security-report-schemas
SecurityReportSchemas 0.1.2.min15.0.0.max15.2.1.
Supported schema versions: ["15.0.0", "15.0.1", "15.0.2", "15.0.4", "15.0.5", "15.0.6", "15.0.7", "15.1.0", "15.1.1", "15.1.2", "15.1.3", "15.1.4", "15.2.0", "15.2.1"]
Usage: security-report-schemas REPORT_FILE_PATH [options]
-r, --report_type=REPORT_TYPE Override the report type
-w, --warnings Prints the warning messages
$ security-report-schemas ~/Downloads/gl-dependency-scanning-report.json
Validating dependency_scanning v15.0.0 against schema v15.0.0
Content is invalid
* root is missing required keys: dependency_files
```
### Report Fields
#### Version
This field specifies which [Security Report Schemas](https://gitlab.com/gitlab-org/security-products/security-report-schemas) version you are using. For information about the versions to use, see [releases](https://gitlab.com/gitlab-org/security-products/security-report-schemas/-/releases).
GitLab validates your report against the version of the schema specified by this value.
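For example, the top level of a report that declares schema version `15.0.7` starts like this (abridged; a real report must also include the other required fields, such as `scan`):

```json
{
  "version": "15.0.7",
  "vulnerabilities": [],
  "remediations": []
}
```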
#### Vulnerabilities
The `vulnerabilities` field of the report is an array of vulnerability objects.
##### ID
The `id` field is the unique identifier of the vulnerability.
It is used to reference a fixed vulnerability from a [remediation object](#remediations).
We recommend that you generate a UUID and use it as the `id` field's value.
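For example, most languages provide a UUID library; from a shell, a random UUID can be generated with:

```shell
uuidgen
# 123e4567-e89b-12d3-a456-426655440000   (illustrative output)
```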
##### Category
The value of the `category` field matches the report type:
- `dependency_scanning`
- `container_scanning`
- `sast`
- `dast`
##### Scan
The `scan` field is an object that embeds meta information about the scan itself: the `analyzer`
and `scanner` that performed the scan, the `start_time` and `end_time` of the scan,
and the `status` of the scan (either "success" or "failure").
Both the `analyzer` and `scanner` fields are objects that embed a human-readable `name` and a technical `id`.
The `id` should not collide with the `id` of any other analyzer or scanner provided by another integrator.
##### Scan Primary Identifiers
The `scan.primary_identifiers` field is optional and contains an array of
[primary identifiers](../../user/application_security/terminology/_index.md#primary-identifier).
This is an exhaustive list of all rulesets for which the analyzer performed the scan.
Even when the [`Vulnerabilities`](#vulnerabilities) array for a given scan is empty, this optional field
should contain the complete list of potential identifiers to inform the Rails application of which
rules were executed.
When populated, the Rails application [may automatically resolve previously detected vulnerabilities](../../user/application_security/iac_scanning/_index.md#automatic-vulnerability-resolution) as no
longer relevant when their primary identifier is not included.
##### Name, message, and description
The `name` and `message` fields contain a short description of the vulnerability.
The `description` field provides more details.
The `name` field is context-free and contains no information on where the vulnerability has been found,
whereas the `message` may repeat the location.
As a visual example, this screenshot highlights where these fields are used when viewing a
vulnerability as part of a pipeline view.

For instance, a `message` for a vulnerability
reported by Dependency Scanning gives information on the vulnerable dependency,
which is redundant with the `location` field of the vulnerability.
The `name` field is preferred but the `message` field is used
when the context/location cannot be removed from the title of the vulnerability.
To illustrate, here is an example vulnerability object reported by a Dependency Scanning scanner,
and where the `message` repeats the `location` field:
```json
{
"location": {
"dependency": {
"package": {
"name": "debug"
}
}
},
"name": "Regular Expression Denial of Service",
"message": "Regular Expression Denial of Service in debug",
"description": "The debug module is vulnerable to regular expression denial of service
when untrusted user input is passed into the `o` formatter.
It takes around 50k characters to block for 2 seconds making this a low severity issue."
}
```
The `description` might explain how the vulnerability works or give context about the exploit.
It should not repeat the other fields of the vulnerability object.
In particular, the `description` should not repeat the `location` (what is affected)
or the `solution` (how to mitigate the risk).
##### Solution
You can use the `solution` field to instruct users how to fix the identified vulnerability or to mitigate
the risk. End-users interact with this field, whereas GitLab automatically processes the
`remediations` objects.
##### Identifiers
The `identifiers` array describes the detected vulnerability. An identifier object's `type` and
`value` fields are used to [tell if two identifiers are the same](../../user/application_security/detect/vulnerability_deduplication.md).
The user interface uses the object's `name` and `url` fields to display the identifier.
We recommend that you use the identifiers the GitLab scanners already [define](https://gitlab.com/gitlab-org/security-products/analyzers/report/-/blob/main/identifier.go):
| Identifier | Type | Example value | Example name |
|------------|------|---------------|--------------|
| [CVE](https://cve.mitre.org/cve/) | `cve` | CVE-2019-10086 | CVE-2019-10086 |
| [CWE](https://cwe.mitre.org/data/index.html) | `cwe` | 1026 | CWE-1026 |
| [ELSA](https://linux.oracle.com/security/) | `elsa` | ELSA-2020-0085 | ELSA-2020-0085 |
| [OSVD](https://cve.mitre.org/data/refs/refmap/source-OSVDB.html) | `osvdb` | OSVDB-113928 | OSVDB-113928 |
| [OWASP](https://owasp.org/Top10/) | `owasp` | A01:2021 | A01:2021 - Broken Access Control |
| [RHSA](https://access.redhat.com/errata-search/#/) | `rhsa` | RHSA-2020:0111 | RHSA-2020:0111 |
| [USN](https://ubuntu.com/security/notices) | `usn` | USN-4234-1 | USN-4234-1 |
| [GHSA](https://github.com/advisories) | `ghsa` | GHSA-38jh-8h67-m7mj | GHSA-38jh-8h67-m7mj |
| [HACKERONE](https://hackerone.com/hacktivity/overview) | `hackerone` | 698789 | HACKERONE-698789 |
The generic identifiers listed above are defined in the [common library](https://gitlab.com/gitlab-org/security-products/analyzers/common),
which is shared by some of the analyzers that GitLab maintains. You can [contribute](https://gitlab.com/gitlab-org/security-products/analyzers/common/blob/master/issue/identifier.go)
new generic identifiers to it if needed. Analyzers may also produce vendor-specific or product-specific
identifiers, which don't belong in the [common library](https://gitlab.com/gitlab-org/security-products/analyzers/common).
The first item of the `identifiers` array is called the
[primary identifier](../../user/application_security/terminology/_index.md#primary-identifier), and
it is used to
[track vulnerabilities](#tracking-and-merging-vulnerabilities) as new commits are pushed to the repository.
Not all vulnerabilities have CVEs, and a CVE can be identified multiple times. As a result, a CVE
isn't a stable identifier and you shouldn't assume it as such when tracking vulnerabilities.
The maximum number of identifiers for a vulnerability is set as 20. If a vulnerability has more than 20 identifiers,
the system saves only the first 20 of them. The vulnerabilities in the [Pipeline Security](../../user/application_security/detect/security_scanning_results.md)
tab do not enforce this limit and all identifiers present in the report artifact are displayed.
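For illustration, here is an excerpt of a vulnerability whose primary identifier (the first entry) is a CVE, followed by a CWE. The URLs are illustrative:

```json
{
  "identifiers": [
    {
      "type": "cve",
      "name": "CVE-2019-10086",
      "value": "CVE-2019-10086",
      "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-10086"
    },
    {
      "type": "cwe",
      "name": "CWE-1026",
      "value": "1026",
      "url": "https://cwe.mitre.org/data/definitions/1026.html"
    }
  ]
}
```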
#### Details
The `details` field is an object that supports many different content elements that are displayed when viewing vulnerability information. An example of the various data elements can be seen in the [security-reports repository](https://gitlab.com/gitlab-examples/security/security-reports/-/tree/master/samples/details-example).
#### Location
The `location` indicates where the vulnerability has been detected.
The format of the location depends on the type of scanning.
Internally GitLab extracts some attributes of the `location` to generate the **location fingerprint**,
which is used to track vulnerabilities
as new commits are pushed to the repository.
The attributes used to generate the location fingerprint also depend on the type of scanning.
##### Dependency Scanning
The `location` of a Dependency Scanning vulnerability is composed of a `dependency` and a `file`.
The `dependency` object describes the affected `package` and the dependency `version`.
`package` embeds the `name` of the affected library/module.
`file` is the path of the dependency file that declares the affected dependency.
For instance, here is the `location` object for a vulnerability affecting
version `4.0.11` of npm package [`handlebars`](https://www.npmjs.com/package/handlebars):
```json
{
"file": "client/package.json",
"dependency": {
"package": {
"name": "handlebars"
},
"version": "4.0.11"
}
}
```
This affected dependency is listed in `client/package.json`,
a dependency file processed by npm or yarn.
The location fingerprint of a Dependency Scanning vulnerability
combines the `file` and the package `name`,
so these attributes are mandatory.
All other attributes are optional.
##### Container Scanning
Similar to Dependency Scanning,
the `location` of a Container Scanning vulnerability has a `dependency` and a `file`.
It also has an `operating_system` field.
For instance, here is the `location` object for a vulnerability affecting
version `2.50.3-2+deb9u1` of Debian package `glib2.0`:
```json
{
  "dependency": {
    "package": {
      "name": "glib2.0"
    },
    "version": "2.50.3-2+deb9u1"
  },
  "operating_system": "debian:9",
  "image": "registry.gitlab.com/example/app:latest"
}
```
The affected package is found when scanning the Docker image `registry.gitlab.com/example/app:latest`.
The Docker image is based on `debian:9` (Debian Stretch).
The location fingerprint of a Container Scanning vulnerability
combines the `operating_system` and the package `name`,
so these attributes are mandatory.
The `image` is also mandatory.
All other attributes are optional.
##### SAST
The `location` of a SAST vulnerability must have a `file` that gives the path of the affected file and
a `start_line` field with the affected line number.
It may also have an `end_line`, a `class`, and a `method`.
For instance, here is the `location` object for a security flaw found
at line `41` of `src/main/java/com/gitlab/example/App.java`,
in the `generateSecretToken` method of the `com.gitlab.security_products.tests.App` Java class:
```json
{
"file": "src/main/java/com/gitlab/example/App.java",
"start_line": 41,
"end_line": 41,
"class": "com.gitlab.security_products.tests.App",
"method": "generateSecretToken1"
}
```
The location fingerprint of a SAST vulnerability
combines `file`, `start_line`, and `end_line`,
so these attributes are mandatory.
All other attributes are optional.
#### Tracking and merging vulnerabilities
Users may give feedback on a vulnerability:
- They may dismiss a vulnerability if it doesn't apply to their projects
- They may create an issue for a vulnerability if there's a possible threat
GitLab tracks vulnerabilities so that user feedback is not lost
when new Git commits are pushed to the repository.
Vulnerabilities are tracked using a
[`UUIDv5`](https://gitlab.com/gitlab-org/gitlab/-/blob/1272957c4a55e616569721febccb685c056ca1e4/ee/app/models/vulnerabilities/finding.rb#L364-368)
digest, which is generated by a `SHA-1` hash of four attributes:
- [Report type](#category)
- [Primary identifier](#identifiers)
- [Location fingerprint](#location)
- Project ID
Right now, GitLab cannot track a vulnerability if its location changes
as new Git commits are pushed, and this results in user feedback being lost.
For instance, user feedback on a SAST vulnerability is lost
if the affected file is renamed or the affected line moves down.
This is addressed in [issue #7586](https://gitlab.com/gitlab-org/gitlab/-/issues/7586).
See also [deduplication process](../../user/application_security/detect/vulnerability_deduplication.md).
##### Severity
The `severity` field describes how badly the vulnerability impacts the software.
The severity is used to sort the vulnerabilities in the security dashboard.
The severity ranges from `Info` to `Critical`, but it can also be `Unknown`.
Valid values are: `Unknown`, `Info`, `Low`, `Medium`, `High`, or `Critical`.
An `Unknown` value means that data is unavailable to determine the actual severity. The vulnerability may therefore be `High`, `Medium`, or `Low`,
and needs to be investigated.
#### Remediations
The `remediations` field of the report is an array of remediation objects.
Each remediation describes a patch that can be applied to
[resolve](../../user/application_security/vulnerabilities/_index.md#resolve-a-vulnerability)
a set of vulnerabilities.
Here is an example of a report that contains remediations.
```json
{
"vulnerabilities": [
{
"category": "dependency_scanning",
"name": "Regular Expression Denial of Service",
"id": "123e4567-e89b-12d3-a456-426655440000",
"solution": "Upgrade to new versions.",
"scanner": {
"id": "gemnasium",
"name": "Gemnasium"
},
"identifiers": [
{
"type": "gemnasium",
"name": "Gemnasium-642735a5-1425-428d-8d4e-3c854885a3c9",
"value": "642735a5-1425-428d-8d4e-3c854885a3c9"
}
]
}
],
"remediations": [
{
"fixes": [
{
"id": "123e4567-e89b-12d3-a456-426655440000"
}
],
"summary": "Upgrade to new version",
"diff": "ZGlmZiAtLWdpdCBhL3lhcm4ubG9jayBiL3lhcm4ubG9jawppbmRleCAwZWNjOTJmLi43ZmE0NTU0IDEwMDY0NAotLS0gYS95Y=="
}
]
}
```
##### Summary
The `summary` field is an overview of how the vulnerabilities can be fixed. This field is required.
##### Fixed vulnerabilities
The `fixes` field is an array of objects that reference the vulnerabilities fixed by the
remediation. `fixes[].id` contains a fixed vulnerability's [unique identifier](#id). This field is required.
##### Diff
The `diff` field is a base64-encoded remediation code diff, compatible with
[`git apply`](https://git-scm.com/docs/git-format-patch#_discussion). This field is required.
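For example, a remediation diff could be produced and verified like this (`-w0` is specific to GNU `base64`; on macOS, omit it):

```shell
# Encode the patch for the `diff` field of the report.
git diff -- yarn.lock | base64 -w0

# What GitLab effectively does when the remediation is applied.
echo "<value of the diff field>" | base64 -d | git apply
```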
---
stage: none
group: unassigned
info: Any user with at least the Maintainer role can merge updates to this content. For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
description: Development guidelines for Integrations
title: Integration development guidelines
---
This page provides development guidelines for implementing [GitLab integrations](../../user/project/integrations/_index.md),
which are part of our [main Rails project](https://gitlab.com/gitlab-org/gitlab).
Also see our [direction page](https://about.gitlab.com/direction/manage/import_and_integrate/integrations/) for an overview of our strategy around integrations.
This guide is a work in progress. You're welcome to ping `@gitlab-org/foundations/import-and-integrate`
if you need clarification or spot any outdated information.
## Add a new integration
### Define the integration
1. Add a new model in `app/models/integrations` extending from `Integration`.
- For example, `Integrations::FooBar` in `app/models/integrations/foo_bar.rb`.
- For certain types of integrations, you can include these base modules:
- `Integrations::Base::ChatNotification`
- `Integrations::Base::Ci`
- `Integrations::Base::IssueTracker`
- `Integrations::Base::Monitoring`
- `Integrations::Base::SlashCommands`
- `Integrations::Base::ThirdPartyWiki`
- For integrations that primarily trigger HTTP calls to external services, you can
also use the `Integrations::HasWebHook` concern. This reuses the [webhook functionality](../../user/project/integrations/webhooks.md)
in GitLab through an associated `ServiceHook` model, and automatically records request logs
which can be viewed in the integration settings.
1. Add the integration's underscored name (`'foo_bar'`) to `Integration::INTEGRATION_NAMES`.
1. Add the integration as an association on `Project`:
```ruby
has_one :foo_bar_integration, class_name: 'Integrations::FooBar'
```
### Define fields
Integrations can define arbitrary fields to store their configuration with the class method `Integration.field`.
The values are stored as an encrypted JSON hash in the `integrations.encrypted_properties` column.
For example:
```ruby
module Integrations
class FooBar < Integration
field :url
field :tags
end
end
```
`Integration.field` installs accessor methods on the class.
Here we would have `#url`, `#url=`, and `#url_changed?` to manage the `url` field.
These accessors should access the fields stored in `Integration#properties` directly on the model, just like other `ActiveRecord` attributes.
You should always access the fields through their `getters` and not interact with the `properties` hash directly.
You **must not** write to the `properties` hash; you **must** use the generated setter method instead. Direct writes to this
hash are not persisted.
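For example, with the `url` field defined above, the generated accessors behave like regular attribute methods (illustrative only):

```ruby
integration = Integrations::FooBar.new
integration.url = 'https://example.com' # generated setter; stored in encrypted properties on save
integration.url                         # => "https://example.com"
integration.url_changed?                # => true
```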
To see how these fields are exposed in the frontend form for the integration,
see [Customize the frontend form](#customize-the-frontend-form).
Other approaches include using `Integration.prop_accessor` or `Integration.data_field`, which you might see in earlier versions of integrations.
You should not use these approaches for new integrations.
### Define validations
You should define Rails validations for all of your fields.
Validations should only apply when the integration is enabled, by testing the `#activated?` method.
Any field with the [`required:` property](#customize-the-frontend-form) should have a
corresponding validation for `presence`, as the `required:` field property is only for the frontend.
For example:
```ruby
module Integrations
class FooBar < Integration
with_options if: :activated? do
validates :key, presence: true, format: { with: KEY_REGEX }
validates :bar, inclusion: [true, false]
end
field :key, required: true
field :bar, type: :checkbox
end
end
```
### Define trigger events
Integrations are triggered by calling their `#execute` method in response to events in GitLab,
which gets passed a payload hash with details about the event.
The supported events have some overlap with [webhook events](../../user/project/integrations/webhook_events.md),
and receive the same payload. You can specify the events you're interested in by overriding
the class method `Integration.supported_events` in your model.
The following events are supported for integrations:
| Event type | Default | Value | Trigger |
|:-----------------------------------------------------------------------------------------------|:--------|:---------------------|:--------|
| Alert event | | `alert` | A new, unique alert is recorded. |
| Commit event | ✓ | `commit` | A commit is created or updated. |
| [Deployment event](../../user/project/integrations/webhook_events.md#deployment-events) | | `deployment` | A deployment starts or finishes. |
| [Work item event](../../user/project/integrations/webhook_events.md#work-item-events) | ✓ | `issue` | An issue is created, updated, or closed. |
| [Confidential issue event](../../user/project/integrations/webhook_events.md#work-item-events) | ✓ | `confidential_issue` | A confidential issue is created, updated, or closed. |
| [Job event](../../user/project/integrations/webhook_events.md#job-events) | | `job` | |
| [Merge request event](../../user/project/integrations/webhook_events.md#merge-request-events) | ✓ | `merge_request` | A merge request is created, updated, or merged. |
| [Comment event](../../user/project/integrations/webhook_events.md#comment-events) | | `comment` | A new comment is added. |
| [Confidential comment event](../../user/project/integrations/webhook_events.md#comment-events) | | `confidential_note` | A new comment on a confidential issue is added. |
| [Pipeline event](../../user/project/integrations/webhook_events.md#pipeline-events) | | `pipeline` | A pipeline status changes. |
| [Push event](../../user/project/integrations/webhook_events.md#push-events) | ✓ | `push` | A push is made to the repository. |
| [Tag push event](../../user/project/integrations/webhook_events.md#tag-events) | ✓ | `tag_push` | New tags are pushed to the repository. |
| Vulnerability event | | `vulnerability` | A new, unique vulnerability is recorded. Ultimate only. |
| [Wiki page event](../../user/project/integrations/webhook_events.md#wiki-page-events) | ✓ | `wiki_page` | A wiki page is created or updated. |
#### Event examples
This example defines an integration that responds to `commit` and `merge_request` events:
```ruby
module Integrations
class FooBar < Integration
def self.supported_events
%w[commit merge_request]
end
end
end
```
An integration can also respond to no events at all, and implement custom functionality some other way:
```ruby
module Integrations
class FooBar < Integration
def self.supported_events
[]
end
end
end
```
### Define event attribute defaults
Integrations have a problem, tracked in [issue #382999](https://gitlab.com/gitlab-org/gitlab/-/issues/382999),
where due to the default for most
[event attributes](https://gitlab.com/gitlab-org/gitlab/-/blob/cd5edf7d6fe31db22d0f3a024ee1c704d817535b/app/models/concerns/integrations/base/integration.rb#L490-504)
being `true`, we load integrations more frequently than necessary.
Until we address that issue, integrations must define all event `attribute` properties in the following way:
- For notification integrations (ones that include `Integrations::Base::ChatNotification`), set all event attributes to `false`.
This presents a form with checkboxes per event trigger that are unchecked by default.
- For other integrations:
- Set event attributes that match the integration's [trigger events](#define-trigger-events) to `true`.
- Set all other event `attributes` to `false`.
For example, an integration that responds to only commit and merge request [trigger events](#define-trigger-events) should set its event attributes as below:
```ruby
attribute :commit_events, default: true
attribute :merge_requests_events, default: true
attribute :alert_events, default: false
attribute :incident_events, default: false
attribute :confidential_issues_events, default: false
attribute :confidential_note_events, default: false
attribute :issues_events, default: false
attribute :job_events, default: false
attribute :note_events, default: false
attribute :pipeline_events, default: false
attribute :push_events, default: false
attribute :tag_push_events, default: false
attribute :wiki_page_events, default: false
```
#### Changing event attribute defaults
If an event attribute for an existing integration changes to `true`,
this requires a data migration to back-fill the attribute value for old records.
### Define metrics
Every new integration should have five [metrics](../internal_analytics/metrics/_index.md):
- Count of active projects with the given integration
- Count of active projects inheriting the given integration
- Count of active groups with the given integration
- Count of active groups inheriting the given integration
- Count of active instance-level integrations for the given integration
Metrics require the integration's model class to exist, so you can add metrics only together with or after the model.
To create metric definitions:
1. Copy the metrics created for an existing active integration.
1. Replace all occurrences of the previous integration's name with the new integration's name.
1. Replace `milestone` with the current milestone and `introduced_by_url` with the merge request link.
1. Verify all other attributes have correct values by checking the [metrics guide](../internal_analytics/metrics/metrics_dictionary.md#metrics-definition-and-validation).
For example, to create metric definitions for the Slack integration, you copy these metrics, and
then replace `Slack` with the name of the new integration:
- [`20210216180122_projects_slack_active.yml`](https://gitlab.com/gitlab-org/gitlab/blob/master/config/metrics/counts_all/20210216180122_projects_slack_active.yml)
- [`20210216180124_groups_slack_active.yml`](https://gitlab.com/gitlab-org/gitlab/blob/master/config/metrics/counts_all/20210216180124_groups_slack_active.yml)
- [`20210216180127_instances_slack_active.yml`](https://gitlab.com/gitlab-org/gitlab/blob/master/config/metrics/counts_all/20210216180127_instances_slack_active.yml)
- [`20210216180131_groups_inheriting_slack_active.yml`](https://gitlab.com/gitlab-org/gitlab/blob/master/config/metrics/counts_all/20210216180131_groups_inheriting_slack_active.yml)
- [`20210216180129_projects_inheriting_slack_active.yml`](https://gitlab.com/gitlab-org/gitlab/blob/master/config/metrics/counts_all/20210216180129_projects_inheriting_slack_active.yml)
### Security requirements
#### All HTTP calls must use `Integrations::Clients::HTTP`
Integrations must always make HTTP calls using `Integrations::Clients::HTTP` (see the sketch after this list), which:
- Ensures that [network settings](../../security/webhooks.md) are enforced for HTTP calls.
- Has additional [security hardening](../../security/webhooks.md#enforce-dns-rebinding-attack-protection) features.
- Is our single source of truth for making secure HTTP calls.
- Ensures that all response sizes are validated.
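As a sketch only, a call inside an integration's `#execute` method could look like the following. It assumes a `Gitlab::HTTP`-style interface (`.post(url, headers:, body:)`); check `Integrations::Clients::HTTP` for the exact methods and signatures, and note that `webhook_url` is a hypothetical field:

```ruby
module Integrations
  class FooBar < Integration
    def execute(data)
      # Sketch: verify the actual interface of Integrations::Clients::HTTP before relying on it.
      Integrations::Clients::HTTP.post(
        webhook_url, # hypothetical field defined with `field :webhook_url`
        headers: { 'Content-Type' => 'application/json' },
        body: Gitlab::Json.dump(data)
      )
    end
  end
end
```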
#### Masking channel values
Integrations that [include from `Integrations::Base::ChatNotification`](#define-the-integration) can hide the
values of their channel input fields. Integrations should hide these values whenever the
fields contain sensitive information such as auth tokens.
By default, `#mask_configurable_channels?` returns `false`. To mask the channel values, override the `#mask_configurable_channels?` method in the integration to return `true`:
```ruby
override :mask_configurable_channels?
def mask_configurable_channels?
true
end
```
## No Ruby gems that make HTTP calls
GitLab integrations must not add Ruby gems that make HTTP calls.
Other gems that add small abstractions should also not be added.
Certain utility-like gems from official sources, like the `atlassian-jwt` gem, can be used if required.
Gems that wrap interactions with third-party services may look convenient at first glance,
but they offer minimal benefit compared to the costs involved:
- They increase the potential surface area of security problems and the effort required to fix them.
- Often these gems make HTTP calls on your behalf. As integrations can make HTTP calls to remote
servers configured by users, it is critical that we
[fully control the network calls](#all-http-calls-must-use-integrationsclientshttp).
- There is a maintenance cost of managing gem upgrades.
- They can block us from using newer features.
## Define configuration test
Optionally, you can define a configuration test of an integration's settings. The test is executed from the integration form's **Test** button, and results are returned to the user.
A good configuration test:
- Does not change data on the service. For example, it should not trigger a CI build. Sending a message is okay.
- Is meaningful and as thorough as possible.
If it's not possible to follow the above guidelines, consider not adding a configuration test.
To add a configuration test, define a `#test` method for the integration model.
The method receives `data`, which is a test push event payload.
It should return a hash, containing the keys:
- `success` (required): a boolean to indicate if the configuration test has passed.
- `result` (optional): a message returned to the user if the configuration test has failed.
For example:
```ruby
module Integrations
class FooBar < Integration
def test(data)
success = test_api_key(data)
{ success: success, result: 'API key is invalid' }
end
end
end
```
## Customize the frontend form
The frontend form is generated dynamically based on metadata defined in the model.
By default, the integration form provides:
- A checkbox to enable or disable the integration.
- Checkboxes for each of the trigger events returned from `Integration#configurable_events`.
You can also add help text at the top of the form by either overriding `Integration#help`,
or providing a template in `app/views/shared/integrations/$INTEGRATION_NAME/_help.html.haml`.
To add your custom properties to the form, you can define the metadata for them in `Integration#fields`.
This method should return an array of hashes for each field, where the keys can be the following (a combined example appears after the key tables below):
| Key | Type | Required | Default | Description |
|:---------------|:------------------|:---------|:-----------------------------|:------------|
| `type:` | symbol | true | `:text` | The type of the form field. Can be `:text`, `:number`, `:textarea`, `:password`, `:checkbox`, `:string_array` or `:select`. |
| `section:` | symbol | false | | Specify which section the field belongs to. |
| `name:` | string | true | | The property name for the form field. |
| `required:` | boolean | false | `false` | Specify if the form field is required or optional. Note [backend validations](#define-validations) for presence are still needed. |
| `title:` | string | false | Capitalized value of `name:` | The label for the form field. |
| `placeholder:` | string | false | | A placeholder for the form field. |
| `help:` | string | false | | A help text that displays below the form field. |
| `api_only:` | boolean | false | `false` | Specify if the field should only be available through the API, and excluded from the frontend form. |
| `description` | string | false | | Description of the API field. |
| `if:` | boolean or lambda | false | `true` | Specify if the field should be available. The value can be a boolean or a lambda. |
### Additional keys for `type: :checkbox`
| Key | Type | Required | Default | Description |
|:------------------|:-------|:---------|:------------------|:------------|
| `checkbox_label:` | string | false | Value of `title:` | A custom label that displays next to the checkbox. |
### Additional keys for `type: :select`
| Key | Type | Required | Default | Description |
|:-----------|:------|:---------|:--------|:------------|
| `choices:` | array | true | | A nested array of `[label, value]` tuples. |
### Additional keys for `type: :password`
| Key | Type | Required | Default | Description |
|:----------------------------|:-------|:---------|:------------------|:------------|
| `non_empty_password_title:` | string | false | Value of `title:` | An alternative label that displays when a value is already stored. |
| `non_empty_password_help:` | string | false | Value of `help:` | An alternative help text that displays when a value is already stored. |
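To illustrate how these keys combine, here is a hypothetical set of field definitions (field names and strings are made up):

```ruby
field :channel,
  type: :select,
  title: s_('FooBarIntegration|Channel'),
  choices: [
    [s_('FooBarIntegration|Alerts'), 'alerts'],
    [s_('FooBarIntegration|Deployments'), 'deployments']
  ],
  help: s_('FooBarIntegration|Where notifications are posted.')

field :verify_ssl,
  type: :checkbox,
  checkbox_label: s_('FooBarIntegration|Verify SSL certificates')

field :api_token,
  type: :password,
  title: s_('FooBarIntegration|API token'),
  non_empty_password_title: s_('FooBarIntegration|Enter new API token'),
  required: true
```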
### Define sections
All integrations should define `Integration#sections` which split the form into smaller sections,
making it easier for users to set up the integration.
The most commonly used sections are pre-defined and already include some UI:
- `SECTION_TYPE_CONNECTION`: Contains basic fields like `url`, `username`, `password` that are required to connect to and authenticate with the integration.
- `SECTION_TYPE_CONFIGURATION`: Contains more advanced configuration and optional settings around how the integration works.
- `SECTION_TYPE_TRIGGER`: Contains a list of events which will trigger an integration.
`SECTION_TYPE_CONNECTION` and `SECTION_TYPE_CONFIGURATION` render the `dynamic-field` component internally.
The `dynamic-field` component renders a `checkbox`, `number`, `input`, `select`, or `textarea` type for the integration.
For example:
```ruby
module Integrations
class FooBar < Integration
def sections
[
{
type: SECTION_TYPE_CONNECTION,
title: s_('Integrations|Connection details'),
description: help
},
{
type: SECTION_TYPE_CONFIGURATION,
title: _('Configuration'),
description: s_('Advanced configuration for integration')
}
]
end
end
end
```
To add fields to a specific section, you can add the `section:` key to the field metadata.
#### New custom sections
If the existing sections do not meet your requirements for UI customization, you can create new custom sections:
1. Add a new section by adding a new constant `SECTION_TYPE_*` and add it to the `#sections` method:
```ruby
module Integrations
class FooBar < Integration
SECTION_TYPE_SUPER = :my_custom_section
def sections
[
{
type: SECTION_TYPE_SUPER,
title: s_('Integrations|Custom section'),
description: s_('Integrations|Help')
}
]
end
end
end
```
1. Update the frontend constants `integrationFormSections` and `integrationFormSectionComponents` in `~/integrations/constants.js`.
1. Add your new section component in `app/assets/javascripts/integrations/edit/components/sections/*`.
1. Include and render the new section in `app/assets/javascripts/integrations/edit/components/integration_forms/section.vue`.
### Frontend form examples
This example defines a required `url` field, and optional `username` and `password` fields, all under the `Connection details` section:
```ruby
module Integrations
class FooBar < Integration
field :url,
section: SECTION_TYPE_CONNECTION,
type: :text,
title: s_('FooBarIntegration|Server URL'),
placeholder: 'https://example.com/',
required: true
field :username,
section: SECTION_TYPE_CONNECTION,
type: :text,
title: s_('FooBarIntegration|Username')
field :password,
section: SECTION_TYPE_CONNECTION,
      type: :password,
      title: s_('FooBarIntegration|Password'),
non_empty_password_title: s_('FooBarIntegration|Enter new password')
def sections
[
{
type: SECTION_TYPE_CONNECTION,
title: s_('Integrations|Connection details'),
description: s_('Integrations|Help')
}
]
end
end
end
```
## Expose the integration in the REST API
To expose the integration in the [REST API](../../api/project_integrations.md):
1. Add the integration's class (`::Integrations::FooBar`) to `API::Helpers::IntegrationsHelpers.integration_classes`.
1. Add the integration's API arguments to `API::Helpers::IntegrationsHelpers.integrations`, for example:
```ruby
'foo-bar' => ::Integrations::FooBar.api_arguments
```
1. Update the reference documentation in `doc/api/project_integrations.md` and `doc/api/group_integrations.md`, add a new section for your integration, and document all properties.
You can also refer to our [REST API style guide](../api_styleguide.md).
Sensitive fields are not exposed over the API. Sensitive fields are those fields that contain any of the following in their name:
- `key`
- `passphrase`
- `password`
- `secret`
- `token`
- `webhook`
## Availability of integrations
By default, integrations can apply to a specific project or group, or
to an entire instance.
Most integrations only act in a project context, but can still be configured
at the group and instance levels.
For some integrations, it can make sense to make them available only on certain levels (project, group, or instance).
To do that, the integration must be removed from `Integration::INTEGRATION_NAMES` and instead added to:
- `Integration::PROJECT_LEVEL_ONLY_INTEGRATION_NAMES` to only allow enabling on the project level.
- `Integration::INSTANCE_LEVEL_ONLY_INTEGRATION_NAMES` to only allow enabling on the instance level.
- `Integration::PROJECT_AND_GROUP_LEVEL_ONLY_INTEGRATION_NAMES` to prevent enabling on the instance level.
When developing a new integration, we also recommend you gate the availability behind a
[feature flag](../feature_flags/_index.md) in `Integration.available_integration_names`.
## Documentation
Add documentation for the integration:
- Add a page in `doc/user/project/integrations`.
- Link it from the [Integrations overview](../../user/project/integrations/_index.md).
- After the documentation has merged, [add an entry](../documentation/site_architecture/global_nav.md#add-a-navigation-entry)
to the documentation navigation under [Integrations](https://gitlab.com/gitlab-org/technical-writing/docs-gitlab-com/-/blob/main/data/en-us/navigation.yaml?ref_type=heads#L2936).
You can also refer to our general [documentation guidelines](../documentation/_index.md).
You can provide help text in the integration form, including links to off-site documentation,
as described above in [Customize the frontend form](#customize-the-frontend-form). Refer to
our [usability guidelines](https://design.gitlab.com/patterns/contextual-help) for help text.
## Testing
Testing should not be confused with [defining configuration tests](#define-configuration-test).
It is often sufficient to add tests for the integration model in `spec/models/integrations`,
and a factory with example settings in `spec/factories/integrations.rb`.
Each integration is also tested as part of generalized tests. For example, there are feature specs
that verify that the settings form is rendering correctly for all integrations.
If your integration implements any custom behavior, especially in the frontend, this should be
covered by additional tests.
You can also refer to our general [testing guidelines](../testing_guide/_index.md).
## Internationalization
All UI strings should be prepared for translation by following our [internationalization guidelines](../i18n/externalization.md).
The strings should use the integration name as [namespace](../i18n/externalization.md#namespaces), for example, `s_('FooBarIntegration|My string')`.
## Deprecate and remove an integration
To remove an integration, you must first deprecate the integration. For more information,
see the [feature deprecation guidelines](../deprecation_guidelines/_index.md).
### Deprecate an integration
You must announce any deprecation no later than the third milestone preceding intended removal.
To deprecate an integration:
- [Add a deprecation entry](../deprecation_guidelines/_index.md#update-the-deprecations-and-removals-documentation).
- [Mark the integration documentation as deprecated](../documentation/styleguide/deprecations_and_removals.md).
- Optional. To prevent any new project-level records from
being created, add the integration to `Project#disabled_integrations` (see [example merge request](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/114835)).
### Remove an integration
To safely remove an integration, you must stage the removal across two milestones.
In the major milestone of intended removal (M.0), disable the integration and delete the records from the database:
- Remove the integration from `Integration::INTEGRATION_NAMES`.
- Delete the integration model's `#execute` and `#test` methods (if defined), but keep the model.
- Add a post-migration to delete the integration records from PostgreSQL (see [example merge request](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/114721)).
- [Mark the integration documentation as removed](../documentation/styleguide/deprecations_and_removals.md#remove-a-page).
- Update the [project](../../api/project_integrations.md) and [group](../../api/group_integrations.md) integrations API pages.
In the next minor release (M.1):
- Remove the integration's model and any remaining code.
- Close any issues, merge requests, and epics that have the integration's label (`~Integration::<name>`).
- Delete the integration's label (`~Integration::<name>`) from `gitlab-org`.
## Ongoing migrations and refactorings
Developers should be aware that the Integrations team is in the process of
[unifying the way integration properties are defined](https://gitlab.com/groups/gitlab-org/-/epics/3955).
## Integration examples
You can refer to these issues for examples of adding new integrations:
- [Datadog](https://gitlab.com/gitlab-org/gitlab/-/issues/270123): Metrics collector, similar to the Prometheus integration.
- [EWM/RTC](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/36662): External issue tracker.
- [Webex Teams](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/31543): Chat notifications.
- [ZenTao](https://gitlab.com/gitlab-org/gitlab/-/issues/338178): External issue tracker with custom issue views, similar to the Jira issues integration.
---
stage: Create
group: Import
info: Any user with at least the Maintainer role can merge updates to this content. For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: How to run Jenkins in development environment (on macOS)
---
This is a step-by-step guide to setting up [Jenkins](https://www.jenkins.io/) on your local machine and connecting to it from your GitLab instance. GitLab triggers webhooks on Jenkins, and Jenkins connects to GitLab through the API. Running both applications on the same machine ensures they can access each other.
For configuring an existing Jenkins integration, read [Jenkins CI service](../../integration/jenkins.md).
## Install Jenkins
Install Jenkins and start the service using Homebrew.
```shell
brew install jenkins
brew services start jenkins
```
## Configure GitLab
GitLab does not allow requests to localhost or the local network by default. When running Jenkins on your local machine, you need to enable local access.
1. Sign in to your GitLab instance as an administrator.
1. On the left sidebar, at the bottom, select **Admin**.
1. Select **Settings > Network**.
1. Expand **Outbound requests**, and select the following checkboxes:
- **Allow requests to the local network from webhooks and integrations**
- **Allow requests to the local network from system hooks**
For more details about GitLab webhooks, see [Webhooks and insecure internal web services](../../security/webhooks.md).
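If you run GitLab through the GDK, you can also toggle these settings from the Rails console (`gdk rails c`) instead of the UI. The following is a minimal sketch; the attribute names are assumed to match the current `ApplicationSetting` columns and may differ between GitLab versions.

```ruby
# Run inside the Rails console (`gdk rails c`).
# Attribute names are assumptions; verify them against your GitLab version.
settings = Gitlab::CurrentSettings.current_application_settings

settings.update!(
  allow_local_requests_from_web_hooks_and_services: true, # webhooks and integrations
  allow_local_requests_from_system_hooks: true            # system hooks
)
```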
Jenkins uses the GitLab API and needs an access token.
1. Sign in to your GitLab instance.
1. Select your profile picture, then select **Settings**.
1. Select **Access tokens**.
1. Create a new access token with the **API** scope enabled, and note its value.
## Configure Jenkins
To configure your GitLab API connection in Jenkins, read
[Configure the Jenkins server](../../integration/jenkins.md#configure-the-jenkins-server).
## Configure Jenkins Project
To set up the Jenkins project you intend to run your build on, read
[Configure the Jenkins project](../../integration/jenkins.md#configure-the-jenkins-project).
## Configure your GitLab project
You can configure your integration between Jenkins and GitLab:
- With the [recommended approach for Jenkins integration](../../integration/jenkins.md#with-a-jenkins-server-url).
- [Using a webhook](../../integration/jenkins.md#with-a-webhook).
## Test your setup
Make a change in your repository and open a merge request. This should trigger a new build in your Jenkins project, and a widget saying **Pipeline #NUMBER passed** should appear on your merge request.
It should also include a link to your Jenkins build.
---
stage: SaaS Platforms
group: Scalability
info: Any user with at least the Maintainer role can merge updates to this content. For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: 'Uploads guide: Adding new uploads'
---
## Recommendations
- When creating an uploader, [make it a subclass](#where-should-you-store-your-files) of `AttachmentUploader`
- Add your uploader to the [tables](#tables) in this document
- Do not add [new object storage buckets](#where-should-you-store-your-files)
- Implement [direct upload](#implementing-direct-upload-support)
- If you need to process your uploads, decide [where to do that](#processing-uploads)
## Background information
- [CarrierWave Uploaders](#carrierwave-uploaders)
- [GitLab modifications to CarrierWave](#gitlab-modifications-to-carrierwave)
## Where should you store your files?
CarrierWave Uploaders determine where files get
stored. When you create a new Uploader class you are deciding where to store the files of your new
feature.
First of all, ask yourself if you need a new Uploader class. It is OK
to use the same Uploader class for different mount points or different
models.
If you do want or need your own Uploader class then you should make it
a **subclass of `AttachmentUploader`**. You then inherit the storage
location and directory scheme from that class. The directory scheme
is:
```ruby
File.join(model.class.underscore, mounted_as.to_s, model.id.to_s)
```
If you look around in the GitLab code base you find quite a few
Uploaders that have their own storage location. For object storage,
this means Uploaders have their own buckets. We now **discourage**
adding new buckets for the following reasons:
- Using a new bucket adds to development time because you need to make downstream changes in [GDK](https://gitlab.com/gitlab-org/gitlab-development-kit), [Omnibus GitLab](https://gitlab.com/gitlab-org/omnibus-gitlab) and [CNG](https://gitlab.com/gitlab-org/build/CNG).
- Using a new bucket requires GitLab.com Infrastructure changes, which slows down the roll-out of your new feature.
- Using a new bucket slows down adoption of your new feature for GitLab Self-Managed: people cannot start using your new feature until their local GitLab administrator has configured the new bucket.
By using an existing bucket you avoid all this extra work
and friction. The `Gitlab.config.uploads` storage location, which is what
`AttachmentUploader` uses, is guaranteed to already be configured.
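For illustration, a new uploader that reuses the existing bucket can be as small as the sketch below. The class name and the allowlist are hypothetical, and the validation method name depends on your CarrierWave version (older releases call it `extension_whitelist`).

```ruby
# Hypothetical uploader: inherits the storage location and directory
# scheme from AttachmentUploader, so no new bucket is needed.
class FooBarReportUploader < AttachmentUploader
  # Optional filename validation; method name varies by CarrierWave version.
  def extension_allowlist
    %w[csv json]
  end
end
```

You then mount this class on a model column like any other CarrierWave uploader.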
## Implementing Direct Upload support
Below we outline how to implement [direct upload](#direct-upload-via-workhorse) support.
Using direct upload is not always necessary but it is usually a good
idea. Unless the uploads handled by your feature are both infrequent
and small, you probably want to implement direct upload. An example of
a feature with small and infrequent uploads is project avatars: these
rarely change and the application imposes strict size limits on them.
If your feature handles uploads that are not both infrequent and small,
then not implementing direct upload support means that you are taking on
technical debt. At the very least, you should make sure that you _can_
add direct upload support later.
To support Direct Upload you need two things:
1. A pre-authorization endpoint in Rails
1. A Workhorse routing rule
Workhorse does not know where to store your upload. To find out it
makes a pre-authorization request. It also does not know whether or
where to make a pre-authorization request. For that you need the
routing rule.
A note for those who remember that
[Workhorse used to be a separate project](https://gitlab.com/groups/gitlab-org/-/epics/4826):
it is no longer necessary to split these two steps into separate merge
requests. In fact, it is probably easier to do both in one merge
request.
### Adding a Workhorse routing rule
Routing rules are defined in
[workhorse/internal/upstream/routes.go](https://gitlab.com/gitlab-org/gitlab/-/blob/adf99b5327700cf34a845626481d7d6fcc454e57/workhorse/internal/upstream/routes.go).
They consist of:
- An HTTP verb (usually "POST" or "PUT")
- A path regular expression
- An upload type: MIME multipart or "full request body"
- Optionally, you can also match on HTTP headers like `Content-Type`
Example:
```go
u.route("PUT", apiProjectPattern+`packages/nuget/`, mimeMultipartUploader),
```
You should add a test for your routing rule to `TestAcceleratedUpload`
in
[workhorse/upload_test.go](https://gitlab.com/gitlab-org/gitlab/-/blob/adf99b5327700cf34a845626481d7d6fcc454e57/workhorse/upload_test.go).
You should also manually verify that when you perform an upload
request for your new feature, Workhorse makes a pre-authorization
request. You can check this by looking at the Rails access logs. This
is necessary because if you make a mistake in your routing rule you
don't get a hard failure: you just end up using the less efficient
default path.
### Adding a pre-authorization endpoint
We distinguish three cases: Rails controllers, Grape API endpoints and
GraphQL resources.
To start with the bad news: direct upload for GraphQL is currently not
supported. The reason for this is that Workhorse does not parse
GraphQL queries. Also see [issue #280819](https://gitlab.com/gitlab-org/gitlab/-/issues/280819).
Consider accepting your file upload via Grape instead.
For Grape pre-authorization endpoints, look for existing examples that
implement `/authorize` routes. One example is the
[POST `:id/uploads/authorize` endpoint](https://gitlab.com/gitlab-org/gitlab/-/blob/9ad53d623eecebb799ce89eada951e4f4a59c116/lib/api/projects.rb#L642-651).
This particular example uses `FileUploader`, which means
that the upload is stored in the storage location (bucket) of
that Uploader class.
For Rails endpoints you can use the
[WorkhorseAuthorization concern](https://gitlab.com/gitlab-org/gitlab/-/blob/adf99b5327700cf34a845626481d7d6fcc454e57/app/controllers/concerns/workhorse_authorization.rb).
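As a rough sketch of the Grape pattern (not a copy of a real endpoint), a pre-authorization route usually verifies that the request came from Workhorse and then returns upload instructions generated by the uploader class. The route path, uploader class, and size limit below are all illustrative.

```ruby
# Hypothetical Grape route, loosely modeled on existing `/authorize` endpoints.
post ':id/foo_bar_reports/authorize' do
  require_gitlab_workhorse!

  status 200
  content_type Gitlab::Workhorse::INTERNAL_API_CONTENT_TYPE

  # Tells Workhorse where to store the temporary upload.
  # The 10 MB limit is an arbitrary example value.
  FooBarReportUploader.workhorse_authorize(has_length: false, maximum_size: 10.megabytes)
end
```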
## Processing uploads
Some features require us to process uploads, for example to extract
metadata from the uploaded file. There are a couple of different ways
you can implement this. The main choice is where to implement the
processing, or "who is the processor".
| Processor | Direct Upload possible? | Can reject HTTP request? | Implementation |
|-----------|-------------------------|--------------------------|----------------|
| Sidekiq | yes | no | Straightforward |
| Workhorse | yes | yes | Complex |
| Rails | no | yes | Easy |
Processing in Rails looks appealing but it tends to lead to scaling
problems down the road because you cannot use direct upload. You are
then forced to rebuild your feature with processing in Workhorse. So
if the requirements of your feature allow it, doing the processing in
Sidekiq strikes a good balance between complexity and the ability to
scale.
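
As a rough sketch of the Sidekiq approach, the hypothetical worker below extracts metadata after the upload record has been persisted. The worker, model, and `extract_metadata!` method are made-up names for illustration, not existing GitLab code.

```ruby
# Hypothetical example: process an upload asynchronously in Sidekiq.
class ProcessMyUploadWorker
  include Sidekiq::Worker

  def perform(upload_id)
    upload = MyUpload.find_by(id: upload_id)
    return unless upload # the record may have been deleted in the meantime

    # Read the stored file (local or remote) and derive whatever metadata we need.
    upload.extract_metadata!(upload.file.read)
  end
end
```

The upload flow would enqueue this with something like `ProcessMyUploadWorker.perform_async(upload.id)` once the record is saved. Because the processing happens after the request has already completed, this approach is compatible with direct upload, but, as the table above notes, it cannot reject the HTTP request.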
## CarrierWave Uploaders
GitLab uses a modified version of
[CarrierWave](https://github.com/carrierwaveuploader/carrierwave) to
manage uploads. Below we describe how we use CarrierWave and how
we modified it.
The central concept of CarrierWave is the **Uploader** class. The
Uploader defines where files get stored, and optionally contains
validation and processing logic. To use an Uploader you must associate
it with a text column on an ActiveRecord model. This is called "mounting"
and the column is called `mountpoint`. For example:
```ruby
class Project < ApplicationRecord
mount_uploader :avatar, AttachmentUploader
end
```
Now if you upload an avatar called `tanuki.png` the idea is that in the
`projects.avatar` column for your project, CarrierWave stores the string
`tanuki.png`, and that the AttachmentUploader class contains the
configuration data and directory schema. For example if the project ID
is 123, the actual file may be in
`/var/opt/gitlab/gitlab-rails/uploads/-/system/project/avatar/123/tanuki.png`.
The directory
`/var/opt/gitlab/gitlab-rails/uploads/-/system/project/avatar/123/`
was chosen by the Uploader using, among other things, the configured storage path
(`/var/opt/gitlab/gitlab-rails/uploads`), the model name (`project`),
the model ID (`123`), and the mount point (`avatar`).
> The Uploader determines the individual storage directory of your
> upload. The `mountpoint` column in your model contains the filename.
You never access the `mountpoint` column directly because CarrierWave
defines a getter and setter on your model that operates on file handle
objects.
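
Continuing the avatar example, this is roughly what that looks like in practice (illustrative only):

```ruby
# The mounted column is accessed through the Uploader, not read or written directly.
project.avatar = File.open('/tmp/tanuki.png') # the setter accepts a file handle
project.save!

project.avatar.url              # URL or path derived by the Uploader
project.avatar.filename         # => "tanuki.png"
project.read_attribute(:avatar) # the raw column value; rarely needed directly
```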
### Optional Uploader behaviors
Besides determining the storage directory for your upload, a
CarrierWave Uploader can implement several other behaviors via
callbacks. Not all of these behaviors are usable in GitLab. In
particular, you currently cannot use the `version` mechanism of
CarrierWave. Things you can do include:
- Filename validation
- **Incompatible with direct upload**: One time pre-processing of file contents, for example, image resizing
- **Incompatible with direct upload**: Encryption at rest
CarrierWave pre-processing behaviors such as image resizing
or encryption require local access to the uploaded file. This forces
you to upload the processed file from Ruby. This runs counter to direct
upload, which is all about not doing the upload in Ruby. If you use
direct upload with an Uploader with pre-processing behaviors then the
pre-processing behaviors are skipped silently.
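
For illustration, a hypothetical Uploader with such a pre-processing behavior might look like the following; the class name is made up and `CarrierWave::MiniMagick` is assumed to be available:

```ruby
# `process` registers a `before :cache` hook, so with direct upload this
# resizing step never runs.
class SmallAvatarUploader < CarrierWave::Uploader::Base
  include CarrierWave::MiniMagick

  process resize_to_fill: [64, 64]
end
```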
### CarrierWave Storage engines
CarrierWave has 2 storage engines:
| CarrierWave class | GitLab name | Description |
|------------------------------|--------------------------------|-------------|
| `CarrierWave::Storage::File` | `ObjectStorage::Store::LOCAL` | Local files, accessed through the Ruby `stdlib` |
| `CarrierWave::Storage::Fog` | `ObjectStorage::Store::REMOTE` | Cloud files, accessed through the [Fog gem](https://github.com/fog/fog) |
GitLab uses both of these engines, depending on configuration.
The typical way to choose a storage engine in CarrierWave is to use the
`Uploader.storage` class method. In GitLab we do not do this; we have
overridden `Uploader#storage` instead. This allows us to vary the
storage engine file by file.
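
Conceptually, the override looks something like this simplified sketch; it is not the actual GitLab code, which lives in `app/uploaders/object_storage.rb`:

```ruby
# Simplified sketch of per-file storage selection (not the real implementation).
class MyUploader < CarrierWave::Uploader::Base
  def storage
    # `object_store` would be backed by a database column such as
    # `uploads.store` or `ci_job_artifacts.file_store`.
    if object_store == ObjectStorage::Store::REMOTE
      CarrierWave::Storage::Fog.new(self)
    else
      CarrierWave::Storage::File.new(self)
    end
  end
end
```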
### CarrierWave file lifecycle
An Uploader is associated with two storage areas: regular storage and
cache storage. Each has its own storage engine. If you assign a file
to a mount point setter (`project.avatar = File.open('/tmp/tanuki.png')`),
the file is copied or moved to cache storage as a side effect via the
`cache!` method. To persist the file
you must somehow call the `store!` method. This either happens via
[ActiveRecord callbacks](https://github.com/carrierwaveuploader/carrierwave/blob/v1.3.2/lib/carrierwave/orm/activerecord.rb#L55)
or by calling `store!` on an Uploader instance.
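
Roughly, the lifecycle boils down to the following (illustrative only; normally the ActiveRecord callbacks drive this for you):

```ruby
uploader = project.avatar                     # the mounted Uploader instance
uploader.cache!(File.open('/tmp/tanuki.png')) # copy the file into cache storage
uploader.store!                               # persist it to regular storage
```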
Typically you do not need to interact with `cache!` and `store!` but if
you need to debug GitLab CarrierWave modifications it is useful to
know that they are there and that they always get called.
Specifically, it is good to know that CarrierWave pre-processing
behaviors (`process` etc.) are implemented as `before :cache` hooks,
and in the case of direct upload, these hooks are ignored and do not
run.
> Direct upload skips all CarrierWave `before :cache` hooks.
## GitLab modifications to CarrierWave
GitLab uses a modified version of CarrierWave to make a number of things possible.
### Migrating data between storage engines
In
[app/uploaders/object_storage.rb](https://gitlab.com/gitlab-org/gitlab/-/blob/adf99b5327700cf34a845626481d7d6fcc454e57/app/uploaders/object_storage.rb)
there is code for migrating user data between local storage and object
storage. This code exists because for a long time, GitLab.com stored
uploads on local storage via NFS. This changed when, as part of an infrastructure
migration, we had to move the uploads to object storage.
This is why the CarrierWave `storage` varies from upload to upload in
GitLab, and why we have database columns like `uploads.store` or
`ci_job_artifacts.file_store`.
### Direct Upload via Workhorse
Workhorse direct upload is a mechanism that lets us accept large
uploads without spending a lot of Ruby CPU time. Workhorse is written
in Go and goroutines have a much lower resource footprint than Ruby
threads.
Direct upload works as follows.
1. Workhorse accepts a user upload request
1. Workhorse pre-authenticates the request with Rails, and receives a temporary upload location
1. Workhorse stores the file from the user's request in the temporary upload location
1. Workhorse propagates the request to Rails
1. Rails issues a remote copy operation to copy the uploaded file from its temporary location to the final location
1. Rails deletes the temporary upload
1. Workhorse deletes the temporary upload a second time in case Rails timed out
Typically, `cache!` returns an instance of
`CarrierWave::SanitizedFile`, and `store!` then
[uploads that file using Fog](https://github.com/carrierwaveuploader/carrierwave/blob/v1.3.2/lib/carrierwave/storage/fog.rb#L327-L335).
In the case of object storage, with the modifications specific to GitLab, the
copying from the temporary location to the final location is
implemented by Rails fooling CarrierWave. When CarrierWave tries to
`cache!` the upload, we
[return](https://gitlab.com/gitlab-org/gitlab/-/blob/59b441d578e41cb177406a9799639e7a5aa9c7e1/app/uploaders/object_storage.rb#L367)
a `CarrierWave::Storage::Fog::File` file handle which points to the
temporary file. During the `store!` phase, CarrierWave then
[copies](https://github.com/carrierwaveuploader/carrierwave/blob/v1.3.2/lib/carrierwave/storage/fog.rb#L325)
this file to its intended location.
## Tables
The Scalability::Frameworks team is making object storage and uploads easier to use and more robust. If you add or change uploaders, it helps us if you update this table too, so we can keep an overview of where and how uploaders are used.
### Feature bucket details
| Feature | Upload technology | Uploader | Bucket structure |
|------------------------------------------|-------------------|-----------------------|-----------------------------------------------------------------------------------------------------------|
| Job artifacts | `direct upload` | `workhorse` | `/artifacts/<proj_id_hash>/<date>/<job_id>/<artifact_id>` |
| Pipeline artifacts | `carrierwave` | `sidekiq` | `/artifacts/<proj_id_hash>/pipelines/<pipeline_id>/artifacts/<artifact_id>` |
| Live job traces | `fog` | `sidekiq` | `/artifacts/tmp/builds/<job_id>/chunks/<chunk_index>.log` |
| Job traces archive | `carrierwave` | `sidekiq` | `/artifacts/<proj_id_hash>/<date>/<job_id>/<artifact_id>/job.log` |
| Autoscale runner caching | Not applicable | `gitlab-runner` | `/gitlab-com-[platform-]runners-cache/???` |
| Backups | Not applicable | `s3cmd`, `awscli`, or `gcs` | `/gitlab-backups/???` |
| Git LFS | `direct upload` | `workhorse` | `/lfs-objects/<lfs_obj_oid[0:2]>/<lfs_obj_oid[2:2]>` |
| Design management thumbnails | `carrierwave` | `sidekiq` | `/uploads/design_management/action/image_v432x230/<model_id>/<original_lfs_obj_oid[2:2]>` |
| Generic file uploads | `direct upload` | `workhorse` | `/uploads/@hashed/[0:2]/[2:4]/<hash1>/<hash2>/file` |
| Generic file uploads - personal snippets | `direct upload` | `workhorse` | `/uploads/personal_snippet/<snippet_id>/<filename>` |
| Global appearance settings | `disk buffering` | `rails controller` | `/uploads/appearance/...` |
| Topics | `disk buffering` | `rails controller` | `/uploads/projects/topic/...` |
| Avatar images | `direct upload` | `workhorse` | `/uploads/[user,group,project]/avatar/<model_id>` |
| Import | `direct upload` | `workhorse` | `/uploads/import_export_upload/import_file/<model_id>/<file_name>` |
| Export | `carrierwave` | `sidekiq` | `/uploads/import_export_upload/export_file/<model_id>/<timestamp>_<namespace>-<project_name>_export.tar.gz` |
| Placeholder reassignment CSVs | `direct upload` | `workhorse` | `/uploads/-/system/group/<model_id>/placeholder_reassignment_csv/<file_name>` |
| GitLab Migration | `carrierwave` | `sidekiq` | `/uploads/bulk_imports/???` |
| MR diffs | `carrierwave` | `sidekiq` | `/external-diffs/merge_request_diffs/mr-<mr_id>/diff-<diff_id>` |
| [Package manager assets (except for NPM)](../../user/packages/package_registry/_index.md) | `direct upload` | `workhorse` | `/packages/<proj_id_hash>/packages/<package_id>/files/<package_file_id>` |
| [NPM Package manager assets](../../user/packages/npm_registry/_index.md) | `carrierwave` | `grape API` | `/packages/<proj_id_hash>/packages/<package_id>/files/<package_file_id>` |
| [Debian Package manager assets](../../user/packages/debian_repository/_index.md) | `direct upload` | `workhorse` | `/packages/<group_id or project_id_hash>/debian_*/<group_id or project_id or distribution_file_id>` |
| [Dependency Proxy cache](../../user/packages/dependency_proxy/_index.md) | [`send_dependency`](https://gitlab.com/gitlab-org/gitlab/-/blob/6ed73615ff1261e6ed85c8f57181a65f5b4ffada/workhorse/internal/dependencyproxy/dependencyproxy.go) | `workhorse` | `/dependency-proxy/<group_id_hash>/dependency_proxy/<group_id>/files/<blob_id or manifest_id>` |
| Terraform state files | `carrierwave` | `rails controller` | `/terraform/<proj_id_hash>/<terraform_state_id>` |
| Pages content archives | `carrierwave` | `sidekiq` | `/gitlab-gprd-pages/<proj_id_hash>/pages_deployments/<deployment_id>/` |
| Secure Files | `carrierwave` | `sidekiq` | `/ci-secure-files/<proj_id_hash>/secure_files/<secure_file_id>/` |
### CarrierWave integration
| File | CarrierWave usage | Categorized |
|---------------------------------------------------------|----------------------------------------------------------------------------------|---------------------|
| `app/models/project.rb` | `include Avatarable` | {{< icon name="check-circle" >}} Yes |
| `app/models/projects/topic.rb` | `include Avatarable` | {{< icon name="check-circle" >}} Yes |
| `app/models/group.rb` | `include Avatarable` | {{< icon name="check-circle" >}} Yes |
| `app/models/user.rb` | `include Avatarable` | {{< icon name="check-circle" >}} Yes |
| `app/models/terraform/state_version.rb` | `include FileStoreMounter` | {{< icon name="check-circle" >}} Yes |
| `app/models/ci/job_artifact.rb` | `include FileStoreMounter` | {{< icon name="check-circle" >}} Yes |
| `app/models/ci/pipeline_artifact.rb` | `include FileStoreMounter` | {{< icon name="check-circle" >}} Yes |
| `app/models/pages_deployment.rb` | `include FileStoreMounter` | {{< icon name="check-circle" >}} Yes |
| `app/models/lfs_object.rb` | `include FileStoreMounter` | {{< icon name="check-circle" >}} Yes |
| `app/models/dependency_proxy/blob.rb` | `include FileStoreMounter` | {{< icon name="check-circle" >}} Yes |
| `app/models/dependency_proxy/manifest.rb` | `include FileStoreMounter` | {{< icon name="check-circle" >}} Yes |
| `app/models/packages/composer/cache_file.rb` | `include FileStoreMounter` | {{< icon name="check-circle" >}} Yes |
| `app/models/packages/package_file.rb` | `include FileStoreMounter` | {{< icon name="check-circle" >}} Yes |
| `app/models/concerns/packages/debian/component_file.rb` | `include FileStoreMounter` | {{< icon name="check-circle" >}} Yes |
| `ee/app/models/issuable_metric_image.rb` | `include FileStoreMounter` | |
| `ee/app/models/vulnerabilities/remediation.rb` | `include FileStoreMounter` | |
| `ee/app/models/vulnerabilities/export.rb` | `include FileStoreMounter` | |
| `app/models/packages/debian/project_distribution.rb` | `include Packages::Debian::Distribution` | {{< icon name="check-circle" >}} Yes |
| `app/models/packages/debian/group_distribution.rb` | `include Packages::Debian::Distribution` | {{< icon name="check-circle" >}} Yes |
| `app/models/packages/debian/project_component_file.rb` | `include Packages::Debian::ComponentFile` | {{< icon name="check-circle" >}} Yes |
| `app/models/packages/debian/group_component_file.rb` | `include Packages::Debian::ComponentFile` | {{< icon name="check-circle" >}} Yes |
| `app/models/merge_request_diff.rb` | `mount_uploader :external_diff, ExternalDiffUploader` | {{< icon name="check-circle" >}} Yes |
| `app/models/note.rb` | `mount_uploader :attachment, AttachmentUploader` | {{< icon name="check-circle" >}} Yes |
| `app/models/appearance.rb` | `mount_uploader :logo, AttachmentUploader` | {{< icon name="check-circle" >}} Yes |
| `app/models/appearance.rb` | `mount_uploader :header_logo, AttachmentUploader` | {{< icon name="check-circle" >}} Yes |
| `app/models/appearance.rb` | `mount_uploader :favicon, FaviconUploader` | {{< icon name="check-circle" >}} Yes |
| `app/models/project.rb` | `mount_uploader :bfg_object_map, AttachmentUploader` | |
| `app/models/import_export_upload.rb` | `mount_uploader :import_file, ImportExportUploader` | {{< icon name="check-circle" >}} Yes |
| `app/models/import_export_upload.rb` | `mount_uploader :export_file, ImportExportUploader` | {{< icon name="check-circle" >}} Yes |
| `app/models/ci/deleted_object.rb` | `mount_uploader :file, DeletedObjectUploader` | |
| `app/models/design_management/action.rb` | `mount_uploader :image_v432x230, DesignManagement::DesignV432x230Uploader` | {{< icon name="check-circle" >}} Yes |
| `app/models/concerns/packages/debian/distribution.rb` | `mount_uploader :signed_file, Packages::Debian::DistributionReleaseFileUploader` | {{< icon name="check-circle" >}} Yes |
| `app/models/bulk_imports/export_upload.rb` | `mount_uploader :export_file, ExportUploader` | {{< icon name="check-circle" >}} Yes |
| `ee/app/models/user_permission_export_upload.rb` | `mount_uploader :file, AttachmentUploader` | |
| `app/models/ci/secure_file.rb` | `include FileStoreMounter` | |
# Uploads development guidelines
Uploads are an integral part of many GitLab features. To understand how GitLab handles uploads, this page
provides an overview of the key mechanisms for transferring files to a storage destination.
GitLab uploads are configured by feature. All features that involve uploads provide the same configuration options,
but they can be configured independently of one another. For example, Git LFS uploads can be configured
independently of CI/CD build artifact uploads, but they both offer the same set of settings keys. These settings
govern how an upload is processed, which can have a dramatic impact on performance and scalability.
This page summarizes the upload settings that are important in deciding how such files are handled. The sections
that follow then describe each of these mechanisms in more detail.
## How upload settings drive upload flow
Before we examine individual upload strategies in more detail, let's examine a high-level
breakdown of which upload settings map to each of these strategies.
Upload settings themselves are documented in [Uploads administration](../../administration/uploads.md).
Here, we focus on how these settings drive the internals of GitLab upload logic.
At the top level, we distinguish between two **destinations** for uploaded files:
- [**Local storage**](#local-storage) - Files are stored on a volume attached to the web server node.
- [**Object storage**](#object-storage) - Files are stored in a remote object store bucket.
In this table, `x.y.z` specifies the path taken through `gitlab.yml`:
| Setting | Value | Behavior |
| -------------------------------------- | ------- | ------------------------------- |
| `<feature>.object_store.enabled` | `false` | Files are stored locally in `<feature>.storage_path` |
| `<feature>.object_store.enabled` | `true` | Files are stored remotely in `<feature>.object_store.remote_directory` |
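
As a small illustration of what this table means in practice, the following could be run in a Rails console of a GitLab instance; it is illustrative only and uses the `uploads` feature as an example:

```ruby
# Decide where an uploaded file ends up, based on the per-feature settings.
settings = Gitlab.config.uploads # or another feature's settings block

destination =
  if settings.object_store.enabled
    "bucket #{settings.object_store.remote_directory} (remote object storage)"
  else
    "#{settings.storage_path} (local storage on the web server node)"
  end

puts destination
```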
When using object storage, administrators can control how those files are moved into the respective bucket.
This move can happen in one of these ways:
- [Rails controller upload](#rails-controller-upload).
- [Direct upload](#direct-upload).
Individual Sidekiq workers might also store files in object storage, which is not something we cover here.
Finally, Workhorse assists most user-initiated uploads using an upload buffering mechanism to keep slow work out of Rails controllers.
This mechanism is explained in [Workhorse assisted uploads](#workhorse-assisted-uploads),
as it runs orthogonal to much of what we discuss beforehand.
We now look at each case in more detail.
## Local storage
Local storage is the simplest path an upload can take. It was how GitLab treated uploads in its early days.
It assumes a storage volume (like a disk or network attached storage) is accessible
to the Rails application at `storage_path`. This file path is relative to the Rails root directory and,
like any upload setting, configurable per feature.
When a client sends a file upload, Workhorse first buffers the file to disk, a mechanism explained in more
detail in [Workhorse assisted uploads](#workhorse-assisted-uploads). When the request reaches the Rails
application, the file already exists on local storage, so Rails merely has to move it to the specified
directory to finalize the transaction.
Local storage cannot be used with cloud-native GitLab (CNG) installations. It is therefore not used for
GitLab SaaS either.
## Object storage
To provide horizontally scalable storage, you must use an object store provider such as:
- Amazon AWS.
- Google Cloud Storage (GCS).
- Azure Cloud Storage.
Using object storage provides two main benefits:
- Ease of adding more storage capacity: cloud providers do this for you automatically.
- Enabling horizontal scaling of your GitLab installation: multiple GitLab application servers can access the same data
when it is stored in object storage.
CNG installations, including GitLab SaaS, always use object storage (GCS in the case of GitLab SaaS).
A challenge with uploading to a remote object store is that it includes an outgoing HTTP request from
GitLab to the object store provider. As mentioned above, there are three different strategies available for how
this HTTP request is sent.
- [Rails controller upload](#rails-controller-upload).
- [Direct upload](#direct-upload).
- [Workhorse assisted uploads](#workhorse-assisted-uploads).
### Rails controller upload
When direct upload is not available, Rails uploads the file to object storage
as part of the controller `create` action. Which controller is responsible depends on the kind of file uploaded.
A Rails controller upload is very similar to uploading to local storage. The main difference: Rails must
send an HTTP request to the object store. This happens via the [CarrierWave Fog](https://github.com/carrierwaveuploader/carrierwave#fog)
uploader.
As with local storage, this strategy benefits from [Workhorse assistance](#workhorse-assisted-uploads) to
keep some of the costly I/O work out of Ruby and Rails. Direct upload does a better job at this because it also keeps the HTTP PUT requests to object storage outside Puma.
This strategy is only suitable for small file uploads, as it is subject to Puma's 60 second request timeout.
### Direct upload
Direct upload is the recommended way to move large files into object storage in CNG installations like GitLab SaaS.
With direct upload enabled, Workhorse:
1. Authorizes the request with Rails.
1. Establishes a connection with the object store itself to transfer the file to a temporary location.
1. When the transfer is complete, Workhorse finalizes the request with Rails.
1. Completes the upload by deleting the temporary file in object storage.
This strategy is a different form of [Workhorse assistance](#workhorse-assisted-uploads). It does not rely on shared storage that is accessible by both Workhorse and Puma.
Of all existing upload strategies, direct upload is best able to handle large (gigabyte) uploads.
### Disk buffered uploads
Direct upload falls back to _disk buffered upload_ when `direct_upload` is disabled inside the [object storage setting](../../administration/uploads.md#object-storage-settings). The answer to the `/authorize` call contains only a file system path.
```mermaid
sequenceDiagram
participant c as Client
participant w as Workhorse
participant r as Rails
participant os as Object Storage
activate c
c ->>+w: POST /some/url/upload
w ->>+r: POST /some/url/upload/authorize
Note over w,r: this request has an empty body
r-->>-w: presigned OS URL
w->>+os: PUT file
Note over w,os: file is stored in a temporary location. Rails selects the destination
os-->>-w: request result
w->>+r: POST /some/url/upload
Note over w,r: file was replaced with its location<br>and other metadata
r->>+os: move object to final destination
os-->>-r: request result
opt requires async processing
r->>+redis: schedule a job
redis-->>-r: job is scheduled
end
r-->>-c: request result
deactivate c
w->>-w: cleanup
opt requires async processing
activate sidekiq
sidekiq->>+redis: fetch a job
redis-->>-sidekiq: job
sidekiq->>+os: get object
os-->>-sidekiq: file
sidekiq->>sidekiq: process file
deactivate sidekiq
end
```
## Workhorse assisted uploads
Most uploads receive assistance from Workhorse in some way.
- Often, Workhorse buffers the upload to a temporary file. Workhorse adds metadata to the request to tell
Puma the name and location of the temporary file. This requires shared temporary storage between Workhorse and Puma.
All GitLab installations (including CNG) have this shared temporary storage.
- Workhorse sometimes pre-processes the file. For example, for CI artifact uploads, Workhorse creates a separate index
of the contents of the ZIP file. By doing this in Workhorse we bypass the Puma request timeout.
Compared to Sidekiq background processing, this has the advantage that the user does not see an intermediate state
where GitLab accepts the file but has not yet processed it.
- With direct upload, Workhorse can both pre-process the file and upload it to object storage.
Uploading a large file to object storage takes time; by doing this in Workhorse we avoid the Puma request timeout.
For additional information about uploads, see [Workhorse handlers](../workhorse/handlers.md).
# Data Science
- [Model Registry](model_registry/_index.md)
# Model Registry
Model registry is the component in the MLOps lifecycle responsible for managing
model versions. Beyond tracking just artifacts, it is responsible for tracking the
metadata associated with each model, such as:
- Performance
- Parameters
- Data lineage
## Data topology
All entities belong to a project, and only users with access to the project can
interact with the entities.
### `Ml::Model`
- Holds general information about a model, like name and description.
- Each model has a default `Ml::Experiment` with the same name, to which candidates are logged.
- Has many `Ml::ModelVersion`.
#### `Ml::ModelVersion`
- Is a version of the model.
- Links to a `Packages::Package` with the same project, name, and version.
- Version must use semantic versioning.
#### `Ml::Experiment`
- Collection of comparable `Ml::Candidates`.
#### `Ml::Candidate`
- A candidate for a model version.
- Can have many parameters (`Ml::CandidateParams`), which are usually configuration variables passed to the training code.
- Can have many performance indicators (`Ml::CandidateMetrics`).
- Can have many user defined metadata (`Ml::CandidateMetadata`).
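
A hypothetical sketch of how these entities and their associations might be wired up in ActiveRecord (names and validations here are assumptions for illustration; the actual GitLab models are the source of truth):

```ruby
module Ml
  class Model < ApplicationRecord
    belongs_to :project
    has_many :model_versions, class_name: 'Ml::ModelVersion'
    has_one :default_experiment, class_name: 'Ml::Experiment'
  end

  class ModelVersion < ApplicationRecord
    belongs_to :model, class_name: 'Ml::Model'
    belongs_to :package, class_name: 'Packages::Package' # same project, name, and version
    validates :version, presence: true                   # must follow semantic versioning
  end
end
```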
## MLflow compatibility layer
To make it easier for data scientists to use GitLab Model registry, we provide a
compatibility layer for the [MLflow client](https://mlflow.org/docs/latest/python_api/mlflow.client.html).
We do not provide an MLflow instance with GitLab. Instead, GitLab itself acts as
an instance of MLflow. This approach stores data in the GitLab database, which
improves user reliability and functionality. See the user documentation about
[the compatibility layer](../../../user/project/ml/experiment_tracking/mlflow_client.md).
The compatibility layer is implemented by replicating the [MLflow rest API](https://mlflow.org/docs/latest/rest-api.html)
in [`lib/api/ml/mlflow`](https://gitlab.com/gitlab-org/gitlab/-/tree/master/lib/api/ml/mlflow).
Some terms on MLflow are named differently in GitLab:
- An MLflow `Run` is a GitLab `Candidate`.
- An MLflow `Registered model` is a GitLab `Model`.
### Setting up for testing
To test a script with the MLflow client using GitLab as the backend:
1. Install MLflow:
```shell
mkdir mlflow-compatibility
cd mlflow-compatibility
pip install mlflow jupyterlab
```
1. In the directory, create a Python file named `mlflow_test.py` with the following code:
```python3
import mlflow
import os
from mlflow.tracking import MlflowClient
os.environ["MLFLOW_TRACKING_TOKEN"]='<TOKEN>'
os.environ["MLFLOW_TRACKING_URI"]='<your gitlab endpoint>/api/v4/projects/<your project id>/ml/mlflow'
client = MlflowClient()
client.create_experiment("My first experiment")
```
1. Run the script:
```shell
python mlflow_test.py
```
1. Go to the project `/-/ml/experiments`. An experiment should have been created.
You can edit the script to call the client methods we are trying to implement. See
[GitLab Model experiments example](https://gitlab.com/gitlab-org/incubation-engineering/mlops/model_experiment_example)
for a more complete example.
# Use ChatOps to enable and disable feature flags
{{< alert type="note" >}}
This document explains how to contribute to the development of the GitLab product.
If you want to use feature flags to show and hide functionality in your own applications,
view [this feature flags information](../../operations/feature_flags.md) instead.
{{< /alert >}}
To turn on/off features behind feature flags in any of the
GitLab-provided environments, like staging and production, you need to
have access to the [ChatOps](../chatops_on_gitlabcom.md) bot. The ChatOps bot
is currently running on the ops instance, which is different from
[GitLab.com](https://gitlab.com) or `dev.gitlab.org`.
Follow the ChatOps document to [request access](../chatops_on_gitlabcom.md#requesting-access).
After you are added to the project, test whether your access has propagated by running:
```shell
/chatops run feature --help
```
## Rolling out changes
When the changes are deployed to the environments it is time to start
rolling out the feature to our users. The exact procedure of rolling out a
change is unspecified, as this can vary from change to change. However, in
general we recommend rolling out changes incrementally, instead of enabling them
for everybody right away. We also recommend that you not enable a feature
_before_ its code has been deployed.
This allows you to separate rolling out a feature from a deploy, making it
easier to measure the impact of both separately.
The GitLab feature library (using
[Flipper](https://github.com/jnunemaker/flipper), and covered in the
[Feature flags process](https://handbook.gitlab.com/handbook/product-development-flow/feature-flag-lifecycle/) guide) supports rolling out changes to a percentage of
actors or a percentage of time. This in turn can be controlled using [GitLab ChatOps](../../ci/chatops/_index.md).
For an up to date list of feature flag commands see
[the source code](https://gitlab.com/gitlab-com/chatops/blob/master/lib/chatops/commands/feature.rb).
All the examples in that file must be preceded by `/chatops run`.
If you get the error "Whoops! This action is not allowed. This incident
will be reported.", it means your Slack account is not allowed to
change feature flags or you do not have access.
### Enabling a feature for pre-production testing
As a first step in a feature rollout, you should enable the feature on
`staging.gitlab.com`
and `dev.gitlab.org`.
These two environments have different scopes.
`dev.gitlab.org` is a production CE environment that has internal GitLab Inc.
traffic and is used for some development and other related work.
`staging.gitlab.com` has a smaller subset of the GitLab.com database and repositories
and does not have regular traffic. Staging is an EE instance and can give you
a (very) rough estimate of how your feature will look and behave on GitLab.com.
Both of these instances are connected to Sentry so make sure you check the projects
there for any exceptions while testing your feature after enabling the feature flag.
For these pre-production environments, it's strongly encouraged to run the command in
`#staging`, `#production`, or `#chat-ops-test`, for improved visibility.
#### Enabling the feature flag for a given percentage of actors
To enable a feature for 25% of actors, run the following in Slack:
```shell
/chatops run feature set new_navigation_bar 25 --actors --dev
/chatops run feature set new_navigation_bar 25 --actors --staging
```
See [percentage of actors](#percentage-based-actor-selection) for your choices of actors
for which you would like to randomize the rollout.
### Enabling a feature for GitLab.com
When a feature has successfully been
[enabled on a pre-production](#enabling-a-feature-for-pre-production-testing)
environment and verified as safe and working, you can roll out the
change to GitLab.com (production).
If a feature is [deprecated](../../update/deprecations.md), do not enable the flag.
#### Communicate the change
Some feature flag changes on GitLab.com should be communicated with
parts of the company. The developer responsible needs to determine
whether this is necessary and the appropriate level of communication.
This depends on the feature and what sort of impact it might have.
Guidelines:
- Notify `#support_gitlab-com` beforehand, so that if the feature has any side effects on user experience, they can mitigate the impact by disabling the feature flag.
- If the feature meets the requirements for creating a [Change Management](https://handbook.gitlab.com/handbook/engineering/infrastructure-platforms/change-management/#feature-flags-and-the-change-management-process) issue, create a Change Management issue per [criticality guidelines](https://handbook.gitlab.com/handbook/engineering/infrastructure-platforms/change-management/#change-request-workflows).
- For simple, low-risk, easily reverted features, proceed and [enable the feature in `#production`](#process).
- For support requests to toggle feature flags for specific groups or projects, follow the process outlined in the [support workflows](https://handbook.gitlab.com/handbook/support/workflows/saas_feature_flags/).
#### Guideline for which percentages to choose during the rollout
Choosing which percentages to use while rolling out the feature flag
depends on several factors, for example:
- Is the feature flag checked often so that you can collect enough information to decide it's safe to continue with the rollout?
- If something goes wrong with the feature, how many requests or customers will be impacted?
- If something goes wrong, are there any other GitLab publicly available features that will be impacted by the rollout?
- Is there any possible performance degradation from rolling out the feature flag?
Let's take some examples of different types of feature flags, and how you might approach the rollout
in each case:
##### A. Feature flag for an operation that runs a few times per day
Suppose, for example, you're releasing a new feature that runs a few times per day in a cron job, and the feature is controlled by a newly introduced feature flag.
For example, [rewriting the database query for a cron job](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/128759/diffs).
In this case, releasing the feature flag for a percentage below 25% might give you slow feedback
regarding whether to proceed with the rollout or not. Also, if the cron job fails, it [retries](../sidekiq/_index.md#retries).
So the consequences of something going wrong won't be that big. Releasing with a percentage of 25% or 50%
is therefore an acceptable choice.
Make sure, though, to log the result of the feature flag check in your worker's logs. For best practices about logging, see
[Logging context metadata (through Rails or Grape requests)](../logging.md#logging-context-metadata-through-rails-or-grape-requests).
##### B. Feature flag for an operation that runs hundreds or thousands times per day
Your newly introduced feature or change might be more customer facing than whatever runs in Sidekiq jobs. But
it might not be run often. In this case, choose a percentage high enough to collect some results in order
to know whether to proceed or not. You can consider starting with `5%` or `10%`, while monitoring
the logs for any errors or `500` statuses returned to users.
But as you continue with the rollout and increase the percentage, you need to consider looking at the
performance impact of the feature. You can consider monitoring
the [Latency: Apdex and error ratios](https://dashboards.gitlab.net/d/general-triage/general-platform-triage?orgId=1)
dashboard on Grafana.
##### C. Feature flag for an operation that runs at the core of the app
Sometimes a change touches every aspect of the GitLab application, for example, changing
a database query on one of the core models, like `User`, `Project`, or `Namespace`. In this case, releasing
the feature for `1%` of requests, or even less than that (via a change request), is highly recommended to avoid incidents.
See [this change request example](https://gitlab.com/gitlab-com/gl-infra/production/-/issues/16427) of a feature flag that was released
for around `0.1%` of the requests, due to the high impact of the change.
To make sure that the rollout does not affect many customers, consider following these steps:
1. Estimate how many requests per minute can be affected by 100% of the feature flag rollout. This
can be achieved by tracking
the database queries. See [the instructions here](https://gitlab.com/gitlab-com/runbooks/-/blob/master/docs/patroni/mapping_statements.md#example-queries).
1. Calculate the reasonable number of requests or users that can be affected, in case
the rollout doesn't go as expected.
1. Based on the numbers collected in steps 1 and 2, calculate a reasonable percentage to start
the rollout with. Here is [an example](https://gitlab.com/gitlab-org/gitlab/-/issues/425859#note_1576923174)
of such a calculation, and a simplified sketch follows this list.
1. Make sure to communicate your findings on the rollout issue of the feature flag.
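
As a rough illustration of that arithmetic, with entirely made-up numbers:

```ruby
# Step 1: requests per minute affected at a 100% rollout (from query/request tracking).
requests_per_minute_at_full_rollout = 40_000
# Step 2: how many affected requests per minute you judge tolerable if things go wrong.
acceptable_affected_requests = 400

starting_percentage = acceptable_affected_requests * 100.0 / requests_per_minute_at_full_rollout
# => 1.0, so starting the rollout at roughly 1% keeps the blast radius acceptable.
```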
##### D. Unknown impact of releasing the feature flag
If you are not certain what percentages to use, take the safe, recommended option and use these percentages:
1. 1%
1. 10%
1. 25%
1. 50%
1. 75%
1. 100%
Between every step you'll want to wait a little while and monitor the
appropriate graphs on <https://dashboards.gitlab.net>. The exact time to wait
may differ. For some features a few minutes is enough, while for others you may
want to wait several hours or even days. This is entirely up to you, just make
sure it is clearly communicated to your team and the Production team if you
anticipate any potential problems.
#### Process
When enabling a feature flag rollout, the system will automatically block the
ChatOps command from succeeding if there are active `~"severity::1"` or `~"severity::2"`
incidents or in-progress change issues, for example:
```shell
/chatops run feature set gitaly_lfs_pointers_pipeline true
- Production checks fail!
- active incidents
2021-06-29 Canary deployment failing QA tests
```
Before enabling a feature flag, verify that you are not violating any [Production Change Lock periods](https://handbook.gitlab.com/handbook/engineering/infrastructure-platforms/change-management/#production-change-lock-pcl) and are in compliance with the [Feature flags and the Change Management Process](https://handbook.gitlab.com/handbook/engineering/infrastructure-platforms/change-management/#feature-flags-and-the-change-management-process).
The following `/chatops` commands must be performed in the Slack
`#production` channel.
##### Percentage of actors roll out
To enable a feature for 25% of actors such as users, projects, groups or the current request or job,
run the following in Slack:
```shell
/chatops run feature set some_feature 25 --actors
```
This sets a feature flag to `true` based on the following formula:
```ruby
feature_flag_state = Zlib.crc32("some_feature<Actor>:#{actor.id}") % (100 * 1_000) < 25 * 1_000
# where <Actor>: is a `User`, `Group`, `Project` and actor is an instance
```
During development, based on the nature of the feature, an actor choice
should be made.
For user focused features:
```ruby
Feature.enabled?(:feature_cool_avatars, current_user)
```
For group or namespace level features:
```ruby
Feature.enabled?(:feature_cooler_groups, group)
```
For project level features:
```ruby
Feature.enabled?(:feature_ice_cold_projects, project)
```
For current request:
```ruby
Feature.enabled?(:feature_ice_cold_projects, Feature.current_request)
```
Feature gates can also be actor based, for example a feature could first be
enabled for only the `gitlab` project. The project is passed by supplying a
`--project` flag:
```shell
/chatops run feature set --project=gitlab-org/gitlab some_feature true
```
You can use the `--user` option to enable a feature flag for a specific user:
```shell
/chatops run feature set --user=myusername some_feature true
```
If you would like to gather feedback internally first,
feature flags scoped to a user can also be enabled
for GitLab team members with the `gitlab_team_members`
[feature group](_index.md#feature-groups):
```shell
/chatops run feature set --feature-group=gitlab_team_members some_feature true
```
You can use the `--group` flag to enable a feature flag for a specific group:
```shell
/chatops run feature set --group=gitlab-org some_feature true
```
Note that `--group` does not work with user namespaces. To enable a feature flag for a
generic namespace (including groups) use `--namespace`:
```shell
/chatops run feature set --namespace=gitlab-org some_feature true
/chatops run feature set --namespace=myusername some_feature true
```
Actor-based gates are applied before percentages. For example, with
`gitlab-org/gitlab` as the `group/project` actor and `some_feature` as the example feature, if
you run these two commands:
```shell
/chatops run feature set --project=gitlab-org/gitlab some_feature true
/chatops run feature set some_feature 25 --actors
```
Then `some_feature` is enabled for 25% of actors, and is always enabled when interacting with
`gitlab-org/gitlab`. This is a good idea if the feature flag development makes use of group
actors.
```ruby
Feature.enabled?(:some_feature, group)
```
Multiple actors can be passed together in a comma-separated form:
```shell
/chatops run feature set --project=gitlab-org/gitlab,example-org/example-project some_feature true
/chatops run feature set --group=gitlab-org,example-org some_feature true
/chatops run feature set --namespace=gitlab-org,example-org some_feature true
```
Lastly, to verify that the feature is deemed stable in as many cases as possible,
you should fully roll out the feature by enabling the flag **globally** by running:
```shell
/chatops run feature set some_feature true
```
This changes the feature flag state to be **enabled** always, which overrides the
existing gates (for example, `--group=gitlab-org`) in the above processes.
Note that if an actor-based feature gate is present, switching the
`default_enabled` attribute of the YAML definition from `false` to `true`
does not have any effect. The feature gate must be deleted first.
For example, a feature flag is set via ChatOps:
```shell
/chatops run feature set --project=gitlab-org/gitlab some_feature true
```
When the `default_enabled` attribute in the YAML definition is switched to
`true`, the feature gate must be deleted to have the desired effect:
```shell
/chatops run feature delete some_feature
```
##### Percentage of time roll out (deprecated)
Previously, to enable a feature 25% of the time, we would run the following in Slack:
```shell
/chatops run feature set new_navigation_bar 25 --random
```
This command enables the `new_navigation_bar` feature for GitLab.com. However, this command does not enable the feature for 25% of the total users.
Instead, when the feature is checked with `enabled?`, it returns `true` 25% of the time.
Percentage of time feature flags are now deprecated in favor of [percentage of actors](#percentage-based-actor-selection)
using the `Feature.current_request` actor. The problem with not using an actor is that the randomized
choice evaluates for each call into `Feature.enabled?` rather than once per request or job execution,
which can lead to flip-flopping between states. For example:
```ruby
feature_flag_state = rand < (25 / 100.0)
```
For the time being, we continue to allow use of percentage of time feature flags.
During rollout, you can force it using the `--ignore-random-deprecation-check` switch in ChatOps.
##### Disabling feature flags
To disable a feature flag that has been globally enabled you can run:
```shell
/chatops run feature set some_feature false
```
To disable a feature flag that has been enabled for a specific project you can run:
```shell
/chatops run feature set --project=gitlab-org/gitlab some_feature false
```
You cannot selectively disable feature flags for a specific project/group/user without applying a [specific method of implementing](controls.md#selectively-disable-by-actor) the feature flags.
If a feature flag is disabled via ChatOps, that will take precedence over the `default_enabled` value in the YAML. In other words, you could have a feature enabled for on-premise installations but not for GitLab.com.
#### Selectively disable by actor
By default you cannot selectively disable a feature flag by actor.
```shell
# This will not work how you would expect.
/chatops run feature set some_feature true
/chatops run feature set --project=gitlab-org/gitlab some_feature false
```
However, if you add two feature flags, you can write your conditional statement in such a way that the equivalent selective disable is possible.
```ruby
Feature.enabled?(:a_feature, project) && Feature.disabled?(:a_feature_override, project)
```
```shell
# This will enable a feature flag globally, except for gitlab-org/gitlab
/chatops run feature set a_feature true
/chatops run feature set --project=gitlab-org/gitlab a_feature_override true
```
#### Percentage-based actor selection
When using the percentage rollout of actors on multiple feature flags, the actors for each feature flag are selected separately.
For example, the following feature flags are enabled for a certain percentage of actors:
```plaintext
/chatops run feature set feature-set-1 25 --actors
/chatops run feature set feature-set-2 25 --actors
```
If a project A has `:feature-set-1` enabled, there is no guarantee that project A also has `:feature-set-2` enabled.
For more detail, see [This is how percentages work in Flipper](https://www.hackwithpassion.com/this-is-how-percentages-work-in-flipper/).
### Verifying metrics after enabling feature flag
After turning on the feature flag, you need to [monitor the relevant graphs](https://handbook.gitlab.com/handbook/engineering/monitoring/) between each step:
1. Go to [`dashboards.gitlab.net`](https://dashboards.gitlab.net).
1. Turn on the `feature-flag`.
1. Watch `Latency: Apdex` for services that might be impacted by your change
(like `sidekiq service`, `api service` or `web service`). Then check out more in-depth
dashboards by selecting `Service Overview Dashboards` and choosing a dashboard that might
be related to your change.
In this illustration, you can see that the Apdex score started to decline after the feature flag was enabled at `09:46`. The feature flag was then deactivated at `10:31`, and the service returned to the original value:

Certain features necessitate extensive monitoring over multiple days, particularly those that are high-risk and critical to business operations. In contrast, other features may only require a 24-hour monitoring period before continuing with the rollout.
It is recommended to determine the necessary extent of monitoring before initiating the rollout.
### Feature flag change logging
#### ChatOps level
Any feature flag change that affects GitLab.com (production) via [ChatOps](https://gitlab.com/gitlab-com/chatops)
is automatically logged in an issue.
The issue is created in the
[gl-infra/feature-flag-log](https://gitlab.com/gitlab-com/gl-infra/feature-flag-log/-/issues?scope=all&state=closed)
project, and it will at minimum log the Slack handle of person enabling
a feature flag, the time, and the name of the flag being changed.
The issue is then also posted to the GitLab internal
[Grafana dashboard](https://dashboards.gitlab.net/) as an annotation
marker to make the change even more visible.
Changes to the issue format can be submitted in the
[ChatOps project](https://gitlab.com/gitlab-com/chatops).
#### Instance level
Any feature flag change that affects any GitLab instance is automatically logged in
[features_json.log](../../administration/logs/_index.md#features_jsonlog).
You can search the change history in [Kibana](https://handbook.gitlab.com/handbook/support/workflows/kibana/).
You can also access the feature flag change history for GitLab.com [in Kibana](https://log.gprd.gitlab.net/goto/d060337c017723084c6d97e09e591fc6).
## Cleaning up
A feature flag should be removed as soon as it is no longer needed. Each additional
feature flag in the codebase increases the complexity of the application
and reduces confidence in our testing suite covering all possible combinations.
Additionally, a feature flag overwritten in some of the environments can result
in undefined and untested system behavior.
`development` type feature flags should have a short lifecycle because their purpose
is for rolling out a persistent change. `development` feature flags that are older
than 2 milestones are reported to engineering managers. The
[report tool](https://gitlab.com/gitlab-org/gitlab-feature-flag-alert) runs on a
monthly basis. For example, see [the report for December 2021](https://gitlab.com/gitlab-org/quality/triage-reports/-/issues/5480).
If a `development` feature flag is still present in the codebase after 6 months we should
take one of the following actions:
- Enable the feature flag by default and remove it.
- Convert it to an instance, group, or project setting.
- Revert the changes if it's still disabled and not needed anymore.
To remove a feature flag, open **one merge request** to make the changes. In the MR:
1. Add the ~"feature flag" label so release managers are aware of the removal.
1. If the merge request has to be backported into the current version, follow the
[patch release runbook](https://gitlab.com/gitlab-org/release/docs/-/blob/master/general/patch/engineers.md) process.
See [the feature flag process](https://handbook.gitlab.com/handbook/product-development-flow/feature-flag-lifecycle/#including-a-feature-behind-feature-flag-in-the-final-release)
for further details.
1. Remove all references to the feature flag from the codebase, including tests.
1. Remove the YAML definition for the feature from the repository.
Once the above MR has been merged, you should:
1. [Clean up the feature flag from all environments](#cleanup-chatops) with `/chatops run feature delete some_feature`.
1. Close the rollout issue for the feature flag after the feature flag is removed from the codebase.
### Cleanup ChatOps
When a feature gate has been removed from the codebase, the feature
record still exists in the database that the flag was deployed too.
The record can be deleted once the MR is deployed to all the environments:
```shell
/chatops run feature delete <feature-flag-name> --dev --pre --staging --staging-ref --production
```
## Checking feature flag status
You can use the following ChatOps command to see a feature flag's current state:
```shell
/chatops run feature get <feature-flag-name>
```
Since this is a read-only command, you can avoid cluttering the production channels by either:
- Running it in the `#chat-ops-test` Slack channel
- Sending it as a direct message to the ChatOps bot
The result of this command will display:
- Whether the feature flag exists
- Its current state (enabled/disabled)
- Any percentage rollouts or actor-based gates that are configured
|
---
stage: none
group: unassigned
info: 'See the Technical Writers assigned to Development Guidelines: https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments-to-development-guidelines'
title: Use ChatOps to enable and disable feature flags
breadcrumbs:
- doc
- development
- feature_flags
---
{{< alert type="note" >}}
This document explains how to contribute to the development of the GitLab product.
If you want to use feature flags to show and hide functionality in your own applications,
view [this feature flags information](../../operations/feature_flags.md) instead.
{{< /alert >}}
To turn on/off features behind feature flags in any of the
GitLab-provided environments, like staging and production, you need to
have access to the [ChatOps](../chatops_on_gitlabcom.md) bot. The ChatOps bot
is currently running on the ops instance, which is different from
[GitLab.com](https://gitlab.com) or `dev.gitlab.org`.
Follow the ChatOps document to [request access](../chatops_on_gitlabcom.md#requesting-access).
After you are added to the project, test whether your access has propagated by
running:
```shell
/chatops run feature --help
```
## Rolling out changes
When the changes are deployed to the environments, it is time to start
rolling out the feature to our users. The exact procedure for rolling out a
change is unspecified, as this can vary from change to change. However, in
general we recommend rolling out changes incrementally, instead of enabling them
for everybody right away. We also recommend that you not enable a feature
_before_ the code is deployed.
This allows you to separate rolling out a feature from a deploy, making it
easier to measure the impact of both separately.
The GitLab feature library (using
[Flipper](https://github.com/jnunemaker/flipper), and covered in the
[Feature flags process](https://handbook.gitlab.com/handbook/product-development-flow/feature-flag-lifecycle/) guide) supports rolling out changes to a percentage of
actors or for a percentage of time. This in turn can be controlled using [GitLab ChatOps](../../ci/chatops/_index.md).
For an up to date list of feature flag commands see
[the source code](https://gitlab.com/gitlab-com/chatops/blob/master/lib/chatops/commands/feature.rb).
All the examples in that file must be preceded by `/chatops run`.
If you get the error "Whoops! This action is not allowed. This incident
will be reported.", it means your Slack account is not allowed to
change feature flags, or that you do not have access.
### Enabling a feature for pre-production testing
As a first step in a feature rollout, you should enable the feature on
`staging.gitlab.com`
and `dev.gitlab.org`.
These two environments have different scopes.
`dev.gitlab.org` is a production CE environment that has internal GitLab Inc.
traffic and is used for some development and other related work.
`staging.gitlab.com` has a smaller subset of the GitLab.com database and repositories
and does not have regular traffic. Staging is an EE instance and can give you
a (very) rough estimate of how your feature will look and behave on GitLab.com.
Both of these instances are connected to Sentry, so make sure you check the projects
there for any exceptions while testing your feature after enabling the feature flag.
For these pre-production environments, it's strongly encouraged to run the command in
`#staging`, `#production`, or `#chat-ops-test`, for improved visibility.
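For example, to enable a hypothetical `some_feature` flag for a single project on both
pre-production environments, the commands typically look like the following sketch
(assuming the `--project` scope and the environment switches shown later on this page
can be combined):

```shell
/chatops run feature set --project=gitlab-org/gitlab some_feature true --dev
/chatops run feature set --project=gitlab-org/gitlab some_feature true --staging
```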
#### Enabling the feature flag for a given percentage of actors
To enable a feature for 25% of actors, run the following in Slack:
```shell
/chatops run feature set new_navigation_bar 25 --actors --dev
/chatops run feature set new_navigation_bar 25 --actors --staging
```
See [percentage of actors](#percentage-based-actor-selection) for your choices of actors
for which you would like to randomize the rollout.
### Enabling a feature for GitLab.com
When a feature has successfully been
[enabled on a pre-production](#enabling-a-feature-for-pre-production-testing)
environment and verified as safe and working, you can roll out the
change to GitLab.com (production).
If a feature is [deprecated](../../update/deprecations.md), do not enable the flag.
#### Communicate the change
Some feature flag changes on GitLab.com should be communicated with
parts of the company. The developer responsible needs to determine
whether this is necessary and the appropriate level of communication.
This depends on the feature and what sort of impact it might have.
Guidelines:
- Notify `#support_gitlab-com` beforehand, so that if the feature has any side effects on user experience, they can disable the feature flag to reduce the impact.
- If the feature meets the requirements for creating a [Change Management](https://handbook.gitlab.com/handbook/engineering/infrastructure-platforms/change-management/#feature-flags-and-the-change-management-process) issue, create a Change Management issue per [criticality guidelines](https://handbook.gitlab.com/handbook/engineering/infrastructure-platforms/change-management/#change-request-workflows).
- For simple, low-risk, easily reverted features, proceed and [enable the feature in `#production`](#process).
- For support requests to toggle feature flags for specific groups or projects, follow the process outlined in the [support workflows](https://handbook.gitlab.com/handbook/support/workflows/saas_feature_flags/).
#### Guideline for which percentages to choose during the rollout
Choosing which percentages to use while rolling out the feature flag
depends on different factors, for example:
- Is the feature flag checked often so that you can collect enough information to decide it's safe to continue with the rollout?
- If something goes wrong with the feature, how many requests or customers will be impacted?
- If something goes wrong, are there any other GitLab publicly available features that will be impacted by the rollout?
- Is there any possible performance degradation from rolling out the feature flag?
Let's take some examples of different types of feature flags, and how you can approach the rollout
in each case:
##### A. Feature flag for an operation that runs a few times per day
Suppose, for example, you're releasing a new feature that runs a few times per day in a cron job, and the feature is controlled by a newly introduced feature flag.
For example, [rewriting the database query for a cron job](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/128759/diffs).
In this case, rolling out the feature flag at a percentage below 25% might give you slow feedback
regarding whether to proceed with the rollout or not. Also, if the cron job fails, it [retries](../sidekiq/_index.md#retries),
so the consequences of something going wrong won't be that big. Releasing at a percentage of 25% or 50%
is therefore an acceptable choice.
But make sure to log the result of the feature flag check in your worker's logs. For more instructions about best practices for logging, see
[Logging context metadata (through Rails or Grape requests)](../logging.md#logging-context-metadata-through-rails-or-grape-requests).
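As an illustrative sketch (the worker, flag, and method names below are made up), the flag check result can be recorded once per job execution so that the rollout can be verified in the worker logs. It assumes the `log_extra_metadata_on_done` helper that GitLab's `ApplicationWorker` provides for adding fields to the job's "done" log line:

```ruby
# Hypothetical cron worker; queue and other worker configuration are omitted.
class NightlyRecalculationWorker
  include ApplicationWorker

  def perform
    use_new_query = Feature.enabled?(:optimized_nightly_query, Feature.current_request)

    # Record the flag state so the rollout can be verified in the worker logs.
    log_extra_metadata_on_done(:optimized_nightly_query_enabled, use_new_query)

    use_new_query ? optimized_recalculation : legacy_recalculation
  end

  private

  def optimized_recalculation
    # New code path behind the feature flag.
  end

  def legacy_recalculation
    # Existing code path.
  end
end
```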
##### B. Feature flag for an operation that runs hundreds or thousands times per day
Your newly introduced feature or change might be more customer-facing than whatever runs in Sidekiq jobs, but
it might not run often. In this case, choose a percentage high enough to collect some results in order
to know whether to proceed or not. You can consider starting with `5%` or `10%`, while monitoring
the logs for any errors or `500` statuses returned to users.
But as you continue with the rollout and increase the percentage, you need to consider looking at the
performance impact of the feature. You can consider monitoring
the [Latency: Apdex and error ratios](https://dashboards.gitlab.net/d/general-triage/general-platform-triage?orgId=1)
dashboard on Grafana.
##### C. Feature flag for an operation that runs at the core of the app
Sometimes, a new change might touch every aspect of the GitLab application. For example, changing
a database query on one of the core models, like `User`, `Project`, or `Namespace`. In this case, releasing
the feature for `1%` of the requests, or even less than that (via a Change Request), is highly recommended to avoid any incidents.
See [this change request example](https://gitlab.com/gitlab-com/gl-infra/production/-/issues/16427) of a feature flag that was released
for around `0.1%` of the requests, due to the high impact of the change.
To make sure that the rollout does not affect many customers, consider following these steps:
1. Estimate how many requests per minute can be affected by 100% of the feature flag rollout. This
can be achieved by tracking
the database queries. See [the instructions here](https://gitlab.com/gitlab-com/runbooks/-/blob/master/docs/patroni/mapping_statements.md#example-queries).
1. Calculate the reasonable number of requests or users that can be affected, in case
the rollout doesn't go as expected.
1. Based on the numbers collected from (1) and (2), calculate the reasonable percentage to start with to roll out
the feature flag. Here is [an example](https://gitlab.com/gitlab-org/gitlab/-/issues/425859#note_1576923174)
of such calculation.
1. Make sure to communicate your findings on the rollout issue of the feature flag.
##### D. Unknown impact of releasing the feature flag
If you are not certain what percentages to use, take the safe recommended option and use these percentages:
1. 1%
1. 10%
1. 25%
1. 50%
1. 75%
1. 100%
Between every step you'll want to wait a little while and monitor the
appropriate graphs on <https://dashboards.gitlab.net>. The exact time to wait
may differ. For some features a few minutes is enough, while for others you may
want to wait several hours or even days. This is entirely up to you; just make
sure it is clearly communicated to your team, and to the Production team if you
anticipate any potential problems.
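As a concrete sketch, using the percentage-of-actors syntax described in the process section below, those steps translate into a sequence of ChatOps commands with a monitoring pause between each one:

```shell
/chatops run feature set some_feature 1 --actors
/chatops run feature set some_feature 10 --actors
/chatops run feature set some_feature 25 --actors
/chatops run feature set some_feature 50 --actors
/chatops run feature set some_feature 75 --actors
/chatops run feature set some_feature 100 --actors
```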
#### Process
When enabling a feature flag rollout, the system will automatically block the
ChatOps command from succeeding if there are active `~"severity::1"` or `~"severity::2"`
incidents or in-progress change issues, for example:
```shell
/chatops run feature set gitaly_lfs_pointers_pipeline true
- Production checks fail!
- active incidents
2021-06-29 Canary deployment failing QA tests
```
Before enabling a feature flag, verify that you are not violating any [Production Change Lock periods](https://handbook.gitlab.com/handbook/engineering/infrastructure-platforms/change-management/#production-change-lock-pcl) and are in compliance with the [Feature flags and the Change Management Process](https://handbook.gitlab.com/handbook/engineering/infrastructure-platforms/change-management/#feature-flags-and-the-change-management-process).
The following `/chatops` commands must be performed in the Slack
`#production` channel.
##### Percentage of actors roll out
To enable a feature for 25% of actors such as users, projects, groups or the current request or job,
run the following in Slack:
```shell
/chatops run feature set some_feature 25 --actors
```
This sets a feature flag to `true` based on the following formula:
```ruby
feature_flag_state = Zlib.crc32("some_feature<Actor>:#{actor.id}") % (100 * 1_000) < 25 * 1_000
# where <Actor>: is a `User`, `Group`, `Project` and actor is an instance
```
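Because the bucket is a pure function of the flag name and the actor's ID, the same actor always lands in the same bucket, which is what makes percentage-of-actors rollouts sticky. The following standalone Ruby sketch (with made-up project IDs) reuses the formula above to illustrate this:

```ruby
require 'zlib'

# Same formula as above: the actor's bucket never changes between checks.
def bucket(flag_name, actor_type, actor_id)
  Zlib.crc32("#{flag_name}#{actor_type}:#{actor_id}") % (100 * 1_000)
end

[1, 2, 3].each do |project_id|
  value = bucket('some_feature', '<Project>', project_id)
  enabled = value < 25 * 1_000 # in the enabled bucket once the rollout reaches 25%
  puts "Project #{project_id}: bucket #{value}, enabled at 25%? #{enabled}"
end
```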
During development, based on the nature of the feature, an actor choice
should be made.
For user focused features:
```ruby
Feature.enabled?(:feature_cool_avatars, current_user)
```
For group or namespace level features:
```ruby
Feature.enabled?(:feature_cooler_groups, group)
```
For project level features:
```ruby
Feature.enabled?(:feature_ice_cold_projects, project)
```
For current request:
```ruby
Feature.enabled?(:feature_ice_cold_projects, Feature.current_request)
```
Feature gates can also be actor-based. For example, a feature could first be
enabled for only the `gitlab` project. The project is passed by supplying a
`--project` flag:
```shell
/chatops run feature set --project=gitlab-org/gitlab some_feature true
```
You can use the `--user` option to enable a feature flag for a specific user:
```shell
/chatops run feature set --user=myusername some_feature true
```
If you would like to gather feedback internally first,
feature flags scoped to a user can also be enabled
for GitLab team members with the `gitlab_team_members`
[feature group](_index.md#feature-groups):
```shell
/chatops run feature set --feature-group=gitlab_team_members some_feature true
```
You can use the `--group` flag to enable a feature flag for a specific group:
```shell
/chatops run feature set --group=gitlab-org some_feature true
```
Note that `--group` does not work with user namespaces. To enable a feature flag for a
generic namespace (including groups) use `--namespace`:
```shell
/chatops run feature set --namespace=gitlab-org some_feature true
/chatops run feature set --namespace=myusername some_feature true
```
Actor-based gates are applied before percentages. For example, considering the
`group/project` as `gitlab-org/gitlab` and a given example feature as `some_feature`, if
you run these 2 commands:
```shell
/chatops run feature set --project=gitlab-org/gitlab some_feature true
/chatops run feature set some_feature 25 --actors
```
Then `some_feature` will be enabled for both 25% of actors and always when interacting with
`gitlab-org/gitlab`. This is a good idea if the feature flag development makes use of group
actors.
```ruby
Feature.enabled?(:some_feature, group)
```
Multiple actors can be passed together in a comma-separated form:
```shell
/chatops run feature set --project=gitlab-org/gitlab,example-org/example-project some_feature true
/chatops run feature set --group=gitlab-org,example-org some_feature true
/chatops run feature set --namespace=gitlab-org,example-org some_feature true
```
Lastly, to verify that the feature is deemed stable in as many cases as possible,
you should fully roll out the feature by enabling the flag **globally** by running:
```shell
/chatops run feature set some_feature true
```
This changes the feature flag state to be **enabled** always, which overrides the
existing gates (for example, `--group=gitlab-org`) in the above processes.
Note, that if an actor based feature gate is present, switching the
`default_enabled` attribute of the YAML definition from `false` to `true`
will not have any effect. The feature gate must be deleted first.
For example, a feature flag is set via ChatOps:
```shell
/chatops run feature set --project=gitlab-org/gitlab some_feature true
```
When the `default_enabled` attribute in the YAML definition is switched to
`true`, the feature gate must be deleted to have the desired effect:
```shell
/chatops run feature delete some_feature
```
##### Percentage of time roll out (deprecated)
Previously, to enable a feature 25% of the time, we would run the following in Slack:
```shell
/chatops run feature set new_navigation_bar 25 --random
```
This command enables the `new_navigation_bar` feature for GitLab.com. However, this command does not enable the feature for 25% of the total users.
Instead, when the feature is checked with `enabled?`, it returns `true` 25% of the time.
Percentage of time feature flags are now deprecated in favor of [percentage of actors](#percentage-based-actor-selection)
using the `Feature.current_request` actor. The problem with not using an actor is that the randomized
choice evaluates for each call into `Feature.enabled?` rather than once per request or job execution,
which can lead to flip-flopping between states. For example:
```ruby
feature_flag_state = rand < (25 / 100.0)
```
For the time being, we continue to allow use of percentage of time feature flags.
During rollout, you can force it using the `--ignore-random-deprecation-check` switch in ChatOps.
##### Disabling feature flags
To disable a feature flag that has been globally enabled you can run:
```shell
/chatops run feature set some_feature false
```
To disable a feature flag that has been enabled for a specific project you can run:
```shell
/chatops run feature set --project=gitlab-org/gitlab some_feature false
```
You cannot selectively disable feature flags for a specific project/group/user without applying a [specific method of implementing](controls.md#selectively-disable-by-actor) the feature flags.
If a feature flag is disabled via ChatOps, that will take precedence over the `default_enabled` value in the YAML. In other words, you could have a feature enabled for on-premise installations but not for GitLab.com.
#### Selectively disable by actor
By default you cannot selectively disable a feature flag by actor.
```shell
# This will not work how you would expect.
/chatops run feature set some_feature true
/chatops run feature set --project=gitlab-org/gitlab some_feature false
```
However, if you add two feature flags, you can write your conditional statement in such a way that the equivalent selective disable is possible.
```ruby
Feature.enabled?(:a_feature, project) && Feature.disabled?(:a_feature_override, project)
```
```shell
# This will enable a feature flag globally, except for gitlab-org/gitlab
/chatops run feature set a_feature true
/chatops run feature set --project=gitlab-org/gitlab a_feature_override true
```
#### Percentage-based actor selection
When using the percentage rollout of actors on multiple feature flags, the actors for each feature flag are selected separately.
For example, the following feature flags are enabled for a certain percentage of actors:
```plaintext
/chatops run feature set feature-set-1 25 --actors
/chatops run feature set feature-set-2 25 --actors
```
If a project A has `:feature-set-1` enabled, there is no guarantee that project A also has `:feature-set-2` enabled.
For more detail, see [This is how percentages work in Flipper](https://www.hackwithpassion.com/this-is-how-percentages-work-in-flipper/).
### Verifying metrics after enabling feature flag
After turning on the feature flag, you need to [monitor the relevant graphs](https://handbook.gitlab.com/handbook/engineering/monitoring/) between each step:
1. Go to [`dashboards.gitlab.net`](https://dashboards.gitlab.net).
1. Turn on the feature flag.
1. Watch `Latency: Apdex` for services that might be impacted by your change
(like `sidekiq service`, `api service` or `web service`). Then check out more in-depth
dashboards by selecting `Service Overview Dashboards` and choosing a dashboard that might
be related to your change.
In this illustration, you can see that the Apdex score started to decline after the feature flag was enabled at `09:46`. The feature flag was then deactivated at `10:31`, and the service returned to the original value:

Certain features necessitate extensive monitoring over multiple days, particularly those that are high-risk and critical to business operations. In contrast, other features may only require a 24-hour monitoring period before continuing with the rollout.
It is recommended to determine the necessary extent of monitoring before initiating the rollout.
### Feature flag change logging
#### ChatOps level
Any feature flag change that affects GitLab.com (production) via [ChatOps](https://gitlab.com/gitlab-com/chatops)
is automatically logged in an issue.
The issue is created in the
[gl-infra/feature-flag-log](https://gitlab.com/gitlab-com/gl-infra/feature-flag-log/-/issues?scope=all&state=closed)
project, and it will at minimum log the Slack handle of the person enabling
a feature flag, the time, and the name of the flag being changed.
The issue is then also posted to the GitLab internal
[Grafana dashboard](https://dashboards.gitlab.net/) as an annotation
marker to make the change even more visible.
Changes to the issue format can be submitted in the
[ChatOps project](https://gitlab.com/gitlab-com/chatops).
#### Instance level
Any feature flag change that affects any GitLab instance is automatically logged in
[features_json.log](../../administration/logs/_index.md#features_jsonlog).
You can search the change history in [Kibana](https://handbook.gitlab.com/handbook/support/workflows/kibana/).
You can also access the feature flag change history for GitLab.com [in Kibana](https://log.gprd.gitlab.net/goto/d060337c017723084c6d97e09e591fc6).
## Cleaning up
A feature flag should be removed as soon as it is no longer needed. Each additional
feature flag in the codebase increases the complexity of the application
and reduces confidence in our testing suite covering all possible combinations.
Additionally, a feature flag overwritten in some of the environments can result
in undefined and untested system behavior.
`development` type feature flags should have a short lifecycle because their purpose
is for rolling out a persistent change. `development` feature flags that are older
than 2 milestones are reported to engineering managers. The
[report tool](https://gitlab.com/gitlab-org/gitlab-feature-flag-alert) runs on a
monthly basis. For example, see [the report for December 2021](https://gitlab.com/gitlab-org/quality/triage-reports/-/issues/5480).
If a `development` feature flag is still present in the codebase after 6 months we should
take one of the following actions:
- Enable the feature flag by default and remove it.
- Convert it to an instance, group, or project setting.
- Revert the changes if it's still disabled and not needed anymore.
To remove a feature flag, open **one merge request** to make the changes. In the MR:
1. Add the ~"feature flag" label so release managers are aware of the removal.
1. If the merge request has to be backported into the current version, follow the
[patch release runbook](https://gitlab.com/gitlab-org/release/docs/-/blob/master/general/patch/engineers.md) process.
See [the feature flag process](https://handbook.gitlab.com/handbook/product-development-flow/feature-flag-lifecycle/#including-a-feature-behind-feature-flag-in-the-final-release)
for further details.
1. Remove all references to the feature flag from the codebase, including tests.
1. Remove the YAML definition for the feature from the repository.
Once the above MR has been merged, you should:
1. [Clean up the feature flag from all environments](#cleanup-chatops) with `/chatops run feature delete some_feature`.
1. Close the rollout issue for the feature flag after the feature flag is removed from the codebase.
### Cleanup ChatOps
When a feature gate has been removed from the codebase, the feature
record still exists in the database that the flag was deployed to.
The record can be deleted once the MR is deployed to all the environments:
```shell
/chatops run feature delete <feature-flag-name> --dev --pre --staging --staging-ref --production
```
## Checking feature flag status
You can use the following ChatOps command to see a feature flag's current state:
```shell
/chatops run feature get <feature-flag-name>
```
Since this is a read-only command, you can avoid cluttering the production channels by either:
- Running it in the `#chat-ops-test` Slack channel
- Sending it as a direct message to the ChatOps bot
The result of this command will display:
- Whether the feature flag exists
- Its current state (enabled/disabled)
- Any percentage rollouts or actor-based gates that are configured
---
stage: none
group: unassigned
info: 'See the Technical Writers assigned to Development Guidelines: https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments-to-development-guidelines'
title: Feature flags in the development of GitLab
description: Developer documentation about GitLab feature flags.
breadcrumbs:
- doc
- development
- feature_flags
---
This page explains how developers contribute to the development and operations of the GitLab product
through feature flags. To create custom feature flags to show and hide features in your own applications,
see [Create a feature flag](../../operations/feature_flags.md#create-a-feature-flag).
A [complete list of feature flags](../../administration/feature_flags/list.md) in GitLab is also available.
{{< alert type="warning" >}}
All newly-introduced feature flags should be [disabled by default](https://handbook.gitlab.com/handbook/product-development-flow/feature-flag-lifecycle/).
{{< /alert >}}
{{< alert type="warning" >}}
All newly-introduced feature flags should be [used with an actor](controls.md#percentage-based-actor-selection).
{{< /alert >}}
Design documents:
- (Latest) [Feature Flags usage in GitLab development and operations](https://handbook.gitlab.com/handbook/engineering/architecture/design-documents/feature_flags_usage_in_dev_and_ops/)
- [Development Feature Flags Architecture](https://handbook.gitlab.com/handbook/engineering/architecture/design-documents/feature_flags_development/)
This document is the subject of continued work as part of an epic to [improve internal usage of feature flags](https://gitlab.com/groups/gitlab-org/-/epics/3551). Raise any suggestions as new issues and attach them to the epic.
For an [overview of the feature flag lifecycle](https://handbook.gitlab.com/handbook/product-development-flow/feature-flag-lifecycle/#feature-flag-lifecycle), or if you need help deciding [if you should use a feature flag](https://handbook.gitlab.com/handbook/product-development-flow/feature-flag-lifecycle/#when-to-use-feature-flags) or not, see the feature flag lifecycle handbook page.
## When to use feature flags
Moved to the ["When to use feature flags"](https://handbook.gitlab.com/handbook/product-development-flow/feature-flag-lifecycle/#when-to-use-feature-flags) section in the handbook.
### Do not use feature flags for long lived settings
Feature flags are meant to be short-lived. If you intend to add a
feature flag so that something can be enabled per user/group/project for a long
period of time, consider introducing
[Cascading Settings](../cascading_settings.md) or [Application Settings](../application_settings.md)
instead. Settings
offer a way for customers to enable or disable features for themselves on
GitLab.com or self-managed instances, and can remain in the codebase as long as needed. In
contrast, users have no way to enable or disable feature flags for themselves on
GitLab.com, and only self-managed admins can change the feature flags.
Also,
[feature flags are not supported in GitLab Dedicated](../enabling_features_on_dedicated.md#feature-flags),
which is another reason you should not use them as a replacement for settings.
## Feature flags in GitLab development
The following highlights should be considered when deciding if feature flags
should be leveraged:
- The feature flag must be **disabled by default**.
- Feature flags should remain in the codebase for as short a period as possible
  to reduce the need for feature flag accounting.
- The person operating the feature flag is responsible for clearly communicating
the status of a feature behind the feature flag in the documentation and with other stakeholders. The
issue description should be updated with the feature flag name and whether it is
defaulted on or off as soon as it is evident that a feature flag is needed.
- Merge requests that introduce a feature flag, update its state, or remove the
existing feature flag because a feature is deemed stable must have the
~"feature flag" label assigned.
When the feature implementation is delivered over multiple merge requests:
1. [Create a new feature flag](#create-a-new-feature-flag)
which is **disabled** by default, in the first merge request which uses the flag.
Flags [should not be added separately](#risk-of-a-broken-default-branch).
1. Submit incremental changes via one or more merge requests, ensuring that any
new code added can only be reached if the feature flag is **enabled**.
You can keep the feature flag enabled on your local GDK during development.
1. When the feature is ready to be tested by other team members, [create the initial documentation](../documentation/feature_flags.md#when-to-document-features-behind-a-feature-flag).
Include details about the status of the [feature flag](../documentation/feature_flags.md#how-to-add-feature-flag-documentation).
1. Enable the feature flag for a specific group/project/user and ensure that there are no issues
with the implementation. Do not enable the feature flag for a public project
like `gitlab-org/gitlab` if there is no documentation. Team members and contributors might search for
documentation on how to use the feature if they see it enabled in a public project.
1. When the feature is ready for production use, including GitLab Self-Managed instances, open one merge request to:
- Update the documentation to describe the latest flag status.
- Add a [changelog entry](#changelog).
- Remove the feature flag to enable the new behavior, or flip the feature flag to be **enabled by default** (only for `ops` and `beta` feature flags).
When the feature flag removal is delivered over multiple merge requests:
1. The value change of a feature flag should be the only change in a merge request. As long as the feature flag exists in the codebase, both states should be fully functional (when the feature is on and off).
1. After all mentions of the feature flag have been removed, legacy code can be removed. Steps in the feature flag roll-out issue should be followed, and if a step needs to be skipped, a comment should be added to the issue detailing why.
One might be tempted to think that feature flags will delay the release of a
feature by at least one month (= one release). This is not the case. A feature
flag does not have to stick around for a specific amount of time
(for example, at least one release); instead, it should stick around until the feature
is deemed stable. **Stable means it works on GitLab.com without causing any
problems, such as outages.**
## Risk of a broken default branch
Feature flags must be used in the MR that introduces them. Not doing so causes a
[broken default branch](https://handbook.gitlab.com/handbook/engineering/workflow/#broken-master) scenario due
to the `rspec:feature-flags` job that only runs on the default branch.
## Types of feature flags
Choose a feature flag type that matches the expected usage.
### `gitlab_com_derisk` type
`gitlab_com_derisk` feature flags are short-lived feature flags,
used to de-risk GitLab.com deployments. Most feature flags used at
GitLab are of the `gitlab_com_derisk` type.
#### Constraints
- `default_enabled`: **Must not** be set to true. This kind of feature flag is meant to lower the risk on GitLab.com, thus there's no need to keep the flag in the codebase after it's been enabled on GitLab.com. `default_enabled: true` will not have any effect for this type of feature flag.
- Maximum Lifespan: 2 months after it's merged into the default branch
- Documentation: This type of feature flag doesn't need to be documented in the
[All feature flags in GitLab](../../administration/feature_flags/list.md) page given they're short-lived and deployment-related
- Rollout issue: **Must** have a rollout issue created from the
[Feature flag Roll Out template](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.gitlab/issue_templates/Feature%20Flag%20Roll%20Out.md)
#### Usage
The format for `gitlab_com_derisk` feature flags is `Feature.<state>(:<dev_flag_name>)`.
To enable and disable them, run on the GitLab Rails console:
```ruby
# To enable it for the instance:
Feature.enable(:<dev_flag_name>)
# To disable it for the instance:
Feature.disable(:<dev_flag_name>)
# To enable for a specific project:
Feature.enable(:<dev_flag_name>, Project.find(<project id>))
# To disable for a specific project:
Feature.disable(:<dev_flag_name>, Project.find(<project id>))
```
To check a `gitlab_com_derisk` feature flag's state:
```ruby
# Check if the feature flag is enabled
Feature.enabled?(:dev_flag_name)
# Check if the feature flag is disabled
Feature.disabled?(:dev_flag_name)
```
### `wip` type
Some features are complex and need to be implemented through several MRs. Until they're fully implemented,
the feature needs to be hidden from everyone. In that case, the `wip` (for "Work In Progress") feature flag allows
all the changes to be merged to the main branch without actually exposing the feature yet.
Once the feature is complete, the feature flag type can be changed to the `gitlab_com_derisk` or
`beta` type depending on how the feature will be presented/documented to customers.
#### Constraints
- `default_enabled`: **Must not** be set to true. If needed, this type can be changed to beta once the feature is complete.
- Maximum Lifespan: 4 months after it's merged into the default branch
- Documentation: This type of feature flag doesn't need to be documented in the
[All feature flags in GitLab](../../administration/feature_flags/list.md) page given they're mostly hiding unfinished code
- Rollout issue: Likely no need for a rollout issue, as `wip` feature flags should be transitioned to
  another type before being enabled
#### Usage
```ruby
# Check if feature flag is enabled
Feature.enabled?(:my_wip_flag, project)
# Check if feature flag is disabled
Feature.disabled?(:my_wip_flag, project)
# Push feature flag to Frontend
push_frontend_feature_flag(:my_wip_flag, project)
```
### `beta` type
We might [not be confident we'll be able to scale, support, and maintain a feature](../../policy/development_stages_support.md) in its current form for every designed use case ([example](https://gitlab.com/gitlab-org/gitlab/-/issues/336070#note_1523983444)).
There are also scenarios where a feature is not complete enough to be considered an MVC.
Providing a flag in this case allows engineers and customers to disable the new feature until it's performant enough.
#### Constraints
- `default_enabled`: Can be set to `true` so that a feature can be "released" to everyone in beta with the
possibility to disable it in the case of scalability issues (ideally it should only be disabled for this
reason on specific on-premise installations)
- Maximum Lifespan: 6 months after it's merged into the default branch
- Documentation: This type of feature flag **must** be documented in the
[All feature flags in GitLab](../../administration/feature_flags/list.md) page
- Rollout issue: **Must** have a rollout issue
created from the
[Feature flag Roll Out template](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.gitlab/issue_templates/Feature%20Flag%20Roll%20Out.md)
#### Usage
```ruby
# Check if feature flag is enabled
Feature.enabled?(:my_beta_flag, project)
# Check if feature flag is disabled
Feature.disabled?(:my_beta_flag, project)
# Push feature flag to Frontend
push_frontend_feature_flag(:my_beta_flag, project)
```
### `ops` type
`ops` feature flags are long-lived feature flags that control operational aspects
of GitLab product behavior. For example, feature flags that disable features that might
have a performance impact such as Sidekiq worker behavior.
Remember that using this type should follow a conscious decision not to introduce an
instance/group/project/user setting.
While `ops` type flags have an unlimited lifespan, they must be evaluated every 12 months to determine if
they are still necessary.
#### Constraints
- `default_enabled`: Should be set to `false` in most cases, and only enabled to resolve temporary scalability
issues or help debug production issues.
- Maximum Lifespan: Unlimited, but must be evaluated every 12 months
- Documentation: This type of feature flag **must** be documented in the
[All feature flags in GitLab](../../administration/feature_flags/list.md) page as well as be associated with an operational
runbook describing the circumstances when it can be used.
- Rollout issue: Likely no need for a rollout issue, as it is hard to predict when these flags are enabled or disabled
#### Usage
```ruby
# Check if feature flag is enabled
Feature.enabled?(:my_ops_flag, project)
# Check if feature flag is disabled
Feature.disabled?(:my_ops_flag, project)
# Push feature flag to Frontend
push_frontend_feature_flag(:my_ops_flag, project)
```
### `experiment` type
`experiment` feature flags are used for A/B testing on GitLab.com.
An `experiment` feature flag should conform to the same standards as a `beta` feature flag,
although the interface has some differences. An experiment feature flag should have a rollout issue,
created using the [Experiment tracking template](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.gitlab/issue_templates/Experiment%20Rollout.md). More information can be found in the [experiment guide](../experiment_guide/_index.md).
#### Constraints
- `default_enabled`: **Must not** be set to `true`.
- Maximum Lifespan: 6 months after it's merged into the default branch
### `worker` type
`worker` feature flags are special `ops` flags that allow you to control Sidekiq worker behavior, such as deferring Sidekiq jobs.
`worker` feature flags likely do not have any YAML definition, as the name can be dynamically generated using
the worker name itself, for example, `run_sidekiq_jobs_AuthorizedProjectsWorker`. Some examples of using `worker` type feature
flags can be found in [deferring Sidekiq jobs](#deferring-sidekiq-jobs).
### (Deprecated) `development` type
The `development` type is deprecated in favor of the `gitlab_com_derisk`, `wip`, and `beta` feature flag types.
## Feature flag definition and validation
During development (`RAILS_ENV=development`) or testing (`RAILS_ENV=test`), all feature flag usage is strictly validated.
This process is meant to ensure consistent feature flag usage in the codebase. All feature flags **must**:
- Be known. Only use feature flags that are explicitly defined (except for feature flags of the types `experiment`, `worker` and `undefined`).
- Not be defined twice. They have to be defined either in FOSS or EE, but not both.
- For feature flags that don't have a definition file, use a valid and consistent `type:` across all invocations.
- Have an owner.
All feature flags known to GitLab are self-documented in YAML files stored in:
- [`config/feature_flags`](https://gitlab.com/gitlab-org/gitlab/-/tree/master/config/feature_flags)
- [`ee/config/feature_flags`](https://gitlab.com/gitlab-org/gitlab/-/tree/master/ee/config/feature_flags)
Each feature flag is defined in a separate YAML file consisting of a number of fields:
| Field | Required | Description |
|---------------------|----------|----------------------------------------------------------------|
| `name` | yes | Name of the feature flag. |
| `description` | yes | A short description of the reason for the feature flag. |
| `type` | yes | Type of feature flag. |
| `default_enabled` | yes | The default state of the feature flag. |
| `introduced_by_url` | yes | The URL to the merge request that introduced the feature flag. |
| `milestone` | yes | Milestone in which the feature flag was created. |
| `group` | yes | The [group](https://handbook.gitlab.com/handbook/product/categories/#devops-stages) that owns the feature flag. |
| `feature_issue_url` | no | The URL to the original feature issue. |
| `rollout_issue_url` | no | The URL to the Issue covering the feature flag rollout. |
| `log_state_changes` | no | Used to log the state of the feature flag. |
{{< alert type="note" >}}
All validations are skipped when running in `RAILS_ENV=production`.
{{< /alert >}}
## Create a new feature flag
{{< alert type="note" >}}
GitLab Pages uses [a different process](../pages/_index.md#feature-flags) for feature flags.
{{< /alert >}}
The GitLab codebase provides [`bin/feature-flag`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/bin/feature-flag),
a dedicated tool to create new feature flag definitions.
The tool asks various questions about the new feature flag, then creates
a YAML definition in `config/feature_flags` or `ee/config/feature_flags`.
Only feature flags that have a YAML definition file can be used when running the development or testing environments.
```shell
$ bin/feature-flag my_feature_flag
>> Specify the feature flag type
?> beta
You picked the type 'beta'
>> Specify the group label to which the feature flag belongs, from the following list:
1. group::group1
2. group::group2
?> 2
You picked the group 'group::group2'
>> URL of the original feature issue (enter to skip):
?> https://gitlab.com/gitlab-org/gitlab/-/issues/435435
>> URL of the MR introducing the feature flag (enter to skip and let Danger provide a suggestion directly in the MR):
?> https://gitlab.com/gitlab-org/gitlab/-/merge_requests/141023
>> Username of the feature flag DRI (enter to skip):
?> bob
>> Is this an EE only feature (enter to skip):
?> [Return]
>> Press any key and paste the issue content that we copied to your clipboard! 🚀
?> [Return automatically opens the "New issue" page where you only have to paste the issue content]
>> URL of the rollout issue (enter to skip):
?> https://gitlab.com/gitlab-org/gitlab/-/issues/437162
create config/feature_flags/beta/my_feature_flag.yml
---
name: my_feature_flag
feature_issue_url: https://gitlab.com/gitlab-org/gitlab/-/issues/435435
introduced_by_url: https://gitlab.com/gitlab-org/gitlab/-/merge_requests/141023
rollout_issue_url: https://gitlab.com/gitlab-org/gitlab/-/issues/437162
milestone: '16.9'
group: group::composition analysis
type: beta
default_enabled: false
```
All newly-introduced feature flags must be [**disabled by default**](https://handbook.gitlab.com/handbook/product-development-flow/feature-flag-lifecycle/).
Features that are developed and merged behind a feature flag
should not include a changelog entry. The entry should be added either in the merge
request removing the feature flag or the merge request where the default value of
the feature flag is set to enabled. If the feature contains any database migrations, it
*should* include a changelog entry for the database changes.
{{< alert type="note" >}}
To create a feature flag that is only used in EE, add the `--ee` flag: `bin/feature-flag --ee`
{{< /alert >}}
### Naming new flags
When choosing a name for a new feature flag, consider the following guidelines:
- Describe the feature the feature flag controls.
- A long, **descriptive** name is better than a short but confusing one.
- Avoid names that indicate the state or phase of the feature, like `_mvc`, `_alpha`, or `_beta`.
- Write the name in snake case (`my_cool_feature_flag`).
- Avoid using `disable` in the name to avoid having to think (or [document](../documentation/feature_flags.md))
with double negatives. Consider starting the name with `hide_`, `remove_`, or `disallow_`.
In software engineering this problem is known as
["negative names for boolean variables"](https://www.serendipidata.com/posts/naming-guidelines-for-boolean-variables/).
We can't forbid negative words altogether, though, because we need to be able to introduce flags as
[disabled by default](#feature-flags-in-gitlab-development), use them to remove a feature by moving it behind a flag, or [selectively disable a flag by actor](controls.md#selectively-disable-by-actor).
### Risk of a broken master (main) branch
{{< alert type="warning" >}}
Feature flags **must** be used in the MR that introduces them. Not doing so causes a
[broken master](https://handbook.gitlab.com/handbook/engineering/workflow/#broken-master) scenario due
to the `rspec:feature-flags` job that only runs on the `master` branch.
{{< /alert >}}
### Optionally add a `.patch` file for automated removal of feature flags
The [`gitlab-housekeeper`](https://gitlab.com/gitlab-org/gitlab/-/tree/master/gems/gitlab-housekeeper) is able to automatically remove your feature flag code for you using the [`DeleteOldFeatureFlags` keep](https://gitlab.com/gitlab-org/gitlab/-/blob/master/keeps/delete_old_feature_flags.rb). The tool will run periodically and automatically clean up old feature flags from the code.
For this tool to automatically remove the usages of the feature flag in your code, you can add a `.patch` file alongside your feature flag YAML file. The file should have exactly the same name, except using the `.patch` extension instead of the `.yml` extension.
For example, you can create a patch file for `config/feature_flags/beta/my_feature_flag.yml` using the following steps:
1. Ensure you have a clean Git working directory.
1. Delete `config/feature_flags/beta/my_feature_flag.yml`.
1. Edit the code locally to remove any usage of `my_feature_flag`, as though the feature flag is already enabled and the feature is moving forward.
1. Run `git diff > config/feature_flags/beta/my_feature_flag.patch`. If your feature flag is not a `beta` flag, ensure your patch file is in the same directory as the YAML file that defines your feature flag.
1. Undo the deletion of `config/feature_flags/beta/my_feature_flag.yml`.
1. Undo the changes to the files you edited to remove the feature flag usage.
1. Commit the patch file to the branch where you are adding the feature flag.
Then, in the future, the `gitlab-housekeeper` will automatically clean up your
feature flag for you by applying this patch.
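A minimal shell sketch of the steps above (assuming the feature flag YAML and the code using the flag are already committed on your branch, and your working tree is clean):

```shell
rm config/feature_flags/beta/my_feature_flag.yml
# ...edit the code as though my_feature_flag is already enabled...

# Capture the removal as a patch next to the YAML definition.
git diff > config/feature_flags/beta/my_feature_flag.patch

# Restore the deleted definition and the edited files; the untracked .patch file is kept.
git restore .

git add config/feature_flags/beta/my_feature_flag.patch
git commit -m "Add removal patch for my_feature_flag"
```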
## List all the feature flags
To [use ChatOps](../../ci/chatops/_index.md) to output all the feature flags in an environment to Slack, you can use the `run feature list`
command. For example:
```shell
/chatops run feature list --dev
/chatops run feature list --staging
```
## Toggle a feature flag
See [rolling out changes](controls.md#rolling-out-changes) for more information
about toggling feature flags.
## Delete a feature flag
See [cleaning up feature flags](controls.md#cleaning-up) for more information about
deleting feature flags.
## Migrate an `ops` feature flag to an application setting
{{< alert type="warning" >}}
The changes to backfill application settings and use the settings in the code must be merged in the same milestone.
{{< /alert >}}
To migrate an `ops` feature flag to an application setting:
1. In application settings, create or identify an existing `JSONB` column to store the setting.
1. The application setting default should match `default_enabled:` in the feature flag YAML definition.
1. Write a migration to backfill the column. This allows instances which have
opted out of the default behavior to remain in the same state. Avoid using `Feature.enabled?` or `Feature.disabled?`
in the migration. Use the `Gitlab::Database::MigrationHelpers::FeatureFlagMigratorHelpers` migration helpers. These
helpers will only migrate feature flags that are explicitly set to `true` or `false`. If a feature flag is set for a
percentage or specific actor, the default value will be used.
1. In the **Admin** area, create a setting to enable or disable the feature.
1. Replace the feature flag everywhere with the application setting.
1. Update all the relevant documentation pages. If frontend changes are merged in a later milestone, you should add
documentation about how to update the settings by using the [application settings API](../../api/settings.md) or
the Rails console.
An example migration for a `JSONB` column:
```ruby
# default_enabled copied from feature flag definition YAML before it is removed
DEFAULT_ENABLED = true
def up
up_migrate_to_jsonb_setting(feature_flag_name: :my_flag_name,
setting_name: :my_setting,
jsonb_column_name: :settings,
default_enabled: DEFAULT_ENABLED)
end
def down
down_migrate_to_jsonb_setting(setting_name: :my_setting, jsonb_column_name: :settings)
end
```
An example migration for a boolean column:
```ruby
# default_enabled copied from feature flag definition YAML before it is removed
DEFAULT_ENABLED = true
def up
up_migrate_to_setting(feature_flag_name: :my_flag_name,
setting_name: :my_setting,
default_enabled: DEFAULT_ENABLED)
end
def down
down_migrate_to_setting(setting_name: :my_setting, default_enabled: DEFAULT_ENABLED)
end
```
## Develop with a feature flag
There are two main ways of using feature flags in the GitLab codebase:
- [Backend code (Rails)](#backend)
- [Frontend code (VueJS)](#frontend)
### Backend
The feature flag interface is defined in [`lib/feature.rb`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/feature.rb).
This interface provides a set of methods to check if the feature flag is enabled or disabled:
```ruby
if Feature.enabled?(:my_feature_flag, project)
# execute code if feature flag is enabled
else
# execute code if feature flag is disabled
end
if Feature.disabled?(:my_feature_flag, project)
# execute code if feature flag is disabled
end
```
The default behavior for feature flags that are not configured is controlled
by `default_enabled:` in the YAML definition.
If a feature flag does not have a YAML definition, an error is raised
in the development or test environment, while `false` is returned in production.
For feature flags that don't have a definition file (only allowed for the `experiment`, `worker` and `undefined` types),
you need to pass their `type:` when calling `Feature.enabled?` and `Feature.disabled?`:
```ruby
if Feature.enabled?(:experiment_feature_flag, project, type: :experiment)
# execute code if feature flag is enabled
end
if Feature.disabled?(:worker_feature_flag, project, type: :worker)
# execute code if feature flag is disabled
end
```
{{< alert type="warning" >}}
Don't use feature flags at application load time. For example, using the `Feature` class in
`config/initializers/*` or at the class level could cause an unexpected error. This error occurs
because a database that a feature flag adapter might depend on doesn't exist at load time
(especially for fresh installations). Checking for the database's existence at the caller isn't
recommended, as some adapters don't require a database at all (for example, the HTTP adapter). The
feature flag setup check must be abstracted in the `Feature` namespace. This approach also requires
application reload when the feature flag changes. You must therefore ask SREs to reload the
Web/API/Sidekiq fleet on production, which takes time to fully rollout/rollback the changes. For
these reasons, use environment variables (for example, `ENV['YOUR_FEATURE_NAME']`) or `gitlab.yml`
instead.
{{< /alert >}}
Here's an example of a pattern that you should avoid:
```ruby
class MyClass
if Feature.enabled?(:...)
new_process
else
legacy_process
end
end
```
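By contrast, a minimal sketch of the suggested alternative, reading an environment variable at load time (`YOUR_FEATURE_NAME`, `new_process`, and `legacy_process` are placeholders, not existing names):

```ruby
class MyClass
  # Evaluated once at load time; no database or Feature adapter is required.
  USE_NEW_PROCESS = ENV['YOUR_FEATURE_NAME'] == 'true'

  def process
    USE_NEW_PROCESS ? new_process : legacy_process
  end
end
```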
#### Recursion detection
When there are many feature flags, it is not always obvious where they are
called. Avoid cycles where the evaluation of one feature flag requires the
evaluation of other feature flags. If this causes a cycle, it will be broken
and the default value will be returned.
To enable this recursion detection to work correctly, always access feature values through
`Feature::enabled?`, and avoid the low-level use of `Feature::get`. When a cycle is
detected, a `Feature::RecursionError` exception is reported to the error tracker.
### Frontend
When using a feature flag for UI elements, make sure to also use a feature
flag for the underlying backend code, if there is any. This ensures there is
absolutely no way to use the feature until it is enabled.
Use the `push_frontend_feature_flag` method which is available to all controllers that inherit from `ApplicationController`. You can use this method to expose the state of a feature flag, for example:
```ruby
before_action do
# Prefer to scope it per project or user, for example
push_frontend_feature_flag(:vim_bindings, project)
end
def index
# ...
end
def edit
# ...
end
```
You can then check the state of the feature flag in JavaScript as follows:
```javascript
if (gon.features.vimBindings) {
// ...
}
```
The name of the feature flag in JavaScript is always camelCase,
so checking for `gon.features.vim_bindings` would not work.
See the [Vue guide](../fe_guide/vue.md#accessing-feature-flags) for details about
how to access feature flags in a Vue component.
For feature flags that don't have a definition file (only allowed for the `experiment`, `worker` and `undefined` types),
you need to pass their `type:` when calling `push_frontend_feature_flag`:
```ruby
before_action do
push_frontend_feature_flag(:vim_bindings, project, type: :experiment)
end
```
### Feature actors
**It is strongly advised to use actors with feature flags.** Actors provide a simple
way to enable a feature flag only for a given project, group or user. This makes debugging
easier, as you can, for example, filter logs and errors based on actors. This also makes it possible
to enable the feature on the `gitlab-org` or `gitlab-com` groups first, while the rest of
the users aren't impacted.
Actors also provide an easy way to do a percentage rollout of a feature in a sticky way.
If a 1% rollout enabled a feature for a specific actor, that actor will continue to have the feature enabled at
10%, 50%, and 100%.
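For illustration, a sketch of how a sticky rollout behaves when driven from the Rails console (in GitLab-provided environments you would use ChatOps instead; the flag name is a placeholder):

```ruby
# Enable the flag for 1% of actors. An actor that falls into this 1% stays enabled
# when the percentage is raised later, because the gate is evaluated per actor.
Feature.enable_percentage_of_actors(:my_feature_flag, 1)
Feature.enabled?(:my_feature_flag, project) # => true for roughly 1% of actors

# Raising the percentage only adds actors; previously enabled actors keep the feature.
Feature.enable_percentage_of_actors(:my_feature_flag, 50)
```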
GitLab supports the following feature flag actors:
- `User` model
- `Project` model
- `Group` model
- Current request
The actor is a second parameter of the `Feature.enabled?` call. For example:
```ruby
Feature.enabled?(:feature_flag, project)
```
Models which `include FeatureGate` have an `.actor_from_id` class method.
If you have the model's ID and do not need the model for anything other than checking the feature
flag state, you can use `.actor_from_id` to check the feature flag state without making a
database query to retrieve the model.
```ruby
# Bad -- Unnecessary query is executed
Feature.enabled?(:feature_flag, Project.find(project_id))
# Good -- No query for projects
Feature.enabled?(:feature_flag, Project.actor_from_id(project_id))
# Good -- Project model is used after feature flag check
project = Project.find(project_id)
return unless Feature.enabled?(:feature_flag, project)
project.update!(column: value)
```
See [Use ChatOps to enable and disable feature flags](controls.md#process) for details on how to use ChatOps
to selectively enable or disable feature flags in GitLab-provided environments, like staging and production.
Flag state is not inherited from a group by its subgroups or projects.
If you need a flag state to be consistent for an entire group hierarchy,
consider using the top-level group as the actor.
This group can be found by calling `#root_ancestor` on any group or project.
```ruby
Feature.enabled?(:feature_flag, group.root_ancestor)
```
#### Mixing actor types
Generally you should use only one type of actor in all invocations of `Feature.enabled?`
for a particular feature flag, and not mix different actor types.
Mixing actor types can lead to a feature being enabled or disabled inconsistently in ways
that can cause bugs. For example, if at the controller level a flag is checked using a
group actor and at the service level it is checked using a user actor, the feature may be
both enabled and disabled at different points in the same request.
In some situations it is safe to mix actor types if you know that it won't lead to
inconsistent results. For example, a webhook can be associated with either a group or a
project, and so a feature flag for a webhook might leverage this to roll out a feature for
group and project webhooks using the same feature flag.
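A minimal sketch of that webhook case (`hook.parent` is a hypothetical helper standing in for "the group or project that owns the webhook"):

```ruby
# The same flag is checked with whichever actor type owns the webhook.
actor = hook.parent # returns either a Group or a Project (hypothetical helper)
Feature.enabled?(:my_webhook_feature_flag, actor)
```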
If you need to use different actor types and cannot safely mix them in your situation you
should use separate flags for each actor type instead. For example:
```ruby
Feature.enabled?(:feature_flag_group, group)
Feature.enabled?(:feature_flag_user, user)
```
#### Instance actor
{{< alert type="warning" >}}
Instance-wide feature flags should only be used when a feature is tied in to an entire instance. Always prioritize other actors first.
{{< /alert >}}
In some cases, you may want a feature flag to be enabled for an entire instance rather than for a specific actor. A good example is the **Admin** area settings, where it would be impossible to enable the feature flag based on a group or a project because they are both `undefined`.
The user actor would cause confusion, because a feature flag might be enabled for a user who is not an administrator but disabled for a user who is.
Instead, you can pass the `:instance` symbol as the second argument to `Feature.enabled?`, which is interpreted as the GitLab instance.
```ruby
Feature.enabled?(:feature_flag, :instance)
```
#### Current request actor
{{< history >}}
- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/132078) in GitLab 16.5
{{< /history >}}
Using a percentage-of-time rollout is not recommended, because each call may return
inconsistent results.
Instead, use the current request as the actor.
```ruby
# Bad
Feature.enable_percentage_of_time(:feature_flag, 40)
Feature.enabled?(:feature_flag)
# Good
Feature.enable_percentage_of_actors(:feature_flag, 40)
Feature.enabled?(:feature_flag, Feature.current_request)
```
When using the current request as the actor, the feature flag should return the
same value within the context of a request.
As the current request actor is implemented using [`SafeRequestStore`](../caching.md#low-level), we should
have consistent feature flag values within:
- a Rack request
- a Sidekiq worker execution
- an ActionCable worker execution
To migrate an existing feature from percentage of time to the current request
actor, it is recommended that you create a new feature flag.
This is because it is difficult to control the timing between existing
`percentage_of_time` values, the deployment of the code change, and switching to
use `percentage_of_actors`.
#### Use actors for verifying in production
{{< alert type="warning" >}}
Using production as a testing environment is not recommended. Use our testing
environments for testing features that are not production-ready.
{{< /alert >}}
While the staging environment provides a way to test features in an environment
that resembles production, it doesn't allow you to compare before-and-after
performance metrics specific to the production environment. It can be useful to have a
project in production with your development feature flag enabled, to allow tools
like Sitespeed reports to reveal the metrics of the new code under a feature flag.
This approach is even more useful if you're already tracking the old codebase in
Sitespeed, enabling you to compare performance accurately before and after the
feature flag's rollout.
### Enable additional objects as actors
To use feature gates based on actors, the model needs to respond to
`flipper_id`. For example, to enable for the Foo model:
```ruby
class Foo < ActiveRecord::Base
include FeatureGate
end
```
Only models that `include FeatureGate` or expose a `flipper_id` method can be
used as actors for `Feature.enabled?`.
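For comparison, a minimal sketch of exposing `flipper_id` directly instead of including `FeatureGate` (the class and the ID format are assumptions; the value only needs to be stable and unique per record):

```ruby
class Bar
  attr_reader :id

  def initialize(id)
    @id = id
  end

  # Flipper identifies the actor by this string; "ClassName:id" is a common convention.
  def flipper_id
    "Bar:#{id}"
  end
end

Feature.enabled?(:my_feature_flag, Bar.new(1))
```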
### Feature flags for licensed features
You can't use a feature flag with the same name as a licensed feature name, because
it would cause a naming collision. This was [widely discussed and removed](https://gitlab.com/gitlab-org/gitlab/-/issues/259611)
because it is confusing.
To check for licensed features, add a dedicated feature flag under a different name
and check it explicitly, for example:
```ruby
Feature.enabled?(:licensed_feature_feature_flag, project) &&
project.feature_available?(:licensed_feature)
```
### Feature groups
Feature groups must be defined statically in `lib/feature.rb` (in the
`.register_feature_groups` method), but their implementation can be
dynamic (querying the DB, for example).
Once defined in `lib/feature.rb`, you can activate a
feature for a given feature group via the [`feature_group` parameter of the features API](../../api/features.md#set-or-create-a-feature).
The available feature groups are:
| Group name | Scoped to | Description |
| --------------------- | --------- | ----------- |
| `gitlab_team_members` | Users | Enables the feature for users who are members of [`gitlab-com`](https://gitlab.com/gitlab-com) |
Feature groups can be enabled via the group name:
```ruby
Feature.enable(:feature_flag_name, :gitlab_team_members)
```
### Controlling feature flags locally
#### On rails console
In the rails console (`rails c`), enter the following command to enable a feature flag:
```ruby
Feature.enable(:feature_flag_name)
```
Similarly, the following command disables a feature flag:
```ruby
Feature.disable(:feature_flag_name)
```
You can also enable a feature flag for a given gate:
```ruby
Feature.enable(:feature_flag_name, Project.find_by_full_path("root/my-project"))
```
When manually enabling or disabling a feature flag from the Rails console, its default value gets overwritten.
This can cause confusion when changing the flag's `default_enabled` attribute.
To reset the feature flag to the default status:
```ruby
Feature.remove(:feature_flag_name)
```
#### On your browser
Access `http://gdk.test:3000/rails/features` to manage feature flags locally in your browser.
### Logging
Usage and state of the feature flag is logged if either:
- `log_state_changes` is set to `true` in the feature flag definition.
- `milestone` refers to a milestone that is greater than or equal to the current GitLab version.
When the state of a feature flag is logged, it can be identified by using the `"json.feature_flag_states": "feature_flag_name:1"` or `"json.feature_flag_states": "feature_flag_name:0"` condition in Kibana.
You can see an example in [this](https://log.gprd.gitlab.net/app/discover#/?_g=(filters:!(),refreshInterval:(pause:!t,value:60000),time:(from:now-7d%2Fd,to:now))&_a=(columns:!(json.feature_flag_states),filters:!(('$state':(store:appState),meta:(alias:!n,disabled:!f,field:json.feature_flag_states,index:'7092c4e2-4eb5-46f2-8305-a7da2edad090',key:json.feature_flag_states,negate:!f,params:(query:'optimize_where_full_path_in:1'),type:phrase),query:(match_phrase:(json.feature_flag_states:'optimize_where_full_path_in:1')))),hideChart:!f,index:'7092c4e2-4eb5-46f2-8305-a7da2edad090',interval:auto,query:(language:kuery,query:''),sort:!(!(json.time,desc)))) link.
{{< alert type="note" >}}
Only 20% of the requests log the state of the feature flags. This is controlled with the [`feature_flag_state_logs`](https://gitlab.com/gitlab-org/gitlab/-/blob/6deb6ecbc69f05a80d920a295dfc1a6a303fc7a0/config/feature_flags/ops/feature_flag_state_logs.yml) feature flag.
{{< /alert >}}
## Changelog
We want to avoid introducing a changelog when features are not accessible by an end-user either directly (example: ability to use the feature) or indirectly (examples: ability to take advantage of background jobs, performance improvements, or database migration updates).
- Database migrations are always accessible by an end-user indirectly, as self-managed customers need to be aware of database changes before upgrading. For this reason, they **should** have a changelog entry.
- Any change behind a feature flag **disabled** by default **should not** have a changelog entry.
- Any change behind a feature flag that is **enabled** by default **should** have a changelog entry.
- Changing the feature flag itself (flag removal, default-on setting) **should** have [a changelog entry](../changelog.md).
Use the flowchart to determine the changelog entry type.
```mermaid
flowchart LR
FDOFF(Flag is currently<br>'default: off')
FDON(Flag is currently<br>'default: on')
CDO{Change to<br>'default: on'}
ACF(added / changed / fixed / '...')
RF{Remove flag}
RF2{Remove flag}
RC(removed / changed)
OTHER(other)
FDOFF -->CDO-->ACF
FDOFF -->RF
RF-->|Keep new code?| ACF
RF-->|Keep old code?| OTHER
FDON -->RF2
RF2-->|Keep old code?| RC
RF2-->|Keep new code?| OTHER
```
- The changelog for a feature flag should describe the feature and not the
  flag, unless a default-enabled feature flag is removed and the new code is kept (`other` in the flowchart above).
- A feature flag can also be used for rolling out a bug fix or maintenance work. In this scenario, the changelog
  must be related to it, for example, `fixed` or `other`.
## Feature flags in tests
Introducing a feature flag into the codebase creates an additional code path that should be tested.
It is strongly advised to include automated tests for all code affected by a feature flag, both when **enabled** and **disabled**
to ensure the feature works properly. If automated tests are not included for both states, the functionality associated
with the untested code path should be manually tested before deployment to production.
When using the testing environment, all feature flags are enabled by default.
Flags can be disabled by default in the [`spec/spec_helper.rb` file](https://gitlab.com/gitlab-org/gitlab/-/blob/b61fba42eea2cf5bb1ca64e80c067a07ed5d1921/spec/spec_helper.rb#L274).
Add a comment inline to explain why the flag needs to be disabled. You can also attach the issue URL for reference if possible.
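For illustration, a hypothetical entry might look like this (the flag name is a placeholder):

```ruby
# In spec/spec_helper.rb (sketch):
# Disabled by default in tests because the feature is still incomplete.
# See the rollout issue: <issue URL>.
stub_feature_flags(my_unfinished_feature_flag: false)
```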
{{< alert type="warning" >}}
This does not apply to end-to-end (QA) tests, which [do not enable feature flags by default](#end-to-end-qa-tests). There is a different [process for using feature flags in end-to-end tests](../testing_guide/end_to_end/best_practices/feature_flags.md).
{{< /alert >}}
To disable a feature flag in a test, use the `stub_feature_flags`
helper. For example, to globally disable the `ci_live_trace` feature
flag in a test:
```ruby
stub_feature_flags(ci_live_trace: false)
Feature.enabled?(:ci_live_trace) # => false
```
A common pattern of testing both paths looks like:
```ruby
it 'ci_live_trace works' do
# tests assuming ci_live_trace is enabled in tests by default
Feature.enabled?(:ci_live_trace) # => true
end
context 'when ci_live_trace is disabled' do
before do
stub_feature_flags(ci_live_trace: false)
end
it 'ci_live_trace does not work' do
Feature.enabled?(:ci_live_trace) # => false
end
end
```
If you wish to set up a test where a feature flag is enabled only
for some actors and not others, you can specify this in options
passed to the helper. For example, to enable the `ci_live_trace`
feature flag for a specific project:
```ruby
project1, project2 = build_list(:project, 2)
# Feature will only be enabled for project1
stub_feature_flags(ci_live_trace: project1)
Feature.enabled?(:ci_live_trace) # => false
Feature.enabled?(:ci_live_trace, project1) # => true
Feature.enabled?(:ci_live_trace, project2) # => false
```
The behavior of FlipperGate is as follows:
1. You can enable an override for a specified actor.
1. You can disable (remove) an override for a specified actor,
falling back to the default state.
1. There's no way to model that you explicitly disabled a specified actor.
```ruby
Feature.enable(:my_feature)
Feature.disable(:my_feature, project1)
Feature.enabled?(:my_feature) # => true
Feature.enabled?(:my_feature, project1) # => true
Feature.disable(:my_feature2)
Feature.enable(:my_feature2, project1)
Feature.enabled?(:my_feature2) # => false
Feature.enabled?(:my_feature2, project1) # => true
```
### `have_pushed_frontend_feature_flags`
Use `have_pushed_frontend_feature_flags` to test if [`push_frontend_feature_flag`](#frontend)
has added the feature flag to the HTML.
For example,
```ruby
stub_feature_flags(value_stream_analytics_path_navigation: false)
visit group_analytics_cycle_analytics_path(group)
expect(page).to have_pushed_frontend_feature_flags(valueStreamAnalyticsPathNavigation: false)
```
### `stub_feature_flags` vs `Feature.enable*`
It is preferred to use `stub_feature_flags` to enable feature flags
in the testing environment. This method provides a simple and well described
interface for simple use cases.
However, in some cases more complex behavior needs to be tested,
like percentage rollouts of feature flags. This can be done using
`.enable_percentage_of_time` or `.enable_percentage_of_actors`:
```ruby
# Good: feature needs to be explicitly disabled, as it is enabled by default if not defined
stub_feature_flags(my_feature: false)
stub_feature_flags(my_feature: true)
stub_feature_flags(my_feature: project)
stub_feature_flags(my_feature: [project, project2])
# Bad
Feature.enable(:my_feature_2)
# Good: enable my_feature for 50% of time
Feature.enable_percentage_of_time(:my_feature_3, 50)
# Good: enable my_feature for 50% of actors/gates/things
Feature.enable_percentage_of_actors(:my_feature_4, 50)
```
Each feature flag that has a defined state is persisted
during test execution time:
```ruby
Feature.persisted_names.include?('my_feature') => true
Feature.persisted_names.include?('my_feature_2') => true
Feature.persisted_names.include?('my_feature_3') => true
Feature.persisted_names.include?('my_feature_4') => true
```
### Stubbing actor
When you want to enable a feature flag for a specific actor only,
you can stub its representation. A gate that is passed
as an argument to `Feature.enabled?` and `Feature.disabled?` must be an object
that includes `FeatureGate`.
In specs you can use the `stub_feature_flag_gate` method that allows you to
quickly create a custom actor:
```ruby
gate = stub_feature_flag_gate('CustomActor')
stub_feature_flags(ci_live_trace: gate)
Feature.enabled?(:ci_live_trace) # => false
Feature.enabled?(:ci_live_trace, gate) # => true
```
### Controlling feature flags engine in tests
Our Flipper engine in the test environment works in memory mode (`Flipper::Adapters::Memory`).
`production` and `development` modes use `Flipper::Adapters::ActiveRecord`.
You can control whether the `Flipper::Adapters::Memory` or `ActiveRecord` mode is being used.
#### `stub_feature_flags: true` (default and preferred)
In this mode, Flipper is configured to use `Flipper::Adapters::Memory` and to mark all feature
flags as on by default and persisted on first use.
Make sure the behavior behind a feature flag does not go untested in non-specific contexts.
#### `stub_feature_flags: false`
This disables the memory-stubbed Flipper and uses `Flipper::Adapters::ActiveRecord`,
the mode used by `production` and `development`.
Use this mode only when you really want to test how Flipper
interacts with `ActiveRecord`.
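A sketch of how a single example group might opt out of the memory adapter, assuming `stub_feature_flags` is the RSpec metadata referenced by these headings:

```ruby
RSpec.describe MyClass, stub_feature_flags: false do
  it 'persists the flag state with the ActiveRecord adapter' do
    Feature.enable(:my_feature_flag)

    expect(Feature.enabled?(:my_feature_flag)).to be(true)
  end
end
```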
### End-to-end (QA) tests
Toggling feature flags works differently in end-to-end (QA) tests. The end-to-end test framework does not have direct access to
Rails or the database, so it can't use Flipper. Instead, it uses [the public API](../../api/features.md#set-or-create-a-feature). Each end-to-end test can [enable or disable a feature flag during the test](../testing_guide/end_to_end/best_practices/feature_flags.md). Alternatively, you can enable or disable a feature flag before one or more tests when you [run them from your GitLab repository's `qa` directory](https://gitlab.com/gitlab-org/gitlab/-/tree/master/qa#running-tests-with-a-feature-flag-enabled-or-disabled), or if you [run the tests via GitLab QA](https://gitlab.com/gitlab-org/gitlab-qa/-/blob/master/docs/what_tests_can_be_run.md#running-tests-with-a-feature-flag-enabled).
[As noted above, feature flags are not enabled by default in end-to-end tests.](#feature-flags-in-tests)
This means that end-to-end tests will run with feature flags in the default state implemented in the source
code, or with the feature flag in its current state on the GitLab instance under test, unless the
test is written to enable/disable a feature flag explicitly.
When a feature flag is changed on Staging or on GitLab.com, a Slack message will be posted to the `#e2e-run-staging` or `#e2e-run-production` channels to inform
the pipeline triage DRI so that they can more easily determine if any failures are related to a feature flag change. However, if you are working on a change you can
help to avoid unexpected failures by [confirming that the end-to-end tests pass with a feature flag enabled.](../testing_guide/end_to_end/best_practices/feature_flags.md#confirming-that-end-to-end-tests-pass-with-a-feature-flag-enabled)
## Controlling Sidekiq worker behavior with feature flags
Feature flags with [`worker` type](#worker-type) can be used to control the behavior of a Sidekiq worker.
### Deferring Sidekiq jobs
When disabled, feature flags with the format of `run_sidekiq_jobs_{WorkerName}` delay the execution of the worker
by scheduling the job at a later time. This feature flag is enabled by default for all workers.
Deferring jobs can be useful during an incident where contentious behavior from
worker instances is saturating infrastructure resources (such as the database and database connection pool).
The implementation can be found at [SkipJobs Sidekiq server middleware](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/sidekiq_middleware/skip_jobs.rb).
{{< alert type="note" >}}
Jobs are deferred indefinitely as long as the feature flag is disabled. It is important to remove the
feature flag after the worker is deemed safe to continue processing.
{{< /alert >}}
When set to false, 100% of the jobs are deferred. When you want processing to resume, you can
use a **percentage of time** rollout. For example:
```shell
# not running any jobs, deferring all 100% of the jobs
/chatops run feature set run_sidekiq_jobs_SlowRunningWorker false
# only running 10% of the jobs, deferring 90% of the jobs
/chatops run feature set run_sidekiq_jobs_SlowRunningWorker 10
# running 50% of the jobs, deferring 50% of the jobs
/chatops run feature set run_sidekiq_jobs_SlowRunningWorker 50
# back to running all jobs normally
/chatops run feature delete run_sidekiq_jobs_SlowRunningWorker
```
### Dropping Sidekiq jobs
Instead of [deferring jobs](#deferring-sidekiq-jobs), jobs can be entirely dropped by enabling the feature flag
`drop_sidekiq_jobs_{WorkerName}`. Use this feature flag when you are certain the jobs do not need to be processed in the future, and therefore are safe to be dropped.
```shell
# drop all the jobs
/chatops run feature set drop_sidekiq_jobs_SlowRunningWorker true
# process jobs normally
/chatops run feature delete drop_sidekiq_jobs_SlowRunningWorker
```
{{< alert type="note" >}}
The dropping feature flag (`drop_sidekiq_jobs_{WorkerName}`) takes precedence over the deferring feature flag (`run_sidekiq_jobs_{WorkerName}`). When `drop_sidekiq_jobs` is enabled and `run_sidekiq_jobs` is disabled, jobs are entirely dropped.
{{< /alert >}}
---
stage: none
group: unassigned
info: 'See the Technical Writers assigned to Development Guidelines: https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments-to-development-guidelines'
description: Developer documentation about GitLab feature flags.
title: Feature flags in the development of GitLab
---
This page explains how developers contribute to the development and operations of the GitLab product
through feature flags. To create custom feature flags to show and hide features in your own applications,
see [Create a feature flag](../../operations/feature_flags.md#create-a-feature-flag).
A [complete list of feature flags](../../administration/feature_flags/list.md) in GitLab is also available.
{{< alert type="warning" >}}
All newly-introduced feature flags should be [disabled by default](https://handbook.gitlab.com/handbook/product-development-flow/feature-flag-lifecycle/).
{{< /alert >}}
{{< alert type="warning" >}}
All newly-introduced feature flags should be [used with an actor](controls.md#percentage-based-actor-selection).
{{< /alert >}}
Design documents:
- (Latest) [Feature Flags usage in GitLab development and operations](https://handbook.gitlab.com/handbook/engineering/architecture/design-documents/feature_flags_usage_in_dev_and_ops/)
- [Development Feature Flags Architecture](https://handbook.gitlab.com/handbook/engineering/architecture/design-documents/feature_flags_development/)
This document is the subject of continued work as part of an epic to [improve internal usage of feature flags](https://gitlab.com/groups/gitlab-org/-/epics/3551). Raise any suggestions as new issues and attach them to the epic.
For an [overview of the feature flag lifecycle](https://handbook.gitlab.com/handbook/product-development-flow/feature-flag-lifecycle/#feature-flag-lifecycle), or if you need help deciding [if you should use a feature flag](https://handbook.gitlab.com/handbook/product-development-flow/feature-flag-lifecycle/#when-to-use-feature-flags) or not, see the feature flag lifecycle handbook page.
## When to use feature flags
Moved to the ["When to use feature flags"](https://handbook.gitlab.com/handbook/product-development-flow/feature-flag-lifecycle/#when-to-use-feature-flags) section in the handbook.
### Do not use feature flags for long lived settings
Feature flags are meant to be short lived. If you are intending on adding a
feature flag so that something can be enabled per user/group/project for a long
period of time, consider introducing
[Cascading Settings](../cascading_settings.md) or [Application Settings](../application_settings.md)
instead. Settings
offer a way for customers to enable or disable features for themselves on
GitLab.com or self-managed instances, and can remain in the codebase as long as needed. In
contrast, users have no way to enable or disable feature flags for themselves on
GitLab.com, and only self-managed administrators can change the feature flags.
Also,
[feature flags are not supported in GitLab Dedicated](../enabling_features_on_dedicated.md#feature-flags)
which is another reason you should not use them as a replacement for settings.
## Feature flags in GitLab development
The following highlights should be considered when deciding if feature flags
should be leveraged:
- The feature flag must be **disabled by default**.
- Feature flags should remain in the codebase for as short a period as possible
to reduce the need for feature flag accounting.
- The person operating the feature flag is responsible for clearly communicating
the status of a feature behind the feature flag in the documentation and with other stakeholders. The
issue description should be updated with the feature flag name and whether it is
  defaulted on or off as soon as it is evident that a feature flag is needed.
- Merge requests that introduce a feature flag, update its state, or remove the
existing feature flag because a feature is deemed stable must have the
~"feature flag" label assigned.
When the feature implementation is delivered over multiple merge requests:
1. [Create a new feature flag](#create-a-new-feature-flag)
which is **disabled** by default, in the first merge request which uses the flag.
Flags [should not be added separately](#risk-of-a-broken-default-branch).
1. Submit incremental changes via one or more merge requests, ensuring that any
new code added can only be reached if the feature flag is **enabled**.
You can keep the feature flag enabled on your local GDK during development.
1. When the feature is ready to be tested by other team members, [create the initial documentation](../documentation/feature_flags.md#when-to-document-features-behind-a-feature-flag).
Include details about the status of the [feature flag](../documentation/feature_flags.md#how-to-add-feature-flag-documentation).
1. Enable the feature flag for a specific group/project/user and ensure that there are no issues
with the implementation. Do not enable the feature flag for a public project
like `gitlab-org/gitlab` if there is no documentation. Team members and contributors might search for
documentation on how to use the feature if they see it enabled in a public project.
1. When the feature is ready for production use, including GitLab Self-Managed instances, open one merge request to:
- Update the documentation to describe the latest flag status.
- Add a [changelog entry](#changelog).
- Remove the feature flag to enable the new behavior, or flip the feature flag to be **enabled by default** (only for `ops` and `beta` feature flags).
When the feature flag removal is delivered over multiple merge requests:
1. The value change of a feature flag should be the only change in a merge request. As long as the feature flag exists in the codebase, both states should be fully functional (when the feature is on and off).
1. After all mentions of the feature flag have been removed, legacy code can be removed. Steps in the feature flag roll-out issue should be followed, and if a step needs to be skipped, a comment should be added to the issue detailing why.
One might be tempted to think that feature flags will delay the release of a
feature by at least one month (= one release). This is not the case. A feature
flag does not have to stick around for a specific amount of time
(for example, at least one release), instead they should stick around until the feature
is deemed stable. **Stable means it works on GitLab.com without causing any
problems, such as outages.**
## Risk of a broken default branch
Feature flags must be used in the MR that introduces them. Not doing so causes a
[broken default branch](https://handbook.gitlab.com/handbook/engineering/workflow/#broken-master) scenario due
to the `rspec:feature-flags` job that only runs on the default branch.
## Types of feature flags
Choose a feature flag type that matches the expected usage.
### `gitlab_com_derisk` type
`gitlab_com_derisk` feature flags are short-lived feature flags,
used to de-risk GitLab.com deployments. Most feature flags used at
GitLab are of the `gitlab_com_derisk` type.
#### Constraints
- `default_enabled`: **Must not** be set to true. This kind of feature flag is meant to lower the risk on GitLab.com, thus there's no need to keep the flag in the codebase after it's been enabled on GitLab.com. `default_enabled: true` will not have any effect for this type of feature flag.
- Maximum Lifespan: 2 months after it's merged into the default branch
- Documentation: This type of feature flag doesn't need to be documented in the
[All feature flags in GitLab](../../administration/feature_flags/list.md) page given they're short-lived and deployment-related
- Rollout issue: **Must** have a rollout issue created from the
[Feature flag Roll Out template](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.gitlab/issue_templates/Feature%20Flag%20Roll%20Out.md)
#### Usage
The format for `gitlab_com_derisk` feature flags is `Feature.<state>(:<dev_flag_name>)`.
To enable and disable them, run on the GitLab Rails console:
```ruby
# To enable it for the instance:
Feature.enable(:<dev_flag_name>)
# To disable it for the instance:
Feature.disable(:<dev_flag_name>)
# To enable for a specific project:
Feature.enable(:<dev_flag_name>, Project.find(<project id>))
# To disable for a specific project:
Feature.disable(:<dev_flag_name>, Project.find(<project id>))
```
To check a `gitlab_com_derisk` feature flag's state:
```ruby
# Check if the feature flag is enabled
Feature.enabled?(:dev_flag_name)
# Check if the feature flag is disabled
Feature.disabled?(:dev_flag_name)
```
### `wip` type
Some features are complex and need to be implemented through several MRs. Until they're fully implemented,
the feature needs to be hidden from everyone. In that case, the `wip` (for "Work In Progress") feature flag allows
you to merge all the changes to the main branch without actually using the feature yet.
Once the feature is complete, the feature flag type can be changed to the `gitlab_com_derisk` or
`beta` type depending on how the feature will be presented/documented to customers.
#### Constraints
- `default_enabled`: **Must not** be set to true. If needed, this type can be changed to beta once the feature is complete.
- Maximum Lifespan: 4 months after it's merged into the default branch
- Documentation: This type of feature flag doesn't need to be documented in the
[All feature flags in GitLab](../../administration/feature_flags/list.md) page given they're mostly hiding unfinished code
- Rollout issue: Likely no need for a rollout issue, as `wip` feature flags should be transitioned to
another type before being enabled
#### Usage
```ruby
# Check if feature flag is enabled
Feature.enabled?(:my_wip_flag, project)
# Check if feature flag is disabled
Feature.disabled?(:my_wip_flag, project)
# Push feature flag to Frontend
push_frontend_feature_flag(:my_wip_flag, project)
```
### `beta` type
We might [not be confident we'll be able to scale, support, and maintain a feature](../../policy/development_stages_support.md) in its current form for every designed use case ([example](https://gitlab.com/gitlab-org/gitlab/-/issues/336070#note_1523983444)).
There are also scenarios where a feature is not complete enough to be considered an MVC.
Providing a flag in this case allows engineers and customers to disable the new feature until it's performant enough.
#### Constraints
- `default_enabled`: Can be set to `true` so that a feature can be "released" to everyone in beta with the
possibility to disable it in the case of scalability issues (ideally it should only be disabled for this
reason on specific on-premise installations)
- Maximum Lifespan: 6 months after it's merged into the default branch
- Documentation: This type of feature flag **must** be documented in the
[All feature flags in GitLab](../../administration/feature_flags/list.md) page
- Rollout issue: **Must** have a rollout issue
created from the
[Feature flag Roll Out template](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.gitlab/issue_templates/Feature%20Flag%20Roll%20Out.md)
#### Usage
```ruby
# Check if feature flag is enabled
Feature.enabled?(:my_beta_flag, project)
# Check if feature flag is disabled
Feature.disabled?(:my_beta_flag, project)
# Push feature flag to Frontend
push_frontend_feature_flag(:my_beta_flag, project)
```
### `ops` type
`ops` feature flags are long-lived feature flags that control operational aspects
of GitLab product behavior. For example, feature flags that disable features that might
have a performance impact such as Sidekiq worker behavior.
Remember that using this type should follow a conscious decision not to introduce an
instance/group/project/user setting.
While `ops` type flags have an unlimited lifespan, they must be evaluated every 12 months to determine whether
they are still necessary.
#### Constraints
- `default_enabled`: Should be set to `false` in most cases, and only enabled to resolve temporary scalability
issues or help debug production issues.
- Maximum Lifespan: Unlimited, but must be evaluated every 12 months
- Documentation: This type of feature flag **must** be documented in the
[All feature flags in GitLab](../../administration/feature_flags/list.md) page as well as be associated with an operational
runbook describing the circumstances when it can be used.
- Rollout issue: Likely no need for a rollout issue, as it is hard to predict when these flags are enabled or disabled
#### Usage
```ruby
# Check if feature flag is enabled
Feature.enabled?(:my_ops_flag, project)
# Check if feature flag is disabled
Feature.disabled?(:my_ops_flag, project)
# Push feature flag to Frontend
push_frontend_feature_flag(:my_ops_flag, project)
```
### `experiment` type
`experiment` feature flags are used for A/B testing on GitLab.com.
An `experiment` feature flag should conform to the same standards as a `beta` feature flag,
although the interface has some differences. An experiment feature flag should have a rollout issue,
created using the [Experiment tracking template](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.gitlab/issue_templates/Experiment%20Rollout.md). More information can be found in the [experiment guide](../experiment_guide/_index.md).
#### Constraints
- `default_enabled`: **Must not** be set to `true`.
- Maximum Lifespan: 6 months after it's merged into the default branch
### `worker` type
`worker` feature flags are special `ops` flags that allow you to control Sidekiq worker behavior, such as deferring Sidekiq jobs.
`worker` feature flags likely do not have any YAML definition as the name could be dynamically generated using
the worker name itself, for example, `run_sidekiq_jobs_AuthorizedProjectsWorker`. Some examples for using `worker` type feature
flags can be found in [deferring Sidekiq jobs](#deferring-sidekiq-jobs).
### (Deprecated) `development` type
The `development` type is deprecated in favor of the `gitlab_com_derisk`, `wip`, and `beta` feature flag types.
## Feature flag definition and validation
During development (`RAILS_ENV=development`) or testing (`RAILS_ENV=test`), all feature flag usage is strictly validated.
This process is meant to ensure consistent feature flag usage in the codebase. All feature flags **must**:
- Be known. Only use feature flags that are explicitly defined (except for feature flags of the types `experiment`, `worker` and `undefined`).
- Not be defined twice. They have to be defined either in FOSS or EE, but not both.
- For feature flags that don't have a definition file, use a valid and consistent `type:` across all invocations.
- Have an owner.
All feature flags known to GitLab are self-documented in YAML files stored in:
- [`config/feature_flags`](https://gitlab.com/gitlab-org/gitlab/-/tree/master/config/feature_flags)
- [`ee/config/feature_flags`](https://gitlab.com/gitlab-org/gitlab/-/tree/master/ee/config/feature_flags)
Each feature flag is defined in a separate YAML file consisting of a number of fields:
| Field | Required | Description |
|---------------------|----------|----------------------------------------------------------------|
| `name` | yes | Name of the feature flag. |
| `description` | yes | A short description of the reason for the feature flag. |
| `type` | yes | Type of feature flag. |
| `default_enabled` | yes | The default state of the feature flag. |
| `introduced_by_url` | yes | The URL to the merge request that introduced the feature flag. |
| `milestone` | yes | Milestone in which the feature flag was created. |
| `group` | yes | The [group](https://handbook.gitlab.com/handbook/product/categories/#devops-stages) that owns the feature flag. |
| `feature_issue_url` | no | The URL to the original feature issue. |
| `rollout_issue_url` | no | The URL to the Issue covering the feature flag rollout. |
| `log_state_changes` | no | Used to log the state of the feature flag |
{{< alert type="note" >}}
All validations are skipped when running in `RAILS_ENV=production`.
{{< /alert >}}
## Create a new feature flag
{{< alert type="note" >}}
GitLab Pages uses [a different process](../pages/_index.md#feature-flags) for feature flags.
{{< /alert >}}
The GitLab codebase provides [`bin/feature-flag`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/bin/feature-flag),
a dedicated tool to create new feature flag definitions.
The tool asks various questions about the new feature flag, then creates
a YAML definition in `config/feature_flags` or `ee/config/feature_flags`.
Only feature flags that have a YAML definition file can be used when running the development or testing environments.
```shell
$ bin/feature-flag my_feature_flag
>> Specify the feature flag type
?> beta
You picked the type 'beta'
>> Specify the group label to which the feature flag belongs, from the following list:
1. group::group1
2. group::group2
?> 2
You picked the group 'group::group2'
>> URL of the original feature issue (enter to skip):
?> https://gitlab.com/gitlab-org/gitlab/-/issues/435435
>> URL of the MR introducing the feature flag (enter to skip and let Danger provide a suggestion directly in the MR):
?> https://gitlab.com/gitlab-org/gitlab/-/merge_requests/141023
>> Username of the feature flag DRI (enter to skip):
?> bob
>> Is this an EE only feature (enter to skip):
?> [Return]
>> Press any key and paste the issue content that we copied to your clipboard! 🚀
?> [Return automatically opens the "New issue" page where you only have to paste the issue content]
>> URL of the rollout issue (enter to skip):
?> https://gitlab.com/gitlab-org/gitlab/-/issues/437162
create config/feature_flags/beta/my_feature_flag.yml
---
name: my_feature_flag
feature_issue_url: https://gitlab.com/gitlab-org/gitlab/-/issues/435435
introduced_by_url: https://gitlab.com/gitlab-org/gitlab/-/merge_requests/141023
rollout_issue_url: https://gitlab.com/gitlab-org/gitlab/-/issues/437162
milestone: '16.9'
group: group::composition analysis
type: beta
default_enabled: false
```
All newly-introduced feature flags must be [**disabled by default**](https://handbook.gitlab.com/handbook/product-development-flow/feature-flag-lifecycle/).
Features that are developed and merged behind a feature flag
should not include a changelog entry. The entry should be added either in the merge
request removing the feature flag or the merge request where the default value of
the feature flag is set to enabled. If the feature contains any database migrations, it
*should* include a changelog entry for the database changes.
{{< alert type="note" >}}
To create a feature flag that is only used in EE, add the `--ee` flag: `bin/feature-flag --ee`
{{< /alert >}}
### Naming new flags
When choosing a name for a new feature flag, consider the following guidelines:
- Describe the feature the feature flag is holding
- A long, **descriptive** name is better than a short but confusing one.
- Avoid names that indicate the state or phase of the feature, like `_mvc`, `_alpha`, or `_beta`.
- Write the name in snake case (`my_cool_feature_flag`).
- Avoid using `disable` in the name to avoid having to think (or [document](../documentation/feature_flags.md))
with double negatives. Consider starting the name with `hide_`, `remove_`, or `disallow_`.
In software engineering this problem is known as
["negative names for boolean variables"](https://www.serendipidata.com/posts/naming-guidelines-for-boolean-variables/).
  But we can't forbid negative words altogether: we need them to introduce flags as
  [disabled by default](#feature-flags-in-gitlab-development), to use a flag to remove a feature by moving it behind the flag, or to [selectively disable a flag by actor](controls.md#selectively-disable-by-actor).
### Risk of a broken master (main) branch
{{< alert type="warning" >}}
Feature flags **must** be used in the MR that introduces them. Not doing so causes a
[broken master](https://handbook.gitlab.com/handbook/engineering/workflow/#broken-master) scenario due
to the `rspec:feature-flags` job that only runs on the `master` branch.
{{< /alert >}}
### Optionally add a `.patch` file for automated removal of feature flags
The [`gitlab-housekeeper`](https://gitlab.com/gitlab-org/gitlab/-/tree/master/gems/gitlab-housekeeper) is able to automatically remove your feature flag code for you using the [`DeleteOldFeatureFlags` keep](https://gitlab.com/gitlab-org/gitlab/-/blob/master/keeps/delete_old_feature_flags.rb). The tool will run periodically and automatically clean up old feature flags from the code.
For this tool to automatically remove the usages of the feature flag in your code, you can add a `.patch` file alongside your feature flag YAML file. The file should have exactly the same name, except it uses the `.patch` extension instead of the `.yml` extension.
For example you can create a patch file for `config/feature_flags/beta/my_feature_flag.yml` using the following steps:
1. Ensure you have a clean Git working directory.
1. Delete `config/feature_flags/beta/my_feature_flag.yml`.
1. Edit the code locally to remove any usage of `my_feature_flag`, as though the feature flag is already enabled and the feature is moving forward.
1. Run `git diff > config/feature_flags/beta/my_feature_flag.patch`. If your feature flag is not a `beta` flag, ensure your patch file is in the same directory as the YAML file that defines your feature flag.
1. Undo the deletion of `config/feature_flags/beta/my_feature_flag.yml`.
1. Undo the changes to the files you edited to remove the feature flag usage.
1. Commit the patch file to the branch where you are adding the feature flag.
Then in future the `gitlab-housekeeper` will automatically clean up your
feature flag for you by applying this patch.
## List all the feature flags
To [use ChatOps](../../ci/chatops/_index.md) to output all the feature flags in an environment to Slack, you can use the `run feature list`
command. For example:
```shell
/chatops run feature list --dev
/chatops run feature list --staging
```
## Toggle a feature flag
See [rolling out changes](controls.md#rolling-out-changes) for more information
about toggling feature flags.
## Delete a feature flag
See [cleaning up feature flags](controls.md#cleaning-up) for more information about
deleting feature flags.
## Migrate an `ops` feature flag to an application setting
{{< alert type="warning" >}}
The changes to backfill application settings and use the settings in the code must be merged in the same milestone.
{{< /alert >}}
To migrate an `ops` feature flag to an application setting:
1. In application settings, create or identify an existing `JSONB` column to store the setting.
1. The application setting default should match `default_enabled:` in the feature flag YAML definition.
1. Write a migration to backfill the column. This allows instances which have
opted out of the default behavior to remain in the same state. Avoid using `Feature.enabled?` or `Feature.disabled?`
in the migration. Use the `Gitlab::Database::MigrationHelpers::FeatureFlagMigratorHelpers` migration helpers. These
helpers will only migrate feature flags that are explicitly set to `true` or `false`. If a feature flag is set for a
percentage or specific actor, the default value will be used.
1. In the **Admin** area, create a setting to enable or disable the feature.
1. Replace the feature flag everywhere with the application setting.
1. Update all the relevant documentation pages. If frontend changes are merged in a later milestone, you should add
documentation about how to update the settings by using the [application settings API](../../api/settings.md) or
the Rails console.
An example migration for a `JSONB` column:
```ruby
# default_enabled copied from feature flag definition YAML before it is removed
DEFAULT_ENABLED = true
def up
up_migrate_to_jsonb_setting(feature_flag_name: :my_flag_name,
setting_name: :my_setting,
jsonb_column_name: :settings,
default_enabled: DEFAULT_ENABLED)
end
def down
down_migrate_to_jsonb_setting(setting_name: :my_setting, jsonb_column_name: :settings)
end
```
An example migration for a boolean column:
```ruby
# default_enabled copied from feature flag definition YAML before it is removed
DEFAULT_ENABLED = true
def up
up_migrate_to_setting(feature_flag_name: :my_flag_name,
setting_name: :my_setting,
default_enabled: DEFAULT_ENABLED)
end
def down
down_migrate_to_setting(setting_name: :my_setting, default_enabled: DEFAULT_ENABLED)
end
```
## Develop with a feature flag
There are two main ways of using feature flags in the GitLab codebase:
- [Backend code (Rails)](#backend)
- [Frontend code (VueJS)](#frontend)
### Backend
The feature flag interface is defined in [`lib/feature.rb`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/feature.rb).
This interface provides a set of methods to check if the feature flag is enabled or disabled:
```ruby
if Feature.enabled?(:my_feature_flag, project)
# execute code if feature flag is enabled
else
# execute code if feature flag is disabled
end
if Feature.disabled?(:my_feature_flag, project)
# execute code if feature flag is disabled
end
```
The default behavior for feature flags that are not configured is controlled
by `default_enabled:` in the YAML definition.
If a feature flag does not have a YAML definition, an error is raised in the
development and test environments, while `false` is returned in production.
For feature flags that don't have a definition file (only allowed for the `experiment`, `worker` and `undefined` types),
you need to pass their `type:` when calling `Feature.enabled?` and `Feature.disabled?`:
```ruby
if Feature.enabled?(:experiment_feature_flag, project, type: :experiment)
# execute code if feature flag is enabled
end
if Feature.disabled?(:worker_feature_flag, project, type: :worker)
# execute code if feature flag is disabled
end
```
{{< alert type="warning" >}}
Don't use feature flags at application load time. For example, using the `Feature` class in
`config/initializers/*` or at the class level could cause an unexpected error. This error occurs
because a database that a feature flag adapter might depend on doesn't exist at load time
(especially for fresh installations). Checking for the database's existence at the caller isn't
recommended, as some adapters don't require a database at all (for example, the HTTP adapter). The
feature flag setup check must be abstracted in the `Feature` namespace. This approach also requires
application reload when the feature flag changes. You must therefore ask SREs to reload the
Web/API/Sidekiq fleet on production, which takes time to fully rollout/rollback the changes. For
these reasons, use environment variables (for example, `ENV['YOUR_FEATURE_NAME']`) or `gitlab.yml`
instead.
{{< /alert >}}
Here's an example of a pattern that you should avoid:
```ruby
class MyClass
if Feature.enabled?(:...)
new_process
else
legacy_process
end
end
```
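By contrast, a minimal sketch of the suggested alternative, reading an environment variable at load time (`YOUR_FEATURE_NAME`, `new_process`, and `legacy_process` are placeholders, not existing names):

```ruby
class MyClass
  # Evaluated once at load time; no database or Feature adapter is required.
  USE_NEW_PROCESS = ENV['YOUR_FEATURE_NAME'] == 'true'

  def process
    USE_NEW_PROCESS ? new_process : legacy_process
  end
end
```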
#### Recursion detection
When there are many feature flags, it is not always obvious where they are
called. Avoid cycles where the evaluation of one feature flag requires the
evaluation of other feature flags. If this causes a cycle, it will be broken
and the default value will be returned.
To enable this recursion detection to work correctly, always access feature values through
`Feature::enabled?`, and avoid the low-level use of `Feature::get`. When a cycle is
detected, a `Feature::RecursionError` exception is reported to the error tracker.
### Frontend
When using a feature flag for UI elements, make sure to also use a feature
flag for the underlying backend code, if there is any. This ensures there is
absolutely no way to use the feature until it is enabled.
Use the `push_frontend_feature_flag` method which is available to all controllers that inherit from `ApplicationController`. You can use this method to expose the state of a feature flag, for example:
```ruby
before_action do
# Prefer to scope it per project or user, for example
push_frontend_feature_flag(:vim_bindings, project)
end
def index
# ...
end
def edit
# ...
end
```
You can then check the state of the feature flag in JavaScript as follows:
```javascript
if (gon.features.vimBindings) {
// ...
}
```
The name of the feature flag in JavaScript is always camelCase,
so checking for `gon.features.vim_bindings` would not work.
See the [Vue guide](../fe_guide/vue.md#accessing-feature-flags) for details about
how to access feature flags in a Vue component.
For feature flags that don't have a definition file (only allowed for the `experiment`, `worker` and `undefined` types),
you need to pass their `type:` when calling `push_frontend_feature_flag`:
```ruby
before_action do
push_frontend_feature_flag(:vim_bindings, project, type: :experiment)
end
```
### Feature actors
**It is strongly advised to use actors with feature flags.** Actors provide a simple
way to enable a feature flag only for a given project, group or user. This makes debugging
easier, as you can, for example, filter logs and errors based on actors. This also makes it possible
to enable the feature on the `gitlab-org` or `gitlab-com` groups first, while the rest of
the users aren't impacted.
Actors also provide an easy way to do a percentage rollout of a feature in a sticky way.
If a 1% rollout enabled a feature for a specific actor, that actor will continue to have the feature enabled at
10%, 50%, and 100%.
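For illustration, a sketch of how a sticky rollout behaves when driven from the Rails console (in GitLab-provided environments you would use ChatOps instead; the flag name is a placeholder):

```ruby
# Enable the flag for 1% of actors. An actor that falls into this 1% stays enabled
# when the percentage is raised later, because the gate is evaluated per actor.
Feature.enable_percentage_of_actors(:my_feature_flag, 1)
Feature.enabled?(:my_feature_flag, project) # => true for roughly 1% of actors

# Raising the percentage only adds actors; previously enabled actors keep the feature.
Feature.enable_percentage_of_actors(:my_feature_flag, 50)
```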
GitLab supports the following feature flag actors:
- `User` model
- `Project` model
- `Group` model
- Current request
The actor is a second parameter of the `Feature.enabled?` call. For example:
```ruby
Feature.enabled?(:feature_flag, project)
```
Models which `include FeatureGate` have an `.actor_from_id` class method.
If you have the model's ID and do not need the model for anything other than checking the feature
flag state, you can use `.actor_from_id` to check the feature flag state without making a
database query to retrieve the model.
```ruby
# Bad -- Unnecessary query is executed
Feature.enabled?(:feature_flag, Project.find(project_id))
# Good -- No query for projects
Feature.enabled?(:feature_flag, Project.actor_from_id(project_id))
# Good -- Project model is used after feature flag check
project = Project.find(project_id)
return unless Feature.enabled?(:feature_flag, project)
project.update!(column: value)
```
See [Use ChatOps to enable and disable feature flags](controls.md#process) for details on how to use ChatOps
to selectively enable or disable feature flags in GitLab-provided environments, like staging and production.
Flag state is not inherited from a group by its subgroups or projects.
If you need a flag state to be consistent for an entire group hierarchy,
consider using the top-level group as the actor.
This group can be found by calling `#root_ancestor` on any group or project.
```ruby
Feature.enabled?(:feature_flag, group.root_ancestor)
```
#### Mixing actor types
Generally you should use only one type of actor in all invocations of `Feature.enabled?`
for a particular feature flag, and not mix different actor types.
Mixing actor types can lead to a feature being enabled or disabled inconsistently in ways
that can cause bugs. For example, if at the controller level a flag is checked using a
group actor and at the service level it is checked using a user actor, the feature may be
both enabled and disabled at different points in the same request.
In some situations it is safe to mix actor types if you know that it won't lead to
inconsistent results. For example, a webhook can be associated with either a group or a
project, and so a feature flag for a webhook might leverage this to roll out a feature for
group and project webhooks using the same feature flag.
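A minimal sketch of that webhook case (`hook.parent` is a hypothetical helper standing in for "the group or project that owns the webhook"):

```ruby
# The same flag is checked with whichever actor type owns the webhook.
actor = hook.parent # returns either a Group or a Project (hypothetical helper)
Feature.enabled?(:my_webhook_feature_flag, actor)
```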
If you need to use different actor types and cannot safely mix them in your situation you
should use separate flags for each actor type instead. For example:
```ruby
Feature.enabled?(:feature_flag_group, group)
Feature.enabled?(:feature_flag_user, user)
```
#### Instance actor
{{< alert type="warning" >}}
Instance-wide feature flags should only be used when a feature is tied in to an entire instance. Always prioritize other actors first.
{{< /alert >}}
In some cases, you may want a feature flag to be enabled for an entire instance rather than for a specific actor. A good example is the **Admin** area settings, where it would be impossible to enable the feature flag based on a group or a project because they are both `undefined`.
The user actor would cause confusion, because a feature flag might be enabled for a user who is not an administrator but disabled for a user who is.
Instead, you can pass the `:instance` symbol as the second argument to `Feature.enabled?`, which is interpreted as the GitLab instance.
```ruby
Feature.enabled?(:feature_flag, :instance)
```
#### Current request actor
{{< history >}}
- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/132078) in GitLab 16.5
{{< /history >}}
Using a percentage-of-time rollout is not recommended, because each call may return
inconsistent results.
Rather it is advised to use the current request as an actor.
```ruby
# Bad
Feature.enable_percentage_of_time(:feature_flag, 40)
Feature.enabled?(:feature_flag)
# Good
Feature.enable_percentage_of_actors(:feature_flag, 40)
Feature.enabled?(:feature_flag, Feature.current_request)
```
When using the current request as the actor, the feature flag should return the
same value within the context of a request.
As the current request actor is implemented using [`SafeRequestStore`](../caching.md#low-level), we should
have consistent feature flag values within:
- a Rack request
- a Sidekiq worker execution
- an ActionCable worker execution
To migrate an existing feature from percentage of time to the current request
actor, it is recommended that you create a new feature flag.
This is because it is difficult to control the timing between existing
`percentage_of_time` values, the deployment of the code change, and switching to
use `percentage_of_actors`.
#### Use actors for verifying in production
{{< alert type="warning" >}}
Using production as a testing environment is not recommended. Use our testing
environments for testing features that are not production-ready.
{{< /alert >}}
While the staging environment provides a way to test features in an environment
that resembles production, it doesn't allow you to compare before-and-after
performance metrics specific to the production environment. It can be useful to have a
project in production with your development feature flag enabled, to allow tools
like Sitespeed reports to reveal the metrics of the new code under a feature flag.
This approach is even more useful if you're already tracking the old codebase in
Sitespeed, enabling you to compare performance accurately before and after the
feature flag's rollout.
### Enable additional objects as actors
To use feature gates based on actors, the model needs to respond to
`flipper_id`. For example, to enable for the Foo model:
```ruby
class Foo < ActiveRecord::Base
include FeatureGate
end
```
Only models that `include FeatureGate` or expose a `flipper_id` method can be
used as an actor for `Feature.enabled?`.
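For illustration, a minimal sketch of exposing `flipper_id` directly (the `Bar` class and its ID format are hypothetical; including `FeatureGate` is the usual approach):
```ruby
class Bar < ActiveRecord::Base
  # Hypothetical example: any object responding to `flipper_id` can be passed
  # as an actor. `include FeatureGate` provides an equivalent implementation.
  def flipper_id
    "Bar:#{id}"
  end
end

Feature.enabled?(:feature_flag, Bar.first)
```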
### Feature flags for licensed features
You can't use a feature flag with the same name as a licensed feature name, because
it would cause a naming collision. This was [widely discussed and removed](https://gitlab.com/gitlab-org/gitlab/-/issues/259611)
because it is confusing.
To check for licensed features, add a dedicated feature flag under a different name
and check it explicitly, for example:
```ruby
Feature.enabled?(:licensed_feature_feature_flag, project) &&
project.feature_available?(:licensed_feature)
```
### Feature groups
Feature groups must be defined statically in `lib/feature.rb` (in the
`.register_feature_groups` method), but their implementation can be
dynamic (querying the DB, for example).
Once defined in `lib/feature.rb`, you can activate a
feature for a given feature group via the [`feature_group` parameter of the features API](../../api/features.md#set-or-create-a-feature).
The available feature groups are:
| Group name | Scoped to | Description |
| --------------------- | --------- | ----------- |
| `gitlab_team_members` | Users | Enables the feature for users who are members of [`gitlab-com`](https://gitlab.com/gitlab-com) |
Feature groups can be enabled via the group name:
```ruby
Feature.enable(:feature_flag_name, :gitlab_team_members)
```
### Controlling feature flags locally
#### On the Rails console
In the Rails console (`rails c`), enter the following command to enable a feature flag:
```ruby
Feature.enable(:feature_flag_name)
```
Similarly, the following command disables a feature flag:
```ruby
Feature.disable(:feature_flag_name)
```
You can also enable a feature flag for a given gate:
```ruby
Feature.enable(:feature_flag_name, Project.find_by_full_path("root/my-project"))
```
When manually enabling or disabling a feature flag from the Rails console, its default value gets overwritten.
This can cause confusion when changing the flag's `default_enabled` attribute.
To reset the feature flag to the default status:
```ruby
Feature.remove(:feature_flag_name)
```
#### On your browser
Access `http://gdk.test:3000/rails/features` to see and manage feature flags locally.
### Logging
Usage and state of the feature flag are logged if either:
- `log_state_changes` is set to `true` in the feature flag definition.
- `milestone` refers to a milestone that is greater than or equal to the current GitLab version.
When the state of a feature flag is logged, it can be identified by using the `"json.feature_flag_states": "feature_flag_name:1"` or `"json.feature_flag_states": "feature_flag_name:0"` condition in Kibana.
You can see an example in [this](https://log.gprd.gitlab.net/app/discover#/?_g=(filters:!(),refreshInterval:(pause:!t,value:60000),time:(from:now-7d%2Fd,to:now))&_a=(columns:!(json.feature_flag_states),filters:!(('$state':(store:appState),meta:(alias:!n,disabled:!f,field:json.feature_flag_states,index:'7092c4e2-4eb5-46f2-8305-a7da2edad090',key:json.feature_flag_states,negate:!f,params:(query:'optimize_where_full_path_in:1'),type:phrase),query:(match_phrase:(json.feature_flag_states:'optimize_where_full_path_in:1')))),hideChart:!f,index:'7092c4e2-4eb5-46f2-8305-a7da2edad090',interval:auto,query:(language:kuery,query:''),sort:!(!(json.time,desc)))) link.
{{< alert type="note" >}}
Only 20% of the requests log the state of the feature flags. This is controlled with the [`feature_flag_state_logs`](https://gitlab.com/gitlab-org/gitlab/-/blob/6deb6ecbc69f05a80d920a295dfc1a6a303fc7a0/config/feature_flags/ops/feature_flag_state_logs.yml) feature flag.
{{< /alert >}}
## Changelog
We want to avoid introducing a changelog when features are not accessible by an end-user either directly (example: ability to use the feature) or indirectly (examples: ability to take advantage of background jobs, performance improvements, or database migration updates).
- Database migrations are always accessible by an end-user indirectly, as self-managed customers need to be aware of database changes before upgrading. For this reason, they **should** have a changelog entry.
- Any change behind a feature flag **disabled** by default **should not** have a changelog entry.
- Any change behind a feature flag that is **enabled** by default **should** have a changelog entry.
- Changing the feature flag itself (flag removal, default-on setting) **should** have [a changelog entry](../changelog.md).
Use the flowchart to determine the changelog entry type.
```mermaid
flowchart LR
FDOFF(Flag is currently<br>'default: off')
FDON(Flag is currently<br>'default: on')
CDO{Change to<br>'default: on'}
ACF(added / changed / fixed / '...')
RF{Remove flag}
RF2{Remove flag}
RC(removed / changed)
OTHER(other)
FDOFF -->CDO-->ACF
FDOFF -->RF
RF-->|Keep new code?| ACF
RF-->|Keep old code?| OTHER
FDON -->RF2
RF2-->|Keep old code?| RC
RF2-->|Keep new code?| OTHER
```
- The changelog for a feature flag should describe the feature and not the
  flag, unless a default-on feature flag is removed while keeping the new code (`other` in the flowchart above).
- A feature flag can also be used for rolling out a bug fix or maintenance work. In this scenario, the changelog
  must be related to it, for example, `fixed` or `other`.
## Feature flags in tests
Introducing a feature flag into the codebase creates an additional code path that should be tested.
It is strongly advised to include automated tests for all code affected by a feature flag, both when **enabled** and **disabled**
to ensure the feature works properly. If automated tests are not included for both states, the functionality associated
with the untested code path should be manually tested before deployment to production.
When using the testing environment, all feature flags are enabled by default.
Flags can be disabled by default in the [`spec/spec_helper.rb` file](https://gitlab.com/gitlab-org/gitlab/-/blob/b61fba42eea2cf5bb1ca64e80c067a07ed5d1921/spec/spec_helper.rb#L274).
Add a comment inline to explain why the flag needs to be disabled. You can also attach the issue URL for reference if possible.
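As a rough, hypothetical illustration of such an entry (the exact structure of `spec/spec_helper.rb` may differ), the idea is a stub plus an inline comment explaining the reason:
```ruby
# Hypothetical entry: disabled by default in tests because the new code path
# is not ready yet. See the rollout issue: <issue URL>.
stub_feature_flags(my_unreleased_feature: false)
```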
{{< alert type="warning" >}}
This does not apply to end-to-end (QA) tests, which [do not enable feature flags by default](#end-to-end-qa-tests). There is a different [process for using feature flags in end-to-end tests](../testing_guide/end_to_end/best_practices/feature_flags.md).
{{< /alert >}}
To disable a feature flag in a test, use the `stub_feature_flags`
helper. For example, to globally disable the `ci_live_trace` feature
flag in a test:
```ruby
stub_feature_flags(ci_live_trace: false)
Feature.enabled?(:ci_live_trace) # => false
```
A common pattern of testing both paths looks like:
```ruby
it 'ci_live_trace works' do
# tests assuming ci_live_trace is enabled in tests by default
Feature.enabled?(:ci_live_trace) # => true
end
context 'when ci_live_trace is disabled' do
before do
stub_feature_flags(ci_live_trace: false)
end
it 'ci_live_trace does not work' do
Feature.enabled?(:ci_live_trace) # => false
end
end
```
If you wish to set up a test where a feature flag is enabled only
for some actors and not others, you can specify this in options
passed to the helper. For example, to enable the `ci_live_trace`
feature flag for a specific project:
```ruby
project1, project2 = build_list(:project, 2)
# Feature will only be enabled for project1
stub_feature_flags(ci_live_trace: project1)
Feature.enabled?(:ci_live_trace) # => false
Feature.enabled?(:ci_live_trace, project1) # => true
Feature.enabled?(:ci_live_trace, project2) # => false
```
The behavior of FlipperGate is as follows:
1. You can enable an override for a specified actor.
1. You can disable (remove) an override for a specified actor,
falling back to the default state.
1. There's no way to model that you explicitly disabled a specified actor.
```ruby
Feature.enable(:my_feature)
Feature.disable(:my_feature, project1)
Feature.enabled?(:my_feature) # => true
Feature.enabled?(:my_feature, project1) # => true
Feature.disable(:my_feature2)
Feature.enable(:my_feature2, project1)
Feature.enabled?(:my_feature2) # => false
Feature.enabled?(:my_feature2, project1) # => true
```
### `have_pushed_frontend_feature_flags`
Use `have_pushed_frontend_feature_flags` to test if [`push_frontend_feature_flag`](#frontend)
has added the feature flag to the HTML.
For example,
```ruby
stub_feature_flags(value_stream_analytics_path_navigation: false)
visit group_analytics_cycle_analytics_path(group)
expect(page).to have_pushed_frontend_feature_flags(valueStreamAnalyticsPathNavigation: false)
```
### `stub_feature_flags` vs `Feature.enable*`
It is preferred to use `stub_feature_flags` to enable feature flags
in the testing environment. This method provides a simple and well-described
interface for simple use cases.
However, in some cases more complex behavior needs to be tested,
like percentage rollouts of feature flags. This can be done using
`.enable_percentage_of_time` or `.enable_percentage_of_actors`:
```ruby
# Good: feature needs to be explicitly disabled, as it is enabled by default if not defined
stub_feature_flags(my_feature: false)
stub_feature_flags(my_feature: true)
stub_feature_flags(my_feature: project)
stub_feature_flags(my_feature: [project, project2])
# Bad
Feature.enable(:my_feature_2)
# Good: enable my_feature for 50% of time
Feature.enable_percentage_of_time(:my_feature_3, 50)
# Good: enable my_feature for 50% of actors/gates/things
Feature.enable_percentage_of_actors(:my_feature_4, 50)
```
Each feature flag that has a defined state is persisted
during test execution time:
```ruby
Feature.persisted_names.include?('my_feature') => true
Feature.persisted_names.include?('my_feature_2') => true
Feature.persisted_names.include?('my_feature_3') => true
Feature.persisted_names.include?('my_feature_4') => true
```
### Stubbing actor
When you want to enable a feature flag for a specific actor only,
you can stub its representation. A gate that is passed
as an argument to `Feature.enabled?` and `Feature.disabled?` must be an object
that includes `FeatureGate`.
In specs you can use the `stub_feature_flag_gate` method that allows you to
quickly create a custom actor:
```ruby
gate = stub_feature_flag_gate('CustomActor')
stub_feature_flags(ci_live_trace: gate)
Feature.enabled?(:ci_live_trace) # => false
Feature.enabled?(:ci_live_trace, gate) # => true
```
### Controlling feature flags engine in tests
Our Flipper engine in the test environment works in a memory mode `Flipper::Adapters::Memory`.
`production` and `development` modes use `Flipper::Adapters::ActiveRecord`.
You can control whether the `Flipper::Adapters::Memory` or `ActiveRecord` mode is being used.
#### `stub_feature_flags: true` (default and preferred)
In this mode, Flipper is configured to use `Flipper::Adapters::Memory` and to mark all feature
flags as on by default and persisted on first use.
Make sure behavior under a feature flag doesn't go untested in some nonspecific contexts.
#### `stub_feature_flags: false`
This disables the memory-stubbed Flipper and uses `Flipper::Adapters::ActiveRecord`,
the mode used by `production` and `development`.
You should use this mode only when you really want to test aspects of how Flipper
interacts with `ActiveRecord`.
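For example, a spec can opt into this mode through RSpec metadata. This is a sketch with a hypothetical `MyFeature` subject; check existing specs for the exact conventions in use:
```ruby
# Sketch: opting an example group out of the memory-stubbed adapter so that
# Flipper uses the ActiveRecord adapter, as in production and development.
RSpec.describe MyFeature, stub_feature_flags: false do
  it 'persists the flag state' do
    Feature.enable(:my_feature)

    expect(Feature.enabled?(:my_feature)).to be(true)
  end
end
```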
### End-to-end (QA) tests
Toggling feature flags works differently in end-to-end (QA) tests. The end-to-end test framework does not have direct access to
Rails or the database, so it can't use Flipper. Instead, it uses [the public API](../../api/features.md#set-or-create-a-feature). Each end-to-end test can [enable or disable a feature flag during the test](../testing_guide/end_to_end/best_practices/feature_flags.md). Alternatively, you can enable or disable a feature flag before one or more tests when you [run them from your GitLab repository's `qa` directory](https://gitlab.com/gitlab-org/gitlab/-/tree/master/qa#running-tests-with-a-feature-flag-enabled-or-disabled), or if you [run the tests via GitLab QA](https://gitlab.com/gitlab-org/gitlab-qa/-/blob/master/docs/what_tests_can_be_run.md#running-tests-with-a-feature-flag-enabled).
[As noted above, feature flags are not enabled by default in end-to-end tests.](#feature-flags-in-tests)
This means that end-to-end tests will run with feature flags in the default state implemented in the source
code, or with the feature flag in its current state on the GitLab instance under test, unless the
test is written to enable/disable a feature flag explicitly.
When a feature flag is changed on Staging or on GitLab.com, a Slack message will be posted to the `#e2e-run-staging` or `#e2e-run-production` channels to inform
the pipeline triage DRI so that they can more easily determine if any failures are related to a feature flag change. However, if you are working on a change you can
help to avoid unexpected failures by [confirming that the end-to-end tests pass with a feature flag enabled.](../testing_guide/end_to_end/best_practices/feature_flags.md#confirming-that-end-to-end-tests-pass-with-a-feature-flag-enabled)
## Controlling Sidekiq worker behavior with feature flags
Feature flags with [`worker` type](#worker-type) can be used to control the behavior of a Sidekiq worker.
### Deferring Sidekiq jobs
When disabled, feature flags with the format of `run_sidekiq_jobs_{WorkerName}` delay the execution of the worker
by scheduling the job at a later time. This feature flag is enabled by default for all workers.
Deferring jobs can be useful during an incident where contentious behavior from
worker instances is saturating infrastructure resources (such as the database and the database connection pool).
The implementation can be found at [SkipJobs Sidekiq server middleware](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/sidekiq_middleware/skip_jobs.rb).
{{< alert type="note" >}}
Jobs are deferred indefinitely as long as the feature flag is disabled. It is important to remove the
feature flag after the worker is deemed safe to continue processing.
{{< /alert >}}
When set to false, 100% of the jobs are deferred. When you want processing to resume, you can
use a **percentage of time** rollout. For example:
```shell
# not running any jobs, deferring all 100% of the jobs
/chatops run feature set run_sidekiq_jobs_SlowRunningWorker false
# only running 10% of the jobs, deferring 90% of the jobs
/chatops run feature set run_sidekiq_jobs_SlowRunningWorker 10
# running 50% of the jobs, deferring 50% of the jobs
/chatops run feature set run_sidekiq_jobs_SlowRunningWorker 50
# back to running all jobs normally
/chatops run feature delete run_sidekiq_jobs_SlowRunningWorker
```
### Dropping Sidekiq jobs
Instead of [deferring jobs](#deferring-sidekiq-jobs), jobs can be entirely dropped by enabling the feature flag
`drop_sidekiq_jobs_{WorkerName}`. Use this feature flag when you are certain the jobs do not need to be processed in the future, and therefore are safe to be dropped.
```shell
# drop all the jobs
/chatops run feature set drop_sidekiq_jobs_SlowRunningWorker true
# process jobs normally
/chatops run feature delete drop_sidekiq_jobs_SlowRunningWorker
```
{{< alert type="note" >}}
The dropping feature flag (`drop_sidekiq_jobs_{WorkerName}`) takes precedence over the deferring feature flag (`run_sidekiq_jobs_{WorkerName}`). When `drop_sidekiq_jobs` is enabled and `run_sidekiq_jobs` is disabled, jobs are entirely dropped.
{{< /alert >}}
# Push rules development guidelines
This document was created to help contributors understand the code design of
[push rules](../../user/project/repository/push_rules.md). You should read this
document before making changes to the code for this feature.
This document is intentionally limited to an overview of how the code is
designed, as code can change often. To understand how a specific part of the
feature works, view the code and the specs. The details here explain how the
major components of the push rules feature work.
{{< alert type="note" >}}
This document should be updated when parts of the codebase referenced in this
document are updated, removed, or new parts are added.
{{< /alert >}}
## Business logic
The business logic is contained in two main places. The `PushRule` model stores
the settings for a rule and then we have checks that use those settings to
change the push behavior.
- `PushRule`: the main model used to store the configuration of each push rule.
- Defined in `ee/app/models/push_rule.rb`.
- `EE::Gitlab::Checks::DiffCheck`: Diff check prevents filenames matching the
push rule's `file_name_regex` and also files with names matching known secret
files, for example `id_rsa`.
- Defined in `ee/lib/ee/gitlab/checks/diff_check.rb`.
- `EE::Gitlab::Checks::PushRuleCheck`: Executes various push rule checks.
- Defined in `ee/lib/ee/gitlab/checks/push_rule_check.rb`.
- `EE::Gitlab::Checks::PushRules::BranchCheck`: Executes push rule checks
related to branch rules.
- Defined in `ee/lib/ee/gitlab/checks/push_rules/branch_check.rb`.
- `EE::Gitlab::Checks::PushRules::CommitCheck`: Executes push rule checks
related to commit rules.
- Defined in `ee/lib/ee/gitlab/checks/push_rules/commit_check.rb`.
- `EE::Gitlab::Checks::FileSizeLimitCheck`: Executes push rule checks
related to file size rules.
- Defined in `ee/lib/ee/gitlab/checks/file_size_limit_check.rb`.
- `EE::Gitlab::Checks::PushRules::TagCheck`: Executes push rule checks
related to tag rules.
- Defined in `ee/lib/ee/gitlab/checks/push_rules/tag_check.rb`.
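Although the real classes are more involved, the checks above share a common shape: they read settings from the `PushRule` model and raise an error to reject the push. The following is a simplified, hypothetical sketch for orientation only, not the actual implementation:
```ruby
# Hypothetical, simplified shape of a push rule check. The real checks live
# under ee/lib/ee/gitlab/checks/ and hook into the existing check pipeline.
class ExamplePushRuleCheck
  def initialize(project, changes)
    @push_rule = project.push_rule # settings configured for the project
    @changes = changes
  end

  def validate!
    return unless @push_rule

    @changes.each do |change|
      next unless violates_rule?(change)

      # Raising aborts the push; the message is shown to the user.
      raise ::Gitlab::GitAccess::ForbiddenError, 'Push rejected by push rule'
    end
  end

  private

  def violates_rule?(change)
    # Placeholder: compare the change against the rule's settings, for example
    # a branch name or commit message regex stored on the PushRule record.
    false
  end
end
```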
## Entrypoints
The following controllers and APIs are all entrypoints into the push rules logic:
- `Admin::PushRulesController`: This controller is used to manage the global push rule.
- `Group::PushRulesController`: This controller is used to manage the group-level push rule.
- `Project::PushRulesController`: This controller is used to manage the project-level push rule.
- `Api::Internal::Base`: This `/internal/allowed` endpoint is called when pushing to GitLab over SSH to
ensure the user is allowed to push. The `/internal/allowed` endpoint performs a
`Gitlab::Checks::DiffCheck`. In EE, this includes push rules checks.
- Defined in `lib/api/internal/base.rb`.
- `Repositories::GitHttpController`: When changes are pushed to GitLab over HTTP, the controller performs an access
check to ensure the user is allowed to push. The checks perform a
`Gitlab::Checks::DiffCheck`. In EE, this includes push rules checks.
- Defined in `app/controllers/repositories/git_http_controller.rb`.
## Flow
These flowcharts should help explain the flow from the controllers down to the
models for different features.
### Git push over SSH
```mermaid
%%{init: { "fontFamily": "GitLab Sans" }}%%
graph TD
Repositories::GitHttpController --> Gitlab::GitAccess
Api::Internal::Base --> Gitlab::GitAccess
Gitlab::GitAccess --> Gitlab::Checks::ChangesAccess
Gitlab::Checks::ChangesAccess --> Gitlab::Checks::SingleChangeAccess
Gitlab::Checks::ChangesAccess --> EE::Gitlab::Checks::PushRuleCheck
Gitlab::Checks::SingleChangeAccess --> Gitlab::Checks::DiffCheck
EE::Gitlab::Checks::PushRuleCheck -->|Only if pushing to a tag| EE::Gitlab::Checks::PushRules::TagCheck
EE::Gitlab::Checks::PushRuleCheck -->|Only if pushing to a branch| EE::Gitlab::Checks::PushRules::BranchCheck
Gitlab::Checks::ChangesAccess --> EE::Gitlab::Checks::FileSizeLimitCheck
```
{{< alert type="note" >}}
The `PushRuleCheck` only triggers checks in parallel if the
`parallel_push_checks` feature flag is enabled. Otherwise, the tag or branch check
runs first, then the file size check.
{{< /alert >}}
# Preventing Transient Bugs
This page covers architectural patterns and tips for developers to follow to prevent [transient bugs.](https://handbook.gitlab.com/handbook/engineering/infrastructure/engineering-productivity/issue-triage/#transient-bugs)
## Common root causes
We've noticed a few root causes that come up frequently when addressing transient bugs.
- Needs better state management in the backend or frontend.
- Frontend code needs improvements.
- Lack of test coverage.
- Race conditions.
## Frontend
### Don't rely on response order
When working with multiple requests, it's easy to assume the order of the responses matches the order in which they are triggered.
That's not always the case and can cause bugs that only happen if the order is switched.
**Example**:
- `diffs_metadata.json` (lighter)
- `diffs_batch.json` (heavier)
If your feature requires data from both, ensure that the two have finished loading before working on it.
### Simulate slower connections when testing manually
Add a network condition template to your browser's developer tools to enable you to toggle between a slow and a fast connection.
**Example**:
- Turtle:
- Down: 50kb/s
- Up: 20kb/s
- Latency: 10000ms
### Collapsed elements
When setting event listeners, if it's not possible to use event delegation, ensure all relevant event listeners are set for expanded content.
This includes when that expanded content is:
- **Invisible** (`display: none;`). Some JavaScript requires the element to be visible to work properly, such as when taking measurements.
- **Dynamic content** (AJAX/DOM manipulation).
### Using assertions to detect transient bugs caused by unmet conditions
Transient bugs happen in the context of code that executes under the assumption
that the application's state meets one or more conditions. We may write a feature
that assumes a server-side API response always include a group of attributes or that
an operation only executes when the application has successfully transitioned to a new
state.
Transient bugs are difficult to debug because there isn't any mechanism that alerts
the user or the developer about unsatisfied conditions. These conditions are usually
not expressed explicitly in the code. A useful debugging technique for such situations
is placing assertions to make any assumption explicit. They can help detect
which unmet condition causes the bug.
#### Asserting pre-conditions on state mutations
A common scenario that leads to transient bugs is when there is a polling service
that should mutate state only if a user operation is completed. We can use
assertions to make this pre-condition explicit:
```javascript
// This action is called by a polling service. It assumes that all pre-conditions
// are satisfied by the time the action is dispatched.
export const updateMergeableStatus = ({ commit }, payload) => {
commit(types.SET_MERGEABLE_STATUS, payload);
};
// We can make any pre-condition explicit by adding an assertion
export const updateMergeableStatus = ({ state, commit }, payload) => {
console.assert(
state.isResolvingDiscussion === true,
'Resolve discussion request must be completed before updating mergeable status'
);
commit(types.SET_MERGEABLE_STATUS, payload);
};
```
#### Asserting API contracts
Another useful way of using assertions is to detect if the response payload returned
by the server-side endpoint satisfies the API contract.
#### Related reading
[Debug it!](https://pragprog.com/titles/pbdp/debug-it/) explores techniques to diagnose
and fix non-deterministic bugs and write software that is easier to debug.
## Backend
### Sidekiq jobs with locks
When dealing with asynchronous work via Sidekiq, it is possible to have 2 jobs with the same arguments
getting worked on at the same time. If not handled correctly, this can result in an outdated or inaccurate state.
For instance, consider a worker that updates a state of an object. Before the worker updates the state
(for example, `#update_state`) of the object, it needs to check what the appropriate state should be
(for example, `#check_state`).
When there are 2 jobs being worked on at the same time, it is possible that the order of operations will go like:
1. (Worker A) Calls `#check_state`
1. (Worker B) Calls `#check_state`
1. (Worker B) Calls `#update_state`
1. (Worker A) Calls `#update_state`
In this example, `Worker B` is meant to set the updated status. But `Worker A` calls `#update_state` a little too late.
This can be avoided by utilizing either database locks or `Gitlab::ExclusiveLease`. This way, jobs will be
worked on one at a time. This also allows them to be marked as [idempotent](../sidekiq/idempotent_jobs.md).
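As a rough sketch of the lease approach (the worker, model, and lease key below are illustrative, and real workers may use higher-level helpers instead):
```ruby
# Illustrative sketch: hold an exclusive lease around the check-and-update
# sequence so two jobs with the same arguments cannot interleave.
class StateUpdateWorker
  include ApplicationWorker

  LEASE_TIMEOUT = 15.minutes

  def perform(object_id)
    lease_key = "state_update_worker:#{object_id}"
    lease = Gitlab::ExclusiveLease.new(lease_key, timeout: LEASE_TIMEOUT)
    uuid = lease.try_obtain

    return unless uuid # another job holds the lease; skip or reschedule

    begin
      object = SomeModel.find(object_id)       # hypothetical model
      object.update_state(object.check_state)  # the check-and-update sequence
    ensure
      Gitlab::ExclusiveLease.cancel(lease_key, uuid)
    end
  end
end
```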
### Retry mechanism handling
There are times when an object or record will be in a failed state that can be rechecked.
If an object is in a state that can be rechecked, ensure that appropriate messaging is shown to the user
so they know what to do. Also, make sure that the retry functionality will be able to reset the state
correctly when triggered.
### Error Logging
Error logging doesn't necessarily directly prevent transient bugs, but it can help to debug them.
When coding, sometimes we expect some exceptions to be raised and we rescue them.
Logging whenever we rescue an error helps in case the error is causing transient bugs that a user may see.
While investigating a bug report, the engineer may need to look into the logs from when it happened.
Seeing an error being logged can be a signal of something that went wrong and that could be handled differently.
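A minimal sketch of this pattern (the client and error class are hypothetical; `Gitlab::ErrorTracking.track_exception` is the kind of helper meant here):
```ruby
# Illustrative sketch: rescue an expected error, but log it with context so
# that transient bugs caused by this path can be found in the logs later.
def fetch_remote_data(project)
  RemoteClient.fetch(project)
rescue RemoteClient::TimeoutError => e
  Gitlab::ErrorTracking.track_exception(e, project_id: project.id)
  nil # callers treat a missing result as "no data yet"
end
```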
# Advanced search development tips
## Kibana
Use Kibana to interact with your Elasticsearch cluster.
See the [download instructions](https://www.elastic.co/guide/en/kibana/8.11/install.html).
## Viewing index status
Run
```shell
bundle exec rake gitlab:elastic:info
```
to see the status and information about your cluster.
## Creating all indices from scratch and populating with local data
### Option 1: Rake task
Run
```shell
bundle exec rake gitlab:elastic:index
```
which triggers `Search::Elastic::TriggerIndexingWorker` to run async.
Run
```ruby
Elastic::ProcessInitialBookkeepingService.new.execute
```
until it shows `[0, 0]` meaning there are no more refs in the queue.
### Option 2: manual
Manually execute the steps in `Search::Elastic::TriggerIndexingWorker`.
Sometimes Sidekiq doesn't pick up jobs correctly, so you might need to restart Sidekiq. Alternatively, you can run through the steps in a Rails console:
```ruby
task_executor_service = Search::RakeTaskExecutorService.new(logger: ::Gitlab::Elasticsearch::Logger.build)
task_executor_service.execute(:recreate_index)
task_executor_service.execute(:clear_index_status)
task_executor_service.execute(:clear_reindex_status)
task_executor_service.execute(:resume_indexing)
task_executor_service.execute(:index_namespaces)
task_executor_service.execute(:index_projects)
task_executor_service.execute(:index_snippets)
task_executor_service.execute(:index_users)
```
Run
```ruby
Elastic::ProcessInitialBookkeepingService.new.execute
```
until it shows `[0, 0]` meaning there are no more refs in the queue.
### Option 3: reindexing task
First delete the existing index, then create a `ReindexingTask` for the index you want to target. This creates a new index based on the current configuration, then copies the data over.
```ruby
Search::Elastic::ReindexingTask.create!(targets: %w[MergeRequest])
```
Run
```ruby
ElasticClusterReindexingCronWorker.new.perform
```
on repeat until
```ruby
Search::Elastic::ReindexingTask.last.state
```
is `success`.
## Index data
To add and index database records, call the `track!` method and execute the bookkeeping service:
```ruby
Elastic::ProcessBookkeepingService.track!(MergeRequest.first)
Elastic::ProcessBookkeepingService.track!(*MergeRequest.all)
Elastic::ProcessBookkeepingService.new.execute
```
## Dependent association index updates
You can use `elastic_index_dependant_association` to automatically update associated records in the index
when specific fields change. For example, to reindex all work items when a project's `visibility_level` changes:
```ruby
elastic_index_dependant_association :work_items, on_change: :visibility_level, depends_on_finished_migration: :add_mapping_migration
```
The `depends_on_finished_migration` parameter is optional and ensures the update only occurs after the specified advanced
search migration has completed (such as a migration that added the necessary field to the mapping).
## Testing
{{< alert type="warning" >}}
Elasticsearch tests do not run on every merge request. Add `~pipeline:run-search-tests` or `~group::global search` labels to the merge
request to run tests with the production versions of Elasticsearch and PostgreSQL.
{{< /alert >}}
### Advanced search migrations
#### Testing a migration that changes a mapping of an index
1. Make sure the index doesn't already have the changes applied. Remember the migration cron worker runs in the background so it's possible the migration was already applied.
- Optional. [In GitLab 18.0 and later](https://gitlab.com/gitlab-org/gitlab/-/issues/352424),
to disable the migration worker, run the following commands:
```ruby
settings = ApplicationSetting.last # Ensure this setting does not return `nil`
settings.elastic_migration_worker_enabled = false
settings.save!
```
- See if the migration is pending: `::Elastic::DataMigrationService.pending_migrations`.
- Check that the migration is not completed: `Elastic::DataMigrationService.pending_migrations.first.completed?`.
- Make sure the mappings aren't already applied
- either by checking in Kibana `GET gitlab-development-some-index/_mapping`
- or sending a curl request `curl "http://localhost:9200/gitlab-development-some-index/_mappings" | jq`
1. Tail the logs to see logged messages: `tail -f log/elasticsearch.log`.
1. Execute the migration in one of the following ways:
- Run the `Elastic::MigrationWorker.new.perform` migration worker.
[In GitLab 18.0 and later](https://gitlab.com/gitlab-org/gitlab/-/issues/352424), the `elastic_migration_worker_enabled` application setting must be enabled.
- Use pending migrations: `::Elastic::DataMigrationService.pending_migrations.first.migrate`.
- Use the version: `Elastic::DataMigrationService[20250220214819].migrate`, replacing the version with the migration version.
1. View the status of the migration.
- View the migration record in Kibana: `GET gitlab-development-migrations/_doc/20250220214819` (changing the version). This contains information like when it started and what the status is.
- See if the mappings are changed in Kibana: `GET gitlab-development-some-index/_mapping`.
# Contribute to built-in project templates
GitLab provides some
[built-in project templates](../../user/project/_index.md#create-a-project-from-a-built-in-template)
that you can use when creating a new project.
Built-in templates are sourced from the following groups:
- [`gitlab-org/project-templates`](https://gitlab.com/gitlab-org/project-templates)
- [`pages`](https://gitlab.com/pages)
Prerequisites:
- You must have a working [GitLab Development Kit (GDK) environment](https://gitlab.com/gitlab-org/gitlab-development-kit/-/blob/main/doc/index.md).
In particular, PostgreSQL, Praefect, and `sshd` must be working.
- `wget` should be installed.
## Add a new built-in project template
If you'd like to contribute a new built-in project template to be distributed
with GitLab, there are a few steps to follow.
### Create the project
1. Create a new public project with the project content you'd like to contribute in a namespace of your choosing. You can view a [working example](https://gitlab.com/gitlab-org/project-templates/dotnetcore).
- Projects should be free of any unnecessary assets or dependencies.
1. When the project is ready for review, [create a new issue](https://gitlab.com/gitlab-org/gitlab/issues/new) with a link to your project.
- In your issue, `@` mention the relevant Backend Engineering Manager and Product Manager for the [Create:Source Code group](https://handbook.gitlab.com/handbook/product/categories/#source-code-group).
### Add the logo in `gitlab-svgs`
All templates fetch their icons from the
[`gitlab-svgs`](https://gitlab.com/gitlab-org/gitlab-svgs) library, so if the
icon of the template you add is not present, you have to submit one.
See how to add a [third-party logo](https://gitlab.com/gitlab-org/gitlab-svgs/-/tree/main#adding-third-party-logos-or-trademarks).
After the logo is added to the `main` branch,
[the bot](https://gitlab.com/gitlab-org/frontend/renovate-gitlab-bot/) picks up the
new release and creates an MR in `gitlab-org/gitlab`. You can now proceed to
the next step.
### Add the template details
Two types of built-in templates are available in GitLab:
- **Standard templates**: Available in all GitLab tiers.
- **Enterprise templates**: Available only in GitLab Premium and Ultimate.
To make the project template available when creating a new project, you must
follow the vendoring process to create a working template.
#### Standard template
{{< alert type="note" >}}
See merge request [25318](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/25318) for an example.
{{< /alert >}}
To contribute a standard template:
1. Add the details of the template in the `localized_templates_table` method in [`lib/gitlab/project_template.rb`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/project_template.rb) using the following scheme:
```ruby
ProjectTemplate.new('<template_name>', '<template_short_description>', _('<template_long_description>'), '<template_project_link>', 'illustrations/logos/<template_logo_name>.svg'),
```
1. Add the details of the template in [`app/assets/javascripts/projects/default_project_templates.js`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/assets/javascripts/projects/default_project_templates.js).
1. Add the template name to [`spec/support/helpers/project_template_test_helper.rb`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/spec/support/helpers/project_template_test_helper.rb).
#### Enterprise template
{{< alert type="note" >}}
See merge request [28187](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/28187) for an example.
{{< /alert >}}
To contribute an Enterprise template:
1. Add details of the template in the `localized_ee_templates_table` method in [`ee/lib/ee/gitlab/project_template.rb`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/ee/lib/ee/gitlab/project_template.rb) using the following scheme:
```ruby
ProjectTemplate.new('<template_name>', '<template_short_description>', _('<template_long_description>'), '<template_project_link>', 'illustrations/logos/<template_logo_name>.svg'),
```
1. Add the template name in the list of `let(:enterprise_templates)` in [`ee/spec/lib/gitlab/project_template_spec.rb`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/ee/spec/lib/gitlab/project_template_spec.rb).
1. Add details of the template in [`ee/app/assets/javascripts/projects/default_project_templates.js`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/ee/app/assets/javascripts/projects/default_project_templates.js).
### Populate the template details
1. Start GDK:
```shell
gdk start
```
1. Run the following in the `gitlab` project, where `<template_name>` is the name you
gave the template in `gitlab/project_template.rb`:
```shell
bin/rake "gitlab:update_project_templates[<template_name>]"
```
1. Regenerate the localization file in the `gitlab` project and commit the new `.pot` file:
```shell
bin/rake gettext:regenerate
```
1. Add a changelog entry in the commit message (for example, `Changelog: added`).
For more information, see [Changelog entries](../changelog.md).
## Update an existing built-in project template
To contribute a change:
1. Open a merge request in the relevant project, and leave the following comment
when you are ready for a review:
```plaintext
@gitlab-org/manage/import/backend this is a contribution to update the project
template and is ready for review!
@gitlab-bot ready
```
1. If your merge request gets accepted:
- Either [open an issue](https://gitlab.com/gitlab-org/gitlab/-/issues/new)
to ask for it to get updated.
- Or update the vendored template and open a merge request:
```shell
bin/rake "gitlab:update_project_templates[<template_name>]"
```
## Test your built-in project with the GitLab Development Kit
Complete the following steps to test the project template in your own
GDK instance:
1. Start GDK:
```shell
gdk start
```
1. Run the following Rake task, where `<template_name>` is the
name of the template in `lib/gitlab/project_template.rb`:
```shell
bin/rake "gitlab:update_project_templates[<template_name>]"
```
1. Visit GitLab in your browser and create a new project by selecting the
project template.
## For GitLab team members
Ensure all merge requests have been reviewed by the Security counterpart before merging.
### Update all templates
Starting a project from a template requires the project to be exported. On an
up-to-date default branch, run:
```shell
gdk start # postgres, praefect, and sshd are required
bin/rake gitlab:update_project_templates
git checkout -b update-project-templates
git add vendor/project_templates
git commit
git push -u origin update-project-templates
```
Now create a merge request and assign it to a Security counterpart to merge.
### Update a single template
To update just a single template instead of all of them, specify the template name
between square brackets. For example, for the `jekyll` template, run:
```shell
bin/rake "gitlab:update_project_templates[jekyll]"
```
### Review a template merge request
To review a merge request which changes one or more vendored project templates,
run the `check-template-changes` script:
```shell
scripts/check-template-changes vendor/project_templates/<template_name>.tar.gz
```
This script outputs a diff of the file changes against the default branch and also verifies that
the template repository matches the source template project.
---
stage: Create
group: Source Code
title: Custom group-level project templates development guidelines
---
This document was created to help contributors understand the code design of
[custom group-level project templates](../../user/group/custom_project_templates.md).
You should read this document before making changes to the code for this feature.
This document is intentionally limited to an overview of how the code is
designed, as code can change often. To understand how a specific part of the
feature works, view the code and the specs. The details here explain how the
major components of the templating feature work.
{{< alert type="note" >}}
This document should be updated when parts of the codebase referenced in this
document are updated, removed, or new parts are added.
{{< /alert >}}
## Basic overview
A custom group-level project template is a regular project that is exported and
then imported into the newly created project.
Suppose we have `Group1`, which contains a template subgroup named `Subgroup1`.
Inside `Subgroup1` we have a project called `Template1`.
When `User1` creates `Project1` inside `Group1` using `Template1`, the logic follows these
steps (sketched in code after the list):
1. Initialize `Project1`
1. Export `Template1`
1. Import into `Project1`
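A condensed sketch of this flow is shown below. It assumes the service classes listed in the Business logic section; the parameter names are illustrative and may not match the exact keys the application uses.

```ruby
# Illustration only: the real call chain is spread across the services and
# workers listed in the "Business logic" section. `user1` and `group1` are the
# records from the scenario above.
params = {
  name: 'Project1',
  namespace_id: group1.id,
  use_custom_template: true, # assumed parameter names, shown for illustration
  template_name: 'Template1'
}

# 1. Initialize Project1  - Projects::CreateService (extended in EE) builds the project.
# 2. Export Template1     - ProjectTemplateExportWorker exports the template project.
# 3. Import into Project1 - CustomTemplateExportImportStrategy imports the exported archive.
::Projects::CreateService.new(user1, params).execute
```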
## Business logic
- `ProjectsController#create`: the controller where the flow begins
- Defined in `app/controllers/projects_controller.rb`.
- `Projects::CreateService`: handles the creation of the project.
- Defined in `app/services/projects/create_service.rb`.
- `EE::Projects::CreateService`: EE extension for create service
- Defined in `ee/app/services/ee/projects/create_service.rb`.
- `Projects::CreateFromTemplateService`: handles creating a project from a custom project template.
- Defined in `app/services/projects/create_from_template_service.rb`.
- `EE::Projects::CreateFromTemplateService`: EE extension for the create from template service.
- Defined in `ee/app/services/ee/projects/create_from_template_service.rb`.
- `Projects::GitlabProjectsImportService`: Handles importing the template.
- Defined in `app/services/projects/gitlab_projects_import_service.rb`.
- `EE::Projects::GitlabProjectsImportService`: EE extension to import service.
- Defined in `ee/app/services/ee/projects/gitlab_projects_import_service.rb`.
- `ProjectTemplateExportWorker`: Handles exporting the custom template.
- Defined in `ee/app/workers/project_template_export_worker.rb`.
- `ProjectExportWorker`: Base class for ProjectTemplateExportWorker.
- Defined in `app/workers/project_export_worker.rb`.
- `Projects::ImportExport::ExportService`: Service to export a project.
- Defined in `app/services/projects/import_export/export_service.rb`.
- `Gitlab::ImportExport::VersionSaver`: Handles exporting the versions.
- Defined in `lib/gitlab/import_export/version_saver.rb`.
- `Gitlab::ImportExport::UploadsManager`: Handles exporting uploaded files.
- Defined in `lib/gitlab/import_export/uploads_manager.rb`.
- `Gitlab::ImportExport::AvatarSaver`: Exports the avatars.
- Defined in `lib/gitlab/import_export/avatar_saver.rb`.
- `Gitlab::ImportExport::Project::TreeSaver`: Exports the project and related objects.
- Defined in `lib/gitlab/import_export/project/tree_saver.rb`.
- `EE::Gitlab::ImportExport::Project::TreeSaver`: EE extension of the tree saver.
- Defined in `ee/lib/ee/gitlab/import_export/project/tree_saver.rb`.
- `Gitlab::ImportExport::Json::StreamingSerializer`: Serializes the exported objects to JSON.
- Defined in `lib/gitlab/import_export/json/streaming_serializer.rb`.
- `Gitlab::ImportExport::Reader`: Wrapper around exported JSON files.
- Defined in `lib/gitlab/import_export/reader.rb`.
- `Gitlab::ImportExport::AttributesFinder`: Parses configuration and finds attributes in exported JSON files.
- Defined in `lib/gitlab/import_export/attributes_finder.rb`.
- `Gitlab::ImportExport::Config`: Wrapper around import/export YAML configuration file.
- Defined in `lib/gitlab/import_export/config.rb`.
- `Gitlab::ImportExport`: Entry point with convenience methods.
- Defined in `lib/gitlab/import_export.rb`.
- `Gitlab::ImportExport::UploadsSaver`: Exports uploaded files.
- Defined in `lib/gitlab/import_export/uploads_saver.rb`.
- `Gitlab::ImportExport::RepoSaver`: Exports the repository.
- Defined in `lib/gitlab/import_export/repo_saver.rb`.
- `Gitlab::ImportExport::WikiRepoSaver`: Exports the wiki repository.
- Defined in `lib/gitlab/import_export/wiki_repo_saver.rb`.
- `EE::Gitlab::ImportExport::WikiRepoSaver`: Extends wiki repository saver.
- Defined in `ee/lib/ee/gitlab/import_export/wiki_repo_saver.rb`.
- `Gitlab::ImportExport::LfsSaver`: Export LFS objects and files.
- Defined in `lib/gitlab/import_export/lfs_saver.rb`.
- `Gitlab::ImportExport::SnippetsRepoSaver`: Exports the snippets repositories.
- Defined in `lib/gitlab/import_export/snippets_repo_saver.rb`.
- `Gitlab::ImportExport::DesignRepoSaver`: Exports the design repository.
- Defined in `lib/gitlab/import_export/design_repo_saver.rb`.
- `Gitlab::ImportExport::Error`: Custom error object.
- Defined in `lib/gitlab/import_export/error.rb`.
- `Import::AfterExportStrategies::AfterExportStrategyBuilder`: Acts as callback to run after export is completed.
- Defined in `lib/import/after_export_strategies/after_export_strategy_builder.rb`.
- `Gitlab::Export::Logger`: Logger used during export.
- Defined in `lib/gitlab/export/logger.rb`.
- `Gitlab::ImportExport::LogUtil`: Builds log messages.
- Defined in `lib/gitlab/import_export/log_util.rb`.
- `Import::AfterExportStrategies::CustomTemplateExportImportStrategy`: Callback class to import the template after it has been exported.
- Defined in `ee/lib/import/after_export_strategies/custom_template_export_import_strategy.rb`.
- `Gitlab::TemplateHelper`: Helpers for importing templates.
- Defined in `lib/gitlab/template_helper.rb`.
- `ImportExportUpload`: Stores the import and export archive files.
- Defined in `app/models/import_export_upload.rb`.
- `Import::AfterExportStrategies::BaseAfterExportStrategy`: Base after export strategy.
- Defined in `lib/import/after_export_strategies/base_after_export_strategy.rb`.
- `RepositoryImportWorker`: Worker to trigger the import step.
- Defined in `app/workers/repository_import_worker.rb`.
- `EE::RepositoryImportWorker`: Extension to repository import worker.
- Defined in `ee/app/workers/ee/repository_import_worker.rb`.
- `Projects::ImportService`: Executes the import step.
- Defined in `app/services/projects/import_service.rb`.
- `EE::Projects::ImportService`: Extends import service.
- Defined in `ee/app/services/ee/projects/import_service.rb`.
- `Projects::LfsPointers::LfsImportService`: Imports the LFS objects.
- Defined in `app/services/projects/lfs_pointers/lfs_import_service.rb`.
- `Projects::LfsPointers::LfsObjectDownloadListService`: Main service to request links to download LFS objects.
- Defined in `app/services/projects/lfs_pointers/lfs_object_download_list_service.rb`.
- `Projects::LfsPointers::LfsDownloadLinkListService`: Handles requesting links in batches and building list.
- Defined in `app/services/projects/lfs_pointers/lfs_download_link_list_service.rb`.
- `Projects::LfsPointers::LfsListService`: Retrieves LFS blob pointers.
- Defined in `app/services/projects/lfs_pointers/lfs_list_service.rb`.
- `Projects::LfsPointers::LfsDownloadService`: Downloads and links LFS objects.
- Defined in `app/services/projects/lfs_pointers/lfs_download_service.rb`.
- `Gitlab::ImportSources`: Module to configure which importer to use.
- Defined in `lib/gitlab/import_sources.rb`.
- `EE::Gitlab::ImportSources`: Extends import sources.
- Defined in `ee/lib/ee/gitlab/import_sources.rb`.
- `Gitlab::ImportExport::Importer`: Importer class.
- Defined in `lib/gitlab/import_export/importer.rb`.
- `EE::Gitlab::ImportExport::Importer`: Extends importer.
- Defined in `ee/lib/ee/gitlab/import_export/importer.rb`.
- `Gitlab::ImportExport::FileImporter`: Imports archive files.
- Defined in `lib/gitlab/import_export/file_importer.rb`.
- `Gitlab::ImportExport::DecompressedArchiveSizeValidator`: Validates archive file size.
- Defined in `lib/gitlab/import_export/decompressed_archive_size_validator.rb`.
- `Gitlab::ImportExport::VersionChecker`: Verifies version of export matches importer.
- Defined in `lib/gitlab/import_export/version_checker.rb`.
- `Gitlab::ImportExport::Project::TreeRestorer`: Handles importing project and associated objects.
- Defined in `lib/gitlab/import_export/project/tree_restorer.rb`.
- `Gitlab::ImportExport::Json::NdjsonReader`: Reader for JSON export files.
- Defined in `lib/gitlab/import_export/json/ndjson_reader.rb`.
- `Gitlab::ImportExport::AvatarRestorer`: Handles importing avatar files.
- Defined in `lib/gitlab/import_export/avatar_restorer.rb`.
- `Gitlab::ImportExport::RepoRestorer`: Handles importing repositories.
- Defined in `lib/gitlab/import_export/repo_restorer.rb`.
- `EE::Gitlab::ImportExport::RepoRestorer`: Extends repository restorer.
- Defined in `ee/lib/ee/gitlab/import_export/repo_restorer.rb`.
- `Gitlab::ImportExport::DesignRepoRestorer`: Handles restoring design repository.
- Defined in `lib/gitlab/import_export/design_repo_restorer.rb`.
- `Gitlab::ImportExport::UploadsRestorer`: Handles restoring uploaded files.
- Defined in `lib/gitlab/import_export/uploads_restorer.rb`.
- `Gitlab::ImportExport::LfsRestorer`: Restores LFS objects.
- Defined in `lib/gitlab/import_export/lfs_restorer.rb`.
- `Gitlab::ImportExport::SnippetsRepoRestorer`: Handles restoring snippets repository.
- Defined in `lib/gitlab/import_export/snippets_repo_restorer.rb`.
- `Gitlab::ImportExport::SnippetRepoRestorer`: Handles restoring individual snippets.
- Defined in `lib/gitlab/import_export/snippet_repo_restorer.rb`.
- `Snippets::RepositoryValidationService`: Validates snippets repository archive.
- Defined in `app/services/snippets/repository_validation_service.rb`.
- `Snippets::UpdateStatisticsService`: Updates statistics for the snippets repository.
- Defined in `app/services/snippets/update_statistics_service.rb`.
- `Gitlab::BackgroundMigration::BackfillSnippetRepositories`: Backfills missing snippets in hashed storage.
- Defined in `lib/gitlab/background_migration/backfill_snippet_repositories.rb`.
- `Gitlab::ImportExport::StatisticsRestorer`: Refreshes project statistics.
- Defined in `lib/gitlab/import_export/importer.rb`.
- `Gitlab::ImportExport::Project::CustomTemplateRestorer`: Handles additional imports for custom templates.
- Defined in `ee/lib/gitlab/import_export/project/custom_template_restorer.rb`.
- `Gitlab::ImportExport::Project::ProjectHooksRestorer`: Handles importing project hooks.
- Defined in `ee/lib/gitlab/import_export/project/project_hooks_restorer.rb`.
- `Gitlab::ImportExport::Project::DeployKeysRestorer`: Handles importing deploy keys.
- Defined in `ee/lib/gitlab/import_export/project/deploy_keys_restorer.rb`.
- `Gitlab::ImportExport::Project::CustomTemplateRestorerHelper`: Helpers for custom templates restorer.
- Defined in `ee/lib/gitlab/import_export/project/custom_template_restorer_helper.rb`.
---
stage: Plan
group: Optimize
title: Aggregated Value Stream Analytics
---
{{< alert type="disclaimer" />}}
This page provides a high-level overview of the aggregated backend for
Value Stream Analytics (VSA).
## Current Status
The aggregated backend has been used by default at the group level since GitLab 15.0.
## Motivation
The aggregated backend aims to solve the performance limitations of the VSA feature and set it up
for long-term growth.
Our main database is not prepared for analytical workloads. Executing long-running queries can
affect the reliability of the application. For large groups, the current
implementation (old backend) is slow and, in some cases, doesn't even load due to the configured
statement timeout (15 s).
The database queries in the old backend use the core domain models directly through
`IssuableFinders` classes: ([MergeRequestsFinder](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/finders/merge_requests_finder.rb) and [IssuesFinder](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/finders/issues_finder.rb)).
With the requested change of the [date range filters](https://gitlab.com/groups/gitlab-org/-/epics/6046),
this approach was no longer viable from the performance point of view.
Benefits of the aggregated VSA backend:
- Simpler database queries (fewer JOINs).
- Faster aggregations, only a single table is accessed.
- Possibility to introduce further aggregations for improving the first page load time.
- Better performance for large groups (with many subgroups, projects, issues, and merge requests).
- Ready for database decomposition. The VSA related database tables could live in a separate
database with a minimal development effort.
- Ready for keyset pagination which can be useful for exporting the data.
- Possibility to implement more complex event definitions.
- For example, the start event can be two timestamp columns where the earliest value would be
used by the system.
- Example: `MIN(issues.created_at, issues.updated_at)`
### Example configuration

In this example, two independent value streams are set up for two teams that are using
different development workflows within the `Test Group` (top-level namespace).
The first value stream uses standard timestamp-based events for defining the stages. The second
value stream uses label events.
Each value stream and stage item from the example is persisted in the database. Notice that
the `Deployment` stage is identical for both value streams; that means that the underlying
`stage_event_hash_id` is the same for both stages. The `stage_event_hash_id` reduces
the amount of data the backend collects and plays a vital role in database partitioning.
We expect value streams and stages to be rarely changed. When stages (start and end events) are
changed, the aggregated data gets stale. This is fixed by the periodical aggregation occurring
every day.
### Feature availability
The aggregated VSA feature is available at the group and project level; however, the aggregated
backend is only available for Premium and Ultimate customers due to data storage and data
computation costs. Storing de-normalized, aggregated data requires significant disk space.
## Aggregated value stream analytics architecture
The main idea behind the aggregated VSA backend is separation: VSA database tables and queries do
not use the core domain models directly (Issue, MergeRequest). This allows us to scale and
optimize VSA independently from the other parts of the application.
The architecture consists of two main mechanisms:
- Periodical data collection and loading (happens in the background).
- Querying the collected data (invoked by the user).
### Data loading
The aggregated nature of VSA comes from the periodical data loading. The system queries the core
domain models to collect the stage and timestamp data. This data is periodically inserted into the
VSA database tables.
High-level overview for each top-level namespace with Premium or Ultimate license:
1. Load all stages in the group.
1. Iterate over the issues and merge requests records.
1. Based on the stage configurations (start and end event identifiers) collect the timestamp data.
1. `INSERT` or `UPDATE` the data into the VSA database tables.
The data loading is implemented within the [`Analytics::CycleAnalytics::DataLoaderService`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/ee/app/services/analytics/cycle_analytics/data_loader_service.rb)
class. Some groups contain a lot of data, so to avoid overloading the primary database,
the service performs operations in batches and enforces strict application limits:
- Load records in batches.
- Insert records in batches.
- Stop processing when a limit is reached, schedule a background job to continue the processing later.
- Continue processing data from a specific point.
Currently, the data loading is triggered manually, for example from the Rails console as sketched
below. Once the feature is ready, the service will be invoked periodically by the system via a
cron job (this part is not implemented yet).
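A minimal sketch of a manual run follows; the keyword arguments of `DataLoaderService` are an assumption here and may differ between GitLab versions.

```ruby
# Run inside `gdk rails c`. Constructor arguments are assumed for illustration.
group = Group.find_by_full_path('test-group')

Analytics::CycleAnalytics::DataLoaderService.new(group: group, model: Issue).execute
Analytics::CycleAnalytics::DataLoaderService.new(group: group, model: MergeRequest).execute
```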
#### Record iteration
The batched iteration is implemented with the
[efficient IN operator](../database/efficient_in_operator_queries.md). The background job scans
all issues and merge request records in the group hierarchy ordered by the `updated_at` and the
`id` columns. For already aggregated groups, the `DataLoaderService` continues the aggregation
from a specific point which saves time.
Collecting the timestamp data happens on every iteration. The `DataLoaderService` determines which
stage events are configured within the group hierarchy and builds a query that selects the
required timestamps. The stage record knows which events are configured and the events know how to
select the timestamp columns.
Example for collected stage events: merge request merged, merge request created, merge request
closed
Generated SQL query for loading the timestamps:
```sql
SELECT
-- the list of columns depends on the configured stages
"merge_request_metrics"."merged_at",
"merge_requests"."created_at",
"merge_request_metrics"."latest_closed_at"
FROM "merge_requests"
LEFT OUTER JOIN "merge_request_metrics" ON "merge_request_metrics"."merge_request_id" = "merge_requests"."id"
WHERE "merge_requests"."id" IN (1, 2, 3, 4) -- ids are coming from the batching query
```
The `merged_at` column is located in a separate table (`merge_request_metrics`). The
`Gitlab::Analytics::CycleAnalytics::StageEvents::MergeRequestMerged` class adds itself to a scope
for loading the timestamp data without affecting the number of rows (uses `LEFT JOIN`). This
behavior is implemented for each `StageEvent` class with the `include_in` method.
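The pattern looks roughly like the sketch below. The class is a simplified illustration rather than the actual GitLab implementation; only the `include_in` method name is taken from the description above.

```ruby
# Simplified illustration of an event that contributes a timestamp column.
class ExampleMergeRequestMergedEvent
  # The column this event contributes to the SELECT list.
  def timestamp_projection
    Arel.sql('"merge_request_metrics"."merged_at"')
  end

  # Join the table holding the timestamp without changing the row count.
  def include_in(query)
    query.joins(
      'LEFT OUTER JOIN merge_request_metrics ' \
      'ON merge_request_metrics.merge_request_id = merge_requests.id'
    )
  end
end

# The data loader can then fold the configured events into a single query:
# events.inject(MergeRequest.where(id: batch_ids)) { |scope, event| event.include_in(scope) }
#       .select(*events.map(&:timestamp_projection))
```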
The data collection query works on the event level. It extracts the event timestamps from the
stages and ensures that we don't collect the same data multiple times. The events mentioned above
could come from the following stage configuration:
- merge request created - merge request merged
- merge request created - merge request closed
Other combinations might also be possible, but we prevent the ones that make no sense, for example:
- merge request merged - merge request created
Creation time always happens first, so such a stage would always report a negative duration.
#### Data scope
The data collection scans and processes all issue and merge request records in the group
hierarchy, starting from the top-level group. This means that if a group only has one value stream
in a subgroup, we nevertheless collect data for all issues and merge requests in the hierarchy of
this group. This keeps the data collection mechanism simple. Moreover, data research shows
that most group hierarchies have their stages configured on the top level.
During the data collection process, the collected timestamp data is transformed into rows. For
each configured stage, if the start event timestamp is present, the system inserts or updates one
event record. This allows us to determine the upper limit of the inserted rows per group by
counting all issues and merge requests and multiplying the sum by the stage count. For example,
a hierarchy with 10,000 issues, 5,000 merge requests, and 4 configured stages produces at most
(10,000 + 5,000) × 4 = 60,000 rows.
#### Data consistency concerns
Due to the async nature of the data collection, data consistency issues are bound to happen. This
is a trade-off that makes the query performance significantly faster. We think that for analytical
workloads a slight lag in the data is acceptable.
Before the rollout we plan to implement some indicators on the VSA page that show the most
recent backend activities. For example, indicators that show the last data collection timestamp
and the last consistency check timestamp.
#### Database structure
VSA collects data for the following domain models: `Issue` and `MergeRequest`. To keep the
aggregated data separated, we use two additional database tables:
- `analytics_cycle_analytics_issue_stage_events`
- `analytics_cycle_analytics_merge_request_stage_events`
Both tables are hash partitioned by `stage_event_hash_id`. Each table uses 32 partitions, which is
an arbitrary number that could be changed. The important thing is to keep each partition under
100 GB in size (which gives the feature a lot of headroom).
| Column | Description |
|----------------------------------|-------------|
| `stage_event_hash_id` | partitioning key |
| `merge_request_id` or `issue_id` | reference to the domain record (Issuable) |
| `group_id` | reference to the group (de-normalization) |
| `project_id` | reference to the project |
| `milestone_id` | duplicated data from the domain record table |
| `author_id` | duplicated data from the domain record table |
| `state_id` | duplicated data from the domain record table |
| `start_event_timestamp` | timestamp derived from the stage configuration |
| `end_event_timestamp` | timestamp derived from the stage configuration |
In accordance with the data separation requirements, the tables don't have any foreign keys. The
consistency is ensured by a background job (eventually consistent).
### Data querying
The base query always includes the following filters:
- `stage_event_hash_id` - partition key
- `project_id` or `group_id` - depending on whether it's a project or group query
- `end_event_timestamp` - date range filter (last 30 days)
Example: Selecting review stage duration for the GitLab project
```sql
SELECT end_event_timestamp - start_event_timestamp
FROM analytics_cycle_analytics_merge_request_stage_events
WHERE
stage_event_hash_id = 16 AND -- hits a specific partition
project_id = 278964 AND
end_event_timestamp > '2022-01-01' AND end_event_timestamp < '2022-01-30'
```
#### Query generation
The query backend is hidden behind the same interface that the old backend implementation uses.
Thanks to this, we can easily switch between the old and new query backends.
- `DataCollector`: entrypoint for querying VSA data
- `BaseQueryBuilder`: provides the base `ActiveRecord` scope (filters are applied here).
- `average`: average aggregation.
- `median`: median aggregation.
- `count`: row counting.
- `records`: list of issue or merge request records.
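Assuming the interface above, querying a single stage might look like the following sketch; the parameter keys passed to `DataCollector` are an assumption for illustration.

```ruby
# Illustration only: parameter names are assumed, not copied from the code.
# `stage` and `current_user` come from the surrounding request context.
data_collector = DataCollector.new(
  stage: stage,
  params: { from: 30.days.ago, to: Time.current, current_user: current_user }
)

data_collector.median  # median duration for the stage
data_collector.count   # limited row count for the stage
data_collector.records # issue or merge request records in the stage
```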
#### Filters
VSA supports various filters on the base query. Most of the filters require no additional JOINs:
| Filter name | Description |
|-------------------|-------------|
| `milestone_title` | The backend translates it to `milestone_id` filter |
| `author_username` | The backend translates it to `author_id` filter |
| `project_ids` | Only used on the group-level |
Exceptions: these filters are applied on other tables which means we `JOIN` them.
| Filter name | Description |
|---------------------|-------------|
| `label_name` | Array filter, using the `label_links` table |
| `assignee_username` | Array filter, using the `*_assignees` table |
To fully decompose the database, the required ID values would need to be replicated in the VSA
database tables. This change could be implemented using array columns.
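Putting the base query and a translated filter together, the resulting ActiveRecord scope might look like the sketch below; the model name `Analytics::CycleAnalytics::IssueStageEvent` and the exact scope methods are assumptions based on the table structure described above.

```ruby
# Sketch of the base query with a translated milestone filter applied.
# `stage`, `group`, and `milestone` come from the surrounding context.
Analytics::CycleAnalytics::IssueStageEvent
  .where(stage_event_hash_id: stage.stage_event_hash_id) # hits one partition
  .where(group_id: group.self_and_descendant_ids)        # group-level query
  .where(end_event_timestamp: 30.days.ago..Time.current) # date range filter
  .where(milestone_id: milestone.id)                     # milestone_title translated to milestone_id
```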
### Endpoints
The feature uses private JSON APIs for delivering the data to the frontend. On the first page
load, the following requests are invoked:
- Initial HTML page load which is mostly empty. Some configuration data is exposed via `data` attributes.
- `value_streams` - Load the available value streams for the given group.
- `stages` - Load the stages for the currently selected value stream.
- `median` - For each stage, request the median duration.
- `count` - For each stage, request the number of items in the stage (this is a
[limit count](../merge_request_concepts/performance.md#badge-counters), maximum 1000 rows).
- `average_duration_chart` - Data for the duration chart.
- `summary`, `time_summary` - Top-level aggregations. Most of the metrics use different APIs and
finders and do not invoke the aggregated backend.
When selecting a specific stage, the `records` endpoint is invoked, which returns the related
records (paginated) for the chosen stage in a specific order.
### Database decomposition
By separating the query logic from the main application code, the feature is ready for database
decomposition. If we decide that VSA requires a separate database instance, then moving the
aggregated tables can be accomplished with little effort.
A different database technology could also be used to further improve the performance of the
feature, for example [Timescale DB](https://www.timescale.com).
|
https://docs.gitlab.com/development/code_owners
|
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/development/_index.md
|
2025-08-13
|
doc/development/code_owners
|
[
"doc",
"development",
"code_owners"
] |
_index.md
|
Create
|
Source Code
|
Any user with at least the Maintainer role can merge updates to this content. For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
|
Code Owners development guidelines
| null |
This document was created to help contributors understand the code design of
[Code Owners](../../user/project/codeowners/_index.md). You should read this
document before making changes to the code for this feature.
This document is intentionally limited to an overview of how the code is
designed, as code can change often. To understand how a specific part of the
feature works, view the code and the specs. The details here explain how the
major components of the Code Owners feature work.
{{< alert type="note" >}}
This document should be updated when parts of the codebase referenced in this
document are updated, removed, or new parts are added.
{{< /alert >}}
## Business logic
All of the business logic for code owners is located in the `Gitlab::CodeOwners`
namespace. Code Owners is an EE-only feature, so the files only exist in the `./ee` directory.
- `Gitlab::CodeOwners`: the main module used to interact with the code owner rules.
- Defined in `./ee/lib/gitlab/code_owners.rb`.
- `Gitlab::CodeOwners::File`: wraps a `CODEOWNERS` file and exposes the data through
the class' public methods.
- Defined in `./ee/lib/gitlab/code_owners/file.rb`.
- `Gitlab::CodeOwners::Section`: wraps a section heading from a
`CODEOWNERS` file and parses the different parts.
- Defined in `./ee/lib/gitlab/code_owners/section.rb`.
- `Gitlab::CodeOwners::Entry`: wraps an entry (a pattern and owners line) in a
`CODEOWNERS` file and exposes the data through the class' public methods.
- Defined in `./ee/lib/gitlab/code_owners/entry.rb`.
- `Gitlab::CodeOwners::Loader`: finds the correct `CODEOWNER` file and loads the
content into a `Gitlab::CodeOwners::File` instance.
- Defined in `./ee/lib/gitlab/code_owners/loader.rb`.
- `Gitlab::CodeOwners::ReferenceExtractor`: extracts `CODEOWNER` user, group,
and email references from texts.
- Defined in `./ee/lib/gitlab/code_owners/reference_extractor.rb`.
- `Gitlab::CodeOwners::UsersLoader`: finds the correct `CODEOWNER` file and loads the
  content into a `Gitlab::CodeOwners::File` instance.
- Defined in `./ee/lib/gitlab/code_owners/users_loader.rb`.
- `Gitlab::CodeOwners::GroupsLoader`: finds the correct `CODEOWNER` file and loads
the content into a `Gitlab::CodeOwners::File` instance.
- Defined in `./ee/lib/gitlab/code_owners/groups_loader.rb`.
- `Gitlab::Checks::Diffs::CodeOwnersCheck`: validates no files in the `CODEOWNERS` entries
have been changed when a user pushes to a protected branch with `require_code_owner_approval` enabled.
- Defined in `./ee/lib/gitlab/checks/diffs/code_owners_check.rb`.
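To see how these classes fit together, a Rails console sketch might look like the following. The constructor arguments and entry attributes shown here are assumptions for illustration; verify them against the classes listed above.

```ruby
# Illustrative sketch only: method signatures are assumptions.
project = Project.find_by_full_path('gitlab-org/gitlab')

# The loader resolves the CODEOWNERS file for a ref and a set of paths.
loader = Gitlab::CodeOwners::Loader.new(project, 'master', ['README.md'])

loader.entries.each do |entry|
  # Each entry wraps a pattern and its owners line.
  puts "#{entry.pattern} => #{entry.owner_line}"
end
```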
## Where Code Owners sits in the Git access check execution order
`Gitlab::Checks::DiffCheck#file_paths_validations` returns either an empty array, or an array whose single member is the result of `#lfs_file_locks_validation` if LFS is enabled and file locks are present. The return value of `#validate_code_owners` is appended to this list in the EE override, `EE::Gitlab::Checks::DiffCheck#file_paths_validations`. LFS checks are performed before Code Owners checks.
These checks are executed after those listed in `#validations_for_path`, a method that exists only in the EE version and includes `#path_locks_validation` and `#file_name_validation`. This means that checks for Path Locks precede checks for Code Owners in the flow.
The check order is as follows in `EE` (only LFS exists as a non-EE feature):
- Path Locks
- Filenames
- Blocks files containing secrets, for example `id_rsa`
- Blocks files matching the `PushRule#file_name_regex`
- LFS File Locks
- Code Owners
## Related models
### `ProtectedBranch`
The `ProtectedBranch` model is defined in `app/models/protected_branch.rb` and
extended in `ee/app/models/concerns/ee/protected_branch.rb`. The EE version includes a column
named `require_code_owner_approval` which prevents changes from being pushed directly
to the branch being protected if the file is listed in `CODEOWNERS`.
### `ApprovalMergeRequestRule`
The `ApprovalMergeRequestRule` model is defined in `ee/app/models/approval_merge_request_rule.rb`.
The model stores approval rules for a merge request. We use multiple rule types,
including a `code_owner` type rule.
## Controllers and Services
The following controllers and services are used to make the approval rules feature work:
### `Api::Internal::Base`
This `/internal/allowed` endpoint is called when pushing to GitLab to ensure the
user is allowed to push. The `/internal/allowed` endpoint performs a `Gitlab::Checks::DiffCheck`.
In EE, this includes code owner checks.
Defined in `lib/api/internal/base.rb`.
### `Repositories::GitHttpController`
When changes are pushed to GitLab over HTTP, the controller performs an access check
to ensure the user is allowed to push. The checks perform a `Gitlab::Checks::DiffCheck`.
In EE, this includes Code Owner checks.
Defined in `app/controllers/repositories/git_http_controller.rb`.
### `EE::Gitlab::Checks::DiffCheck`
This module extends the CE `Gitlab::Checks::DiffCheck` class and adds code owner
validation. It uses the `Gitlab::Checks::Diffs::CodeOwnersCheck` class to verify users are
not pushing files listed in `CODEOWNER` directly to a protected branch while the
branch requires code owner approval.
### `MergeRequests::SyncCodeOwnerApprovalRules`
This service is defined in `services/merge_requests/sync_code_owner_approval_rules.rb` and used for:
- Deleting outdated code owner approval rules when new changes are pushed to a merge request.
- Creating code owner approval rules for each changed file in a merge request that is also listed in the `CODEOWNER` file.
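For reference, the service is invoked with a merge request, roughly along these lines (the exact call signature is an assumption for illustration):

```ruby
# Illustrative sketch only: re-sync code owner approval rules for one MR.
merge_request = MergeRequest.last

MergeRequests::SyncCodeOwnerApprovalRules.new(merge_request).execute
```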
## Flow
These flowcharts should help explain the flow from the controllers down to the
models for different features.
A lot of the Code Owners implementations exist in the `EE` variants of the classes.
### Push changes to a protected branch with `require_code_owner_approval` enabled, over SSH
```mermaid
%%{init: { "fontFamily": "GitLab Sans" }}%%
graph TD
Api::Internal::Base --> Gitlab::GitAccess
Gitlab::GitAccess --> Gitlab::Checks::DiffCheck
Gitlab::Checks::DiffCheck --> Gitlab::Checks::Diffs::CodeOwnersCheck
Gitlab::Checks::Diffs::CodeOwnersCheck --> ProtectedBranch
Gitlab::Checks::Diffs::CodeOwnersCheck --> Gitlab::CodeOwners::Loader
Gitlab::CodeOwners::Loader --> Gitlab::CodeOwners::Entry
```
### Push changes to a protected branch with `require_code_owner_approval` enabled, over HTTPS
```mermaid
%%{init: { "fontFamily": "GitLab Sans" }}%%
graph TD
Repositories::GitHttpController --> Gitlab::GlRepository
Gitlab::GlRepository --> Gitlab::GitAccessProject
Gitlab::GitAccessProject --> Gitlab::Checks::DiffCheck
Gitlab::Checks::DiffCheck --> Gitlab::Checks::Diffs::CodeOwnersCheck
Gitlab::Checks::Diffs::CodeOwnersCheck --> ProtectedBranch
Gitlab::Checks::Diffs::CodeOwnersCheck --> Gitlab::CodeOwners::Loader
Gitlab::CodeOwners::Loader --> Gitlab::CodeOwners::Entry
```
### Sync code owner rules to merge request approval rules
```mermaid
%%{init: { "fontFamily": "GitLab Sans" }}%%
graph TD
EE::ProtectedBranches::CreateService --> MergeRequest::SyncCodeOwnerApprovalRules
EE::MergeRequestRefreshService --> MergeRequest::SyncCodeOwnerApprovalRules
EE::MergeRequests::ReloadMergeHeadDiffService --> MergeRequest::SyncCodeOwnerApprovalRules
EE::MergeRequests::CreateService --> MergeRequests::SyncCodeOwnerApprovalRulesWorker
EE::MergeRequests::UpdateService --> MergeRequests::SyncCodeOwnerApprovalRulesWorker
MergeRequests::SyncCodeOwnerApprovalRulesWorker --> MergeRequest::SyncCodeOwnerApprovalRules
MergeRequest::SyncCodeOwnerApprovalRules --> id1{delete outdated code owner rules}
MergeRequest::SyncCodeOwnerApprovalRules --> id2{create rule for each code owner entry}
```
|
---
stage: Create
group: Source Code
info: Any user with at least the Maintainer role can merge updates to this content.
For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Code Owners development guidelines
breadcrumbs:
- doc
- development
- code_owners
---
This document was created to help contributors understand the code design of
[Code Owners](../../user/project/codeowners/_index.md). You should read this
document before making changes to the code for this feature.
This document is intentionally limited to an overview of how the code is
designed, as code can change often. To understand how a specific part of the
feature works, view the code and the specs. The details here explain how the
major components of the Code Owners feature work.
{{< alert type="note" >}}
This document should be updated when parts of the codebase referenced in this
document are updated, removed, or new parts are added.
{{< /alert >}}
## Business logic
All of the business logic for code owners is located in the `Gitlab::CodeOwners`
namespace. Code Owners is an EE-only feature, so the files only exist in the `./ee` directory.
- `Gitlab::CodeOwners`: the main module used to interact with the code owner rules.
- Defined in `./ee/lib/gitlab/code_owners.rb`.
- `Gitlab::CodeOwners::File`: wraps a `CODEOWNERS` file and exposes the data through
the class' public methods.
- Defined in `./ee/lib/gitlab/code_owners/file.rb`.
- `Gitlab::CodeOwners::Section`: wraps a section heading from a
`CODEOWNERS` file and parses the different parts.
- Defined in `./ee/lib/gitlab/code_owners/section.rb`.
- `Gitlab::CodeOwners::Entry`: wraps an entry (a pattern and owners line) in a
`CODEOWNERS` file and exposes the data through the class' public methods.
- Defined in `./ee/lib/gitlab/code_owners/entry.rb`.
- `Gitlab::CodeOwners::Loader`: finds the correct `CODEOWNER` file and loads the
content into a `Gitlab::CodeOwners::File` instance.
- Defined in `./ee/lib/gitlab/code_owners/loader.rb`.
- `Gitlab::CodeOwners::ReferenceExtractor`: extracts `CODEOWNER` user, group,
and email references from texts.
- Defined in `./ee/lib/gitlab/code_owners/reference_extractor.rb`.
- `Gitlab::CodeOwners::UsersLoader`: finds the correct `CODEOWNER` file and loads the
  content into a `Gitlab::CodeOwners::File` instance.
- Defined in `./ee/lib/gitlab/code_owners/users_loader.rb`.
- `Gitlab::CodeOwners::GroupsLoader`: finds the correct `CODEOWNER` file and loads
the content into a `Gitlab::CodeOwners::File` instance.
- Defined in `./ee/lib/gitlab/code_owners/groups_loader.rb`.
- `Gitlab::Checks::Diffs::CodeOwnersCheck`: validates no files in the `CODEOWNERS` entries
have been changed when a user pushes to a protected branch with `require_code_owner_approval` enabled.
- Defined in `./ee/lib/gitlab/checks/diffs/code_owners_check.rb`.
## Where Code Owners sits in the Git access check execution order
`Gitlab::Checks::DiffCheck#file_paths_validations` returns either an empty array, or an array whose single member is the result of `#lfs_file_locks_validation` if LFS is enabled and file locks are present. The return value of `#validate_code_owners` is appended to this list in the EE override, `EE::Gitlab::Checks::DiffCheck#file_paths_validations`. LFS checks are performed before Code Owners checks.
These checks are executed after those listed in `#validations_for_path`, a method that exists only in the EE version and includes `#path_locks_validation` and `#file_name_validation`. This means that checks for Path Locks precede checks for Code Owners in the flow.
The check order is as follows in `EE` (only LFS exists as a non-EE feature):
- Path Locks
- Filenames
- Blocks files containing secrets, for example `id_rsa`
- Blocks files matching the `PushRule#file_name_regex`
- LFS File Locks
- Code Owners
## Related models
### `ProtectedBranch`
The `ProtectedBranch` model is defined in `app/models/protected_branch.rb` and
extended in `ee/app/models/concerns/ee/protected_branch.rb`. The EE version includes a column
named `require_code_owner_approval` which prevents changes from being pushed directly
to the branch being protected if the file is listed in `CODEOWNERS`.
### `ApprovalMergeRequestRule`
The `ApprovalMergeRequestRule` model is defined in `ee/app/models/approval_merge_request_rule.rb`.
The model stores approval rules for a merge request. We use multiple rule types,
including a `code_owner` type rule.
## Controllers and Services
The following controllers and services are used to make the approval rules feature work:
### `Api::Internal::Base`
This `/internal/allowed` endpoint is called when pushing to GitLab to ensure the
user is allowed to push. The `/internal/allowed` endpoint performs a `Gitlab::Checks::DiffCheck`.
In EE, this includes code owner checks.
Defined in `lib/api/internal/base.rb`.
### `Repositories::GitHttpController`
When changes are pushed to GitLab over HTTP, the controller performs an access check
to ensure the user is allowed to push. The checks perform a `Gitlab::Checks::DiffCheck`.
In EE, this includes Code Owner checks.
Defined in `app/controllers/repositories/git_http_controller.rb`.
### `EE::Gitlab::Checks::DiffCheck`
This module extends the CE `Gitlab::Checks::DiffCheck` class and adds code owner
validation. It uses the `Gitlab::Checks::Diffs::CodeOwnersCheck` class to verify users are
not pushing files listed in `CODEOWNER` directly to a protected branch while the
branch requires code owner approval.
### `MergeRequests::SyncCodeOwnerApprovalRules`
This service is defined in `services/merge_requests/sync_code_owner_approval_rules.rb` and used for:
- Deleting outdated code owner approval rules when new changes are pushed to a merge request.
- Creating code owner approval rules for each changed file in a merge request that is also listed in the `CODEOWNER` file.
## Flow
These flowcharts should help explain the flow from the controllers down to the
models for different features.
A lot of the Code Owners implementations exist in the `EE` variants of the classes.
### Push changes to a protected branch with `require_code_owner_approval` enabled, over SSH
```mermaid
%%{init: { "fontFamily": "GitLab Sans" }}%%
graph TD
Api::Internal::Base --> Gitlab::GitAccess
Gitlab::GitAccess --> Gitlab::Checks::DiffCheck
Gitlab::Checks::DiffCheck --> Gitlab::Checks::Diffs::CodeOwnersCheck
Gitlab::Checks::Diffs::CodeOwnersCheck --> ProtectedBranch
Gitlab::Checks::Diffs::CodeOwnersCheck --> Gitlab::CodeOwners::Loader
Gitlab::CodeOwners::Loader --> Gitlab::CodeOwners::Entry
```
### Push changes to a protected branch with `require_code_owner_approval` enabled, over HTTPS
```mermaid
%%{init: { "fontFamily": "GitLab Sans" }}%%
graph TD
Repositories::GitHttpController --> Gitlab::GlRepository
Gitlab::GlRepository --> Gitlab::GitAccessProject
Gitlab::GitAccessProject --> Gitlab::Checks::DiffCheck
Gitlab::Checks::DiffCheck --> Gitlab::Checks::Diffs::CodeOwnersCheck
Gitlab::Checks::Diffs::CodeOwnersCheck --> ProtectedBranch
Gitlab::Checks::Diffs::CodeOwnersCheck --> Gitlab::CodeOwners::Loader
Gitlab::CodeOwners::Loader --> Gitlab::CodeOwners::Entry
```
### Sync code owner rules to merge request approval rules
```mermaid
%%{init: { "fontFamily": "GitLab Sans" }}%%
graph TD
EE::ProtectedBranches::CreateService --> MergeRequest::SyncCodeOwnerApprovalRules
EE::MergeRequestRefreshService --> MergeRequest::SyncCodeOwnerApprovalRules
EE::MergeRequests::ReloadMergeHeadDiffService --> MergeRequest::SyncCodeOwnerApprovalRules
EE::MergeRequests::CreateService --> MergeRequests::SyncCodeOwnerApprovalRulesWorker
EE::MergeRequests::UpdateService --> MergeRequests::SyncCodeOwnerApprovalRulesWorker
MergeRequests::SyncCodeOwnerApprovalRulesWorker --> MergeRequest::SyncCodeOwnerApprovalRules
MergeRequest::SyncCodeOwnerApprovalRules --> id1{delete outdated code owner rules}
MergeRequest::SyncCodeOwnerApprovalRules --> id2{create rule for each code owner entry}
```
|
https://docs.gitlab.com/development/new_redis_instance
|
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/development/new_redis_instance.md
|
2025-08-13
|
doc/development/redis
|
[
"doc",
"development",
"redis"
] |
new_redis_instance.md
|
none
|
unassigned
|
Any user with at least the Maintainer role can merge updates to this content. For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
|
Add a new Redis instance
| null |
GitLab can make use of multiple [Redis instances](../redis.md#redis-instances).
These instances are functionally partitioned so that, for example, we
can store [CI trace chunks](../../administration/cicd/job_logs.md#incremental-logging)
from one Redis instance while storing sessions in another.
From time to time we might want to add a new Redis instance. Typically this will
be a functional partition split from one of the existing instances such as the
cache or shared state. This document describes an approach
for adding a new Redis instance that handles existing data, based on
prior examples:
- [Dedicated Redis instance for Trace Chunk storage](https://gitlab.com/groups/gitlab-com/gl-infra/-/epics/462).
- [Create dedicated Redis instance for Rate Limiting data](https://gitlab.com/groups/gitlab-com/gl-infra/-/epics/526).
This document does not cover the operational side of preparing and configuring
the new Redis instance in detail, but the example epics do contain information
on previous approaches to this.
## Step 1: Support configuring the new instance
Before we can switch any features to using the new instance, we have to support
configuring it and referring to it in the codebase. We must support the
main installation types:
- Self-compiled installations (including development environments) - [example MR](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/62767)
- Linux package installations - [example MR](https://gitlab.com/gitlab-org/omnibus-gitlab/-/merge_requests/5316)
- Helm charts - [example MR](https://gitlab.com/gitlab-org/charts/gitlab/-/merge_requests/2031)
### Fallback instance
In the application code, we need to define a fallback instance in case the new
instance is not configured. For example, if a GitLab instance has already
configured a separate shared state Redis, and we are partitioning data from the
shared state Redis, our new instance's configuration should default to that of
the shared state Redis when it's not present. Otherwise we could break instances
that don't configure the new Redis instance as soon as it's available.
You can [define a `.config_fallback` method](https://gitlab.com/gitlab-org/gitlab/-/blob/a75471dd744678f1a59eeb99f71fca577b155acd/lib/gitlab/redis/wrapper.rb#L69-87)
in `Gitlab::Redis::Wrapper` (the base class for all Redis instances)
that defines the instance to be used if this one is not configured. If we were
adding a `Foo` instance that should fall back to `SharedState`, we can do that
like this:
```ruby
module Gitlab
module Redis
class Foo < ::Gitlab::Redis::Wrapper
# The data we store on Foo used to be stored on SharedState.
def self.config_fallback
SharedState
end
end
end
end
```
We should also add specs like those in
[`trace_chunks_spec.rb`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/spec/lib/gitlab/redis/trace_chunks_spec.rb)
to ensure that this fallback works correctly.
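A spec for the new class could reuse the same shared examples, along the lines of the sketch below. The shared example name and arguments are assumptions based on the `trace_chunks` spec linked above; copy them from that file rather than from here.

```ruby
# spec/lib/gitlab/redis/foo_spec.rb - illustrative sketch only.
require 'spec_helper'

RSpec.describe Gitlab::Redis::Foo do
  # Shared example name and arguments are assumed; check the trace_chunks spec.
  it_behaves_like "redis_new_instance_shared_examples", 'foo', Gitlab::Redis::SharedState
end
```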
## Step 2: Support writing to and reading from the new instance
When migrating to the new instance, we must account for cases where data is
either on:
- The 'old' (original) instance.
- The new one that we have just added support for.
As a result we may need to support reading from and writing to both
instances, depending on some condition.
The exact condition to use varies depending on the data to be migrated. For
the trace chunks case above, there was already a database column indicating where the
data was stored (as there are storage options other than Redis).
This step may not apply if the data has a very short lifetime (a few minutes at most)
and is not critical. In that case, we
may decide that it is OK to incur a small amount of data loss and switch
over through configuration only.
If there is not a more natural way to mark where the data is stored, using a
[feature flag](../feature_flags/_index.md) may be convenient:
- It does not require an application restart to take effect.
- It applies to all application instances (Sidekiq, API, web, etc.) at
the same time.
- It supports incremental rollout - ideally by actor (project, group,
user, etc.) - so that we can monitor for errors and roll back easily.
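For illustration, the conditional dual-write and fallback read could look roughly like the sketch below. `Foo` is the example instance from step 1; the feature flag name and the surrounding methods are hypothetical and only illustrate the idea.

```ruby
# Illustrative sketch only: dual-write while the migration flag is enabled,
# and read from the new instance with a fallback to the old one.
def write_value(key, value)
  Gitlab::Redis::SharedState.with { |redis| redis.set(key, value) }
  Gitlab::Redis::Foo.with { |redis| redis.set(key, value) } if Feature.enabled?(:use_foo_instance)
end

def read_value(key)
  value = Gitlab::Redis::Foo.with { |redis| redis.get(key) } if Feature.enabled?(:use_foo_instance)
  value || Gitlab::Redis::SharedState.with { |redis| redis.get(key) }
end
```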
## Step 3: Migrate the data
We then need to configure the new instance for GitLab.com's production and
staging environments. Hopefully it will be possible to test this change
effectively on staging, to at least make sure that basic usage continues to
work.
After that is done, we can roll out the change to production. Ideally this would
be in an incremental fashion, following the
[standard incremental rollout](../feature_flags/controls.md#rolling-out-changes)
documentation for feature flags.
When we have been using the new instance 100% of the time in production for a
while and there are no issues, we can proceed.
### Proposed solution: Migrate data by using MultiStore with the fallback strategy
We need a way to migrate users to a new Redis store without causing any inconvenience from a UX perspective.
We also want the ability to fall back to the "old" Redis instance if something goes wrong with the new instance.
Migration Requirements:
- No downtime.
- No loss of stored data until the TTL for storing data expires.
- Partial rollout using feature flags or ENV vars or combinations of both.
- Monitoring of the switch.
- Prometheus metrics in place.
- Easy rollback without downtime in case the new instance or logic does not behave as expected.
It is somewhat similar to the zero-downtime DB table rename.
We need to write data into both Redis instances (old + new).
We read from the new instance, but we need to fall back to the old instance when a read from the new dedicated Redis instance fails.
We need to log any issues or exceptions with the new instance, but still fall back to the old instance.
The proposed migration strategy is to implement and use the [MultiStore](https://gitlab.com/gitlab-org/gitlab/-/blob/fcc42e80ed261a862ee6ca46b182eee293ae60b6/lib/gitlab/redis/multi_store.rb).
We used this approach when [adding a new dedicated Redis instance for session keys](https://gitlab.com/groups/gitlab-com/gl-infra/-/epics/579).
MultiStore also comes with corresponding [specs](https://gitlab.com/gitlab-org/gitlab/-/blob/master/spec/lib/gitlab/redis/multi_store_spec.rb).
The MultiStore looks like a `redis-rb ::Redis` instance.
In the new Redis instance class you added in [Step 1](#step-1-support-configuring-the-new-instance), inherit from `::Gitlab::Redis::MultiStoreWrapper` instead and override the `multistore` class method to define the MultiStore.
```ruby
module Gitlab
module Redis
class Foo < ::Gitlab::Redis::MultiStoreWrapper
...
def self.multistore
MultiStore.create_using_pool(self.pool, config_fallback.pool, store_name)
end
end
end
end
```
MultiStore is initialized by providing the new Redis connection pool as the primary pool, and the [old (fallback-instance) connection pool](#fallback-instance) as the secondary pool.
The third argument is `store_name`, which is used for logs, metrics, and feature flag names, in case we use the MultiStore implementation for different Redis stores at the same time.
By default, the MultiStore reads and writes only from the default Redis store.
The default Redis store is `secondary_store` (the old fallback-instance).
This allows us to introduce MultiStore without changing the default behavior.
MultiStore uses two feature flags to control the actual migration:
- `use_primary_and_secondary_stores_for_[store_name]`
- `use_primary_store_as_default_for_[store_name]`
For example, if our new Redis instance is called `Gitlab::Redis::Foo`, we can [create](../feature_flags/_index.md#create-a-new-feature-flag) two feature flags by executing:
```shell
bin/feature-flag use_primary_and_secondary_stores_for_foo
bin/feature-flag use_primary_store_as_default_for_foo
```
By enabling the `use_primary_and_secondary_stores_for_foo` feature flag, our `Gitlab::Redis::Foo` uses `MultiStore` to write to both the new Redis instance
and the [old (fallback-instance)](#fallback-instance). All read commands are performed only on the default store, which is controlled using the
`use_primary_store_as_default_for_foo` feature flag. By enabling the `use_primary_store_as_default_for_foo` feature flag,
the `MultiStore` uses `primary_store` (the new instance) as the default Redis store.
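In a GDK or from a Rails console, the flags can be toggled directly for testing (in production they would be rolled out gradually through the usual feature flag process):

```ruby
# Enable dual writes to both stores, then make the new (primary) store the
# default for reads. Disable the flags to roll back.
Feature.enable(:use_primary_and_secondary_stores_for_foo)
Feature.enable(:use_primary_store_as_default_for_foo)
```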
For `pipelined` commands (`pipelined` and `multi`), we execute the entire operation in both stores and then compare the results. If they differ, we emit a
`Gitlab::Redis::MultiStore:PipelinedDiffError` error, and track it in the `gitlab_redis_multi_store_pipelined_diff_error_total` Prometheus counter.
After allowing some time for the new store to be populated, we can perform external validation to compare the state of both stores.
If the validation results are satisfactory, it should be safe to move the traffic to the new Redis store by disabling the `use_primary_and_secondary_stores_for_foo` feature flag.
This allows the MultiStore to read and write only from the primary Redis store (the new store), moving all traffic to the new Redis store.
Once we have moved all our traffic to the primary store, the data migration is complete.
We can then safely remove the MultiStore implementation and continue to use the newly introduced Redis store instance.
#### Implementation details
MultiStore implements read and write Redis commands separately.
##### Read commands
Read commands are defined in the [`Gitlab::Redis::MultiStore::READ_COMMANDS` constant](https://gitlab.com/gitlab-org/gitlab/-/blob/c991bac5b1d67355ad4ac1d975ace6c2a052e1b4/lib/gitlab/redis/multi_store.rb#L56).
##### Write commands
Write commands are defined in the [`Gitlab::Redis::MultiStore::WRITE_COMMANDS` constant](https://gitlab.com/gitlab-org/gitlab/-/blob/c991bac5b1d67355ad4ac1d975ace6c2a052e1b4/lib/gitlab/redis/multi_store.rb#L91).
##### `pipelined` commands
{{< alert type="note" >}}
The Ruby block passed to these commands is executed twice, once for each store.
Thus, excluding the Redis operations performed, the block should be idempotent.
{{< /alert >}}
- `pipelined`
- `multi`
When a command outside of the supported list is used, `method_missing` passes it to the old Redis instance and keeps track of it.
This ensures that anything unexpected behaves like it did before. In development and test environments, an error is raised for early
detection.
{{< alert type="note" >}}
By tracking the `gitlab_redis_multi_store_method_missing_total` counter and `Gitlab::Redis::MultiStore::MethodMissingError` occurrences,
a developer can identify the missing Redis commands that need an implementation before proceeding with the migration.
{{< /alert >}}
{{< alert type="note" >}}
Variable assignments within `pipelined` and `multi` blocks are not advised as the block should be idempotent. Refer to the [corrective fix MR](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/137734) removing non-idempotent blocks which previously led to incorrect application behavior during a migration.
{{< /alert >}}
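As an illustration of the idempotency requirement, prefer relying on the pipeline's return value instead of mutating outer state inside the block. The wrapper class and keys below are hypothetical.

```ruby
# Illustrative sketch only: the block may run once per store in dual-write
# mode, so avoid accumulating into outer variables inside it.
results = Gitlab::Redis::Foo.with do |redis|
  redis.pipelined do |pipeline|
    pipeline.get('key_a')
    pipeline.get('key_b')
  end
end
# results => replies from the default store, for example ["value_a", "value_b"]
```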
##### Errors
| error | message |
|---------------------------------------------------|---------------------------------------------------------------------------------------------|
| `Gitlab::Redis::MultiStore::PipelinedDiffError` | `pipelined` command executed on both stores successfully but results differ between them. |
| `Gitlab::Redis::MultiStore::MethodMissingError` | Method missing. Falling back to execute method on the Redis secondary store. |
##### Metrics
| Metrics name | Type | Labels | Description |
|-------------------------------------------------------|--------------------|----------------------------|----------------------------------------------------------|
| `gitlab_redis_multi_store_pipelined_diff_error_total` | Prometheus Counter | `command`, `instance_name` | Redis MultiStore `pipelined` command diff between stores |
| `gitlab_redis_multi_store_method_missing_total` | Prometheus Counter | `command`, `instance_name` | Client side Redis MultiStore method missing total |
## Step 4: Clean up after the migration
<!-- markdownlint-disable MD044 -->
We may choose to keep the migration paths or remove them, depending on whether
or not we expect GitLab Self-Managed instances to perform this migration.
[gitlab-com/gl-infra/scalability#1131](https://gitlab.com/gitlab-com/gl-infra/scalability/-/issues/1131#note_603354746)
contains a discussion on this topic for the trace chunks feature flag. It may
be - as in that case - that we decide that the maintenance costs of supporting
the migration code are higher than the benefits of allowing self-managed
instances to perform this migration seamlessly, if we expect self-managed
instances to cope without this functional partition.
<!-- markdownlint-enable MD044 -->
If we decide to keep the migration code:
- We should document the migration steps.
- If we used a feature flag, we should ensure it's an
[ops type feature flag](../feature_flags/_index.md#ops-type), as these are long-lived flags.
Otherwise, we can remove the flags and conclude the project.
|
---
stage: none
group: unassigned
info: Any user with at least the Maintainer role can merge updates to this content.
For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Add a new Redis instance
breadcrumbs:
- doc
- development
- redis
---
GitLab can make use of multiple [Redis instances](../redis.md#redis-instances).
These instances are functionally partitioned so that, for example, we
can store [CI trace chunks](../../administration/cicd/job_logs.md#incremental-logging)
from one Redis instance while storing sessions in another.
From time to time we might want to add a new Redis instance. Typically this will
be a functional partition split from one of the existing instances such as the
cache or shared state. This document describes an approach
for adding a new Redis instance that handles existing data, based on
prior examples:
- [Dedicated Redis instance for Trace Chunk storage](https://gitlab.com/groups/gitlab-com/gl-infra/-/epics/462).
- [Create dedicated Redis instance for Rate Limiting data](https://gitlab.com/groups/gitlab-com/gl-infra/-/epics/526).
This document does not cover the operational side of preparing and configuring
the new Redis instance in detail, but the example epics do contain information
on previous approaches to this.
## Step 1: Support configuring the new instance
Before we can switch any features to using the new instance, we have to support
configuring it and referring to it in the codebase. We must support the
main installation types:
- Self-compiled installations (including development environments) - [example MR](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/62767)
- Linux package installations - [example MR](https://gitlab.com/gitlab-org/omnibus-gitlab/-/merge_requests/5316)
- Helm charts - [example MR](https://gitlab.com/gitlab-org/charts/gitlab/-/merge_requests/2031)
### Fallback instance
In the application code, we need to define a fallback instance in case the new
instance is not configured. For example, if a GitLab instance has already
configured a separate shared state Redis, and we are partitioning data from the
shared state Redis, our new instance's configuration should default to that of
the shared state Redis when it's not present. Otherwise we could break instances
that don't configure the new Redis instance as soon as it's available.
You can [define a `.config_fallback` method](https://gitlab.com/gitlab-org/gitlab/-/blob/a75471dd744678f1a59eeb99f71fca577b155acd/lib/gitlab/redis/wrapper.rb#L69-87)
in `Gitlab::Redis::Wrapper` (the base class for all Redis instances)
that defines the instance to be used if this one is not configured. If we were
adding a `Foo` instance that should fall back to `SharedState`, we can do that
like this:
```ruby
module Gitlab
module Redis
class Foo < ::Gitlab::Redis::Wrapper
# The data we store on Foo used to be stored on SharedState.
def self.config_fallback
SharedState
end
end
end
end
```
We should also add specs like those in
[`trace_chunks_spec.rb`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/spec/lib/gitlab/redis/trace_chunks_spec.rb)
to ensure that this fallback works correctly.
## Step 2: Support writing to and reading from the new instance
When migrating to the new instance, we must account for cases where data is
either on:
- The 'old' (original) instance.
- The new one that we have just added support for.
As a result we may need to support reading from and writing to both
instances, depending on some condition.
The exact condition to use varies depending on the data to be migrated. For
the trace chunks case above, there was already a database column indicating where the
data was stored (as there are storage options other than Redis).
This step may not apply if the data has a very short lifetime (a few minutes at most)
and is not critical. In that case, we
may decide that it is OK to incur a small amount of data loss and switch
over through configuration only.
If there is not a more natural way to mark where the data is stored, using a
[feature flag](../feature_flags/_index.md) may be convenient:
- It does not require an application restart to take effect.
- It applies to all application instances (Sidekiq, API, web, etc.) at
the same time.
- It supports incremental rollout - ideally by actor (project, group,
user, etc.) - so that we can monitor for errors and roll back easily.
## Step 3: Migrate the data
We then need to configure the new instance for GitLab.com's production and
staging environments. Hopefully it will be possible to test this change
effectively on staging, to at least make sure that basic usage continues to
work.
After that is done, we can roll out the change to production. Ideally this would
be in an incremental fashion, following the
[standard incremental rollout](../feature_flags/controls.md#rolling-out-changes)
documentation for feature flags.
When we have been using the new instance 100% of the time in production for a
while and there are no issues, we can proceed.
### Proposed solution: Migrate data by using MultiStore with the fallback strategy
We need a way to migrate users to a new Redis store without causing any inconvenience from a UX perspective.
We also want the ability to fall back to the "old" Redis instance if something goes wrong with the new instance.
Migration Requirements:
- No downtime.
- No loss of stored data until the TTL for storing data expires.
- Partial rollout using feature flags or ENV vars or combinations of both.
- Monitoring of the switch.
- Prometheus metrics in place.
- Easy rollback without downtime in case the new instance or logic does not behave as expected.
It is somewhat similar to the zero-downtime DB table rename.
We need to write data into both Redis instances (old + new).
We read from the new instance, but we need to fall back to the old instance when a read from the new dedicated Redis instance fails.
We need to log any issues or exceptions with the new instance, but still fall back to the old instance.
The proposed migration strategy is to implement and use the [MultiStore](https://gitlab.com/gitlab-org/gitlab/-/blob/fcc42e80ed261a862ee6ca46b182eee293ae60b6/lib/gitlab/redis/multi_store.rb).
We used this approach when [adding a new dedicated Redis instance for session keys](https://gitlab.com/groups/gitlab-com/gl-infra/-/epics/579).
MultiStore also comes with corresponding [specs](https://gitlab.com/gitlab-org/gitlab/-/blob/master/spec/lib/gitlab/redis/multi_store_spec.rb).
The MultiStore looks like a `redis-rb ::Redis` instance.
In the new Redis instance class you added in [Step 1](#step-1-support-configuring-the-new-instance), inherit from `::Gitlab::Redis::MultiStoreWrapper` instead and override the `multistore` class method to define the MultiStore.
```ruby
module Gitlab
module Redis
class Foo < ::Gitlab::Redis::MultiStoreWrapper
...
def self.multistore
MultiStore.create_using_pool(self.pool, config_fallback.pool, store_name)
end
end
end
end
```
MultiStore is initialized by providing the new Redis connection pool as the primary pool, and the [old (fallback-instance) connection pool](#fallback-instance) as the secondary pool.
The third argument is `store_name`, which is used for logs, metrics, and feature flag names, in case we use the MultiStore implementation for different Redis stores at the same time.
By default, the MultiStore reads and writes only from the default Redis store.
The default Redis store is `secondary_store` (the old fallback-instance).
This allows us to introduce MultiStore without changing the default behavior.
MultiStore uses two feature flags to control the actual migration:
- `use_primary_and_secondary_stores_for_[store_name]`
- `use_primary_store_as_default_for_[store_name]`
For example, if our new Redis instance is called `Gitlab::Redis::Foo`, we can [create](../feature_flags/_index.md#create-a-new-feature-flag) two feature flags by executing:
```shell
bin/feature-flag use_primary_and_secondary_stores_for_foo
bin/feature-flag use_primary_store_as_default_for_foo
```
By enabling the `use_primary_and_secondary_stores_for_foo` feature flag, our `Gitlab::Redis::Foo` uses `MultiStore` to write to both the new Redis instance
and the [old (fallback-instance)](#fallback-instance). All read commands are performed only on the default store, which is controlled using the
`use_primary_store_as_default_for_foo` feature flag. By enabling the `use_primary_store_as_default_for_foo` feature flag,
the `MultiStore` uses `primary_store` (the new instance) as the default Redis store.
For `pipelined` commands (`pipelined` and `multi`), we execute the entire operation in both stores and then compare the results. If they differ, we emit a
`Gitlab::Redis::MultiStore:PipelinedDiffError` error, and track it in the `gitlab_redis_multi_store_pipelined_diff_error_total` Prometheus counter.
After allowing some time for the new store to be populated, we can perform external validation to compare the state of both stores.
If the validation results are satisfactory, it should be safe to move the traffic to the new Redis store by disabling the `use_primary_and_secondary_stores_for_foo` feature flag.
This allows the MultiStore to read and write only from the primary Redis store (the new store), moving all traffic to the new Redis store.
Once we have moved all our traffic to the primary store, the data migration is complete.
We can then safely remove the MultiStore implementation and continue to use the newly introduced Redis store instance.
#### Implementation details
MultiStore implements read and write Redis commands separately.
##### Read commands
Read commands are defined in the [`Gitlab::Redis::MultiStore::READ_COMMANDS` constant](https://gitlab.com/gitlab-org/gitlab/-/blob/c991bac5b1d67355ad4ac1d975ace6c2a052e1b4/lib/gitlab/redis/multi_store.rb#L56).
##### Write commands
Write commands are defined in the [`Gitlab::Redis::MultiStore::WRITE_COMMANDS` constant](https://gitlab.com/gitlab-org/gitlab/-/blob/c991bac5b1d67355ad4ac1d975ace6c2a052e1b4/lib/gitlab/redis/multi_store.rb#L91).
##### `pipelined` commands
{{< alert type="note" >}}
The Ruby block passed to these commands is executed twice, once for each store.
Thus, excluding the Redis operations performed, the block should be idempotent.
{{< /alert >}}
- `pipelined`
- `multi`
When a command outside of the supported list is used, `method_missing` passes it to the old Redis instance and keeps track of it.
This ensures that anything unexpected behaves like it did before. In development and test environments, an error is raised for early
detection.
{{< alert type="note" >}}
By tracking the `gitlab_redis_multi_store_method_missing_total` counter and `Gitlab::Redis::MultiStore::MethodMissingError` occurrences,
a developer can identify the missing Redis commands that need an implementation before proceeding with the migration.
{{< /alert >}}
{{< alert type="note" >}}
Variable assignments within `pipelined` and `multi` blocks are not advised as the block should be idempotent. Refer to the [corrective fix MR](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/137734) removing non-idempotent blocks which previously led to incorrect application behavior during a migration.
{{< /alert >}}
##### Errors
| error | message |
|---------------------------------------------------|---------------------------------------------------------------------------------------------|
| `Gitlab::Redis::MultiStore::PipelinedDiffError` | `pipelined` command executed on both stores successfully but results differ between them. |
| `Gitlab::Redis::MultiStore::MethodMissingError` | Method missing. Falling back to execute method on the Redis secondary store. |
##### Metrics
| Metrics name | Type | Labels | Description |
|-------------------------------------------------------|--------------------|----------------------------|----------------------------------------------------------|
| `gitlab_redis_multi_store_pipelined_diff_error_total` | Prometheus Counter | `command`, `instance_name` | Redis MultiStore `pipelined` command diff between stores |
| `gitlab_redis_multi_store_method_missing_total` | Prometheus Counter | `command`, `instance_name` | Client side Redis MultiStore method missing total |
## Step 4: Clean up after the migration
<!-- markdownlint-disable MD044 -->
We may choose to keep the migration paths or remove them, depending on whether
or not we expect GitLab Self-Managed instances to perform this migration.
[gitlab-com/gl-infra/scalability#1131](https://gitlab.com/gitlab-com/gl-infra/scalability/-/issues/1131#note_603354746)
contains a discussion on this topic for the trace chunks feature flag. It may
be - as in that case - that we decide that the maintenance costs of supporting
the migration code are higher than the benefits of allowing self-managed
instances to perform this migration seamlessly, if we expect self-managed
instances to cope without this functional partition.
<!-- markdownlint-enable MD044 -->
If we decide to keep the migration code:
- We should document the migration steps.
- If we used a feature flag, we should ensure it's an
[ops type feature flag](../feature_flags/_index.md#ops-type), as these are long-lived flags.
Otherwise, we can remove the flags and conclude the project.
|
https://docs.gitlab.com/development/testing_and_validation
|
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/development/testing_and_validation.md
|
2025-08-13
|
doc/development/ai_features
|
[
"doc",
"development",
"ai_features"
] |
testing_and_validation.md
| null | null | null | null | null |
<!-- markdownlint-disable -->
This document was moved to [another location](ai_evaluation_guidelines.md).
<!-- This redirect file can be deleted after <2025-10-30>. -->
<!-- Redirects that point to other docs in the same project expire in three months. -->
<!-- Redirects that point to docs in a different project or site (for example, link is not relative and starts with `https:`) expire in one year. -->
<!-- Before deletion, see: https://docs.gitlab.com/development/documentation/redirects -->
|
---
redirect_to: ai_evaluation_guidelines.md
remove_date: '2025-10-30'
breadcrumbs:
- doc
- development
- ai_features
---
<!-- markdownlint-disable -->
This document was moved to [another location](ai_evaluation_guidelines.md).
<!-- This redirect file can be deleted after <2025-10-30>. -->
<!-- Redirects that point to other docs in the same project expire in three months. -->
<!-- Redirects that point to docs in a different project or site (for example, link is not relative and starts with `https:`) expire in one year. -->
<!-- Before deletion, see: https://docs.gitlab.com/development/documentation/redirects -->
|
https://docs.gitlab.com/development/ai_development_license
|
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/development/ai_development_license.md
|
2025-08-13
|
doc/development/ai_features
|
[
"doc",
"development",
"ai_features"
] |
ai_development_license.md
|
AI-powered
|
AI Framework
|
Any user with at least the Maintainer role can merge updates to this content. For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
|
GitLab Duo licensing for local development
|
Documentation about GitLab Duo licensing options for local development
|
To use GitLab Duo features, you need to:
- Use GitLab Enterprise Edition
- Have an online cloud license
- Have either a Premium or Ultimate subscription plan
- Have one of the Duo add-ons in addition to your license plan (Duo Core, Duo Pro, or Duo Enterprise)
This document walks you through how to ensure these requirements are met for your GDK.
## Set up GitLab Team Member License for GDK
**Why**: Cloud licenses are mandatory for our cloud-connected Duo features for
GitLab Self-Managed and Dedicated customers. As opposed to "legacy" GitLab
licenses, cloud licenses require internet connectivity to validate with
`customers.gitlab.com` (CustomersDot). GitLab periodically checks license
validity, and provides automatic updates to subscription changes through
CustomersDot.
GitLab Duo is available to Premium and Ultimate customers only. You likely want
an Ultimate license for your GDK. Ultimate gets you access to all GitLab Duo
features. Premium gets access to only a subset of GitLab Duo features.
**How**:
1. Follow [the process to obtain an Ultimate license](https://handbook.gitlab.com/handbook/engineering/developer-onboarding/#working-on-gitlab-ee-developer-licenses)
for your local instance. Start with a GitLab Self-Managed Ultimate license. After you have a GitLab Self-Managed license configured, you can always [simulate a SaaS instance](../ee_features.md#simulate-a-saas-instance) and assign individual groups Premium and Ultimate licenses in the admin panel.
1. [Upload your license activation code](../../administration/license.md#activate-gitlab-ee)
1. [Set environment variables](https://gitlab.com/gitlab-org/gitlab-development-kit/-/blob/main/doc/contributing/runit.md#using-environment-variables) in GDK:
```shell
export GITLAB_LICENSE_MODE=test
export CUSTOMER_PORTAL_URL=https://customers.staging.gitlab.com
export CLOUD_CONNECTOR_SELF_SIGN_TOKENS=1
```
## (Alternatively) Connect to staging AI Gateway
Developers may also choose to connect their local GitLab instance to the staging AI Gateway instance.
To connect to the staging AI Gateway, configure it through the Admin UI. This option is only available with an Ultimate license and an active Duo Enterprise add-on:
1. Go to **Admin Area** > **Settings** > **GitLab Duo** > **Self-hosted models**
1. Set the **AI Gateway URL** to `https://cloud.staging.gitlab.com/ai`
1. Select **Save changes**
Alternatively, you can set the AI gateway URL in a Rails console (useful when you don't have access to the Admin UI):
```ruby
Ai::Setting.instance.update!(ai_gateway_url: 'https://cloud.staging.gitlab.com/ai')
```
- Restart your GDK.
- Inside your GDK, navigate to **Admin area** > **GitLab Duo Pro** (`/admin/code_suggestions`).
- Filter users to find `root` and use the toggle to assign a GitLab Duo Pro add-on seat to the root user.
## Troubleshooting
If you're having issues with your Duo license setup:
- Run the [Duo health check](../../administration/gitlab_duo/setup.md#run-a-health-check-for-gitlab-duo) to identify specific issues. Note that if you have Duo licenses that were generated from a setup script locally, this will show "Cloud Connector access token is missing" but that is OK.
- Verify your license is active by checking the Admin Area
- Ensure your user has a Duo seat assigned. The GDK setup scripts assign a Duo
seat to the `root` user only. If you want to test with other users, make sure
to [assign them a seat](../../subscriptions/subscription-add-ons.md#assign-gitlab-duo-seats).
- To more deeply debug why the root user cannot access a feature like Duo Chat, you can run `GlobalPolicy.new(User.first, User.first).debug(:access_duo_chat)`. This [Declarative Policy debug output](../policies.md#scores-order-performance) will help you dive into the specific access logic for more granular debugging.
- Check logs for any authentication or license validation errors
- For cloud license issues, reach out to `#s_fulfillment_engineering` in Slack
- For AI Gateway connection issues, reach out to `#g_ai_framework` in Slack
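To confirm the basics from a Rails console (`gdk rails c`), something like the following can help; the exact fields shown are illustrative.

```ruby
# Quick license sanity check (illustrative).
license = License.current
puts license&.plan     # expect "ultimate" or "premium"
puts license&.expired? # expect false
```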
## Best Practices
- **Test in both environments**: For thorough testing, consider alternating between multi-tenant and single-tenant setups to ensure your feature works well in both environments.
- **Consult domain documentation**: Review specific feature documentation to understand if there are any environment-specific behaviors you need to consider.
- **Consider end-user context**: Remember that features should work well for both GitLab.com users and self-managed/dedicated customers.
## Additional resources
- [AI Features Documentation](_index.md)
- [Code Suggestions Development](code_suggestions.md)
- [Duo Enterprise License Access Process for Staging Environment](https://gitlab.com/gitlab-com/runbooks/-/blob/master/docs/duo/duo_license.md)
## Setting up GitLab Duo for your Staging GitLab.com user account
When working in staging environments, you may need to set up Duo add-ons for your `staging.gitlab.com` account.
### Duo Pro
1. Have your account ready at <https://staging.gitlab.com>.
1. [Create a new group](../../user/group/_index.md#create-a-group) or use an existing one as the namespace which will receive the Duo Pro access.
1. Go to **Settings > Billing**.
1. Initiate the purchase flow for the Ultimate plan by clicking on `Upgrade to Ultimate`.
1. After being redirected to <https://customers.staging.gitlab.com>, click on `Continue with your Gitlab.com account`.
1. Purchase the SaaS Ultimate subscription using [a test credit card](https://gitlab.com/gitlab-org/customers-gitlab-com#testing-credit-card-information).
1. Find the newly purchased subscription card, and select from the three dots menu the option `Buy GitLab Duo Pro`.
1. Purchase the GitLab Duo Pro add-on using the same test credit card from the above steps.
1. Go back to <https://staging.gitlab.com> and verify that your group has access to Duo Pro by navigating to `Settings > GitLab Duo` and managing seats.
### Duo Enterprise
**Internal use only**: Purchasing a license for Duo Enterprise is not
self-serviceable; post a request in the `#g_provision` Slack channel to grant
your staging account a Duo Enterprise license.
|
---
stage: AI-powered
group: AI Framework
info: Any user with at least the Maintainer role can merge updates to this content.
For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
description: Documentation about GitLab Duo licensing options for local development
title: GitLab Duo licensing for local development
breadcrumbs:
- doc
- development
- ai_features
---
To use GitLab Duo features, you need to:
- Use GitLab Enterprise Edition
- Have an online cloud license
- Have either a Premium or Ultimate subscription plan
- Have one of the Duo add-ons in addition to your license plan (Duo Core, Duo Pro, or Duo Enterprise)
This document walks you through how to ensure these requirements are met for your GDK.
## Set up GitLab Team Member License for GDK
**Why**: Cloud licenses are mandatory for our cloud-connected Duo features for
GitLab Self-Managed and Dedicated customers. As opposed to "legacy" GitLab
licenses, cloud licenses require internet connectivity to validate with
`customers.gitlab.com` (CustomersDot). GitLab periodically checks license
validity, and provides automatic updates to subscription changes through
CustomersDot.
GitLab Duo is available to Premium and Ultimate customers only. You likely want
an Ultimate license for your GDK. Ultimate gets you access to all GitLab Duo
features. Premium gets access to only a subset of GitLab Duo features.
**How**:
1. Follow [the process to obtain an Ultimate license](https://handbook.gitlab.com/handbook/engineering/developer-onboarding/#working-on-gitlab-ee-developer-licenses)
for your local instance. Start with a GitLab Self-Managed Ultimate license. After you have a GitLab Self-Managed license configured, you can always [simulate a SaaS instance](../ee_features.md#simulate-a-saas-instance) and assign individual groups Premium and Ultimate licenses in the admin panel.
1. [Upload your license activation code](../../administration/license.md#activate-gitlab-ee)
1. [Set environment variables](https://gitlab.com/gitlab-org/gitlab-development-kit/-/blob/main/doc/contributing/runit.md#using-environment-variables) in GDK:
```shell
export GITLAB_LICENSE_MODE=test
export CUSTOMER_PORTAL_URL=https://customers.staging.gitlab.com
export CLOUD_CONNECTOR_SELF_SIGN_TOKENS=1
```
## (Alternatively) Connect to staging AI Gateway
Developers may also choose to connect their local GitLab instance to the staging AI Gateway instance.
To connect to the staging AI Gateway, configure it through the Admin UI. This option is only available with an Ultimate license and an active Duo Enterprise add-on:
1. Go to **Admin Area** > **Settings** > **GitLab Duo** > **Self-hosted models**
1. Set the **AI Gateway URL** to `https://cloud.staging.gitlab.com/ai`
1. Select **Save changes**
Alternatively, you can set the AI gateway URL in a Rails console (useful when you don't have access to the Admin UI):
```ruby
Ai::Setting.instance.update!(ai_gateway_url: 'https://cloud.staging.gitlab.com/ai')
```
- Restart your GDK.
- Inside your GDK, navigate to **Admin area** > **GitLab Duo Pro** (`/admin/code_suggestions`).
- Filter users to find `root` and use the toggle to assign a GitLab Duo Pro add-on seat to the root user.
## Troubleshooting
If you're having issues with your Duo license setup:
- Run the [Duo health check](../../administration/gitlab_duo/setup.md#run-a-health-check-for-gitlab-duo) to identify specific issues. Note that if you have Duo licenses that were generated from a setup script locally, this will show "Cloud Connector access token is missing" but that is OK.
- Verify your license is active by checking the Admin Area
- Ensure your user has a Duo seat assigned. The GDK setup scripts assign a Duo
seat to the `root` user only. If you want to test with other users, make sure
to [assign them a seat](../../subscriptions/subscription-add-ons.md#assign-gitlab-duo-seats).
- To more deeply debug why the root user cannot access a feature like Duo Chat, you can run `GlobalPolicy.new(User.first, User.first).debug(:access_duo_chat)`. This [Declarative Policy debug output](../policies.md#scores-order-performance) will help you dive into the specific access logic for more granular debugging.
- Check logs for any authentication or license validation errors
- For cloud license issues, reach out to `#s_fulfillment_engineering` in Slack
- For AI Gateway connection issues, reach out to `#g_ai_framework` in Slack
## Best Practices
- **Test in both environments**: For thorough testing, consider alternating between multi-tenant and single-tenant setups to ensure your feature works well in both environments.
- **Consult domain documentation**: Review specific feature documentation to understand if there are any environment-specific behaviors you need to consider.
- **Consider end-user context**: Remember that features should work well for both GitLab.com users and self-managed/dedicated customers.
## Additional resources
- [AI Features Documentation](_index.md)
- [Code Suggestions Development](code_suggestions.md)
- [Duo Enterprise License Access Process for Staging Environment](https://gitlab.com/gitlab-com/runbooks/-/blob/master/docs/duo/duo_license.md)
## Setting up GitLab Duo for your Staging GitLab.com user account
When working in staging environments, you may need to set up Duo add-ons for your `staging.gitlab.com` account.
### Duo Pro
1. Have your account ready at <https://staging.gitlab.com>.
1. [Create a new group](../../user/group/_index.md#create-a-group) or use an existing one as the namespace which will receive the Duo Pro access.
1. Go to **Settings > Billing**.
1. Initiate the purchase flow for the Ultimate plan by selecting **Upgrade to Ultimate**.
1. After being redirected to <https://customers.staging.gitlab.com>, select **Continue with your GitLab.com account**.
1. Purchase the SaaS Ultimate subscription using [a test credit card](https://gitlab.com/gitlab-org/customers-gitlab-com#testing-credit-card-information).
1. Find the newly purchased subscription card and, from the vertical ellipsis menu, select **Buy GitLab Duo Pro**.
1. Purchase the GitLab Duo Pro add-on using the same test credit card from the steps above.
1. Go back to <https://staging.gitlab.com> and verify that your group has access to Duo Pro by going to **Settings > GitLab Duo** and managing seats.
### Duo Enterprise
**Internal use only**: Purchasing a license for Duo Enterprise is not
self-serviceable; post a request in the `#g_provision` Slack channel to grant
your staging account a Duo Enterprise license.
---
stage: AI-powered
group: AI Framework
info: Any user with at least the Maintainer role can merge updates to this content.
  For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: AI actions
---
This page includes how to implement actions and migrate them to the AI Gateway.
## How to implement a new action
Implementing a new AI action will require changes across different components.
We'll use the example of wanting to implement an action that allows users to rewrite issue descriptions according to
a given prompt.
### 1. Add your action to the Cloud Connector feature list
The Cloud Connector configuration stores the permissions needed to access your service, as well as additional metadata.
If there's no entry for your feature, [add the feature as a Cloud Connector unit primitive](../cloud_connector/_index.md#register-new-feature-for-gitlab-self-managed-dedicated-and-gitlabcom-customers).
For more information, see [Cloud Connector: Configuration](../cloud_connector/configuration.md).
### 2. Create a prompt definition in the AI gateway
In [the AI gateway project](https://gitlab.com/gitlab-org/modelops/applied-ml/code-suggestions/ai-assist), create a
new prompt definition under `ai_gateway/prompts/definitions` at the path `[ai-action]/base/[prompt-version].yml`
(see [Prompt versioning conventions](#appendix-a-prompt-versioning-conventions)).
Specify the model and provider you wish to use, and the prompts that
will be fed to the model. You can specify inputs to be plugged into the prompt by using `{}`.
```yaml
# ai_gateway/prompts/definitions/rewrite_description/base/1.0.0.yml
name: Description rewriter
model:
config_file: conversation_performant
params:
model_class_provider: anthropic
prompt_template:
system: |
You are a helpful assistant that rewrites the description of resources. You'll be given the current description, and a prompt on how you should rewrite it. Reply only with your rewritten description.
<description>{description}</description>
<prompt>{prompt}</prompt>
```
When an AI action uses multiple prompts, the definitions can be organized in a tree structure in the form
`[ai-action]/[prompt-name]/base/[prompt-version].yml`:
```yaml
# ai_gateway/prompts/definitions/code_suggestions/generations/base/1.0.0.yml
name: Code generations
model:
config_file: conversation_performant
params:
model_class_provider: anthropic
...
```
To specify prompts for multiple models, use the name of the model in the path for the definition:
```yaml
# ai_gateway/prompts/definitions/code_suggestions/generations/mistral/1.0.0.yml
name: Code generations
model:
name: mistral
params:
model_class_provider: litellm
...
```
### 3. Create a Completion class
1. Create a new completion under `ee/lib/gitlab/llm/ai_gateway/completions/` and inherit it from the `Base`
AI gateway Completion.
```ruby
# ee/lib/gitlab/llm/ai_gateway/completions/rewrite_description.rb
module Gitlab
module Llm
module AiGateway
module Completions
class RewriteDescription < Base
extend ::Gitlab::Utils::Override
override :inputs
def inputs
{ description: resource.description, prompt: prompt_message.content }
end
end
end
end
end
end
```
### 4. Create a Service
1. Create a new service under `ee/app/services/llm/` and inherit it from the `BaseService`.
1. The `resource` is the object we want to act on. It can be any object that includes the `Ai::Model` concern. For example it could be a `Project`, `MergeRequest`, or `Issue`.
```ruby
# ee/app/services/llm/rewrite_description_service.rb
module Llm
class RewriteDescriptionService < BaseService
extend ::Gitlab::Utils::Override
override :valid?
def valid?
super &&
# You can restrict which type of resources your service applies to
resource.to_ability_name == "issue" &&
# Always check that the user is allowed to perform this action on the resource
Ability.allowed?(user, :rewrite_description, resource)
end
private
def perform
schedule_completion_worker
end
end
end
```
### 5. Register the feature in the catalogue
Go to `Gitlab::Llm::Utils::AiFeaturesCatalogue` and add a new entry for your AI action.
```ruby
class AiFeaturesCatalogue
LIST = {
# ...
rewrite_description: {
service_class: ::Gitlab::Llm::AiGateway::Completions::RewriteDescription,
feature_category: :ai_abstraction_layer,
execute_method: ::Llm::RewriteDescriptionService,
maturity: :experimental,
self_managed: false,
internal: false
}
}.freeze
```
### 6. Add a default prompt version query
Go to `Gitlab::Llm::PromptVersions` and add an entry for your AI action with a query that includes your desired prompt
version (for new features this will usually be `^1.0.0`, see [Prompt version resolution](#prompt-version-resolution)):
```ruby
class PromptVersions
class << self
VERSIONS = {
# ...
"rewrite_description/base": "^1.0.0"
```
## Updating an AI action
To make changes to the template, model, or parameters of an AI feature, create a new YAML version file in the AI Gateway:
```yaml
# ai_gateway/prompts/definitions/rewrite_description/base/1.0.1.yml
name: Description rewriter with Claude 3.5
model:
name: claude-3-5-sonnet-20240620
params:
model_class_provider: anthropic
prompt_template:
system: |
You are a helpful assistant that rewrites the description of resources. You'll be given the current description, and a prompt on how you should rewrite it. Reply only with your rewritten description.
<description>{description}</description>
<prompt>{prompt}</prompt>
```
### Incremental rollout of prompt versions
Once a stable prompt version is added to the AI Gateway, it should not be altered. You can create a mutable version of a
prompt by adding a pre-release suffix to the file name (for example, `1.0.1-dev.yml`). This also prevents it from being
automatically served to clients. You can then use a feature flag to control the rollout of this new version. For GitLab
Duo Self-hosted, forced versions are ignored, and only versions defined in `PromptVersions` are used. This avoids
mistakenly enabling versions for models that don't have that specified version.
If your AI action is implemented as a subclass of `AiGateway::Completions::Base`, you can achieve this by overriding the prompt
version in your subclass:
```ruby
# ee/lib/gitlab/llm/ai_gateway/completions/rewrite_description.rb
module Gitlab
module Llm
module AiGateway
module Completions
class RewriteDescription < Base
extend ::Gitlab::Utils::Override
override :prompt_version
def prompt_version
'1.0.1-dev' if Feature.enabled?(:my_feature_flag) # You can also scope it to `user` or `resource`, as appropriate
end
# ...
```
Once you are ready to make this version stable and start auto-serving it to compatible clients, simply rename the YAML
definition file to remove the pre-release suffix, and remove the `prompt_version` override.
## How to migrate an existing action to the AI gateway
AI actions were initially implemented inside the GitLab monolith. As part of our
[AI gateway as the Sole Access Point for Monolith to Access Models Epic](https://gitlab.com/groups/gitlab-org/-/epics/13024)
we're migrating prompts, model selection and model parameters into the AI gateway. This will increase the speed at which
we can deliver improvements to users on GitLab Self-Managed, by decoupling prompt and model changes from monolith releases. To
migrate an existing action:
1. Follow steps 1 through 3 on [How to implement a new action](#how-to-implement-a-new-action).
1. Modify the entry for your AI action in the catalogue to list the new completion class as the `aigw_service_class`.
```ruby
class AiFeaturesCatalogue
LIST = {
# ...
generate_description: {
service_class: ::Gitlab::Llm::Anthropic::Completions::GenerateDescription,
aigw_service_class: ::Gitlab::Llm::AiGateway::Completions::GenerateDescription,
prompt_class: ::Gitlab::Llm::Templates::GenerateDescription,
feature_category: :ai_abstraction_layer,
execute_method: ::Llm::GenerateDescriptionService,
maturity: :experimental,
self_managed: false,
internal: false
},
# ...
}.freeze
```
1. Create a `prompt_migration_#{feature_name}` feature flag (for example, `prompt_migration_generate_description`).
When the feature flag is enabled, the `aigw_service_class` is used to process the AI action.
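For local testing, you can toggle the migration flag from a Rails console. This is a minimal sketch; the flag name follows the `prompt_migration_#{feature_name}` pattern described above, and `generate_description` is used here only as an example.

```ruby
# In a Rails console (`gdk rails c` or `rails console`):
Feature.enable(:prompt_migration_generate_description)   # route the action through aigw_service_class
Feature.disable(:prompt_migration_generate_description)  # fall back to the legacy service_class
```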
Once you've validated the correct functioning of your action, you can remove the `aigw_service_class` key and replace
the `service_class` with the new `AiGateway::Completions` class to make it the permanent provider.
For a complete example of the changes needed to migrate an AI action, see the following MRs:
- [Changes to the GitLab Rails monolith](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/152429)
- [Changes to the AI gateway](https://gitlab.com/gitlab-org/modelops/applied-ml/code-suggestions/ai-assist/-/merge_requests/921)
### Authorization in GitLab-Rails
We recommend using [policies](../policies.md) to handle authorization for a feature; a minimal policy sketch follows the examples below. Currently, we need to make sure the following checks are covered:
Some basic authorization is included in the Abstraction Layer classes that are base classes for more specialized classes.
What needs to be included in the code:
1. Check for feature flag compatibility: `Gitlab::Llm::Utils::FlagChecker.flag_enabled_for_feature?(ai_action)` - included in the `Llm::BaseService` class.
1. Check if resource is authorized: `Gitlab::Llm::Utils::Authorizer.resource(resource: resource, user: user).allowed?` - also included in the `Llm::BaseService` class.
1. Both of those checks are included in the `::Gitlab::Llm::FeatureAuthorizer.new(container: subject_container, feature_name: action_name).allowed?`
1. Access to AI features depends on several factors, such as their maturity, whether they are enabled on GitLab Self-Managed, and whether they are bundled within an add-on.
- [Example](https://gitlab.com/gitlab-org/gitlab/-/blob/master/ee/app/policies/ee/global_policy.rb#L222-222) of policy not connected to the particular resource.
- [Example](https://gitlab.com/gitlab-org/gitlab/-/blob/master/ee/app/policies/ee/issue_policy.rb#L25-25) of policy connected to the particular resource.
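As a rough illustration of the resource-connected case, the sketch below shows how the `:rewrite_description` ability used in the service example could be enabled through a Declarative Policy rule. The module, condition, and ability wiring here is a hypothetical excerpt, not the actual policy code; see the linked examples for the real patterns.

```ruby
# ee/app/policies/ee/issue_policy.rb (hypothetical excerpt)
module EE
  module IssuePolicy
    extend ActiveSupport::Concern

    prepended do
      # Allow rewriting descriptions only for users who can already read the issue.
      rule { can?(:read_issue) }.enable :rewrite_description
    end
  end
end
```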
{{< alert type="note" >}}
For more information, see [the GitLab AI gateway documentation](https://gitlab.com/gitlab-org/gitlab-development-kit/-/blob/main/doc/howto/gitlab_ai_gateway.md#optional-enable-authentication-and-authorization-in-ai-gateway) about authentication and authorization in AI gateway.
{{< /alert >}}
If your Duo feature involves an autonomous agent, you should use
[composite identity](composite_identity.md) authorization.
### Pairing requests with responses
Because multiple users' requests can be processed in parallel, when receiving responses,
it can be difficult to pair a response with its original request. The `requestId`
field can be used for this purpose, because both the request and response are assured
to have the same `requestId` UUID.
### Caching
AI requests and responses can be cached. The cached conversation is used to
display the user's interaction with AI features. In the current implementation, this cache
is not used to skip consecutive calls to the AI service when a user repeats
their requests.
```graphql
query {
aiMessages {
nodes {
id
requestId
content
role
errors
timestamp
}
}
}
```
This cache is used for chat functionality. For other services, caching is
disabled. You can enable this for a service by using the `cache_response: true`
option.
Caching has the following limitations:
- Messages are stored in a Redis stream.
- There is a single stream of messages per user. This means that all services
currently share the same cache. If needed, this could be extended to multiple
streams per user (after checking with the infrastructure team that Redis can handle
the estimated amount of messages).
- Only the last 50 messages (requests + responses) are kept.
- The stream expires 3 days after the last message is added.
- Users can access only their own messages. There is no authorization at the caching
level; any authorization (for access by a user other than the current user) is expected at
the service layer.
### Check if feature is allowed for this resource based on namespace settings
There is one setting on the root namespace level that restricts the use of AI features:
- `experiment_features_enabled`
To check if that feature is allowed for a given namespace, call:
```ruby
Gitlab::Llm::StageCheck.available?(namespace, :name_of_the_feature)
```
Add the name of the feature to the `Gitlab::Llm::StageCheck` class. There are
arrays there that differentiate between experimental and beta features.
This way, we are ready for the following cases:
- If the feature is not in any array, the check returns `true`. For example, the feature is generally available.
To move the feature from the experimental phase to the beta phase, move the name of the feature from the `EXPERIMENTAL_FEATURES` array to the `BETA_FEATURES` array.
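The sketch below only illustrates the idea; the exact contents of these constants in `Gitlab::Llm::StageCheck` may differ, so check the class itself before editing it.

```ruby
# Illustrative only: the feature name is listed in the array matching its maturity.
EXPERIMENTAL_FEATURES = [:rewrite_description].freeze
BETA_FEATURES = [].freeze
```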
### Implement calls to AI APIs and the prompts
The `CompletionWorker` calls the `Completions::Factory`, which initializes the Service and executes the actual call to the API.
In our example, we will use VertexAI and implement two new classes:
```ruby
# /ee/lib/gitlab/llm/vertex_ai/completions/rewrite_description.rb
module Gitlab
module Llm
module VertexAi
module Completions
class RewriteDescription < Gitlab::Llm::Completions::Base
def execute
prompt = ai_prompt_class.new(options[:user_input]).to_prompt
response = Gitlab::Llm::VertexAi::Client.new(user, unit_primitive: 'rewrite_description').text(content: prompt)
response_modifier = ::Gitlab::Llm::VertexAi::ResponseModifiers::Predictions.new(response)
::Gitlab::Llm::GraphqlSubscriptionResponseService.new(
user, nil, response_modifier, options: response_options
).execute
end
end
end
end
end
end
```
```ruby
# /ee/lib/gitlab/llm/vertex_ai/templates/rewrite_description.rb
module Gitlab
module Llm
module VertexAi
module Templates
class RewriteDescription
def initialize(user_input)
@user_input = user_input
end
def to_prompt
<<~PROMPT
You are an assistant that writes code for the following context:
context: #{user_input}
PROMPT
end
end
end
end
end
end
```
Because we support multiple AI providers, you may also use those providers for
the same example:
```ruby
Gitlab::Llm::VertexAi::Client.new(user, unit_primitive: 'your_feature')
Gitlab::Llm::Anthropic::Client.new(user, unit_primitive: 'your_feature')
```
## Appendix A: Prompt versioning conventions
Prompt versions should adhere to [Semantic Versioning](https://semver.org/) standards: `MAJOR.MINOR.PATCH[-PRERELEASE]`.
- A change in the MAJOR component reflects changes that will break older versions of GitLab. For example, when the new
prompt must receive a new property that doesn't have a default: if this change were applied to all GitLab versions,
requests made from older versions would throw an error because that property is not present.
- A change in the MINOR component reflects feature additions that are still backwards compatible. For example,
suppose we want to use a new, more powerful model: requests from older versions of GitLab will still work.
- A change in the PATCH component reflects small bug fixes to prompts, like a typo.
The MAJOR component guarantees that older versions of GitLab will not break once a new change is added, without blocking
the evolution of our codebase. Changes in MINOR and PATCH are more subjective.
### Immutability of prompt versions
To guarantee traceability of changes, only prompts with a [pre-release version](https://semver.org/#spec-item-9) (for example, `1.0.1-dev.yml`)
may be changed once committed. Prompts defining a stable version are immutable, and changing them will trigger a pipeline failure.
### Using partials
To better organize the prompts, it is possible to use partials to split a prompt into smaller parts. Partials must also be
versioned. For example:
```yaml
# ai_gateway/prompts/definitions/rewrite_description/base/1.0.0.yml
name: Description rewriter
model:
config_file: conversation_performant
params:
model_class_provider: anthropic
prompt_template:
system: |
{% include 'rewrite_description/system/1.0.0.jinja' %}
user: |
{% include 'rewrite_description/user/1.0.0.jinja' %}
```
### Prompt version resolution
The AI Gateway fetches the latest available stable version that matches the prompt version query passed as an argument.
Queries follow [Poetry's version constraint rules](https://python-poetry.org/docs/dependency-specification/#version-constraints).
For example, if prompt `foo/bar` has the following versions:
- `1.0.1.yml`
- `1.1.0.yml`
- `1.5.0-dev.yml`
- `2.0.1.yml`
Then, if `/v1/prompts/foo/bar` is called with:
- `{'prompt_version': "^1.0.0"}`, prompt version `1.1.0.yml` will be selected.
- `{'prompt_version': "1.5.0-dev"}`, prompt version `1.5.0-dev.yml` will be selected.
- `{'prompt_version': "^2.0.0"}`, prompt version `2.0.1.yml` will be selected.
---
stage: AI-powered
group: AI Framework
info: Any user with at least the Maintainer role can merge updates to this content.
  For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Composite Identity
---
GitLab Duo with Amazon Q uses a [composite identity](../../user/gitlab_duo/security.md)
to authenticate requests.
For security reasons, you should use composite identity for any
AI-generated activity on the GitLab platform that performs write actions.
## Prerequisites
To generate a composite identity token, you must have:
1. A [service account user](../../user/profile/service_accounts.md) who can be the
primary token owner for the composite identity token.
1. Because service accounts
are only available on Premium and Ultimate instances, composite identity
only works on EE GitLab instances.
1. The service account user must have the `composite_identity_enforced` boolean
attribute set to `true`.
1. The OAuth application associated with the composite token must have a
[dynamic scope](https://github.com/doorkeeper-gem/doorkeeper/pull/1739) of
`user:*`. This scope is not available on the OAuth application web UI. As a
result, the OAuth application must be created programmatically (see the sketch after this list).
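Because the `user:*` dynamic scope cannot be set in the web UI, the application has to be created from code. The following Rails console sketch is an assumption-heavy illustration (the application name, redirect URI, and scope string are placeholders), not a verified recipe:

```ruby
# Rails console sketch: create the OAuth application with the dynamic `user:*` scope.
app = Doorkeeper::Application.create!(
  name: 'composite-identity-example',      # placeholder name
  redirect_uri: 'https://example.com/cb',  # placeholder; unused when tokens are generated manually
  scopes: 'api user:*'                     # includes the dynamic scope described above
)
```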
## How to generate a composite identity token
After you have met the requirements above, follow these steps to generate a
composite identity token. Only OAuth tokens are supported at present.
1. Because a service account is a bot user that cannot sign in, the typical
[authorization code flow](../../api/oauth2.md), which asks the user to
authorize access to their account in the browser, does not work.
1. If you are integrating with 3rd party services:
1. Manually generate an OAuth grant for the service account + OAuth app.
[Example](https://gitlab.com/gitlab-org/gitlab/-/blob/3665a013d3eca00d50cbac4d4aec3053bd5ca9b5/ee/app/services/ai/amazon_q/amazon_q_trigger_service.rb#L135-142)
of how we do this for Amazon Q.
Ensure that the grant's scopes include the `id` of the human user who
originated the AI request.
1. The OAuth grant can be exchanged for an OAuth access token using the standard
method of making a request to `'https://gitlab.example.com/oauth/token'`.
1. If you are not integrating with 3rd party services:
1. You can skip the access grant and manually generate an OAuth access token.
Ensure that the token's scopes contain the `id` of the human user who
originated the AI request.
1. The OAuth access token can be refreshed using the standard method of
making a request to `'https://gitlab.example.com/oauth/token'` (see the refresh sketch after this list).
1. The returned access token belongs to the service account but has `user:$ID`
in the scopes. The token can be refreshed like a standard OAuth access token.
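The refresh call is a standard OAuth 2.0 `refresh_token` grant against `/oauth/token`. A minimal sketch, assuming the client ID, client secret, and refresh token are available in environment variables:

```ruby
# Refresh the composite identity access token (standard OAuth 2.0 refresh_token grant).
require 'net/http'
require 'uri'
require 'json'

uri = URI('https://gitlab.example.com/oauth/token')
response = Net::HTTP.post_form(uri,
  'grant_type'    => 'refresh_token',
  'refresh_token' => ENV['REFRESH_TOKEN'],
  'client_id'     => ENV['CLIENT_ID'],
  'client_secret' => ENV['CLIENT_SECRET']
)

# The response contains the new access_token (still scoped to `user:$ID`) and refresh_token.
puts JSON.parse(response.body)
```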
Any API requests made with a composite identity token are automatically authorized
as composite identity requests. As a result, both the service account user and
the human user whose `id` is in the token scopes must have access to the
resource.
---
stage: AI-powered
group: AI Framework
info: Any user with at least the Maintainer role can merge updates to this content.
  For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Prompt Engineering Guide
---
This guide outlines the key aspects of prompt engineering when working with Large Language Models (LLMs),
including prompt design, optimization, evaluation, and monitoring.
## Understanding prompt engineering
<i class="fa fa-youtube-play youtube" aria-hidden="true"></i>
For an overview, see [this video](https://youtu.be/bOA6BtBaMTQ).
Most important takeaways:
- **Definition of a prompt:**
- An instruction sent to a language model to solve a task
- Forms the core of AI features in user interfaces
- **Importance of prompt quality:**
- Greatly influences the quality of the language model's response
- Iterating on prompts is crucial for optimal results
- **Key considerations when crafting prompts:**
- Understand the task you're asking the model to perform
- Know what kind of response you're expecting
- Prepare a dataset to test the prompts
- Be specific - provide lots of details and context to help the AI understand
- Give examples of potential questions and desired answers
- **Prompt universality:**
- Prompts are not universal across different language models
- When changing models, prompts need to be adjusted
- Consult the language model provider's documentation for specific tips
- Test new models before fully switching
- **Tools for working with prompts:**
- Anthropic Console: A platform for writing and testing prompts
- Generator Prompt: A tool that creates crafted prompts based on task descriptions
- **Prompt structure:**
- Typically includes a general task description
- Contains placeholders for input text
- May include specific instructions and suggested output formats
- Consider wrapping inputs in XML tags for better understanding and data extraction
- **System prompts:**
- Set the general tone and role for the AI
- Can improve the model's performance
- Usually placed at the beginning of the prompt
- Set the role for the language model
- **Best practices:**
- Invest time in understanding the assignment
- Use prompt generation tools as a starting point
- Test and iterate on prompts to improve results
- Use proper English grammar and syntax to help the AI understand
- Allow uncertainty - tell the AI to say "I don't know" if it is unsure
- Use positive phrasing - say what the AI should do, not what it shouldn't do
### Best practices for writing effective prompts
<i class="fa fa-youtube-play youtube" aria-hidden="true"></i>
For an overview, see [this video about writing effective prompts](https://youtu.be/xL-zj-Z4Mh4).
Here are the key takeaways from this video:
- **No universal "good" prompt:**
- The effectiveness of a prompt depends on the specific task.
- There's no one-size-fits-all approach to prompt writing.
- **Characteristics of effective prompts:**
- Clear and explanatory of the task and expected outcomes.
- Direct and detailed.
- Specific about the desired output.
- **Key elements to consider:**
- Understand the task, audience, and end goal.
- Explain these elements clearly in the prompt.
- **Strategies for improving prompt performance:**
- Add instructions in sequential steps.
- Include relevant examples.
- Ask the model to think in steps (chain of thought).
- Request reasoning before providing answers.
- Guide the input - use delimiters to clearly indicate where the user's input starts and ends.
- **Adapting to model preferences:**
- Adjust prompts to suit the preferred data structure of the model.
- For example, Anthropic models work well with XML tags.
- **Importance of system prompts:**
- Set the role for the language model.
- Placed at the beginning of the interaction.
- Can include awareness of tools or long context.
- **Iteration is crucial:**
- Emphasized as the most important part of working with prompts.
- Continual refinement leads to better results.
- Build quality control - automate testing prompts with RSpec or Rake tasks to catch differences.
- **Use traditional code:**
- If a task can be done efficiently outside of calling an LLM, use code for more reliable and deterministic outputs.
## Tuning and optimizing workflows for prompts
### Prompt tuning for LLMs using LangSmith and Anthropic Workbench together + CEF
#### Iterating on the prompt using Anthropic console
<i class="fa fa-youtube-play youtube" aria-hidden="true"></i>
For an overview, see [this video](https://youtu.be/03nOKxr8BS4).
#### Iterating on the prompt using LangSmith
<i class="fa fa-youtube-play youtube" aria-hidden="true"></i>
For an overview, see [this video](https://youtu.be/9WXT0licAdg).
#### Using Datasets for prompt tuning with LangSmith
<i class="fa fa-youtube-play youtube" aria-hidden="true"></i>
For an overview, see [this video](https://www.youtube.com/watch?v=kUnm0c2LMlQ).
#### Using automated evaluation in LangSmith
<i class="fa fa-youtube-play youtube" aria-hidden="true"></i>
For an overview, see [this video](https://youtu.be/MT6SK4y47Zw).
#### Using pairwise experiments in LangSmith
<i class="fa fa-youtube-play youtube" aria-hidden="true"></i>
For an overview, see [this video](https://youtu.be/mhpY7ddjXqc).
[View the CEF documentation](https://gitlab.com/gitlab-org/modelops/ai-model-validation-and-research/ai-evaluation/prompt-library/-/blob/main/doc/running_evaluation_locally/pairwise_evaluation.md).
#### When to use LangSmith and when CEF
<i class="fa fa-youtube-play youtube" aria-hidden="true"></i>
For an overview, see [this video](https://youtu.be/-DK-XFFllwg).
##### Key Points on CEF (Centralized Evaluation Framework) Project
1. Initial Development
- Start with pure LangSmith for prompt iteration
- Easier and quicker to set up
- More cost-effective for early stages
1. When to Transition to CEF
- When investing more in the feature
- For working with larger datasets
- For repeated, long-term use
1. CEF Setup Considerations
- Requires upfront time investment
- Need to adjust evaluations for specific features
- Set up input data (for example, local GDK for chat features)
1. Challenges
- Ensuring consistent data across different users
- Exploring options like seats and imports for data sharing
1. Current CEF Capabilities
- Supports chat questions about code
- Handles documentation-related queries
- Includes evaluations for code suggestions
1. Advantages of CEF
- Allows running evaluations on local GDK
- Results viewable in LangSmith UI
- Enables use of larger datasets
1. Flexibility
- Requires customization for specific use cases
- Not a one-size-fits-all solution
1. Documentation
- CEF has extensive documentation available.
1. Adoption
- Already in use by some teams, including the Code Suggestions and Create teams
## Further resources
For more comprehensive prompt engineering guides, see:
- [Prompt Engineering Guide 1](https://www.promptingguide.ai/)
- [Prompt Engineering Guide 2](https://www.deeplearning.ai/short-courses/chatgpt-prompt-engineering-for-developers/)
---
stage: AI-powered
group: AI Framework
info: Any user with at least the Maintainer role can merge updates to this content.
  For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: AI Evaluation Guidelines
---
Unlike traditional software systems, which behave more or less predictably, AI-powered systems can produce significantly different outputs from minor input changes. This unpredictability stems from the non-deterministic nature of AI-generated responses. Traditional software testing methods are not designed to handle such variability, which is why AI evaluation has become essential. AI evaluation is a data-driven, quantitative process that analyzes AI outputs to assess system performance, quality, and reliability.
The [Centralized Evaluation Framework (CEF)](https://gitlab.com/gitlab-org/modelops/ai-model-validation-and-research/ai-evaluation/prompt-library) provides a streamlined, unified approach to evaluating AI features at GitLab.
It is essential to our strategy for ensuring the quality of our AI-powered features.
Conceptually, there are three parts to an evaluation:
1. **Dataset**: A collection of test inputs (and, optionally, expected outputs).
1. **Target**: The target of the evaluation. For example, a prompt, an agent, a tool, a feature, a system component, or the application end-to-end.
1. **Metrics**: Measurable criteria used to assess the AI-generated output.
Each part plays a role in the evaluation process, as described below:
1. **Establish acceptance criteria**: Define metrics to indicate correct target behavior.
1. **Design evaluations**: Design evaluators and scenarios to score the metrics to assess the criteria.
1. **Create a dataset**: Collect representative examples covering typical usage patterns, edge cases, and error conditions.
1. **Execute**: Run evaluations of the target against the dataset.
1. **Analyze results**: Compare results with acceptance criteria and identify areas for improvement.
1. **Iterate and refine**: Make necessary adjustments based on evaluation findings.
## Establish acceptance criteria
Define metrics to determine when the target AI feature or component is working correctly.
The chosen metrics should align with success metrics that determine when desired business outcomes have been met.
### Types of metrics
The following are examples of metrics that might be relevant:
- **Accuracy**: Measures how often AI predictions are correct.
- **Precision and Recall**: Evaluate the balance between correctly identified positive results and the number of actual positives.
- **F1 score**: Combines precision and recall into a single metric.
- **Latency**: Measures the time taken to produce a response.
- **Token usage**: Evaluates the efficiency of the model in terms of token consumption.
- **Conciseness and Coherence**: Assess the clarity and logical consistency of the AI output.
For some targets, domain-specific metrics are essential and can matter more than the general metrics listed here.
In some cases, choosing the right metric is a gradual, iterative process of discovery and experimentation involving multiple teams as well as feedback from users.
### Define thresholds
Establish clear thresholds for each metric if possible, such as minimum acceptable performance. For example:
- Accuracy: ≥85% of explanations are technically correct
- Latency: ≤3 seconds for 95th percentile response time
Note that it might not be feasible to define a threshold for novel metrics. This particularly applies to domain-specific metrics.
In general, we rely on user expectations to define thresholds for acceptable performance.
In some cases we'll know what users will expect before releasing a feature and can define thresholds accordingly.
In other cases we'll need to wait until we get feedback before we know what threshold to set.
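As a purely illustrative sketch (plain Python, not part of the CEF), aggregate per-example results can be checked against thresholds like the ones above. The result structure and field names here are assumptions for the example only.
```python
from statistics import quantiles

# Hypothetical per-example results: whether the output was judged correct
# and how long the target took to respond.
results = [
    {"correct": True, "latency_s": 1.2},
    {"correct": True, "latency_s": 2.4},
    {"correct": False, "latency_s": 3.9},
    # ... one entry per dataset example
]

accuracy = sum(r["correct"] for r in results) / len(results)
# 95th percentile latency: quantiles(n=20) returns 19 cut points; index 18 is the 95th.
p95_latency = quantiles([r["latency_s"] for r in results], n=20)[18]

meets_accuracy = accuracy >= 0.85   # ≥85% of outputs technically correct
meets_latency = p95_latency <= 3.0  # ≤3 seconds at the 95th percentile

print(f"accuracy={accuracy:.2%} pass={meets_accuracy}, p95={p95_latency:.2f}s pass={meets_latency}")
```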
## Design evaluations
When designing an evaluation, you define how you'll measure the performance of the target AI feature or component against acceptance criteria.
This involves choosing the right evaluators.
[Evaluators](https://gitlab.com/gitlab-org/modelops/ai-model-validation-and-research/ai-evaluation/prompt-library/-/tree/main/doc/evaluators) are functions that score the target AI performance on specific metrics.
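As a rough illustration of the idea only (not the CEF's or LangSmith's actual evaluator interface), an evaluator can be as simple as a function that compares an AI output with a reference answer and returns a named score:
```python
def correctness_evaluator(output: str, reference: str) -> dict:
    """Toy evaluator: 1.0 if the reference answer appears in the output, else 0.0.

    Real evaluators are usually more nuanced (for example, an LLM judge or
    task-specific heuristics) and follow the interface of the framework that invokes them.
    """
    score = 1.0 if reference.strip().lower() in output.lower() else 0.0
    return {"key": "correctness", "score": score}


print(correctness_evaluator(
    output="The pipeline failed because the job exceeded its timeout.",
    reference="exceeded its timeout",
))  # => {'key': 'correctness', 'score': 1.0}
```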
Designing evaluations can also involve creating scenarios that test the target AI feature or component under realistic conditions.
You can implement different scenarios as distinct categories of dataset examples, or as variations in how the evaluation invokes the target AI feature or component.
Scenarios to consider include:
- **Baseline comparisons**: Compare new models or prompts against a baseline to determine improvements.
- **Side-by-side evaluations**: Compare different models, prompts, or configurations directly against each other.
- **Custom evaluators**: Implement custom evaluation functions to test specific aspects of AI performance relevant to your application's needs.
- **Dataset sampling**: Sample different subsets of the dataset that focus on different aspects of the target.
## Create a dataset
A well-structured dataset enables consistent testing and validation of an AI system or component across different scenarios and use cases.
For an overview of working with datasets in the CEF and LangSmith, see the [dataset management](https://gitlab.com/gitlab-org/modelops/ai-model-validation-and-research/ai-evaluation/datasets/-/blob/main/doc/dataset_management.md) documentation.
For more detailed information on creating and preparing datasets for evaluation, see our [dataset creation guidelines](https://gitlab.com/gitlab-org/modelops/ai-model-validation-and-research/ai-evaluation/prompt-library/-/tree/main/doc/datasets#dataset-creation-guidelines-for-gitlab-ai-features) and [instructions for uploading datasets to LangSmith](https://gitlab.com/gitlab-org/modelops/ai-model-validation-and-research/ai-evaluation/datasets/-/blob/main/doc/guidelines/create_dataset.md).
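The exact schema depends on the target and on the guidelines linked above; conceptually, though, a dataset is a list of input examples with optional expected outputs and categories. The field names below are hypothetical and only illustrate the shape:
```python
dataset = [
    {
        "input": "Explain this CI error: 'script config should be a string or a nested array'",
        "expected_output": "The job's `script` keyword has an invalid type; it must be a string or an array of strings.",
        "category": "typical_usage",
    },
    {
        "input": "Explain this CI error: ''",  # edge case: empty error message
        "expected_output": "Ask the user to provide the error message.",
        "category": "edge_case",
    },
]
```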
### Synthetic prompt evaluation dataset generator
If you are evaluating a prompt, a quick way to get started is to use our [dataset generator](https://gitlab.com/gitlab-org/modelops/applied-ml/code-suggestions/ai-assist/-/blob/main/docs/evaluation/dataset_generation.md).
It generates a synthetic evaluation dataset directly from an AI Gateway prompt definition.
You can watch a quick [demonstration](https://www.youtube.com/watch?v=qZEnC4PN3Co).
## Execute evaluations
When an evaluation is executed, the CEF invokes the target AI feature or component at least once for each input example in the evaluation dataset.
The framework then invokes evaluators to score the AI output, and provides you with the results of the evaluation.
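Conceptually, an evaluation run is a loop over the dataset: invoke the target once per example, then score each output with every evaluator. A minimal sketch (not the CEF implementation) using the toy structures from the earlier examples:
```python
def run_evaluation(dataset, target, evaluators):
    """Invoke the target for each example and score its output with each evaluator."""
    results = []
    for example in dataset:
        output = target(example["input"])  # call the AI feature or component under test
        scores = [
            evaluator(output=output, reference=example.get("expected_output", ""))
            for evaluator in evaluators
        ]
        results.append({"input": example["input"], "output": output, "scores": scores})
    return results
```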
### In merge requests
[Evaluation Runner](https://gitlab.com/gitlab-org/modelops/ai-model-validation-and-research/ai-evaluation/evaluation-runner) can be used to run an evaluation in a CI pipeline in a merge request. It spins up a new [GDK](https://gitlab.com/gitlab-org/gitlab-development-kit/) instance on a remote environment, runs an evaluation using the CEF, and reports the results in the CI job log. See the guide for [how to use evaluation runner](https://gitlab.com/gitlab-org/modelops/ai-model-validation-and-research/ai-evaluation/evaluation-runner#how-to-use).
### On your local machine
See the [step-by-step guide for conducting evaluations using the CEF](https://gitlab.com/gitlab-org/modelops/ai-model-validation-and-research/ai-evaluation/prompt-library/-/blob/main/doc/index.md?ref_type=heads).
## Analyze results
The CEF uses LangSmith to store and analyze evaluation results. See the [LangSmith guide for how to analyze an experiment](https://docs.smith.langchain.com/evaluation/how_to_guides/analyze_single_experiment).
For guidance regarding specific features, see the Analyze Results section of the feature-specific documentation for [running evaluations locally](https://gitlab.com/gitlab-org/modelops/ai-model-validation-and-research/ai-evaluation/prompt-library/-/tree/main/doc/running_evaluation_locally). You can also find some information about interpreting evaluation metrics in the [Duo Chat evaluation documentation](https://gitlab.com/gitlab-org/modelops/ai-model-validation-and-research/ai-evaluation/prompt-library/-/tree/main/doc/duo_chat).
Note that we're updating the documentation on executing and interpreting the results of existing evaluation pipelines (see [#671](https://gitlab.com/gitlab-org/modelops/ai-model-validation-and-research/ai-evaluation/prompt-library/-/issues/671)).
## Iterate and refine
Similar to the [AI feature development process](ai_feature_development_playbook.md), iterating on evaluation means returning to previous steps as indicated by the evaluation results. [Prompt engineering](prompt_engineering.md) is key to this step. However, it might also involve adding examples to the dataset, editing existing examples, adjusting the design of the evaluations, or reviewing and revising the metrics and success criteria.
## Additional resources
- [AI evaluation tooling](https://gitlab.com/gitlab-org/modelops/ai-model-validation-and-research/ai-evaluation): The group containing AI evaluation tooling used at GitLab.
- [LangSmith Evaluations YouTube playlist](https://www.youtube.com/playlist?list=PLfaIDFEXuae0um8Fj0V4dHG37fGFU8Q5S):
Deep dive on evaluation with LangSmith.
- [LangSmith Evaluation Cookbook](https://github.com/langchain-ai/langsmith-cookbook/blob/main/README.md#testing--evaluation):
Contains various evaluation scenarios and examples.
- [LangSmith How To Guides](https://docs.smith.langchain.com/evaluation/how_to_guides): Contains various how to
walkthroughs.
- [GitLab Duo Chat Documentation](duo_chat.md):
Comprehensive guide on setting up and using LangSmith for chat evaluations.
- [Prompt and AI Feature Evaluation Setup and Workflow](https://gitlab.com/groups/gitlab-org/-/epics/13952):
Details on the overall workflow and setup for evaluations.
# GitLab Duo Feature Availability and Configuration
This document explains how GitLab Duo features are controlled, who can access them, and how they are configured across GitLab deployments.
## Controlling GitLab Duo Feature Availability
Various settings control when and how users can interact with GitLab Duo features. The
[end-user documentation](../../user/gitlab_duo/turn_on_off.md) explains this from a
user perspective. This document explains the implementation logic from a
developer perspective and includes technical details.
### UI Options and Database States
In the UI, the "GitLab Duo Enterprise availability" setting shows 3 options:
1. **On by default** - Features are enabled and child entities inherit this setting
1. **Off by default** - Features are disabled but can be overridden by child entities
1. **Always off** - Features are disabled and cannot be overridden by child entities
These UI options map directly to the following database states:
| UI Option | `duo_features_enabled` | `lock_duo_features_enabled` |
|-----------|------------------------|----------------------------|
| On by default | `true` | `false` |
| Off by default | `false` | `false` |
| Always off | `false` | `true` |
### Cascading Settings Implementation
The `duo_features_enabled` setting is a [cascading setting](../cascading_settings.md), which impacts how GitLab Duo features are propagated through the hierarchy.
This cascading behavior means:
1. The setting can be configured at any level: instance, group, subgroup, or project
1. For GitLab.com, the instance-wide setting is always `true`
1. Child entities can override their parent's setting. For example:
- An instance with `duo_features_enabled: false` can have a group with `duo_features_enabled: true`
- A group with `duo_features_enabled: true` can have a subgroup with `duo_features_enabled: false`
1. When the setting is `true` at a parent level, all child entities are reset to `true`
1. When the setting is `false` at a parent level, all child entities are reset to `false`
1. A parent entity can "lock" the setting using `lock_duo_features_enabled: true` (displayed as "Always off" in the UI)
- When locked, child entities cannot override the parent setting
- This effectively disables GitLab Duo features for the entire hierarchy below that point
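The actual behavior is implemented with the Rails cascading-settings framework linked above; the standalone Python sketch below only illustrates the read-time resolution and locking rules described in this list. The names and structure are assumptions, and the write-time "reset children" behavior is not modeled.
```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Entity:
    """Instance, group, subgroup, or project with its GitLab Duo settings."""
    name: str
    duo_features_enabled: Optional[bool]      # None means "inherit from parent"
    lock_duo_features_enabled: bool = False   # combined with enabled=False, this is "Always off"
    parent: Optional["Entity"] = None


def duo_features_enabled_for(entity: Entity) -> bool:
    # A lock on the entity or any ancestor forces the features off for the whole subtree.
    node = entity
    while node is not None:
        if node.lock_duo_features_enabled:
            return False
        node = node.parent
    # Otherwise the nearest explicitly configured value wins.
    node = entity
    while node is not None:
        if node.duo_features_enabled is not None:
            return node.duo_features_enabled
        node = node.parent
    return False


# Example: an instance set to "Always off" overrides a group that enabled Duo features.
instance = Entity("instance", duo_features_enabled=False, lock_duo_features_enabled=True)
group = Entity("group", duo_features_enabled=True, parent=instance)
print(duo_features_enabled_for(group))  # False
```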
## Feature Accessibility By Context
### Where Users Can Access GitLab Duo Features
Users with a paid GitLab Duo license (Duo Pro or Duo Enterprise) can access Chat and Code Suggestions in both the Web UI and IDE for:
1. Resources that cannot disable GitLab Duo features:
- [Personal GitLab namespaces and projects](https://gitlab.com/gitlab-org/gitlab/-/issues/493850#note_2128888470)
- Free tier GitLab groups and projects
- Premium and Ultimate groups and projects without a paid GitLab Duo license
1. Resources where GitLab Duo features are enabled:
- In GitLab Self-Managed and GitLab Dedicated: Groups or projects in an instance with a paid GitLab Duo license
- In GitLab.com: Groups or projects in a group with a paid GitLab Duo license
### Additional IDE Access Scenarios
In the IDE environment specifically, users with a GitLab Duo license can always use Chat and Code Suggestions for:
1. Repositories without Git configuration
1. Repositories with Git configuration pointing to unknown origins (such as GitHub or other GitLab instances where the user is not authenticated)
## Platform-Specific Behavior
### GitLab.com License Assignment
On GitLab.com, a GitLab Duo license is associated with the individual user to
whom it is assigned, not the group that assigned the seat. User accounts on
GitLab.com are independent entities that belong to the entire GitLab instance
rather than being "owned" by any specific group.
#### Impact on Feature Availability
Disabling GitLab Duo features (`duo_features_enabled: false`) in a group:
- Does not revoke GitLab Duo access for group members who have GitLab Duo licenses
- Only prevents licensed users from using GitLab Duo features with resources belonging to that group
#### Example Scenario
If a user has a GitLab Duo license but belongs to a group where GitLab Duo features are set to "Always off", they can still:
- Use GitLab Duo Chat for questions about issues in free projects
- Use Code Suggestions on their personal projects
- Ask GitLab Duo Chat general coding questions or questions about GitLab
- Use GitLab Duo features with resources from other groups where GitLab Duo features are enabled
This flow diagram shows how GitLab Duo feature availability works on GitLab.com:
```mermaid
flowchart TD
A[Start] --> B{Member of Premium/Ultimate group?}
B -->|No| C[Cannot use GitLab Duo]
B -->|Yes| D{Has Duo Pro/Enterprise license?}
D -->|No| E[Cannot use GitLab Duo]
D -->|Yes| F{Using Duo with specific
group/project resource?}
F -->|No| G[Can use GitLab Duo]
F -->|Yes| H{Group/Project has
Duo features enabled?}
H -->|No| I[Cannot use Duo with
this resource]
H -->|Yes| J[Can use Duo with
this resource]
```
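The same decision logic, expressed as a small illustrative Python function (this is not GitLab's actual authorization code, and the parameter names are assumptions):
```python
from typing import Optional


def can_use_duo_on_gitlab_com(
    member_of_paid_group: bool,                   # member of a Premium/Ultimate group
    has_duo_seat: bool,                           # assigned a Duo Pro or Duo Enterprise seat
    resource_duo_enabled: Optional[bool] = None,  # None: no specific group/project resource involved
) -> bool:
    """Mirrors the GitLab.com flowchart above (Duo Core is not considered here)."""
    if not member_of_paid_group or not has_duo_seat:
        return False
    if resource_duo_enabled is None:
        return True  # for example, a general coding question in Chat
    return resource_duo_enabled


# A licensed user asking about a project whose group set Duo features to "Always off":
print(can_use_duo_on_gitlab_com(True, True, resource_duo_enabled=False))  # False
```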
#### GitLab.com with Duo Core, Duo Pro, and Duo Enterprise
GitLab offers three tiers of AI functionality:
1. **Duo Core** - Basic AI capabilities
1. **Duo Pro** - Enhanced AI capabilities with more advanced features
1. **Duo Enterprise** - Comprehensive AI capabilities with additional controls and features
##### Duo Core Configuration
With the introduction of Duo Core, a new setting is available for top-level Premium and Ultimate groups.
This setting allows group owners to control Duo Core availability:
- When Duo Core is enabled ("on"): Every member of the group automatically receives a Duo Core seat
- When Duo Core is disabled ("off"): No members of the group have Duo Core seats
##### Feature Availability by License Tier
| Feature | Duo Core | Duo Pro | Duo Enterprise |
|---------|----------|---------|----------------|
| Chat | Limited to IDE | Full functionality | Full functionality |
| Code Suggestions | Available in IDE and Web IDE | Available in IDE and Web IDE | Available in IDE and Web IDE |
| Additional AI features | Not available | Some Available | All Available |
This flow diagram shows how GitLab Duo feature availability works on GitLab.com with
Duo Core settings taken into consideration:
```mermaid
flowchart TD
A[Start] --> B{Member of Premium/Ultimate group?}
B -->|No| C[Cannot use GitLab Duo]
B -->|Yes| D{Has Duo Pro/Enterprise license?}
D -->|Yes| E[Can use GitLab Duo]
D -->|No| F{Any Premium/Ultimate group has
Duo Core enabled?}
F -->|No| G[Cannot use GitLab Duo]
F -->|Yes| H{Using Chat or
Code Suggestions in IDE?}
H -->|No| I[Cannot use GitLab Duo]
H -->|Yes| J{Using Duo with specific
group/project resource?}
J -->|No| K[Can use GitLab Duo]
J -->|Yes| L{Group/Project has
Duo features enabled?}
L -->|Yes| M[Can use Duo with
this resource]
L -->|No| N[Cannot use Duo with
this resource]
```
### Configuration Locations
#### GitLab.com Settings Pages
The following settings pages are available for configuring GitLab Duo on GitLab.com:
##### Admin Level
- `/admin/gitlab_duo`
- Onboard GitLab Duo Agent Platform
##### Top-Level Group Settings
- `/groups/$GROUP_FULL_PATH/-/settings/gitlab_duo`
- Assign paid GitLab Duo seats (if available)
- Access GitLab Duo Configuration
- `/groups/$GROUP_PATH/-/settings/gitlab_duo/configuration`
- Configure GitLab Duo availability ("On by default", "Off by default", or "Always off")
- Enable experimental and beta GitLab Duo features
##### Subgroup Settings
- `/groups/$GROUP_FULL_PATH/-/edit`
- Configure GitLab Duo availability for the subgroup and all its children
##### Project Settings
- `/$PROJECT_FULL_PATH/edit`
- Under "Visibility, project features, permissions" section
- Configure GitLab Duo availability for the specific project
### GitLab Self-Managed and Dedicated Instances
For Premium and Ultimate GitLab Self-Managed and Dedicated instances, the feature availability logic follows patterns similar to GitLab.com, with one key difference:
Instance administrators can set GitLab Duo features to "Always off" at the instance level. When configured this way, all GitLab Duo features are disabled for all users across the entire instance, regardless of individual license assignments.
```mermaid
flowchart TD
A[Start] --> B{Instance has Duo features
set to 'Always off'?}
B -->|Yes| C[Cannot use GitLab Duo]
B -->|No| D{Has Duo Pro/Enterprise license?}
D -->|No| E[Cannot use GitLab Duo]
D -->|Yes| F{Using Duo with specific
group/project resource?}
F -->|No| G[Can use GitLab Duo]
F -->|Yes| H{Group/Project has
Duo features enabled?}
H -->|No| I[Cannot use Duo with
this resource]
H -->|Yes| J[Can use Duo with
this resource]
```
#### GitLab Self-Managed and Dedicated with Duo Core, Duo Pro, and Duo Enterprise
##### Instance-Wide Duo Core Configuration
For GitLab Self-Managed and Dedicated instances, Duo Core is controlled through an instance-level setting.
This setting is available to all Premium and Ultimate instances.
Instance administrators can:
- Enable Duo Core ("on") - Every user in the instance automatically receives a Duo Core seat
- Disable Duo Core ("off") - No users in the instance have Duo Core seats
##### License Tier Differences in Self-Managed and Dedicated Instances
The same feature differentiation between Duo Core, Duo Pro, and Duo Enterprise applies to self-managed and dedicated instances:
- **Duo Core**: Basic AI capabilities limited to IDE use cases and general coding assistance
- **Duo Pro**: Enhanced AI capabilities with broader feature access
- **Duo Enterprise**: Comprehensive AI capabilities with additional enterprise controls
Self-managed instances have additional configuration options for integrating with self-hosted AI models and controlling feature behavior.
This flow diagram shows how Duo feature availability works on non-GitLab.com
instances with Duo Core settings taken into consideration:
```mermaid
flowchart TD
A[Start] --> B{Instance has Duo features
set to 'Always off'?}
B -->|Yes| C[Cannot use GitLab Duo]
B -->|No| D{Has Duo Pro/Enterprise license?}
D -->|Yes| E[Can use GitLab Duo]
D -->|No| F{Instance has Duo Core enabled?}
F -->|No| G[Cannot use GitLab Duo]
F -->|Yes| H{Using Chat or
Code Suggestions in IDE?}
H -->|No| I[Cannot use GitLab Duo]
H -->|Yes| J{Using Duo with specific
group/project resource?}
J -->|No| K[Can use GitLab Duo]
J -->|Yes| L{Group/Project has
Duo features enabled?}
L -->|Yes| M[Can use Duo with
this resource]
L -->|No| N[Cannot use Duo with
this resource]
```
#### GitLab Self-Managed and Dedicated Settings Pages
The following settings pages are available for configuring GitLab Duo on self-managed and dedicated instances:
##### Instance Admin Settings
- `/admin/gitlab_duo`
- Assign paid GitLab Duo seats to users
- Access GitLab Duo Configuration
- `/admin/gitlab_duo/configuration`
- Configure instance-wide GitLab Duo availability
- Enable experimental and beta GitLab Duo features
- Configure Duo Chat conversation expiration periods
- Enable Code Suggestions direct connections
- Enable beta AI models for self-hosted deployments
- Configure AI logging settings
- Set AI Gateway URL for self-hosted deployments
- `/admin/ai/duo_self_hosted`
- Configure self-hosted AI model integrations
- Select specific self-hosted models for different GitLab Duo features
##### Group and Subgroup Settings
- `/groups/$GROUP_FULL_PATH/-/edit`
- Configure GitLab Duo availability for the group and all its child entities
##### Project Settings
- `/$PROJECT_FULL_PATH/edit`
- Under "Visibility, project features, permissions" section
- Configure GitLab Duo availability for the specific project
# Model Migration Process
## Current Migration Issues
The table below shows current open issues labeled with `AI Model Migration`. This provides a live view of ongoing model migration work across GitLab.
```glql
display: table
fields: title, author, assignee, milestone, labels, updated
limit: 10
query: label = "AI Model Migration" AND opened = true
```
*Note: This table is dynamically generated using GitLab Query Language (GLQL) when viewing the rendered documentation. It shows up to 10 open issues with the AI Model Migration label, sorted by most recently updated.*
## Quick Links
- **[GitLab AI Features - Default GitLab AI Vendor Models](https://duo-feature-list-754252.gitlab.io/)**: View all features and their current model mappings
- **[AI Model Version Migration Initiative Epic](https://gitlab.com/groups/gitlab-org/-/epics/15650)**: Central tracking epic for all model migration work
- **[AI Gateway Repository](https://gitlab.com/gitlab-org/modelops/applied-ml/code-suggestions/ai-assist)**: Where model configurations are managed
- **[Prompt Library](https://gitlab.com/gitlab-org/modelops/ai-model-validation-and-research/ai-evaluation/prompt-library)**: For evaluating models and prompts
## Introduction
LLM models are constantly evolving, and GitLab needs to regularly update our AI features to support newer models. This guide provides a structured approach for migrating AI features to new models while maintaining stability and reliability.
*Note: GitLab strives to leverage the latest AI model capabilities to help provide optimal performance and features, which means model updates from existing GitLab subprocessors might occur without specific customer notifications beyond documentation updates.*
## Model Migration Timelines
Model migrations typically follow these general timelines:
- **Simple Model Updates (Same Provider)**: 1-2 weeks
  - Example: Upgrading from Claude 3.5 Sonnet to Claude 3.7 Sonnet
- Involves model validation, testing, and staged rollout
- Primary focus on maintaining stability and performance
- **Complex Migrations**: 1-2 months (full milestone or longer)
- Example: Adding support for a new provider like AWS Bedrock
- Example: Major version upgrades with breaking changes (for example, Claude 2 to 3)
- Requires significant API integration work
- May need infrastructure changes
### Timeline Factors
Several factors can impact migration timelines:
- Current system stability and recent incidents
- Resource availability and competing priorities
- Complexity of behavioral changes in new model
- Scale of testing required
- Feature flag rollout strategy
### Best Practices
- Always err on the side of caution with initial timeline estimates
- Use feature flags for gradual rollouts to minimize risk
- Plan for buffer time to handle unexpected issues
- Prioritize system stability over speed of deployment
{{< alert type="note" >}}
While some migrations can technically be completed quickly, we typically plan for longer timelines to ensure proper testing and staged rollouts. This approach helps maintain system stability and reliability.
{{< /alert >}}
## Team Responsibilities
Model migrations involve several teams working together. This section clarifies which teams are responsible for different aspects of the migration process.
### RACI Matrix for Model Migrations
| Task | AI Framework | Feature Teams | Product | Infrastructure |
|------|-------------|--------------|---------|---------------|
| Model configuration file creation | R/A | C | I | I |
| Infrastructure compatibility | R/A | I | I | C |
| Feature-specific prompt adjustments | C | R/A | I | I |
| Evaluations & testing | C | R/A | I | I |
| Feature flag implementation | C | R/A | I | I |
| Rollout planning | C | R/A | C | I |
| Documentation updates | C | R/A | C | I |
| Monitoring & incident response | C | R/A | I | C |
R = Responsible, A = Accountable, C = Consulted, I = Informed
## Migration Process
{{< alert type="note" >}}
**Model Mapping Resource**: You can see which features use which models and versions via the [GitLab AI Features - Default GitLab AI Vendor Models](https://duo-feature-list-754252.gitlab.io/) page.
{{< /alert >}}
### Standard Migration Process
1. **Initialization**
- AI Framework team creates an Issue in the [AI Model Version Migration Initiative Epic](https://gitlab.com/groups/gitlab-org/-/epics/15650)
- Issue should use the naming convention: `AI Model Migration - Provider/Model/Version`
- Apply the [`AI Model Migration`](https://gitlab.com/gitlab-org/gitlab/-/labels?subscribed=&sort=relevance&search=AI+Model+Migration#) label
- AI Framework team adds model configuration to AI Gateway
- AI Framework team verifies infrastructure compatibility
1. **Feature Team Implementation**
- Feature teams create implementation plans
- Feature teams adjust prompts if needed
- Feature teams implement feature flags for controlled rollout
1. **Testing & Validation**
- Feature teams run evaluations against the new model
- AI Framework team provides evaluation support
1. **Deployment**
- Feature teams manage feature flag rollout
- Feature teams monitor performance and make adjustments
1. **Completion**
- Feature teams remove feature flags when migration is complete
- Feature teams update documentation
### Model Deprecation Process
1. **Identification & Planning**
- AI Framework team monitors provider announcements
- AI Framework team creates an epic: `Replace discontinued [model] with [replacement]`
- Epic should have the `AI Model Migration` label
- Set due date at least 2-4 weeks before provider's cutoff date
- AI Framework team identifies replacement models
1. **Evaluation**
- AI Framework team evaluates replacement models
- Feature teams test affected features with candidates
- Teams determine the best replacement model
1. **Implementation**
- AI Framework team creates model configuration files
- Feature teams update features to use the replacement model
- Teams implement feature flags for controlled rollout
1. **Testing**
- Feature teams run comprehensive evaluations
- Teams document performance metrics
1. **Deployment**
- Feature teams manage phased rollout via feature flags
- Teams monitor performance closely
- Rollout expands gradually based on performance
1. **Completion**
- Remove feature flags when migration is complete
- Update documentation
- Clean up deprecated model references
## Prerequisites for Model Migration
Before starting a model migration:
1. **Create an issue** under the [AI Model Version Migration Initiative epic](https://gitlab.com/groups/gitlab-org/-/epics/15650):
- Label with `group::ai framework` and `AI Model Migration`
- Document behavioral changes or improvements
- Include any breaking changes or compatibility issues
- Reference provider documentation
1. **Verify model support** in AI Gateway:
- Check model definitions:
- For LiteLLM models: `ai_gateway/models/v2/container.py`
- For Anthropic models: `ai_gateway/models/anthropic.py`
- For new providers: Create new model definition file
- Verify configurations (enums, stop tokens, timeouts, etc.)
- Test the model locally:
- Set up the [AI gateway development environment](https://gitlab.com/gitlab-org/modelops/applied-ml/code-suggestions/ai-assist#how-to-run-the-server-locally)
- Configure API keys in `.env` file
- Test using Swagger UI at `http://localhost:5052/docs`
- Create an issue for new model support if needed
- Review provider API documentation for breaking changes
1. **Ensure access** to testing environments and monitoring tools
1. **Complete model evaluation** using the [Prompt Library](https://gitlab.com/gitlab-org/modelops/ai-model-validation-and-research/ai-evaluation/prompt-library/-/blob/main/doc/how-to/run_duo_chat_eval.md)
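When testing the model locally (the AI gateway step above), a quick way to confirm the gateway is up and to list its routes, besides opening the Swagger UI, is to read the OpenAPI spec. This sketch assumes the default local port and that the gateway serves a standard FastAPI `/openapi.json` endpoint:
```python
import json
from urllib.request import urlopen

GATEWAY = "http://localhost:5052"  # default local AI gateway address used in this guide

with urlopen(f"{GATEWAY}/openapi.json") as response:
    spec = json.load(response)

# Print the available routes so you can confirm the endpoint for the model under test exists.
for path, methods in sorted(spec["paths"].items()):
    print(", ".join(method.upper() for method in methods), path)
```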
### Additional Prerequisites for Model Deprecations
For model deprecations:
1. **Create an epic** when a deprecation is announced:
- Label with `group::ai framework` and `AI Model Migration`
- Document the deprecation timeline
- Include provider migration recommendations
- Reference the deprecation announcement
- List all affected features
1. **Evaluate replacement models**:
- Document evaluation criteria
- Run comparative evaluations
- Consider regional availability
- Assess infrastructure changes required
1. **Create migration timeline**:
- Set completion target at least 2-4 weeks before cutoff
- Include time for each feature update
- Plan for gradual rollout
- Allow time for infrastructure changes
{{< alert type="note" >}}
Documentation of model changes and deprecations is crucial for tracking impact and future troubleshooting. Always create an issue before beginning any migration process.
{{< /alert >}}
## Implementation Guidelines
### Feature Team Migration Template
Feature teams should use the [AI Model Rollout template](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.gitlab/issue_templates/AI%20Model%20Rollout.md) to implement model migrations. See an example from our [Claude 3.7 Sonnet Code Generation Rollout Plan](https://gitlab.com/gitlab-org/gitlab/-/issues/521044).
### Anthropic Model Migration Tasks
**AI Framework Team**:
- Add new model to AI gateway configurations
- Verify compatibility with current API specification
- Verify the model works with existing API patterns
- Create model configuration file
- Document model-specific parameters or behaviors
- Verify infrastructure compatibility
- Update model definitions following [prompt definition guidelines](actions.md#2-create-a-prompt-definition-in-the-ai-gateway)
**Feature Team**:
- Add new model to [available models list](https://gitlab.com/gitlab-org/gitlab/-/blob/32fa9eaa3c8589ee7f448ae683710ec7bd82f36c/ee/lib/gitlab/llm/concerns/available_models.rb#L5-10)
- Change default model in [AI-Gateway client](https://gitlab.com/gitlab-org/gitlab/-/blob/41361629b302f2c55e35701d2c0a73cff32f9013/ee/lib/gitlab/llm/chain/requests/ai_gateway.rb#L63-67) behind feature flag
- Update model references in feature-specific code
- Implement feature flags for controlled rollout
- Test prompts with new model
- Monitor performance during rollout
- Update documentation
{{< alert type="note" >}}
While we're moving toward the AI gateway holding the prompts, feature flag implementation still requires a GitLab release.
{{< /alert >}}
### Vertex Models Migration Tasks
**AI Framework Team**:
- Activate model in Google Cloud Platform
- Update AI gateway to support new Vertex model
- Document model-specific parameters
**Feature Team**:
- Update model references in feature-specific code
- Implement feature flags for controlled rollout
- Test prompts with new model
- Monitor performance during rollout
- Update documentation
## Feature Flag Implementation
### Implementation Steps
For implementing feature flags, refer to our [Feature Flags Development Guidelines](../feature_flags/_index.md).
{{< alert type="note" >}}
Feature flag implementations will affect self-hosted cloud-connected customers. These customers won't receive the model upgrade until the feature flag is removed from the AI gateway codebase, as they won't have access to the new GitLab release.
{{< /alert >}}
### Model Selection Implementation
Implement model selection logic in:
- AI gateway client (`ee/lib/gitlab/llm/chain/requests/ai_gateway.rb`)
- Model definitions in AI gateway
- Any custom implementations in specific features
### Rollout Strategy
1. **Enable feature flag** for small percentage of users/groups
1. **Monitor performance** using:
- [Sidekiq Service dashboard](https://dashboards.gitlab.net/d/sidekiq-main/sidekiq-overview)
- [AI gateway metrics dashboard](https://dashboards.gitlab.net/d/ai-gateway-main/ai-gateway3a-overview?orgId=1)
- [AI gateway logs](https://log.gprd.gitlab.net/app/r/s/zKEel)
- [Feature usage dashboard](https://log.gprd.gitlab.net/app/r/s/egybF)
- [Periscope dashboard](https://app.periscopedata.com/app/gitlab/1137231/Ai-Features)
1. **Gradually increase** rollout percentage
1. **If issues arise**, disable feature flag to rollback
1. **Once stable**, remove feature flag
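Rollout itself is driven by the GitLab feature flag tooling referenced above; purely as an illustration, the strategy amounts to advancing through rollout stages only while monitored metrics stay healthy, and rolling back otherwise:
```python
def next_rollout_percentage(current: int, error_rate: float,
                            stages=(1, 5, 10, 25, 50, 100),
                            max_error_rate: float = 0.02) -> int:
    """Illustrative gate: advance to the next stage while metrics look healthy, otherwise roll back."""
    if error_rate > max_error_rate:
        return 0  # disable the feature flag and investigate
    for stage in stages:
        if stage > current:
            return stage
    return current  # already fully rolled out


print(next_rollout_percentage(10, error_rate=0.005))  # 25
print(next_rollout_percentage(25, error_rate=0.080))  # 0 (roll back)
```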
## Common Migration Scenarios
### Simple Model Version Update (Same Provider)
**Example**: Upgrading from Claude 3.5 to Claude 3.7
**AI Framework Team**:
- Create migration issue
- Add model configuration file
- Verify API compatibility
- Ensure infrastructure support
**Feature Teams**:
- Create implementation issues
- Test prompts with new model
- Implement feature flags
- Monitor performance
- Remove feature flags when stable
### New Provider Integration
**Example**: Adding AWS Bedrock models
**AI Framework Team**:
- Create integration plan
- Implement provider API in AI gateway
- Create model configuration files
- Update authentication mechanisms
- Document provider-specific parameters
- Evaluate model performance
**Feature Teams**:
- Evaluate feature quality and performance with the new model
- Adapt prompts for new provider's models
- Implement feature flags
- Deploy and monitor
- Update documentation
### Model Deprecation Response
**Example**: Replacing discontinued Vertex AI Code Gecko v2
**AI Framework Team**:
- Create epic to track deprecation
- Evaluate replacement models
- Create model configuration
- Document routing logic
- Verify infrastructure compatibility
**Feature Teams**:
- Implement routing logic
- Create feature flags for transition
- Run evaluations
- Implement staged rollout
- Monitor performance during transition
## Troubleshooting Guide
### Prompt Compatibility Issues
If you encounter prompt compatibility issues:
1. **Analyze Errors**:
- Enable "expanded AI logging" to capture model responses
- Check for "LLM didn't follow instructions" errors
- Review model outputs for unexpected patterns
1. **Resolve Issues**:
- Create new prompt version (following semantic versioning)
- Test prompt variations in evaluation environment
- Use feature flags to control prompt deployment
- Monitor performance during rollout
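Expanded AI logging is typically controlled by a feature flag. A minimal console sketch, assuming the flag is named `expanded_ai_logging`, scoped to a single user while reproducing the failure:

```ruby
# Assumption: expanded AI logging is toggled by the :expanded_ai_logging feature flag.
# Scoping the flag to one user keeps the volume of logged prompts and responses small.
user = User.find_by(username: 'your-username') # placeholder
Feature.enable(:expanded_ai_logging, user)

# ...reproduce the failing request, then inspect the AI gateway logs linked above...

Feature.disable(:expanded_ai_logging, user)
```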
### Example: Claude 3.5 to 3.7 Migration
For Claude 3.7 migrations:
- Create new version 2.0.0 prompt definition
- Implement feature flag for prompt version control
- Use AI Framework team's model configuration file
- Run evaluations to verify performance
- Roll out gradually and monitor
## AI Framework Team Migration Issue Template
The AI Framework team should create a main migration issue following this template:
```markdown
# [Model Name] Model Upgrade
## Overview
[Brief description of the new model and its improvements]
## Features to Update
[List of features affected by this migration, organized by category]
### Generally Available Features
- [Feature 1]
- [Feature 2]
### Beta Features
- [Beta Feature 1]
### Experimental Features
- [Experimental Feature 1]
## Required Changes
- Add model configuration file for model flexibility
- New prompt definition created to use the new model
- Feature flag created for controlled rollout
## Technical Details
- [Any technical specifics about this migration]
- [Impact on GitLab.com and GitLab Self-Managed instances]
## Implementation Steps
- [ ] Update model configurations in each feature
- [ ] Verify performance improvements
- [ ] Deploy updates
- [ ] Update documentation
## Timeline
Priority: [Priority level]
## References
- [Model Announcement]
- [Model Documentation]
- [GitLab Documentation]
- [Other relevant links]
## Proposed Solution
[Description of the high-level implementation approach]
## Implementation Details
Follow the issues below with the associated rollout plans:
| Feature | DRI | ETA | Issue Link |
|---------|-----|-----|------------|
| [Feature 1] | [@username] | [Date] | [Issue link] |
| [Feature 2] | [@username] | [Date] | [Issue link] |
```
See an example in our [Claude 3.7 Model Upgrade](https://gitlab.com/gitlab-org/gitlab/-/issues/521034) issue.
## References
- **Model Documentation**
- [Anthropic Model Documentation](https://docs.anthropic.com/claude/reference/versions)
- [Google Vertex AI Documentation](https://cloud.google.com/vertex-ai/docs/reference)
- **GitLab Resources**
- [GitLab AI Features - Default GitLab AI Vendor Models](https://duo-feature-list-754252.gitlab.io/)
- [AI Gateway Repository](https://gitlab.com/gitlab-org/modelops/applied-ml/code-suggestions/ai-assist)
- [Prompt Library](https://gitlab.com/gitlab-org/modelops/ai-model-validation-and-research/ai-evaluation/prompt-library)
- [AI Model Version Migration Initiative](https://gitlab.com/groups/gitlab-org/-/epics/15650)
---
stage: Create
group: Code Creation
info: Any user with at least the Maintainer role can merge updates to this content.
  For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
description: Code Suggestions documentation for developers interested in contributing
  features or bugfixes.
title: Code Suggestions development guidelines
---
## Code Suggestions development setup
The recommended setup for locally developing and debugging Code Suggestions is to have all 3 different components running:
- IDE Extension (for example, GitLab Workflow extension for VS Code).
- Main application configured correctly (for example, GDK).
- [AI gateway](https://gitlab.com/gitlab-org/modelops/applied-ml/code-suggestions/ai-assist).
This should enable everyone to locally see how any change made in an IDE is sent to the main application to be transformed into a prompt before being sent to the respective model.
### Setup instructions
1. Install and locally run the [GitLab Workflow extension for VS Code](https://gitlab.com/gitlab-org/gitlab-vscode-extension/-/blob/main/CONTRIBUTING.md#configuring-development-environment):
1. Add the `"gitlab.debug": true` info to the Code Suggestions development config:
1. In VS Code, go to the Extensions page and find "GitLab Workflow" in the list.
1. Open the extension settings by clicking the small cog icon and selecting the "Extension Settings" option.
1. Select the "GitLab: Debug" checkbox.
1. If you'd like to test that Code Suggestions is working from inside the GitLab Workflow extension for VS Code, then follow the [authenticate with GitLab steps](../../editor_extensions/visual_studio_code/setup.md#authenticate-with-gitlab) with your GDK inside the new window of VS Code that pops up when you run the "Run and Debug" command.
- Once setup is complete, to verify that you are hitting your local `/code_suggestions/completions` endpoint and not production, follow these steps:
1. Inside the new window, in the built-in terminal, select the "Output" tab, then select "GitLab Language Server" from the dropdown list on the right.
1. Open a new file inside of this VS Code window and begin typing to see Code Suggestions in action.
1. You will see completion request URLs being fetched that match the Git remote URL for your GDK.
### Setup instructions to use the GDK with Code Suggestions
See the [instructions for setting up GitLab Duo features in the local development environment](_index.md).
### Bulk assign users to Duo Pro/Duo Enterprise add-on
After purchasing the Duo add-on, existing eligible users can be assigned to, or unassigned from, the Duo `add_on_purchase` in bulk. There are a few ways to perform this action, which apply to both GitLab.com and GitLab Self-Managed instances:
1. [Duo users management UI](../../subscriptions/subscription-add-ons.md#assign-gitlab-duo-seats)
1. [GraphQL endpoint](../../api/graphql/assign_gitlab_duo_seats.md)
1. [Rake task](../../administration/raketasks/user_management.md#bulk-assign-users-to-gitlab-duo)
The above methods make use of the [BulkAssignService](https://gitlab.com/gitlab-org/gitlab/-/blob/master/ee/app/services/gitlab_subscriptions/duo/bulk_assign_service.rb) and [BulkUnassignService](https://gitlab.com/gitlab-org/gitlab/-/blob/master/ee/app/services/gitlab_subscriptions/duo/bulk_unassign_service.rb), which evaluate eligibility criteria before assigning or unassigning the passed users in a single SQL operation.
### Setting up Duo on your staging GitLab.com account
For more information, see [setting up Duo on your GitLab.com staging account](ai_development_license.md#setting-up-gitlab-duo-for-your-staging-gitlabcom-user-account).
### Video demonstrations of installing and using Code Suggestions in IDEs
<i class="fa fa-youtube-play youtube" aria-hidden="true"></i>
For more guidance, see the following video demonstrations of installing
and using Code Suggestions in:
- [VS Code](https://www.youtube.com/watch?v=bJ7g9IEa48I).
<!-- Video published on 2024-09-03 -->
- [IntelliJ IDEA](https://www.youtube.com/watch?v=WE9agcnGT6A).
<!-- Video published on 2024-09-03 -->
---
stage: AI-powered
group: AI Framework
info: Any user with at least the Maintainer role can merge updates to this content.
  For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: AI feature development playbook
---
This playbook outlines our approach to developing AI features at GitLab, similar to and concurrent with [the Build track of our product development flow](https://handbook.gitlab.com/handbook/product-development/product-development-flow/#build-track). It serves as a playbook for AI feature development and operational considerations.
## Getting Started
- Start with [an overview of the AI-powered stage](https://about.gitlab.com/direction/ai-powered/).
- Play around with [existing features](../../user/gitlab_duo/feature_summary.md) in your [local development environment](_index.md#instructions-for-setting-up-gitlab-duo-features-in-the-local-development-environment).
- When you're ready, proceed with the development flow below.
## AI Feature Development Flow
The AI feature development process consists of five key interdependent and iterative phases:
### Plan
This phase prepares AI features so they are ready to be built by engineering. It supplements the [plan phase of the build track of the product development flow](https://handbook.gitlab.com/handbook/product-development/product-development-flow/#build-phase-1-plan).
At this point, the customer problem should be well understood, either because of a clearly stated requirement,
or by working through the [product development flow validation track](https://handbook.gitlab.com/handbook/product-development/product-development-flow/#validation-track).
As part of this phase, teams decide if [approved models](../ai_architecture.md#models) satisfy the requirements of the new feature, or [submit a proposal for the approval of other models](../ai_architecture.md#supported-technologies). Teams also design or adopt testing and evaluation strategies, which includes identifying required datasets.
#### Key Activities
- Define AI feature requirements and success criteria
- Select models and assess their capabilities
- Plan testing and evaluation strategy
#### Resources
- [AI architecture overview](../ai_architecture.md)
### Develop
The develop phase, and the closely aligned test and evaluate phase, are where we build AI features,
address bugs or technical debt, and test the solutions before launching them. It supplements the [develop and test phase of the build track of the product development flow](https://handbook.gitlab.com/handbook/product-development/product-development-flow/#build-phase-2-develop--test).
This phase includes prompt engineering, where teams craft and refine prompts to achieve desired AI model behavior.
This often requires multiple iterations to optimize for accuracy, consistency, and user experience.
Development might include integrating chosen models with GitLab infrastructure through the AI Gateway,
and implementing API interfaces.
Teams must consider requirements for supporting [GitLab Duo Self-Hosted](../../administration/gitlab_duo_self_hosted/_index.md).
#### Key Activities
- [Local development environment setup](_index.md)
- [Prompt development and engineering](prompt_engineering.md)
- Model integration and API development
- [Feature flag implementation](_index.md#push-feature-flags-to-ai-gateway)
#### Resources
- [AI Gateway architecture design document](https://handbook.gitlab.com/handbook/engineering/architecture/design-documents/ai_gateway/)
- [AI Gateway API documentation](https://gitlab.com/gitlab-org/modelops/applied-ml/code-suggestions/ai-assist/-/blob/main/docs/api.md)
- [AI Gateway prompt registry](https://gitlab.com/gitlab-org/modelops/applied-ml/code-suggestions/ai-assist/-/blob/main/docs/aigw_prompt_registry.md)
- [Developing AI Features for Duo Self-Hosted](developing_ai_features_for_duo_self_hosted.md)
### Test & Evaluate
In the test and evaluate phase, we validate AI feature quality, performance, and security,
using [traditional automated testing practices](../testing_guide/_index.md), as well as evaluation of AI-generated content.
It supplements the [develop and test phase of the build track of the product development flow](https://handbook.gitlab.com/handbook/product-development/product-development-flow/#build-phase-2-develop--test).
Evaluation involves creating datasets that represent real-world usage scenarios to ensure comprehensive coverage of the feature's behavior.
Teams implement evaluation strategies covering multiple aspects of the quality of AI-generated content, as well as performance characteristics.
#### Key Activities
- [Functional testing](../testing_guide/testing_ai_features.md)
- Performance testing
- Security and safety validation
- [Dataset creation](https://gitlab.com/gitlab-org/modelops/ai-model-validation-and-research/ai-evaluation/datasets/-/blob/main/doc/guidelines/create_dataset.md) and [management](https://gitlab.com/gitlab-org/modelops/ai-model-validation-and-research/ai-evaluation/datasets/-/blob/main/doc/dataset_management.md)
- [Evaluation](ai_evaluation_guidelines.md)
### Launch & Monitor
This phase focuses on safely introducing AI features to production through controlled rollouts and comprehensive monitoring.
It supplements the [launch phase of the build track of the product development flow](https://handbook.gitlab.com/handbook/product-development/product-development-flow/#build-phase-3-launch).
We employ feature flags to control access and gradually expand user exposure,
starting with internal teams before broader incremental release.
Monitoring tracks technical metrics (latency, error rates, resource usage)
and AI-specific indicators (model performance, response quality, user satisfaction).
Alerting systems can be used to detect performance degradation, unusual patterns, or safety concerns that require immediate attention.
#### Key Activities
- [Feature flag controlled rollout](../feature_flags/controls.md)
- Production monitoring setup
- Performance tracking and alerting
- User feedback collection
- Quality assurance in production
#### Resources
- [AI Gateway release process](https://gitlab.com/gitlab-org/modelops/applied-ml/code-suggestions/ai-assist/-/blob/main/docs/release.md)
- [AI Gateway infrastructure runbook](https://gitlab.com/gitlab-com/runbooks/-/blob/master/docs/ai-gateway/README.md)
### Improve
This phase focuses on iteratively improving the feature based on data, user feedback, and changing requirements.
It supplements the [improve phase of the build track of the product development flow](https://handbook.gitlab.com/handbook/product-development/product-development-flow/#build-phase-4-improve).
We analyze real-world usage patterns and performance metrics to identify opportunities for improvement,
whether in prompt engineering, model selection, system architecture, or feature design.
User feedback should capture qualitative insights about user satisfaction.
Teams can iteratively refine prompts based on user interactions and feedback.
This phase includes model migrations as newer, more capable models become available.
#### Key Activities
- Performance analysis and optimization
- [Prompt iteration and refinement](prompt_engineering.md#prompt-tuning-for-llms-using-langsmith-and-anthropic-workbench-together--cef)
- [Model migration and upgrades](model_migration.md)
- Dataset enhancement and expansion
## Phase Interdependencies
Each phase can feed back into any or all earlier phases as development proceeds. The develop and test & evaluate phases are especially intertwined.
Examples of interdependencies include:
- **Evaluation** insights might require new development iterations.
- **Production monitoring** results may suggest architectural replanning.
- **User feedback** could inform evaluation strategy changes.
|
---
stage: AI-powered
group: Global Search
info: Any user with at least the Maintainer role can merge updates to this content.
  For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Embeddings
---
Embeddings are a way of representing data in a vectorized format, making it easy and efficient to find similar documents.
Currently, embeddings are only generated for issues, which enables features such as:
- [Issue search](https://gitlab.com/gitlab-org/gitlab/-/issues/440424)
- [Find similar issues](https://gitlab.com/gitlab-org/gitlab/-/issues/407385)
- [Find duplicate issues](https://gitlab.com/gitlab-org/gitlab/-/issues/407385)
- [Find similar/related issues for Zendesk tickets](https://gitlab.com/gitlab-org/gitlab/-/issues/411847)
- [Auto-Categorize Service Desk issues](https://gitlab.com/gitlab-org/gitlab/-/issues/409646)
## Architecture
Embeddings are stored in Elasticsearch which is also used for [Advanced Search](../advanced_search.md).
```mermaid
graph LR
A[database record] --> B[ActiveRecord callback]
B --> C[build embedding reference]
C -->|add to queue| N[queue]
E[cron worker every minute] <-->|pull from queue| N
E --> G[deserialize reference]
G --> H[generate embedding]
H <--> I[AI gateway]
I <--> J[Vertex API]
H --> K[upsert document with embedding]
K --> L[Elasticsearch]
```
The process is driven by `Search::Elastic::ProcessEmbeddingBookkeepingService` which adds and pulls from a Redis queue.
### Adding to the embedding queue
The following process description uses issues as an example.
An issue embedding is generated from the content `"issue with title '#{issue.title}' and description '#{issue.description}'"`.
Using ActiveRecord callbacks defined in [`Search::Elastic::IssuesSearch`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/ee/app/models/concerns/search/elastic/issues_search.rb), an [embedding reference](https://gitlab.com/gitlab-org/gitlab/-/blob/master/ee/lib/search/elastic/references/embedding.rb) is added to the embedding queue when the issue is created, or when its title or description is updated, provided that [embedding generation is available](https://gitlab.com/gitlab-org/gitlab/-/blob/master/ee/app/models/concerns/search/elastic/issues_search.rb#L38-47) for the issue.
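A minimal sketch of how that content string is assembled, based on the template above (the method name is illustrative; the real code lives in the embedding reference class):

```ruby
# Illustrative only: mirrors the documented template for issue embedding content.
def embedding_content(issue)
  "issue with title '#{issue.title}' and description '#{issue.description}'"
end
```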
### Pulling from the embedding queue
A `Search::ElasticIndexEmbeddingBulkCronWorker` cron worker runs every minute and does the following:
```mermaid
graph LR
A[cron] --> B{endpoint throttled?}
B -->|no| C[schedule 16 workers]
C -.->|each worker| D{endpoint throttled?}
D -->|no| E[fetch 19 references from queue]
E -.->|each reference| F[increment endpoint]
F --> G{endpoint throttled?}
G -->|no| H[call AI gateway to generate embedding]
```
This ensures that we never exceed the rate limit of 450 embeddings per minute, even with 16 concurrent processes generating embeddings at the same time.
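Conceptually, the limit works like a shared per-minute counter in Redis that every worker consults before calling the AI gateway. The sketch below is a simplification with illustrative names, not the actual limiter classes used by the bookkeeping service:

```ruby
require 'redis'

# Simplified illustration of the shared rate limit; method and key names are not
# the real GitLab implementation.
EMBEDDINGS_PER_MINUTE = 450

def bucket_key
  "embedding_calls:#{Time.now.utc.strftime('%H%M')}" # one counter per minute
end

def throttled?(redis)
  redis.get(bucket_key).to_i >= EMBEDDINGS_PER_MINUTE
end

def track_embedding_call(redis)
  redis.multi do |transaction|
    transaction.incr(bucket_key)
    transaction.expire(bucket_key, 120) # keep the bucket slightly longer than a minute
  end
end
```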
### Backfilling
An [Advanced Search migration](../search/advanced_search_migration_styleguide.md) is used to perform the backfill. It essentially adds references to the queue in batches which are then processed by the cron worker as described above.
## Adding a new embedding type
The following process outlines the steps to get embeddings generated and stored in Elasticsearch.
1. Do a cost and resource calculation to see if the Elasticsearch cluster can handle embedding generation or if it needs additional resources.
1. Decide where to store embeddings. Look at the [existing indices in Elasticsearch](../../integration/advanced_search/elasticsearch.md#advanced-search-index-scopes) and if there isn't a suitable existing index, [create a new index](../advanced_search.md#add-a-new-document-type-to-elasticsearch).
1. Add embedding fields to the index: [example](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/149209).
1. Update the way [content](https://gitlab.com/gitlab-org/gitlab/-/blob/616f92a2251fcadfec5ef3792ff3d2e4a879920a/ee/lib/search/elastic/references/embedding.rb#L43-59) is generated to accommodate the new type.
1. Add a new unit primitive: [here](https://gitlab.com/gitlab-org/modelops/applied-ml/code-suggestions/ai-assist/-/merge_requests/918) and [here](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/155835).
1. Use `Elastic::ApplicationVersionedSearch` to access callbacks and add the necessary checks for when to generate embeddings. See [`Search::Elastic::IssuesSearch`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/ee/app/models/concerns/search/elastic/issues_search.rb) for an example.
1. Backfill embeddings: [example](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/154940).
## Adding work item embeddings locally
### Prerequisites
1. [Make sure Elasticsearch is running](../advanced_search.md#setting-up-your-development-environment).
1. If you have an existing Elasticsearch setup, make sure the `AddEmbeddingToWorkItems` migration has been completed by executing the following until it returns:
```ruby
Elastic::MigrationWorker.new.perform
```
1. Make sure you can run [GitLab Duo features on your local environment](_index.md#instructions-for-setting-up-gitlab-duo-features-in-the-local-development-environment).
1. Ensure that running the following in a Rails console outputs an embedding (a vector of 768 dimensions). If it does not, there is a problem with the AI setup.
```ruby
Gitlab::Llm::VertexAi::Embeddings::Text.new('text', user: nil, tracking_context: {}, unit_primitive: 'semantic_search_issue').execute
```
### Running the backfill
To backfill work item embeddings for a project's work items, run the following in a Rails console:
```ruby
Gitlab::Duo::Developments::BackfillWorkItemEmbeddings.execute(project_id: project_id)
```
The task adds the work items to a queue and processes them in batches, indexing embeddings into Elasticsearch.
It respects a rate limit of 450 embeddings per minute. Reach out to `#g_global_search` in Slack if there are any issues.
### Verify
If the following returns 0, all work items for the project have embeddings:
```shell
curl "http://localhost:9200/gitlab-development-work_items/_count" \
--header "Content-Type: application/json" \
--data '{"query": {"bool": {"filter": [{"term": {"project_id": PROJECT_ID}}], "must_not": [{"exists": {"field": "embedding_0"}}]}}}' | jq '.count'
```
Replace `PROJECT_ID` with your project ID.
---
stage: AI-powered
group: AI Framework
info: To determine the technical writer assigned to the Stage/Group associated with
  this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Logged events
---
In addition to standard logging in the GitLab Rails Monolith instance, specialized logging is available for features based on large language models (LLMs).
## Events logged
<!-- markdownlint-disable -->
<!-- vale off -->
### Returning from Service due to validation
- Description: user not permitted to perform action
- Class: `Llm::BaseService`
- Ai_event_name: permission_denied
- Level: info
- Arguments:
- none
- Part of the system: abstraction_layer
- Expanded logging?: no
- Rails: yes
- Sidekiq: no
### Enqueuing CompletionWorker
- Description: scheduling completion worker in sidekiq
- Class: `Llm::BaseService`
- Ai_event_name: worker_enqueued
- Level: info
- Arguments:
- `user_id: message.user.id`
- `resource_id: message.resource&.id`
- `resource_class: message.resource&.class&.name`
- `request_id: message.request_id`
- `action_name: message.ai_action`
- `options: job_options`
- Part of the system: abstraction_layer
- Expanded logging?: yes
- Rails: yes
- Sidekiq: no
### aborting: missing resource
- Description: If there is no resource for slash command
- Class: `Llm::ChatService`
- Ai_event_name: missing_resource
- Level: info
- Arguments:
- none
- Part of the system: duo_chat
- Expanded logging?: no
- Rails: yes
- Sidekiq: no
### Performing CompletionService
- Description: performing completion
- Class: `Llm::Internal::CompletionService`
- Ai_event_name: completion_service_performed
- Level: info
- Arguments:
- `user_id: prompt_message.user.to_gid`
- `resource_id: prompt_message.resource&.to_gid`
- `action_name: prompt_message.ai_action`
- `request_id: prompt_message.request_id`
- `client_subscription_id: prompt_message.client_subscription_id`
- `completion_service_name: completion_class_name`
- Part of the system: abstraction_layer
- Expanded logging?: yes
- Rails: no
- Sidekiq: yes
### Answer from LLM response
- Description: Get answer from response
- Class: `Gitlab::Llm::Chain::Answer`
- Ai_event_name: answer_received
- Level: info
- Arguments:
- `llm_answer_content: content`
- Part of the system: duo_chat
- Expanded logging?: yes
- Rails: no
- Sidekiq: yes
### Final answer
- Description: Get final answer from response
- Class: `Gitlab::Llm::Chain::Answer`
- Ai_event_name: final_answer_received
- Level: info
- Arguments:
- `llm_answer_content: content`
- Part of the system: duo_chat
- Expanded logging?: yes
- Rails: no
- Sidekiq: yes
### Default final answer
- Description: Default final answer: I'm sorry, I couldn't respond in time. Please try a more specific request or enter /clear to start a new chat.
- Class: `Gitlab::Llm::Chain::Answer`
- Ai_event_name: default_final_answer_received
- Level: info
- Arguments:
- `error_code: "A6000"`
- Part of the system: duo_chat
- Expanded logging?: yes
- Rails: no
- Sidekiq: yes
### Error message/ "Error"
- Description: when answering with an error
- Class: `Gitlab::Llm::Chain::Answer`
- Ai_event_name: error_returned
- Level: error
- Arguments:
- `error: content`
- `error_code: error_code`
- `source: source`
- Part of the system: duo_chat
- Expanded logging?: no
- Rails: no
- Sidekiq: yes
### Received response from AI gateway
- Description: when response from AIGW is returned
- Class: `Gitlab::Llm::AiGateway::Client`
- Ai_event_name: response_received
- Level: info
- Arguments:
- `response_from_llm: response_body`
- Part of the system: abstraction_layer
- Expanded logging?: yes
- Rails: no
- Sidekiq: yes
### Received error from AI gateway
- Description: when error is returned from AIGW for streaming command
- Class: `Gitlab::Llm::AiGateway::Client`
- Ai_event_name: error_response_received
- Level: error
- Arguments:
- `response_from_llm: parsed_response.dig('detail', 0, 'msg')`
- Part of the system: abstraction_layer
- Expanded logging?: no
- Rails: no
- Sidekiq: yes
### Performing request to AI gateway
- Description: before performing request to the AI GW
- Class: `Gitlab::Llm::AiGateway::Client`
- Ai_event_name: performing_request
- Level: info
- Arguments:
- `url: url`
- `body: body`
- `timeout: timeout`
- `stream: stream`
- Part of the system: abstraction_layer
- Expanded logging?: yes
- Rails: no
- Sidekiq: yes
### Creating user access token
- Description: creating short-lived token in AIGW
- Class: `Gitlab::Llm::AiGateway::CodeSuggestionsClient`
- Ai_event_name: user_token_created
- Level: info
- Arguments:
- none
- Part of the system: code suggestions
- Expanded logging?: no
- Rails: yes
- Sidekiq: no
### Received response from Anthropic
- Description: Received response
- Class: `Gitlab::Llm::Anthropic::Client`
- Ai_event_name: response_received
- Level: info
- Arguments:
- `ai_request_type: request_type`
- `unit_primitive: unit_primitive`
- Part of the system: abstraction_layer
- Expanded logging?: no
- Rails: no
- Sidekiq: yes
### Response content
- Description: Content of response
- Class: `Gitlab::Llm::Anthropic::Client`
- Ai_event_name: response_received
- Level: info
- Arguments:
- `ai_request_type: request_type`
- `unit_primitive: unit_primitive`
- `response_from_llm: response_body`
- Part of the system: abstraction_layer
- Expanded logging?: yes
- Rails: no
- Sidekiq: yes
### Performing request to Anthropic
- Description: performing completion request
- Class: `Gitlab::Llm::Anthropic::Client`
- Ai_event_name: performing_request
- Level: info
- Arguments:
- `options: options`
- `ai_request_type: request_type`
- `unit_primitive: unit_primitive`
- Part of the system: abstraction_layer
- Expanded logging?: no
- Rails: no
- Sidekiq: yes
### Searching docs from AI gateway
- Description: performing search docs request
- Class: `Gitlab::Llm::AiGateway::DocsClient`
- Ai_event_name: performing_request
- Level: info
- Arguments:
- `options: options`
- Part of the system: duo_chat
- Expanded logging?: no
- Rails: no
- Sidekiq: yes
### Searched docs content from AI gateway
- Description: response from AIGW with docs
- Class: `Gitlab::Llm::AiGateway::DocsClient`
- Ai_event_name: response_received
- Level: info
- Arguments:
- `response_from_llm: response`
- Part of the system: duo_chat
- Expanded logging?: yes
- Rails: no
- Sidekiq: yes
### Json parsing error during Question Categorization
- Description: logged when json is not parsable
- Class: `Gitlab::Llm::AiGateway::Completions::CategorizeQuestions`
- Ai_event_name: error
- Level: error
- Arguments:
- none
- Part of the system: duo_chat
- Expanded logging?: no
- Rails: no
- Sidekiq: yes
### Response did not contain defined categories
- Description: logged when response is not containing one of the defined categories
- Class: `Gitlab::Llm::AiGateway::Completions::CategorizeQuestions`
- Ai_event_name: error
- Level: error
- Arguments:
- none
- Part of the system: duo_chat
- Expanded logging?: no
- Rails: no
- Sidekiq: yes
### Error response received while categorizing question
- Description: logged when response returned is not successful
- Class: `Gitlab::Llm::AiGateway::Completions::CategorizeQuestions`
- Ai_event_name: error
- Level: error
- Arguments:
- `error_type: response.dig('error', 'type')`
- Part of the system: duo_chat
- Expanded logging?: no
- Rails: no
- Sidekiq: yes
### Picked tool
- Description: information about tool picked by chat
- Class: `Gitlab::Llm::Chain::Tools::Tool`
- Ai_event_name: picked_tool
- Level: info
- Arguments:
- `duo_chat_tool: tool_class.to_s`
- Part of the system: duo_chat
- Expanded logging?: no
- Rails: no
- Sidekiq: yes
### Made request to AI Client
- Description: making request for chat
- Class: `Gitlab::Llm::Chain::Requests::AiGateway`
- Ai_event_name: response_received
- Level: info
- Arguments:
- `prompt: prompt[:prompt]`
- `response_from_llm: response`
- `unit_primitive: unit_primitive`
- Part of the system: duo_chat
- Expanded logging?: yes
- Rails: no
- Sidekiq: yes
### Streaming error
- Description: Error returned when streaming
- Class: `Gitlab::Llm::Chain::Requests::Anthropic`
- Ai_event_name: error_response_received
- Level: error
- Arguments:
- `error: data&.dig("error")`
- Part of the system: abstraction_layer
- Expanded logging?: no
- Rails: no
- Sidekiq: yes
### Got Final Result for documentation question content
- Description: got result for documentation question - content
- Class: `Gitlab::Llm::Chain::Tools::EmbeddingsCompletion`
- Ai_event_name: response_received
- Level: info
- Arguments:
- `prompt: final_prompt[:prompt]`
- `response_from_llm: final_prompt_result`
- Part of the system: duo_chat
- Expanded logging?: yes
- Rails: no
- Sidekiq: yes
### Streaming error
- Description: when error is returned from AIGW for streaming command in docs question
- Class: `Gitlab::Llm::Chain::Tools::EmbeddingsCompletion`
- Ai_event_name: error_response_received
- Level: error
- Arguments:
- `error: error.message`
- Part of the system: duo_chat
- Expanded logging?: no
- Rails: no
- Sidekiq: yes
### Answer already received from tool
- Description: when tool was already picked up (content: You already have the answer from #{self.class::NAME} tool, read carefully.)
- Class: `Gitlab::Llm::Chain::Tools::Tool`
- Ai_event_name: incorrect_response_received
- Level: info
- Arguments:
- `error_message: content`
- Part of the system: duo_chat
- Expanded logging?: yes
- Rails: no
- Sidekiq: yes
### Tool cycling detected
- Description: When tool is picked up again
- Class: `Gitlab::Llm::Chain::Tools::Tool`
- Ai_event_name: incorrect_response_received
- Level: info
- Arguments:
- `picked_tool: cls.class.to_s`
- Part of the system: duo_chat
- Expanded logging?: no
- Rails: no
- Sidekiq: yes
### Calling TanukiBot
- Description: performing documentation request
- Class: `Gitlab::Llm::Chain::Tools::GitlabDocumentation::Executor`
- Ai_event_name: documentation_question_initial_request
- Level: info
- Arguments:
- none
- Part of the system: duo_chat
- Expanded logging?: no
- Rails: no
- Sidekiq: yes
### Error finding #{resource_name}
- Description: when resource (issue/epic/mr) is not found
- Class: `Gitlab::Llm::Chain::Tools::Identifier`
- Ai_event_name: incorrect_response_received
- Level: error
- Arguments:
- `error_message: authorizer.message`
- Part of the system: duo_chat
- Expanded logging?: no
- Rails: no
- Sidekiq: yes
### Answer received from LLM
- Description: response from identifier
- Class: `Gitlab::Llm::Chain::Tools::Identifier`
- Ai_event_name: response_received
- Level: info
- Arguments:
- `response_from_llm: content`
- Part of the system: duo_chat
- Expanded logging?: yes
- Rails: no
- Sidekiq: yes
### Json parsing error
- Description: when json is malformed (Observation: JSON has an invalid format. Please retry)
- Class: `Gitlab::Llm::Chain::Tools::Identifier`
- Ai_event_name: error
- Level: error
- Arguments:
- none
- Part of the system: duo_chat
- Expanded logging?: no
- Rails: no
- Sidekiq: yes
### Resource already identified
- Description: already identified resource (You already have identified the #{resource_name} #{resource.to_global_id}, read carefully.)
- Class: `Gitlab::Llm::Chain::Tools::Identifier`
- Ai_event_name: incorrect_response_received
- Level: info
- Arguments:
- `error_message: content`
- Part of the system: duo_chat
- Expanded logging?: yes
- Rails: no
- Sidekiq: yes
### Supported Issuable Typees Ability Allowed
- Description: logging the ability (policy.can?) for the issue/epic
- Class: `Gitlab::Llm::Chain::Tools::SummarizeComments::Executor`
- Ai_event_name: permission
- Level: info
- Arguments:
- `allowed: ability`
- Part of the system: feature
- Expanded logging?: yes
- Rails: no
- Sidekiq: yes
### Supported Issuable Typees Ability Allowed
- Description: logging the ability (policy.can?) for the issue/epic
- Class: `Gitlab::Llm::Chain::Tools::SummarizeComments::ExecutorOld`
- Ai_event_name: permission
- Level: info
- Arguments:
- `allowed: ability`
- Part of the system: feature
- Expanded logging?: yes
- Rails: no
- Sidekiq: yes
### Answer content for summarize_comments
- Description: Answer for summarize comments feature
- Class: `Gitlab::Llm::Chain::Tools::SummarizeComments::ExecutorOld`
- Ai_event_name: response_received
- Level: info
- Arguments:
- `response_from_llm: content`
- Part of the system: feature
- Expanded logging?: yes
- Rails: no
- Sidekiq: yes
### Content of the prompt from chat request
- Description: chat-related request
- Class: `Gitlab::Llm::Chain::Concerns::AiDependent`
- Ai_event_name: prompt_content
- Level: info
- Arguments:
- `prompt: prompt_text`
- Part of the system: duo_chat
- Expanded logging?: yes
- Rails: no
- Sidekiq: yes
### "Too many requests, will retry in #{delay} seconds"
- Description: When entered in exponential backoff loop
- Class: `Gitlab::Llm::Chain::Concerns::ExponentialBackoff`
- Ai_event_name: retrying_request
- Level: info
- Arguments:
- none
- Part of the system: abstraction_layer
- Expanded logging?: no
- Rails: no
- Sidekiq: yes
### Resource not found
- Description: Resource not found/not authorized
- Class: `Gitlab::Llm::Utils::Authorizer`
- Ai_event_name: permission_denied
- Level: info
- Arguments:
- `error_code: "M3003"`
- Part of the system: abstraction_layer
- Expanded logging?: no
- Rails: no
- Sidekiq: yes
### No access to Duo Chat
- Description: No access to duo chat
- Class: `Gitlab::Llm::Utils::Authorizer`
- Ai_event_name: permission_denied
- Level: info
- Arguments:
- `error_code: "M3004"`
- Part of the system: abstraction_layer
- Expanded logging?: no
- Rails: no
- Sidekiq: yes
### AI is disabled
- Description: AI is not enabled for container
- Class: `Gitlab::Llm::Utils::Authorizer`
- Ai_event_name: permission_denied
- Level: info
- Arguments:
- `error_code: "M3002"`
- Part of the system: abstraction_layer
- Expanded logging?: no
- Rails: no
- Sidekiq: yes
### Performing request to Vertex
- Description: performing request
- Class: `Gitlab::Llm::VertexAi::Client`
- Ai_event_name: performing_request
- Level: info
- Arguments:
- `unit_primitive: unit_primitive`
- `options: config`
- Part of the system: abstraction_layer
- Expanded logging?: no
- Rails: no
- Sidekiq: yes
### Response content
- Description: response from aigw - vertex -content
- Class: `Gitlab::Llm::VertexAi::Client`
- Ai_event_name: response_received
- Level: info
- Arguments:
- `unit_primitive: unit_primitive`
- `response_from_llm: response.to_json`
- Part of the system: abstraction_layer
- Expanded logging?: yes
- Rails: no
- Sidekiq: yes
### Received response from Vertex
- Description: response from aigw - vertex
- Class: `Gitlab::Llm::VertexAi::Client`
- Ai_event_name: response_received
- Level: info
- Arguments:
- `unit_primitive: unit_primitive`
- Part of the system: abstraction_layer
- Expanded logging?: no
- Rails: no
- Sidekiq: yes
### Empty response from Vertex
- Description: empty response from aigw - vertex
- Class: `Gitlab::Llm::VertexAi::Client`
- Ai_event_name: empty_response_received
- Level: error
- Arguments:
- `unit_primitive: unit_primitive`
- Part of the system: abstraction_layer
- Expanded logging?: no
- Rails: no
- Sidekiq: yes
### Surface an unknown event as a final answer to the user
- Description: unknown event
- Class: `Gitlab::Llm::Chain::Agents::SingleActionExecutor`
- Ai_event_name: unknown_event
- Level: warn
- Arguments:
- none
- Part of the system: duo_chat
- Expanded logging?: no
- Rails: no
- Sidekiq: yes
### Failed to find a tool in GitLab Rails
- Description: failed to find a tool
- Class: `Gitlab::Llm::Chain::Agents::SingleActionExecutor`
- Ai_event_name: tool_not_find
- Level: error
- Arguments:
- `tool_name: tool_name`
- Part of the system: duo_chat
- Expanded logging?: no
- Rails: no
- Sidekiq: yes
### Received an event from v2/chat/agent
- Description: Received event
- Class: `Gitlab::Duo::Chat::StepExecutor`
- Ai_event_name: event_received
- Level: info
- Arguments:
- `event: event`
- Part of the system: duo_chat
- Expanded logging?: yes
- Rails: no
- Sidekiq: yes
### Failed to update observation
- Description: Failed to update observation
- Class: `Gitlab::Duo::Chat::StepExecutor`
- Ai_event_name: agent_steps_empty
- Level: error
- Arguments:
- none
- Part of the system: duo_chat
- Expanded logging?: no
- Rails: no
- Sidekiq: yes
### Request to v2/chat/agent
- Description: request
- Class: `Gitlab::Duo::Chat::StepExecutor`
- Ai_event_name: performing_request
- Level: info
- Arguments:
- `params: params`
- Part of the system: duo_chat
- Expanded logging?: yes
- Rails: no
- Sidekiq: yes
### Finished streaming from v2/chat/agent
- Description: finished streaming
- Class: `Gitlab::Duo::Chat::StepExecutor`
- Ai_event_name: streaming_finished
- Level: info
- Arguments:
- none
- Part of the system: duo_chat
- Expanded logging?: no
- Rails: no
- Sidekiq: yes
### Received error from Duo Chat Agent
- Description: Error returned when streaming
- Class: `Gitlab::Duo::Chat::StepExecutor`
- Ai_event_name: error_returned
- Level: error
- Arguments:
- `status: response.code`
- Part of the system: duo_chat
- Expanded logging?: no
- Rails: no
- Sidekiq: yes
### Failed to parse a chunk from Duo Chat Agent
- Description: failed to parse a chunk
- Class: `Gitlab::Duo::Chat::AgentEventParser`
- Ai_event_name: parsing_error
- Level: warn
- Arguments:
- `event_json_size: event_json.length`
- Part of the system: duo_chat
- Expanded logging?: no
- Rails: no
- Sidekiq: yes
### Failed to find the event class in GitLab-Rails
- Description: no event class
- Class: `Gitlab::Duo::Chat::AgentEventParser`
- Ai_event_name: parsing_error
- Level: error
- Arguments:
- `event_type: event['type']`
- Part of the system: duo_chat
- Expanded logging?: no
- Rails: no
- Sidekiq: yes
<!-- markdownlint-enable -->
<!-- vale on -->
---
stage: AI-powered
group: AI Framework
info: Any user with at least the Maintainer role can merge updates to this content.
  For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: LLM logging
---
In addition to standard logging in the GitLab Rails Monolith instance, specialized logging is available for features based on large language models (LLMs).
## Logged events
Currently logged events are documented [here](logged_events.md).
## Implementation
### Logger Class
To implement LLM-specific logging, use the `Gitlab::Llm::Logger` class.
### Privacy Considerations
**Important**: User inputs and complete prompts containing user data must not be logged unless explicitly permitted.
## Feature Flag
### For GitLab.com
A feature flag named `expanded_ai_logging` controls the logging of sensitive data.
### For GitLab Self-Managed instances
The instance setting `::Ai::Setting.instance.enabled_instance_verbose_ai_logs` controls the logging of sensitive data.
Use the `conditional_info` helper method for conditional logging based on the status of the feature flag or the instance setting:
- If the feature flag is enabled for the current user, or the instance setting is enabled, the information is logged at the `info` level, including the optional parameters (logs are accessible in Kibana).
- Otherwise, the information is still logged at the `info` level, but without the optional parameters (logs are accessible in Kibana, but contain only the obligatory fields).
## Best Practices
When implementing logging for LLM features, consider the following:
- Identify critical information for debugging purposes.
- Ensure compliance with privacy requirements by not logging sensitive user data without proper authorization.
- Use the `conditional_info` helper method to respect the `expanded_ai_logging` feature flag.
- Structure your logs to provide meaningful insights for troubleshooting and analysis.
## Example Usage
```ruby
# including concern that handles logging
include Gitlab::Llm::Concerns::Logger
# Logging potentially sensitive information
log_conditional_info(user, message: "User prompt processed", event_name: 'ai_event', ai_component: 'abstraction_layer', prompt: sanitized_prompt)

# Logging application error information
log_error(user, message: "System application error", event_name: 'ai_event', ai_component: 'abstraction_layer', error_message: sanitized_error_message)
```
**Important**: Familiarize yourself with our [Data Retention Policy](../../user/gitlab_duo/data_usage.md#data-retention) and make sure we are not logging user input or LLM-generated output.
---
stage: AI-powered
group: AI Framework
info: Any user with at least the Maintainer role can merge updates to this content.
  For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: AI features based on 3rd-party integrations
---
GitLab Duo features are powered by AI models and integrations. This document provides an overview of developing with AI features in GitLab.
For detailed instructions on setting up GitLab Duo licensing in your development environment, see [GitLab Duo licensing for local development](ai_development_license.md).
## Instructions for setting up GitLab Duo features in the local development environment
### Required: Configure licenses
See [GitLab Duo licensing for local development](ai_development_license.md).
### Required: Install AI gateway
**Why**: Duo features (except for Duo Workflow) route LLM requests through the AI gateway.
**How**:
Follow [these instructions](https://gitlab.com/gitlab-org/gitlab-development-kit/-/blob/main/doc/howto/gitlab_ai_gateway.md)
to install the AI gateway with GDK.
### Required: Run `gitlab:duo:setup` script
**Why**: This ensures that your instance or group has the correct licenses, settings, and feature flags to test Duo features locally.
**How**:
1. GitLab.com (SaaS) mode
```shell
GITLAB_SIMULATE_SAAS=1 bundle exec 'rake gitlab:duo:setup'
```
This:
- Creates a test group called `gitlab-duo`, which contains a project called `test`
- Applies an Ultimate license to the group
- Sets up Duo Enterprise seats for the group
- Enables all feature flags for the group
- Updates group settings to enable all available GitLab Duo features
Alternatively, if you want to add GitLab Duo Pro licenses for the group instead (which only enables a subset of features), you can run:
```shell
GITLAB_SIMULATE_SAAS=1 bundle exec 'rake gitlab:duo:setup[duo_pro]'
```
1. GitLab Self-Managed / Dedicated mode
```shell
GITLAB_SIMULATE_SAAS=0 bundle exec 'rake gitlab:duo:setup'
```
This:
- Creates a test group called `gitlab-duo`, which contains a project called `test`
- Applies an Ultimate license to the instance
- Sets up Duo Enterprise seats for the instance
- Enables all feature flags for the instance
- Updates instance settings to enable all available GitLab Duo features
Alternatively, if you want to add GitLab Duo Pro add-on for the instance instead (which only enables a subset of features), you can run:
```shell
GITLAB_SIMULATE_SAAS=0 bundle exec 'rake gitlab:duo:setup[duo_pro]'
```
## Tips for local development
1. When responses are taking too long to appear in the user interface, consider
restarting Sidekiq by running `gdk restart rails-background-jobs`. If that
doesn't work, try `gdk kill` and then `gdk start`.
1. Alternatively, bypass Sidekiq entirely and run the service synchronously.
This can help with debugging errors as GraphQL errors are now available in
the network inspector instead of the Sidekiq logs. To do that, temporarily alter
the `perform_for` method in `Llm::CompletionWorker` class by changing
`perform_async` to `perform_inline`.
## Feature development (Abstraction Layer)
### Feature flags
Apply the following feature flags to any AI feature work:
- A general flag (`ai_global_switch`) that applies to all other AI features. It's enabled by default.
- A flag specific to that feature. The feature flag name [must be different](../feature_flags/_index.md#feature-flags-for-licensed-features) than the licensed feature name.
See the [feature flag tracker epic](https://gitlab.com/groups/gitlab-org/-/epics/10524) for the list of all feature flags and how to use them.
### Push feature flags to AI gateway
You can push [feature flags](../feature_flags/_index.md) to the AI gateway. This is helpful for gradually rolling out user-facing changes even if the feature resides in the AI gateway.
See the following example:
```ruby
# Push a feature flag state to AI gateway.
Gitlab::AiGateway.push_feature_flag(:new_prompt_template, user)
```
Later, you can use the feature flag state in AI gateway in the following way:
```python
from ai_gateway.feature_flags import is_feature_enabled

# Check if the feature flag "new_prompt_template" is enabled.
if is_feature_enabled('new_prompt_template'):
    ...  # Build a prompt from the new prompt template
else:
    ...  # Build a prompt from the old prompt template
```
**IMPORTANT**: At the [cleaning up](../feature_flags/controls.md#cleaning-up) step, remove the feature flag in AI gateway repository **before** removing the flag in GitLab-Rails repository.
If you clean up the flag in the GitLab-Rails repository first, the feature flag in the AI gateway is disabled immediately because disabled is its default state, so you might encounter surprising behavior.
**IMPORTANT**: Cleaning up the feature flag in AI gateway will immediately distribute the change to all GitLab instances, including GitLab.com, GitLab Self-Managed, and GitLab Dedicated.
**Technical details**:
- When `push_feature_flag` runs on an enabled feature flag, the name of the flag is cached in the current context,
which is later attached to the `x-gitlab-enabled-feature-flags` HTTP header when `GitLab-Sidekiq/Rails` sends requests to AI gateway.
- When frontend clients (for example, VS Code Extension or LSP) request a [User JWT](../cloud_connector/architecture.md#ai-gateway) (UJWT)
for direct AI gateway communication, GitLab returns:
- Public headers (including `x-gitlab-enabled-feature-flags`).
- The generated UJWT (1-hour expiration).
Frontend clients must regenerate the UJWT upon expiration. Backend changes such as feature flag updates through [ChatOps](../feature_flags/controls.md) cause the header values to become stale; they are refreshed at the next UJWT generation.
Similarly, we also have [`push_frontend_feature_flag`](../feature_flags/_index.md) to push feature flags to the frontend.
### GraphQL API
To connect to the AI provider API using the Abstraction Layer, use an extendable
GraphQL API called [`aiAction`](https://gitlab.com/gitlab-org/gitlab/blob/master/ee/app/graphql/mutations/ai/action.rb).
The `input` accepts key/value pairs, where the `key` is the action that needs to
be performed. We only allow one AI action per mutation request.
Example of a mutation:
```graphql
mutation {
aiAction(input: {summarizeComments: {resourceId: "gid://gitlab/Issue/52"}}) {
clientMutationId
}
}
```
As an example, assume we want to build an "explain code" action. To do this, we extend the `input` with a new key,
`explainCode`. The mutation would look like this:
```graphql
mutation {
aiAction(
input: {
explainCode: { resourceId: "gid://gitlab/MergeRequest/52", code: "foo() { console.log() }" }
}
) {
clientMutationId
}
}
```
The GraphQL API then uses the [Anthropic Client](https://gitlab.com/gitlab-org/gitlab/-/blob/master/ee/lib/gitlab/llm/anthropic/client.rb)
to send the response.
#### How to receive a response
The API requests to AI providers are handled in a background job. We therefore do not keep the request alive, and the frontend needs to match the request to the response from the subscription.
{{< alert type="warning" >}}
Determining the right response to a request can cause problems when only `userId` and `resourceId` are used. For example, when two AI features use the same `userId` and `resourceId` both subscriptions will receive the response from each other. To prevent this interference, we introduced the `clientSubscriptionId`.
{{< /alert >}}
To match a response on the `aiCompletionResponse` subscription, you can provide a `clientSubscriptionId` to the `aiAction` mutation.
- The `clientSubscriptionId` must be unique per feature and within a page so that it does not interfere with other AI features. We recommend using a `UUID`.
- The `clientSubscriptionId` is only used for broadcasting the `aiCompletionResponse` when it is provided as part of the `aiAction` mutation.
- If the `clientSubscriptionId` is not provided, only `userId` and `resourceId` are used for the `aiCompletionResponse`.
As an example mutation for summarizing comments, we provide a `randomId` as part of the mutation:
```graphql
mutation {
aiAction(
input: {
summarizeComments: { resourceId: "gid://gitlab/Issue/52" }
clientSubscriptionId: "randomId"
}
) {
clientMutationId
}
}
```
In our component, we then listen on the `aiCompletionResponse` using the `userId`, `resourceId` and `clientSubscriptionId` (`"randomId"`):
```graphql
subscription aiCompletionResponse(
$userId: UserID
$resourceId: AiModelID
$clientSubscriptionId: String
) {
aiCompletionResponse(
userId: $userId
resourceId: $resourceId
clientSubscriptionId: $clientSubscriptionId
) {
content
errors
}
}
```
The [subscription for Chat](duo_chat.md#graphql-subscription) behaves differently.
To avoid having many concurrent subscriptions, you should also subscribe only after the mutation is sent, by using [`skip()`](https://apollo.vuejs.org/guide-option/subscriptions.html#skipping-the-subscription).
##### Clarifying different ID parameters
When working with the `aiAction` mutation, several ID parameters are used for routing requests and responses correctly. Here's what each parameter does:
- **user_id** (required)
- Purpose: Identifies and authenticates the requesting user
- Used for: Permission checks, request attribution, and response routing
- Example: `gid://gitlab/User/123`
- Note: This ID is automatically included by the GraphQL API framework
- **client_subscription_id** (recommended for streaming or multiple features)
- Client-generated UUID for tracking specific request/response pairs
- Required when using streaming responses or when multiple AI features share the same page
- Example: `"9f5dedb3-c58d-46e3-8197-73d653c71e69"`
- Can be omitted for simple, isolated requests with no streaming
- **resource_id** (contextual - required for some features)
- Purpose: References a specific GitLab entity (project, issue, MR) that provides context for the AI operation
- Used for: Permission verification and contextual information gathering
- Real example: `"gid://gitlab/Issue/164723626"`
- Note: Some features may not require a specific resource
- **project_id** (contextual - required for some features)
- Purpose: Identifies the project context for the AI operation
- Used for: Project-specific permission checks and context
- Real example: `"gid://gitlab/Project/278964"`
- Note: Some features may not require a specific project
#### Current abstraction layer flow
The following graph uses VertexAI as an example. You can use different providers.
```mermaid
flowchart TD
A[GitLab frontend] -->B[AiAction GraphQL mutation]
B --> C[Llm::ExecuteMethodService]
C --> D[One of services, for example: Llm::GenerateSummaryService]
D -->|scheduled| E[AI worker:Llm::CompletionWorker]
E -->F[::Gitlab::Llm::Completions::Factory]
F -->G[#96;::Gitlab::Llm::VertexAi::Completions::...#96; class using #96;::Gitlab::Llm::Templates::...#96; class]
G -->|calling| H[Gitlab::Llm::VertexAi::Client]
H --> |response| I[::Gitlab::Llm::GraphqlSubscriptionResponseService]
I --> J[GraphqlTriggers.ai_completion_response]
J --> K[::GitlabSchema.subscriptions.trigger]
```
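For orientation, triggering this flow from a Rails console looks roughly like the following. This is a hedged sketch: the exact constructor arguments of `Llm::ExecuteMethodService` may differ from what is shown, so treat the parameter list as illustrative rather than authoritative.

```ruby
# Illustrative only: kick off an AI action through the abstraction layer.
user  = User.find_by(username: 'root')
issue = Issue.find(52)

# The method name corresponds to the key used in the aiAction GraphQL mutation,
# for example :summarize_comments. The service schedules Llm::CompletionWorker,
# which eventually calls the provider client shown in the diagram above.
Llm::ExecuteMethodService.new(user, issue, :summarize_comments, {}).execute
```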
## Reuse the existing AI components for multiple models
We strive to optimize AI components, such as prompts, input/output parsers, and tools/function-calling, for each LLM;
however, diverging the components for each model could increase the maintenance overhead.
Hence, it's generally advised to reuse the existing components for multiple models as long as doing so doesn't degrade feature quality.
Here are the rules of thumb:
1. Iterate on the existing prompt template for multiple models. Do _NOT_ introduce a new one unless it causes a quality degradation for a particular model.
1. Iterate on the existing input/output parsers and tools/functions-calling for multiple models. Do _NOT_ introduce a new one unless it causes a quality degradation for a particular model.
1. If a quality degradation is detected for a particular model, the shared component should be diverged for the particular model.
An [example](https://gitlab.com/gitlab-org/modelops/applied-ml/code-suggestions/ai-assist/-/issues/713) of this case is that we can apply Claude specific CoT optimization to the other models such as Mixtral as long as it doesn't cause a quality degradation.
## Monitoring
- Error ratio and response latency apdex for each AI action can be found on the [Sidekiq Service dashboard](https://dashboards.gitlab.net/d/sidekiq-main/sidekiq-overview?orgId=1) under **SLI Detail: `llm_completion`**.
- Spent tokens, usage of each AI feature, and other statistics can be found on the [periscope dashboard](https://app.periscopedata.com/app/gitlab/1137231/Ai-Features).
- [AI gateway logs](https://log.gprd.gitlab.net/app/r/s/zKEel).
- [AI gateway metrics](https://dashboards.gitlab.net/d/ai-gateway-main/ai-gateway3a-overview?orgId=1).
- [Feature usage dashboard via proxy](https://log.gprd.gitlab.net/app/r/s/egybF).
## Security
Refer to the [secure coding guidelines for Artificial Intelligence (AI) features](../secure_coding_guidelines.md#artificial-intelligence-ai-features).
## Help
- [Here's how to reach us!](https://handbook.gitlab.com/handbook/engineering/development/data-science/ai-powered/ai-framework/#-how-to-reach-us)
- View [guidelines](duo_chat.md) for working with GitLab Duo Chat.
|
---
stage: AI-powered
group: AI Framework
info: Any user with at least the Maintainer role can merge updates to this content.
For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: AI features based on 3rd-party integrations
breadcrumbs:
- doc
- development
- ai_features
---
GitLab Duo features are powered by AI models and integrations. This document provides an overview of developing with AI features in GitLab.
For detailed instructions on setting up GitLab Duo licensing in your development environment, see [GitLab Duo licensing for local development](ai_development_license.md).
## Instructions for setting up GitLab Duo features in the local development environment
### Required: Configure licenses
See [GitLab Duo licensing for local development](ai_development_license.md).
### Required: Install AI gateway
**Why**: Duo features (except for Duo Workflow) route LLM requests through the AI gateway.
**How**:
Follow [these instructions](https://gitlab.com/gitlab-org/gitlab-development-kit/-/blob/main/doc/howto/gitlab_ai_gateway.md)
to install the AI gateway with GDK.
### Required: Run `gitlab:duo:setup` script
**Why**: This ensures that your instance or group has the correct licenses, settings, and feature flags to test Duo features locally.
**How**:
1. GitLab.com (SaaS) mode
```shell
GITLAB_SIMULATE_SAAS=1 bundle exec 'rake gitlab:duo:setup'
```
This:
- Creates a test group called `gitlab-duo`, which contains a project called `test`
- Applies an Ultimate license to the group
- Sets up Duo Enterprise seats for the group
- Enables all feature flags for the group
- Updates group settings to enable all available GitLab Duo features
Alternatively, if you want to add GitLab Duo Pro licenses for the group instead (which only enables a subset of features), you can run:
```shell
GITLAB_SIMULATE_SAAS=1 bundle exec 'rake gitlab:duo:setup[duo_pro]'
```
1. GitLab Self-Managed / Dedicated mode
```shell
GITLAB_SIMULATE_SAAS=0 bundle exec 'rake gitlab:duo:setup'
```
This:
- Creates a test group called `gitlab-duo`, which contains a project called `test`
- Applies an Ultimate license to the instance
- Sets up Duo Enterprise seats for the instance
- Enables all feature flags for the instance
- Updates instance settings to enable all available GitLab Duo features
Alternatively, if you want to add GitLab Duo Pro add-on for the instance instead (which only enables a subset of features), you can run:
```shell
GITLAB_SIMULATE_SAAS=0 bundle exec 'rake gitlab:duo:setup[duo_pro]'
```
## Tips for local development
1. When responses are taking too long to appear in the user interface, consider
restarting Sidekiq by running `gdk restart rails-background-jobs`. If that
doesn't work, try `gdk kill` and then `gdk start`.
1. Alternatively, bypass Sidekiq entirely and run the service synchronously.
This can help with debugging errors as GraphQL errors are now available in
the network inspector instead of the Sidekiq logs. To do that, temporarily alter
   the `perform_for` method in the `Llm::CompletionWorker` class by changing
   `perform_async` to `perform_inline`, as sketched below.
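A minimal sketch of that temporary, local-only edit (the real `perform_for` body does more than this; only the `perform_async` to `perform_inline` swap matters):
```ruby
# Temporary, local-only change in ee/app/workers/llm/completion_worker.rb.
# The real method body is more involved; this sketch only shows the swap.
def self.perform_for(...)
  # perform_async(...)  # original: enqueue the completion as a Sidekiq job
  perform_inline(...)   # debugging: run the completion synchronously
end
```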
## Feature development (Abstraction Layer)
### Feature flags
Apply the following feature flags to any AI feature work:
- A general flag (`ai_global_switch`) that applies to all other AI features. It's enabled by default.
- A flag specific to that feature. The feature flag name [must be different](../feature_flags/_index.md#feature-flags-for-licensed-features) than the licensed feature name.
See the [feature flag tracker epic](https://gitlab.com/groups/gitlab-org/-/epics/10524) for the list of all feature flags and how to use them.
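As a rough sketch, guarding an AI action then combines both flags (the `:summarize_my_code` flag name below is made up for illustration):
```ruby
# Sketch only: an AI action runs when the global switch and the
# feature-specific flag are both enabled. The :summarize_my_code flag name
# is hypothetical, and the exact Feature.enabled? options depend on how
# each flag is defined.
def ai_feature_enabled?(user)
  Feature.enabled?(:ai_global_switch, type: :ops) &&
    Feature.enabled?(:summarize_my_code, user)
end
```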
### Push feature flags to AI gateway
You can push [feature flags](../feature_flags/_index.md) to AI gateway. This is helpful to gradually roll out user-facing changes even if the feature resides in AI gateway.
See the following example:
```ruby
# Push a feature flag state to AI gateway.
Gitlab::AiGateway.push_feature_flag(:new_prompt_template, user)
```
Later, you can use the feature flag state in AI gateway in the following way:
```python
from ai_gateway.feature_flags import is_feature_enabled
# Check if the feature flag "new_prompt_template" is enabled.
if is_feature_enabled('new_prompt_template'):
    # Build a prompt from the new prompt template
    ...
else:
    # Build a prompt from the old prompt template
    ...
```
**IMPORTANT**: At the [cleaning up](../feature_flags/controls.md#cleaning-up) step, remove the feature flag in the AI gateway repository **before** removing the flag in the GitLab-Rails repository.
If you clean up the flag in the GitLab-Rails repository first, the feature flag in AI gateway is disabled immediately because that is its default state, so you might encounter surprising behavior.
**IMPORTANT**: Cleaning up the feature flag in AI gateway will immediately distribute the change to all GitLab instances, including GitLab.com, GitLab Self-Managed, and GitLab Dedicated.
**Technical details**:
- When `push_feature_flag` runs on an enabled feature flag, the name of the flag is cached in the current context,
which is later attached to the `x-gitlab-enabled-feature-flags` HTTP header when `GitLab-Sidekiq/Rails` sends requests to AI gateway.
- When frontend clients (for example, VS Code Extension or LSP) request a [User JWT](../cloud_connector/architecture.md#ai-gateway) (UJWT)
for direct AI gateway communication, GitLab returns:
- Public headers (including `x-gitlab-enabled-feature-flags`).
- The generated UJWT (1-hour expiration).
Frontend clients must regenerate the UJWT upon expiration. Backend changes such as feature flag updates through [ChatOps](../feature_flags/controls.md) cause the header values to become stale. These header values are refreshed at the next UJWT generation.
Similarly, we also have [`push_frontend_feature_flag`](../feature_flags/_index.md) to push feature flags to frontend.
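For example, a frontend flag is typically pushed from a controller (a sketch; the controller and flag names are hypothetical):
```ruby
# Sketch: expose a feature flag's state to the frontend (via gon.features).
# The controller and flag names here are placeholders.
class Projects::AiSettingsController < Projects::ApplicationController
  before_action do
    push_frontend_feature_flag(:new_prompt_template, current_user)
  end
end
```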
### GraphQL API
To connect to the AI provider API using the Abstraction Layer, use an extendable
GraphQL API called [`aiAction`](https://gitlab.com/gitlab-org/gitlab/blob/master/ee/app/graphql/mutations/ai/action.rb).
The `input` accepts key/value pairs, where the `key` is the action that needs to
be performed. We only allow one AI action per mutation request.
Example of a mutation:
```graphql
mutation {
aiAction(input: {summarizeComments: {resourceId: "gid://gitlab/Issue/52"}}) {
clientMutationId
}
}
```
As an example, assume we want to build an "explain code" action. To do this, we extend the `input` with a new key,
`explainCode`. The mutation would look like this:
```graphql
mutation {
aiAction(
input: {
explainCode: { resourceId: "gid://gitlab/MergeRequest/52", code: "foo() { console.log() }" }
}
) {
clientMutationId
}
}
```
The GraphQL API then uses the [Anthropic Client](https://gitlab.com/gitlab-org/gitlab/-/blob/master/ee/lib/gitlab/llm/anthropic/client.rb)
to send the request to the AI provider.
#### How to receive a response
The API requests to AI providers are handled in a background job. We therefore do not keep the request alive, and the frontend needs to match the request to the response from the subscription.
{{< alert type="warning" >}}
Determining the right response to a request can cause problems when only `userId` and `resourceId` are used. For example, when two AI features use the same `userId` and `resourceId`, both subscriptions will receive the response from each other. To prevent this interference, we introduced the `clientSubscriptionId`.
{{< /alert >}}
To match a response on the `aiCompletionResponse` subscription, you can provide a `clientSubscriptionId` to the `aiAction` mutation.
- The `clientSubscriptionId` should be unique per feature and within a page to not interfere with other AI features. We recommend using a `UUID`.
- The `clientSubscriptionId` is used for broadcasting the `aiCompletionResponse` only when it is provided as part of the `aiAction` mutation.
- If the `clientSubscriptionId` is not provided, only `userId` and `resourceId` are used for the `aiCompletionResponse`.
As an example mutation for summarizing comments, we provide a `randomId` as part of the mutation:
```graphql
mutation {
aiAction(
input: {
summarizeComments: { resourceId: "gid://gitlab/Issue/52" }
clientSubscriptionId: "randomId"
}
) {
clientMutationId
}
}
```
In our component, we then listen on the `aiCompletionResponse` using the `userId`, `resourceId` and `clientSubscriptionId` (`"randomId"`):
```graphql
subscription aiCompletionResponse(
$userId: UserID
$resourceId: AiModelID
$clientSubscriptionId: String
) {
aiCompletionResponse(
userId: $userId
resourceId: $resourceId
clientSubscriptionId: $clientSubscriptionId
) {
content
errors
}
}
```
The [subscription for Chat](duo_chat.md#graphql-subscription) behaves differently.
To avoid many concurrent subscriptions, you should also subscribe only after the mutation is sent, by using [`skip()`](https://apollo.vuejs.org/guide-option/subscriptions.html#skipping-the-subscription).
##### Clarifying different ID parameters
When working with the `aiAction` mutation, several ID parameters are used for routing requests and responses correctly. Here's what each parameter does:
- **user_id** (required)
- Purpose: Identifies and authenticates the requesting user
- Used for: Permission checks, request attribution, and response routing
- Example: `gid://gitlab/User/123`
- Note: This ID is automatically included by the GraphQL API framework
- **client_subscription_id** (recommended for streaming or multiple features)
- Client-generated UUID for tracking specific request/response pairs
- Required when using streaming responses or when multiple AI features share the same page
- Example: `"9f5dedb3-c58d-46e3-8197-73d653c71e69"`
- Can be omitted for simple, isolated requests with no streaming
- **resource_id** (contextual - required for some features)
- Purpose: References a specific GitLab entity (project, issue, MR) that provides context for the AI operation
- Used for: Permission verification and contextual information gathering
- Real example: `"gid://gitlab/Issue/164723626"`
- Note: Some features may not require a specific resource
- **project_id** (contextual - required for some features)
- Purpose: Identifies the project context for the AI operation
- Used for: Project-specific permission checks and context
- Real example: `"gid://gitlab/Project/278964"`
- Note: Some features may not require a specific project
#### Current abstraction layer flow
The following graph uses VertexAI as an example. You can use different providers.
```mermaid
flowchart TD
A[GitLab frontend] -->B[AiAction GraphQL mutation]
B --> C[Llm::ExecuteMethodService]
C --> D[One of services, for example: Llm::GenerateSummaryService]
D -->|scheduled| E[AI worker:Llm::CompletionWorker]
E -->F[::Gitlab::Llm::Completions::Factory]
F -->G[#96;::Gitlab::Llm::VertexAi::Completions::...#96; class using #96;::Gitlab::Llm::Templates::...#96; class]
G -->|calling| H[Gitlab::Llm::VertexAi::Client]
H --> |response| I[::Gitlab::Llm::GraphqlSubscriptionResponseService]
I --> J[GraphqlTriggers.ai_completion_response]
J --> K[::GitlabSchema.subscriptions.trigger]
```
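As a rough illustration of entering this chain from the Rails console (the constructor arguments are an assumption based on the flow above; check `Llm::ExecuteMethodService` for the current signature):
```ruby
# Rough sketch: trigger an AI action the same way the aiAction mutation does.
# Treat the argument order as an assumption and verify it against
# ee/app/services/llm/execute_method_service.rb before relying on it.
user     = User.find_by(username: 'root')
resource = Issue.find(52)

Llm::ExecuteMethodService.new(user, resource, :summarize_comments, {}).execute
```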
## Reuse the existing AI components for multiple models
We strive to optimize AI components, such as prompts, input/output parsers, and tools/function-calling, for each LLM.
However, diverging the components for each model could increase the maintenance overhead.
Hence, it's generally advised to reuse the existing components for multiple models as long as this doesn't degrade feature quality.
Here are the rules of thumb:
1. Iterate on the existing prompt template for multiple models. Do _NOT_ introduce a new one unless reusing the existing one causes a quality degradation for a particular model.
1. Iterate on the existing input/output parsers and tools/function-calling for multiple models. Do _NOT_ introduce a new one unless reusing the existing one causes a quality degradation for a particular model.
1. If a quality degradation is detected for a particular model, diverge the shared component for that particular model.
An [example](https://gitlab.com/gitlab-org/modelops/applied-ml/code-suggestions/ai-assist/-/issues/713) of this: Claude-specific CoT optimization can be applied to other models, such as Mixtral, as long as it doesn't cause a quality degradation.
## Monitoring
- Error ratio and response latency apdex for each AI action can be found on the [Sidekiq Service dashboard](https://dashboards.gitlab.net/d/sidekiq-main/sidekiq-overview?orgId=1) under **SLI Detail: `llm_completion`**.
- Spent tokens, usage of each AI feature, and other statistics can be found on the [Periscope dashboard](https://app.periscopedata.com/app/gitlab/1137231/Ai-Features).
- [AI gateway logs](https://log.gprd.gitlab.net/app/r/s/zKEel).
- [AI gateway metrics](https://dashboards.gitlab.net/d/ai-gateway-main/ai-gateway3a-overview?orgId=1).
- [Feature usage dashboard via proxy](https://log.gprd.gitlab.net/app/r/s/egybF).
## Security
Refer to the [secure coding guidelines for Artificial Intelligence (AI) features](../secure_coding_guidelines.md#artificial-intelligence-ai-features).
## Help
- [Here's how to reach us!](https://handbook.gitlab.com/handbook/engineering/development/data-science/ai-powered/ai-framework/#-how-to-reach-us)
- View [guidelines](duo_chat.md) for working with GitLab Duo Chat.
|
https://docs.gitlab.com/development/duo_chat
|
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/development/duo_chat.md
|
2025-08-13
|
doc/development/ai_features
|
[
"doc",
"development",
"ai_features"
] |
duo_chat.md
|
AI-powered
|
Duo Chat
|
Any user with at least the Maintainer role can merge updates to this content. For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
|
GitLab Duo Chat
| null |
GitLab Duo Chat aims to assist users with AI in ideation and creation tasks as
well as in learning tasks across the entire Software Development Lifecycle
(SDLC) to make them faster and more efficient.
[Chat](../../user/gitlab_duo_chat/_index.md) is a part of the [GitLab Duo](../../user/gitlab_duo/_index.md)
offering.
Chat can answer different questions and perform certain tasks. It's done with
the help of [prompts](glossary.md) and [tools](#adding-a-new-tool).
To answer a user's question asked in the Chat interface, GitLab sends a
[GraphQL request](https://gitlab.com/gitlab-org/gitlab/-/blob/4cfd0af35be922045499edb8114652ba96fcba63/ee/app/graphql/mutations/ai/action.rb)
to the Rails backend. The Rails backend then sends instructions to the Large
Language Model (LLM) through the [AI gateway](https://handbook.gitlab.com/handbook/engineering/architecture/design-documents/ai_gateway/).
## Which use cases lend themselves most to contributing to Chat?
We aim to employ Chat for all use cases and workflows that can benefit from a **conversational** interaction **between** **a user** and **an AI** that is driven by a large language model (LLM). Typically, these are:
- **Creation and ideation** tasks as well as **Learning** tasks that are more effectively and more efficiently solved through iteration than through a one-shot interaction.
- **Tasks** that are typically satisfiable with one-shot interactions but **that might need refinement or could turn into a conversation**.
- Among the latter are tasks where the **AI may not get it right the first time but** where **users can easily course correct** by telling the AI more precisely what they need. For instance, "Explain this code" is a common question that most of the time would result in a satisfying answer, but sometimes the user may have additional questions.
- **Tasks that benefit from the history of a conversation**, so neither the user nor the AI need to repeat themselves.
Chat aims to be context aware and ultimately have access to all the resources in GitLab that the user has access to. Initially, this context was limited to the content of individual issues and epics, as well as GitLab documentation. Since then additional contexts have been added, such as code selection and code files. Currently, work is underway contributing vulnerability context and pipeline job context, so that users can ask questions about these contexts.
To scale the context awareness and hence to scale creation, ideation, and learning use cases across the entire DevSecOps domain, the Duo Chat team welcomes contributions to the Chat platform from other GitLab teams and the wider community. They are the experts for the use cases and workflows to accelerate.
### Which use cases are better implemented as stand-alone AI features?
Which use cases are better implemented as stand-alone AI features, or at least also as stand-alone AI features?
- Narrowly scoped tasks that can be accelerated by deeply integrating AI into an existing workflow.
- Tasks that can't benefit from conversations with AI.
To make this more tangible, here is an example.
Generating a commit message based on the changes is best implemented into the commit
message writing workflow.
- Without AI, commit message writing may take ten seconds.
- When autopopulating an AI-generated commit message in the **Commit message** field in the IDE, this brings the task down to one second.
Using Chat for commit message writing would probably take longer than writing the message oneself. The user would have to switch to the Chat window, type the request and then copy the result into the commit message field.
That said, it does not mean that Chat can't write commit messages, nor that it would be prevented from doing so. If Chat has the commit context (which may be added at some point for reasons other than commit message writing), the user can certainly ask Chat to do anything with this commit content, including writing a commit message. But users are certainly unlikely to do that with Chat as they would only lose time. Note: the resulting commit messages may be different if created from Chat with a prompt written by the user vs. a static prompt behind a purpose-built commit message creation feature.
## Set up GitLab Duo Chat
To set up Duo Chat locally, go through the
[general setup instructions for AI features](_index.md).
## Working with GitLab Duo Chat
Prompts are the most vital part of the GitLab Duo Chat system. Prompts are the
instructions sent to the LLM to perform certain tasks.
The state of the prompts is the result of weeks of iteration. If you want to
change any prompt in the current tool, you must put it behind a feature flag.
If you have any new or updated prompts, ask members of the [Duo Chat team](https://handbook.gitlab.com/handbook/engineering/development/data-science/ai-powered/duo-chat/)
to review, because they have significant experience with them.
### Troubleshooting
When working with Chat locally, you might run into an error. The most common
problems are documented in this section.
If you find an undocumented issue, you should document it in this section after
you find a solution.
| Problem | Solution |
|-----------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| There is no Chat button in the GitLab UI. | Make sure your user is a part of a group with a Premium or Ultimate license and Chat enabled. |
| Chat replies with "Forbidden by auth provider" error. | Backend can't access LLMs. Make sure your [AI gateway](_index.md#required-install-ai-gateway) is set up correctly. |
| Requests take too long to appear in the UI | Consider restarting Sidekiq by running `gdk restart rails-background-jobs`. If that doesn't work, try `gdk kill` and then `gdk start`. Alternatively, you can bypass Sidekiq entirely. To do that, temporarily replace `Llm::CompletionWorker.perform_async` statements with `Llm::CompletionWorker.perform_inline`. |
| There is no Chat button in the GitLab UI when GDK is running in non-SaaS mode | You do not have a cloud connector access token record or a seat assigned. To create a cloud connector access record, run the following code in the Rails console (shown reformatted below the table): `CloudConnector::Access.new(data: { available_services: [{ name: "duo_chat", serviceStartTime: ":date_in_the_future" }] }).save`. |
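The Rails console snippet from the last row of the table, reformatted for readability (replace `:date_in_the_future` with an actual future date):
```ruby
# Create a cloud connector access record so Duo Chat appears on a non-SaaS GDK.
CloudConnector::Access.new(
  data: {
    available_services: [
      { name: "duo_chat", serviceStartTime: ":date_in_the_future" }
    ]
  }
).save
```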
For more information about the error codes that Chat sends to assist troubleshooting, see [interpreting GitLab Duo Chat error codes](#interpreting-gitlab-duo-chat-error-codes).
## Contributing to GitLab Duo Chat
From the code perspective, Chat is implemented in a similar fashion to other
AI features. Read more about the GitLab [AI Abstraction layer](_index.md#feature-development-abstraction-layer).
The Chat feature uses a [zero-shot agent](https://gitlab.com/gitlab-org/gitlab/blob/master/ee/lib/gitlab/duo/chat/react_executor.rb)
that sends the user question and relevant context to the [AI Gateway](https://gitlab.com/gitlab-org/modelops/applied-ml/code-suggestions/ai-assist),
which constructs a prompt and sends the request to the large language model.
The large language model decides if it can answer directly or if it needs to use
one of the defined tools.
The tools each have their own prompt that provides instructions to the large
language model on how to use that tool to gather information. The tools are
designed to be self-sufficient and avoid multiple requests back and forth to the
large language model.
After the tools have gathered the required information, it is returned to the
zero-shot agent, which asks the large language model if enough information has
been gathered to provide the final answer to the user's question.
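Conceptually, the loop described above looks something like this (an illustrative, self-contained sketch; it is not the actual `Gitlab::Duo::Chat::ReactExecutor` code):
```ruby
# Illustrative only: a stripped-down version of the agent loop, with stand-ins
# for the LLM call and a single tool so the sketch runs on its own.
Step = Struct.new(:final_answer, :tool, :tool_input, keyword_init: true)

ask_llm = lambda do |question, observations|
  if observations.empty?
    # First pass: the "LLM" picks a tool and an input for it.
    Step.new(tool: ->(input) { "issue ##{input} says: ..." }, tool_input: "52")
  else
    # Enough information gathered: the "LLM" produces a final answer.
    Step.new(final_answer: "Answer built from: #{observations.join(', ')}")
  end
end

scratchpad = []
answer = nil

10.times do
  step = ask_llm.call("Summarize issue 52", scratchpad)

  if step.final_answer
    answer = step.final_answer
    break
  end

  # The LLM picked a tool; execute it on the Rails side and record the result.
  scratchpad << step.tool.call(step.tool_input)
end

puts answer
```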
### Customizing interaction with GitLab Duo Chat
You can customize user interaction with GitLab Duo Chat in several ways.
#### Programmatically open GitLab Duo Chat
To provide users with a more dynamic way to access GitLab Duo Chat, you can
integrate functionality directly into their applications to open the GitLab Duo
Chat interface. The following example shows how to open the GitLab Duo Chat
drawer by using an event listener and the GitLab Duo Chat global state:
```javascript
import { duoChatGlobalState } from '~/super_sidebar/constants';
myFancyToggleToOpenChat.addEventListener('click', () => {
duoChatGlobalState.isShown = true;
});
```
#### Initiating GitLab Duo Chat with a pre-defined prompt
In some scenarios, you may want to direct users towards a specific topic or
query when they open GitLab Duo Chat. We have a utility function that opens
the DuoChat drawer and queues a command for DuoChat to execute.
This triggers the loading state and the streaming with the given prompt.
```javascript
import { sendDuoChatCommand } from 'ee/ai/utils';
[...]
methods: {
openChatWithPrompt() {
sendDuoChatCommand(
{
        question: '/feedback', // This is your prompt
        resourceId: 'gid://gitlab/WorkItem/1', // A unique ID to identify the action for streaming
variables: {} // Any additional graphql variables you want to pass to ee/app/assets/javascripts/ai/graphql/chat.mutation.graphql when executing the query
}
)
}
}
```
Note that `sendDuoChatCommand` cannot be chained: after you send one command to DuoChat, you have to wait until that action is done before sending a different command, or the previous command might not work as expected.
This enhancement allows for a more tailored user experience by guiding the
conversation in GitLab Duo Chat towards predefined areas of interest or concern.
### Adding a new tool
To add a new tool, you need to make changes both to the [AI gateway](https://gitlab.com/gitlab-org/modelops/applied-ml/code-suggestions/ai-assist)
and the Rails Monolith. The main chat prompt is stored and assembled in AI gateway. The Rails side is responsible for assembling
the required parameters of the prompt and sending them to AI gateway. AI gateway is responsible for assembling the Chat prompt and
selecting the Chat tools that are available to the user based on their subscription and add-on.
When the LLM selects the tool to use, this tool is executed on the Rails side. Tools use a different endpoint to make
a request to AI gateway. When you add a new tool, take into account that AI gateway works with different clients
and GitLab applications that have different versions. That means that old versions of GitLab won't know about a new tool.
If you want to add a new tool, contact the Duo Chat team. We're working on a long-term solution for this [problem](https://gitlab.com/gitlab-org/gitlab/-/issues/466247).
#### Changes in AI gateway
1. Create a new class for a tool in `ai_gateway/chat/tools/gitlab.py`. This class should include the following properties:
- `name` of the tool
- GitLab `resource` that tool works with
- `description` of what the tool does
- `example` of question and desired answer
1. Add tool to `__all__` list of tools in `ai_gateway/chat/tools/gitlab.py`.
1. Add tool class to the `DuoChatToolsRegistry` in `ai_gateway/chat/toolset.py` with an appropriate Unit Primitive.
1. Add test for your changes.
#### Changes in Rails Monolith
1. Create files for the tool in the `ee/lib/gitlab/llm/chain/tools/` folder. Use existing tools like `issue_reader` or
`epic_reader` as a template.
1. Write a class for the tool that includes instructions for the large language model on how to use the tool
   to gather information - the main prompts that this tool uses. A skeleton sketch follows this list.
1. Implement code in the tool to parse the response from the large language model and return it to the [chat agent](https://gitlab.com/gitlab-org/gitlab/-/blob/e0220502f1b3459b5a571d510ce5d1826877c3ce/ee/lib/gitlab/llm/chain/agents/single_action_executor.rb).
1. Add the new tool name to the `tools` array in `ee/lib/gitlab/llm/completions/chat.rb` so the agent knows about it.
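A bare-bones skeleton of such a tool class might look like this (module, class, and constant names are illustrative; mirror an existing tool such as `issue_reader` for the real structure):
```ruby
# ee/lib/gitlab/llm/chain/tools/pipeline_reader/executor.rb (illustrative
# skeleton only; the base class and required methods may differ, so copy an
# existing tool rather than this sketch).
module Gitlab
  module Llm
    module Chain
      module Tools
        module PipelineReader
          class Executor < Tool
            NAME = 'PipelineReader'
            DESCRIPTION = 'Fetches details about a CI pipeline the user asks about.'

            def perform
              # 1. Use the tool prompt to ask the LLM which pipeline is meant.
              # 2. Load the record and verify the user's permissions.
              # 3. Return an answer object for the agent to continue with.
            end
          end
        end
      end
    end
  end
end
```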
#### Testing all together
Test and iterate on the prompt using RSpec tests that make real requests to the large language model.
- Prompts require trial and error; the non-deterministic nature of working with LLMs can be surprising.
- Anthropic provides a good [guide](https://docs.anthropic.com/claude/docs/intro-to-prompting) on working with prompts.
- GitLab [guide](ai_feature_development_playbook.md) on working with prompts.
The key things to keep in mind are properly instructing the large language model through prompts and tool descriptions,
keeping tools self-sufficient, and returning responses to the zero-shot agent. With some trial and error on prompts,
adding new tools can expand the capabilities of the Chat feature.
There are available short [videos](https://www.youtube.com/playlist?list=PL05JrBw4t0KoOK-bm_bwfHaOv-1cveh8i) covering this topic.
### Working with multi-thread conversation
If you're building features that interact with Duo Chat conversations, you need to understand how threads work.
Duo Chat supports multiple conversations. Each conversation is represented by a thread, which contains multiple messages. The important attributes of a thread are:
- `id`: The `id` is required when replying to a thread.
- `conversation_type`: This allows for distinguishing between the different available Duo Chat conversation types. See the [thread conversation types list](../../api/graphql/reference/_index.md#aiconversationsthreadsconversationtype).
- If your feature needs its own conversation type, contact the Duo Chat team.
If your feature requires calling the GraphQL API directly, the following queries and mutations are available, for which you **must** specify the `conversation_type`.
- [Query.aiConversationThreads](../../api/graphql/reference/_index.md#queryaiconversationthreads): lists threads
- [Query.aiMessages](../../api/graphql/reference/_index.md#queryaimessages): lists one thread's messages. You **must** specify `threadId`.
- [Mutation.aiAction](../../api/graphql/reference/_index.md#mutationaiaction): creates one message. If `threadId` is specified, the message is appended to that thread.
All chat conversations have a retention period, controlled by the admin. The default retention period is 30 days after the last reply.
- [Configure Duo Chat Conversation Expiration](../../user/gitlab_duo_chat/_index.md#configure-chat-conversation-expiration)
### Developer Resources
- [Example GraphQL Queries](#duo-chat-conversation-threads-graphql-queries) - See examples below in this document
## Debugging
To gather more insights about the full request, use `Gitlab::Llm::Logger` to write debug logs.
The default logging level on production is `INFO` and **must not** be used to log any data that could contain personal identifying information.
To follow the debugging messages related to the AI requests on the abstraction layer, you can use:
```shell
export LLM_DEBUG=1
gdk start
tail -f log/llm.log
```
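From Ruby code, entries reach this log through `Gitlab::Llm::Logger`; a minimal sketch (treat the builder and method names as assumptions and check the class before copying):
```ruby
# Sketch: write a structured entry to log/llm.log.
# Verify the current Gitlab::Llm::Logger API before relying on this.
logger = Gitlab::Llm::Logger.build
logger.debug(message: "Requesting AI completion", ai_action: "chat")
```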
### Debugging in production environment
All information related to debugging and troubleshooting in production environment is collected in [the Duo Chat On-Call Runbook](https://gitlab.com/gitlab-com/runbooks/-/tree/master/docs/duo-chat).
## Tracing with LangSmith
Tracing is a powerful tool for understanding the behavior of your LLM application.
LangSmith has best-in-class tracing capabilities, and it's integrated with GitLab Duo Chat. Tracing can help you track down issues like:
- I'm new to GitLab Duo Chat and would like to understand what's going on under the hood.
- Where exactly the process failed when you got an unexpected answer.
- Which process was a bottleneck for the latency.
- What tool was used for an ambiguous question.

Tracing is especially useful for evaluation that runs GitLab Duo Chat against a large dataset.
The LangSmith integration works with any tool, including the [Prompt Library](https://gitlab.com/gitlab-org/modelops/ai-model-validation-and-research/ai-evaluation/prompt-library).
### Use tracing with LangSmith
{{< alert type="note" >}}
Tracing is available in the Development and Testing environments only.
It's not available in the Production environment.
{{< /alert >}}
1. Access [LangSmith](https://smith.langchain.com/) and create an account
1. Optional: [Create an Access Request](https://gitlab.com/gitlab-com/team-member-epics/access-requests/-/issues/new?issuable_template=Individual_Bulk_Access_Request) to be added to the GitLab organization in LangSmith.
1. Create [an API key](https://docs.smith.langchain.com/#create-an-api-key) (be careful where you create the API key - keys can be created in your personal namespace or in the GitLab namespace).
1. Set the following environment variables in GDK. You can define it in `env.runit` or directly `export` in the terminal.
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY='<your-api-key>'
export LANGCHAIN_PROJECT='<your-project-name>'
export LANGCHAIN_ENDPOINT='https://api.smith.langchain.com'
export GITLAB_RAILS_RACK_TIMEOUT=180 # Extending puma timeout for using LangSmith with Prompt Library as the evaluation tool.
```
   The project name can be an existing project in LangSmith or a new one. It's enough to put a new name in the environment variable -
   the project is created during the first request.
1. Restart GDK.
1. Ask any question to Chat.
1. Observe the project in the LangSmith [page](https://smith.langchain.com/) > Projects > \[Project name\]. The 'Runs' tab should contain
   your last requests.
## Evaluate your merge request in one click
To evaluate your merge request with [Central Evaluation Framework](https://gitlab.com/gitlab-org/modelops/ai-model-validation-and-research/ai-evaluation/prompt-library) (a.k.a. CEF),
you can use [Evaluation Runner](https://gitlab.com/gitlab-org/modelops/ai-model-validation-and-research/ai-evaluation/evaluation-runner) (internal only).
Follow the [run evaluation on your merge request](https://gitlab.com/gitlab-org/modelops/ai-model-validation-and-research/ai-evaluation/evaluation-runner#run-evaluation-on-your-merge-request) instructions.
### Prevent regressions in your merge request
When you make a change to Duo Chat or related components,
you should run the [regression evaluator](https://gitlab.com/gitlab-org/modelops/ai-model-validation-and-research/ai-evaluation/prompt-library/-/blob/main/doc/eli5/duo_chat/regression_evaluator.md)
to detect quality degradation and bugs in the merge request.
It covers any Duo Chat execution patterns, including tool execution and slash commands.
To run the regression evaluator, [run evaluation on your merge request](https://gitlab.com/gitlab-org/modelops/ai-model-validation-and-research/ai-evaluation/evaluation-runner#run-evaluation-on-your-merge-request) and click a play button for the regression evaluator.
Later, you can [compare the evaluation result of the merge request against master](https://gitlab.com/gitlab-org/modelops/ai-model-validation-and-research/ai-evaluation/evaluation-runner#compare-the-evaluation-result-of-your-merge-request-against-master).
Make sure that there are no quality degradations or bugs in [LangSmith's comparison page](https://docs.smith.langchain.com/evaluation/how_to_guides/compare_experiment_results).
While there are no strict guidelines for interpreting the comparison results, here are some helpful tips to consider:
- If the number of degraded scores exceeds the number of improved scores, it may indicate that the merge request has introduced a quality degradation.
- If any examples encounter errors during evaluation, it could suggest a potential bug in the merge request.
In either of these scenarios, we recommend further investigation:
1. Compare your results with [the daily evaluation results](https://gitlab.com/gitlab-org/modelops/ai-model-validation-and-research/ai-evaluation/evaluation-runner#view-daily-evaluation-result-on-master). For instance, look at the daily evaluations from yesterday and the day before.
1. If you observe similar patterns in these daily evaluations, it's likely that your merge request is safe to merge. However, if the patterns differ, it may indicate that your merge request has introduced unexpected changes.
We strongly recommend running the regression evaluator in at least the following environments:
| Environment | Evaluation pipeline name |
| ----------- | ------------------------ |
| GitLab Self-Managed and a custom model that is widely adopted | `duo-chat regression sm: [bedrock_mistral_8x7b_instruct]` |
| GitLab.com and GitLab Duo Enterprise add-on | `duo-chat regression .com: [duo_enterprise]` |
| GitLab.com and GitLab Duo Pro add-on | `duo-chat regression .com: [duo_pro]` |
Additionally, you can run other evaluators, such as `gitlab-docs`, which has a more comprehensive dataset for specific scopes.
See [available evaluation pipelines](https://gitlab.com/gitlab-org/modelops/ai-model-validation-and-research/ai-evaluation/evaluation-runner#evaluation-pipelines) for more information.
### Add an example to the regression dataset
When you introduce a new feature or receive a regression report from users, you should add a new example to the regression dataset to expand the coverage.
To add an example to the regression dataset, follow [this section](https://gitlab.com/gitlab-org/modelops/ai-model-validation-and-research/ai-evaluation/prompt-library/-/blob/main/doc/eli5/duo_chat/regression_evaluator.md#guideline).
For more information, see [the guideline of the regression evaluator](https://gitlab.com/gitlab-org/modelops/ai-model-validation-and-research/ai-evaluation/prompt-library/-/blob/main/doc/eli5/duo_chat/regression_evaluator.md#guideline).
## GitLab Duo Chat Self-managed End-to-End Tests
In MRs, the end-to-end tests exercise the Duo Chat functionality of GitLab Self-Managed instances by using an instance of the GitLab Linux package
integrated with the `latest` version of AI gateway. The instance of AI gateway is configured to return [mock responses](https://gitlab.com/gitlab-org/modelops/applied-ml/code-suggestions/ai-assist#mocking-ai-model-responses).
To view the results of these tests, open the `e2e:test-on-omnibus-ee` child pipeline and view the `ai-gateway` job.
The `ai-gateway` job activates a cloud license and then assigns a Duo Pro seat to a test user, before the tests are run.
For more information, see [AiGateway Scenarios](https://gitlab.com/gitlab-org/gitlab-qa/-/blob/master/docs/what_tests_can_be_run.md#aigateway-scenarios).
## GraphQL Subscription
The GraphQL Subscription for Chat behaves slightly differently because it's user-centric. A user could have Chat open on multiple browser tabs, or also in their IDE.
We therefore need to broadcast messages to multiple clients to keep them in sync. The `aiAction` mutation with the `chat` action behaves as follows:
1. All complete Chat messages (including messages from the user) are broadcast with the `userId` and `aiAction: "chat"` as the identifier.
1. Chunks from streamed Chat messages are broadcast with the `clientSubscriptionId` from the mutation as the identifier.
Examples of GraphQL Subscriptions in a Vue component:
1. Complete Chat message
```javascript
import aiResponseSubscription from 'ee/graphql_shared/subscriptions/ai_completion_response.subscription.graphql';
[...]
apollo: {
$subscribe: {
aiCompletionResponse: {
query: aiResponseSubscription,
variables() {
return {
userId, // for example "gid://gitlab/User/1"
aiAction: 'CHAT',
};
},
result({ data }) {
// handle data.aiCompletionResponse
},
error(err) {
// handle error
},
},
},
```
1. Streamed Chat message
```javascript
import aiResponseSubscription from 'ee/graphql_shared/subscriptions/ai_completion_response.subscription.graphql';
[...]
apollo: {
$subscribe: {
aiCompletionResponseStream: {
query: aiResponseSubscription,
variables() {
return {
aiAction: 'CHAT',
userId, // for example "gid://gitlab/User/1"
            clientSubscriptionId, // randomly generated identifier for every message
htmlResponse: false, // important to bypass HTML processing on every chunk
};
},
result({ data }) {
// handle data.aiCompletionResponse
},
error(err) {
// handle error
},
},
},
```
Keep in mind that the `clientSubscriptionId` must be unique for every request. Reusing a `clientSubscriptionId` will cause several unwanted side effects in the subscription responses.
### Duo Chat GraphQL queries
1. [Set up GitLab Duo Chat](#set-up-gitlab-duo-chat)
1. Visit [GraphQL explorer](../../api/graphql/_index.md#interactive-graphql-explorer).
1. Execute the `aiAction` mutation. Here is an example:
```graphql
mutation {
aiAction(
input: {
chat: {
resourceId: "gid://gitlab/User/1",
content: "Hello"
}
}
){
requestId
errors
}
}
```
1. Execute the following query to fetch the response:
```graphql
query {
aiMessages {
nodes {
requestId
content
role
timestamp
chunkId
errors
}
}
}
```
If you can't fetch the response, check `graphql_json.log`,
`sidekiq_json.log`, `llm.log`, or `modelgateway_debug.log` to see if they contain
error information.
### Duo Chat Conversation Threads GraphQL queries
#### Querying messages in a conversation thread
To retrieve messages from a specific thread, use the `aiMessages` query with a thread ID:
```graphql
query {
aiMessages(threadId: "gid://gitlab/Ai::Conversation::Thread/1") {
nodes {
requestId
content
role
timestamp
chunkId
errors
}
}
}
```
#### Starting a new conversation thread
If you don't include a `threadId` in your `aiAction` mutation, a new thread is created:
```graphql
mutation {
aiAction(input: {
chat: {
content: "This will create a new conversation thread"
},
conversationType: DUO_CHAT
})
{
requestId
errors
threadId # This will contain the ID of the newly created thread
}
}
```
#### Creating a new message in an existing conversation thread
To add a message to an existing thread, include the `threadId` in your `aiAction` mutation:
```graphql
mutation {
aiAction(input: {
chat: {
content: "this is another message in the same thread"
},
conversationType: DUO_CHAT,
threadId: "gid://gitlab/Ai::Conversation::Thread/1",
})
{
requestId
errors
threadId
}
}
```
## Testing GitLab Duo Chat in production-like environments
GitLab Duo Chat is enabled in the [Staging](https://staging.gitlab.com/users/sign_in) and
[Staging Ref](https://staging-ref.gitlab.com/) GitLab environments.
Because GitLab Duo Chat is currently only available to members of groups in the
Premium and Ultimate tiers, Staging Ref may be an easier place to test changes as a GitLab
team member because
[you can make yourself an instance Admin in Staging Ref](https://handbook.gitlab.com/handbook/engineering/infrastructure/environments/staging-ref/#admin-access)
and, as an Admin, easily create licensed groups for testing.
### Important Testing Considerations
**Note**: A user who has a seat in multiple groups with different tiers of Duo add-on gets the highest tier experience across the entire instance.
It's not possible to test feature separation between different Duo add-ons if your test account has a seat in a higher tier add-on.
To properly test different tiers, create a separate test account for each tier you need to test.
### Staging testing groups
To simplify testing on [staging](https://staging.gitlab.com), several pre-configured groups have been created with the appropriate licenses and add-ons:
| Group | Duo Add-on | GitLab license |
| --- | --- | --- |
| [`duo_pro_gitlab_premium`](https://staging.gitlab.com/groups/duo_pro_gitlab_premium) | Pro | Premium |
| [`duo_pro_gitlab_ultimate`](https://staging.gitlab.com/groups/duo_pro_gitlab_ultimate) | Pro | Ultimate |
| [`duo_enterprise_gitlab_ultimate`](https://staging.gitlab.com/groups/duo_enterprise_gitlab_ultimate) | Enterprise | Ultimate |
Ask in the `#g_duo_chat` channel on Slack to be added as an Owner to these groups.
Once added as an Owner, you can add your secondary accounts to the group with the Developer role and assign them a seat in the Duo add-on.
Then you can sign in as your Developer user and test access control to Duo Chat.
### GitLab Duo Chat End-to-End Tests in live environments
Duo Chat end-to-end tests run continuously against [Staging](https://staging.gitlab.com/users/sign_in) and [Production](https://gitlab.com/) GitLab environments.
These tests run in scheduled pipelines and ensure the end-to-end user experiences are functioning correctly.
Results can be viewed in the `#e2e-run-staging` and `#e2e-run-production` Slack channels. The pipelines can be found below, access can be requested in `#s_developer_experience`:
- [Staging-canary pipelines](https://ops.gitlab.net/gitlab-org/quality/staging-canary/-/pipelines)
- [Staging pipelines](https://ops.gitlab.net/gitlab-org/quality/staging/-/pipelines)
- [Canary pipelines](https://ops.gitlab.net/gitlab-org/quality/canary/-/pipelines)
- [Production pipelines](https://ops.gitlab.net/gitlab-org/quality/production/-/pipelines)
## Product Analysis
To better understand how the feature is used, each production user input message is analyzed using an LLM and Ruby,
and the analysis is tracked as a Snowplow event.
The analysis can contain any of the attributes defined in the latest [iglu schema](https://gitlab.com/gitlab-org/iglu/-/blob/master/public/schemas/com.gitlab/ai_question_category/jsonschema).
- The categories and detailed categories have been predefined by the product manager and the product designer, as we are not allowed to look at the actual questions from users. If there is reason to believe that there are missing or confusing categories, they can be changed. To edit the definitions, update `categories.xml` in both [AI Gateway](https://gitlab.com/gitlab-org/modelops/applied-ml/code-suggestions/ai-assist/-/blob/main/ai_gateway/prompts/definitions/categorize_question/categories.xml) and [monolith](https://gitlab.com/gitlab-org/gitlab/-/blob/master/ee/lib/gitlab/llm/fixtures/categories.xml).
- The list of attributes captured can be found in [labels.xml](https://gitlab.com/gitlab-org/modelops/applied-ml/code-suggestions/ai-assist/-/blob/main/ai_gateway/prompts/definitions/categorize_question/labels.xml).
- The following is yet to be implemented:
- `is_proper_sentence`
- The following are deprecated:
- `number_of_questions_in_history`
- `length_of_questions_in_history`
- `time_since_first_question`
The request count and the user count for each question category and detail category can be reviewed in [this Tableau dashboard](https://10az.online.tableau.com/#/site/gitlab/views/DuoCategoriesofQuestions/DuoCategories) (GitLab team members only).
## How `access_duo_chat` policy works
This table describes the requirements for the `access_duo_chat` policy to
return `true` in different contexts.
| | GitLab.com | Dedicated or GitLab Self-Managed | All instances |
|----------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------------------------|
| for user outside of project or group (`user.can?(:access_duo_chat)`) | User needs to belong to at least one group on the Premium or Ultimate tier with the `duo_features_enabled` group setting switched on | - Instance needs to be on Premium or Ultimate tier<br>- Instance needs to have `duo_features_enabled` setting switched on | |
| for user in group context (`user.can?(:access_duo_chat, group)`) | - User needs to belong to at least one group on Premium or Ultimate tier with `experiment_and_beta_features` group setting switched on<br>- Root ancestor group of the group needs to be on Premium or Ultimate tier and the group must have `duo_features_enabled` setting switched on | - Instance needs to be on Premium or Ultimate tier<br>- Instance needs to have `duo_features_enabled` setting switched on | User must have at least _read_ permissions on the group |
| for user in project context (`user.can?(:access_duo_chat, project)`) | - User needs to belong to at least one group on the Premium or Ultimate tier with `experiment_and_beta_features` group setting enabled<br>- Project root ancestor group needs to be on Premium or Ultimate tier and project must have `duo_features_enabled` setting switched on | - Instance needs to be on Ultimate tier<br>- Instance needs to have `duo_features_enabled` setting switched on | User must have at least _read_ permission on the project |
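You can check these policies directly, for example from the Rails console, using the same calls shown in the table header (the group and project below are the ones created by the `gitlab:duo:setup` Rake task):
```ruby
# Rails console checks mirroring the table above.
user    = User.find_by(username: 'root')
group   = Group.find_by_full_path('gitlab-duo')
project = group.projects.find_by(path: 'test')

user.can?(:access_duo_chat)           # outside of a project or group
user.can?(:access_duo_chat, group)    # group context
user.can?(:access_duo_chat, project)  # project context
```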
## Running GitLab Duo Chat prompt experiments
Before being merged, all prompt or model changes for GitLab Duo Chat should both:
1. Be behind a feature flag *and*
1. Be evaluated locally
The type of local evaluation needed depends on the type of change. GitLab Duo
Chat local evaluation using the Prompt Library is an effective way of measuring
average correctness of responses to questions about issues and epics.
Follow the
[Prompt Library guide](https://gitlab.com/gitlab-org/modelops/ai-model-validation-and-research/ai-evaluation/prompt-library/-/blob/main/doc/how-to/run_duo_chat_eval.md#configuring-duo-chat-with-local-gdk)
to evaluate GitLab Duo Chat changes locally. The prompt library documentation is
the single source of truth and should be the most up-to-date.
See the video ([internal link](https://drive.google.com/file/d/1X6CARf0gebFYX4Rc9ULhcfq9LLLnJ_O-)) that covers the full setup.
### (Deprecated) Issue and epic experiments
{{< alert type="note" >}}
This section is deprecated in favor of the [development seed file](../development_seed_files.md#seed-project-and-group-resources-for-gitlab-duo).
{{< /alert >}}
If you would like to use the evaluation framework (as described [here](https://gitlab.com/gitlab-org/modelops/ai-model-validation-and-research/ai-evaluation/prompt-library/-/blob/main/doc/how-to/run_duo_chat_eval.md?ref_type=heads#evaluation-on-issueepic))
you can import the required groups and projects using this Rake task:
```shell
GITLAB_SIMULATE_SAAS=1 bundle exec 'rake gitlab:duo:setup_evaluation[<test-group-name>]'
```
Since we use the `Setup` class (under `ee/lib/gitlab/duo/developments/setup.rb`)
that requires "saas" mode to create a group (necessary for importing subgroups),
you need to set `GITLAB_SIMULATE_SAAS=1`. This is just to complete the import
successfully, and then you can switch back to `GITLAB_SIMULATE_SAAS=0` if
desired.
#### (Deprecated) Epic and issue fixtures
{{< alert type="note" >}}
This section is deprecated in favor of the [development seed file](../development_seed_files.md#seed-project-and-group-resources-for-gitlab-duo).
{{< /alert >}}
The fixtures are the replicas of the _public_ issues and epics from projects and groups _owned by_ GitLab.
The internal notes were excluded when they were sampled. The fixtures have been committed into the canonical `gitlab` repository.
See [the snippet](https://gitlab.com/gitlab-org/gitlab/-/snippets/3613745) used to create the fixtures.
## How a Chat prompt is constructed
All Chat requests are resolved with the GitLab GraphQL API. And, for now,
prompts for 3rd party LLMs are hard-coded into the GitLab codebase.
But if you want to make a change to a Chat prompt, it isn't as obvious as
finding the string in a single file. Chat prompt construction is hard to follow
because the prompt is put together over the course of many steps. Here is the
flow of how we construct a Chat prompt:
1. API request is made to the GraphQL AI Mutation; request contains user Chat
input.
([code](https://gitlab.com/gitlab-org/gitlab/-/blob/676cca2ea68d87bcfcca02a148c354b0e4eabc97/ee/app/graphql/mutations/ai/action.rb#L6))
1. GraphQL mutation calls `Llm::ExecuteMethodService#execute`
([code](https://gitlab.com/gitlab-org/gitlab/-/blob/676cca2ea68d87bcfcca02a148c354b0e4eabc97/ee/app/graphql/mutations/ai/action.rb#L43))
1. `Llm::ExecuteMethodService#execute` sees that the `chat` method was sent to
the GraphQL API and calls `Llm::ChatService#execute`
([code](https://gitlab.com/gitlab-org/gitlab/-/blob/676cca2ea68d87bcfcca02a148c354b0e4eabc97/ee/app/services/llm/execute_method_service.rb#L36))
1. `Llm::ChatService#execute` calls `schedule_completion_worker`, which is
defined in `Llm::BaseService` (the base class for `ChatService`)
([code](https://gitlab.com/gitlab-org/gitlab/-/blob/676cca2ea68d87bcfcca02a148c354b0e4eabc97/ee/app/services/llm/base_service.rb#L72-87))
1. `schedule_completion_worker` calls `Llm::CompletionWorker.perform_for`, which
asynchronously enqueues the job
([code](https://gitlab.com/gitlab-org/gitlab/-/blob/676cca2ea68d87bcfcca02a148c354b0e4eabc97/ee/app/workers/llm/completion_worker.rb#L33))
1. `Llm::CompletionWorker#perform` is called when the job runs. It deserializes
the user input and other message context and passes that over to
`Llm::Internal::CompletionService#execute`
([code](https://gitlab.com/gitlab-org/gitlab/-/blob/676cca2ea68d87bcfcca02a148c354b0e4eabc97/ee/app/workers/llm/completion_worker.rb#L44))
1. `Llm::Internal::CompletionService#execute` calls
`Gitlab::Llm::CompletionsFactory#completion!`, which pulls the `ai_action`
from original GraphQL request and initializes a new instance of
`Gitlab::Llm::Completions::Chat` and calls `execute` on it
([code](https://gitlab.com/gitlab-org/gitlab/-/blob/55b8eb6ff869e61500c839074f080979cc60f9de/ee/lib/gitlab/llm/completions_factory.rb#L89))
1. `Gitlab::Llm::Completions::Chat#execute` calls `Gitlab::Duo::Chat::ReactExecutor`.
([code](https://gitlab.com/gitlab-org/gitlab/-/blob/30817374f2feecdaedbd3a0efaad93feaed5e0a0/ee/lib/gitlab/llm/completions/chat.rb#L122-L130))
1. `Gitlab::Duo::Chat::ReactExecutor#execute` calls `#step_forward` which calls `Gitlab::Duo::Chat::StepExecutor#step`
([code](https://gitlab.com/gitlab-org/gitlab/-/blob/30817374f2feecdaedbd3a0efaad93feaed5e0a0/ee/lib/gitlab/duo/chat/react_executor.rb#L235)).
1. `Gitlab::Duo::Chat::StepExecutor#step` calls `Gitlab::Duo::Chat::StepExecutor#perform_agent_request`, which sends a request to the AI Gateway `/v2/chat/agent/` endpoint
([code](https://gitlab.com/gitlab-org/gitlab/-/blob/30817374f2feecdaedbd3a0efaad93feaed5e0a0/ee/lib/gitlab/duo/chat/step_executor.rb#L69)).
1. The AI Gateway `/v2/chat/agent` endpoint receives the request on the `api.v2.agent.chat.agent.chat` function
([code](https://gitlab.com/gitlab-org/modelops/applied-ml/code-suggestions/ai-assist/-/blob/989ead63fae493efab255180a51786b69a403b49/ai_gateway/api/v2/chat/agent.py#L133))
1. `api.v2.agent.chat.agent.chat` creates the `GLAgentRemoteExecutor` through the `gl_agent_remote_executor_factory` ([code](https://gitlab.com/gitlab-org/modelops/applied-ml/code-suggestions/ai-assist/-/blob/989ead63fae493efab255180a51786b69a403b49/ai_gateway/api/v2/chat/agent.py#L166)).
Upon creation of the `GLAgentRemoteExecutor`, the following parameters are passed:
- `tools_registry` - the registry of all available tools; this is passed through the factory ([code](https://gitlab.com/gitlab-org/modelops/applied-ml/code-suggestions/ai-assist/-/blob/989ead63fae493efab255180a51786b69a403b49/ai_gateway/chat/container.py#L35))
- `agent` - `ReActAgent` object that wraps the prompt information, including the chosen LLM model, prompt template, etc
1. `api.v2.agent.chat.agent.chat` calls the `GLAgentRemoteExecutor.on_behalf`, which gets the user tools early to raise an exception as soon as possible if an error occurs ([code](https://gitlab.com/gitlab-org/modelops/applied-ml/code-suggestions/ai-assist/-/blob/989ead63fae493efab255180a51786b69a403b49/ai_gateway/chat/executor.py#L56)).
1. `api.v2.agent.chat.agent.chat` calls the `GLAgentRemoteExecutor.stream` ([code](https://gitlab.com/gitlab-org/modelops/applied-ml/code-suggestions/ai-assist/-/blob/989ead63fae493efab255180a51786b69a403b49/ai_gateway/chat/executor.py#L81)).
1. `GLAgentRemoteExecutor.stream` calls `astream` on `agent` (an instance of `ReActAgent`) with inputs such as the messages and the list of available tools ([code](https://gitlab.com/gitlab-org/modelops/applied-ml/code-suggestions/ai-assist/-/blob/989ead63fae493efab255180a51786b69a403b49/ai_gateway/chat/executor.py#L92)).
1. The `ReActAgent` builds the prompts, with the available tools inserted into the system prompt template
([code](https://gitlab.com/gitlab-org/modelops/applied-ml/code-suggestions/ai-assist/-/blob/989ead63fae493efab255180a51786b69a403b49/ai_gateway/prompts/definitions/chat/react/system/1.0.0.jinja)).
1. `ReActAgent.astream` sends a call to the LLM model ([code](https://gitlab.com/gitlab-org/modelops/applied-ml/code-suggestions/ai-assist/-/blob/989ead63fae493efab255180a51786b69a403b49/ai_gateway/chat/agents/react.py#L216))
1. The LLM response is returned to Rails
(code path: [`ReActAgent.astream`](https://gitlab.com/gitlab-org/modelops/applied-ml/code-suggestions/ai-assist/-/blob/989ead63fae493efab255180a51786b69a403b49/ai_gateway/chat/agents/react.py#L209)
-> [`GLAgentRemoteExecutor.stream`](https://gitlab.com/gitlab-org/modelops/applied-ml/code-suggestions/ai-assist/-/blob/989ead63fae493efab255180a51786b69a403b49/ai_gateway/chat/executor.py#L81)
-> [`api.v2.agent.chat.agent.chat`](https://gitlab.com/gitlab-org/modelops/applied-ml/code-suggestions/ai-assist/-/blob/989ead63fae493efab255180a51786b69a403b49/ai_gateway/api/v2/chat/agent.py#L133)
-> Rails)
1. We've now made our first request to the AI gateway. If the LLM says that the answer to the first request is final,
Rails [parses the answer](https://gitlab.com/gitlab-org/gitlab/-/blob/30817374f2feecdaedbd3a0efaad93feaed5e0a0/ee/lib/gitlab/duo/chat/react_executor.rb#L56) and [returns it](https://gitlab.com/gitlab-org/gitlab/-/blob/30817374f2feecdaedbd3a0efaad93feaed5e0a0/ee/lib/gitlab/duo/chat/react_executor.rb#L63) for further response handling by [`Gitlab::Llm::Completions::Chat`](https://gitlab.com/gitlab-org/gitlab/-/blob/30817374f2feecdaedbd3a0efaad93feaed5e0a0/ee/lib/gitlab/llm/completions/chat.rb#L66).
1. If the answer is not final, the "thoughts" and "picked tools" from the first LLM request are parsed and then the relevant tool class is called.
([code](https://gitlab.com/gitlab-org/gitlab/-/blob/30817374f2feecdaedbd3a0efaad93feaed5e0a0/ee/lib/gitlab/duo/chat/react_executor.rb#L207)
| [example tool class](https://gitlab.com/gitlab-org/gitlab/-/blob/971d07aa37d9f300b108ed66304505f2d7022841/ee/lib/gitlab/llm/chain/tools/identifier.rb))
1. The tool executor classes include `Concerns::AiDependent` and use its `request` method.
([code](https://gitlab.com/gitlab-org/gitlab/-/blob/30817374f2feecdaedbd3a0efaad93feaed5e0a0/ee/lib/gitlab/llm/chain/concerns/ai_dependent.rb#L14))
1. The `request` method uses the `ai_request` instance
that was injected into the `context` in `Llm::Completions::Chat`. For Chat,
this is `Gitlab::Llm::Chain::Requests::AiGateway`. ([code](https://gitlab.com/gitlab-org/gitlab/-/blob/971d07aa37d9f300b108ed66304505f2d7022841/ee/lib/gitlab/llm/completions/chat.rb#L42)).
1. The tool indicates that `use_ai_gateway_agent_prompt=true` ([code](https://gitlab.com/gitlab-org/gitlab/-/blob/30817374f2feecdaedbd3a0efaad93feaed5e0a0/ee/lib/gitlab/llm/chain/tools/issue_reader/executor.rb#L121)).
This tells the `ai_request` to send the prompt to the `/v1/prompts/chat` endpoint ([code](https://gitlab.com/gitlab-org/gitlab/-/blob/30817374f2feecdaedbd3a0efaad93feaed5e0a0/ee/lib/gitlab/llm/chain/requests/ai_gateway.rb#L87)).
1. AI Gateway `/v1/prompts/chat` endpoint receives the request on `api.v1.prompts.invoke`
([code](https://gitlab.com/gitlab-org/modelops/applied-ml/code-suggestions/ai-assist/-/blob/989ead63fae493efab255180a51786b69a403b49/ai_gateway/api/v1/prompts/invoke.py#L41)).
1. `api.v1.prompts.invoke` gets the correct tool prompt from the tool prompt registry ([code](https://gitlab.com/gitlab-org/modelops/applied-ml/code-suggestions/ai-assist/-/blob/989ead63fae493efab255180a51786b69a403b49/ai_gateway/api/v1/prompts/invoke.py#L49)).
1. The prompt is called either as a [stream](https://gitlab.com/gitlab-org/modelops/applied-ml/code-suggestions/ai-assist/-/blob/989ead63fae493efab255180a51786b69a403b49/ai_gateway/api/v1/prompts/invoke.py#L86) or as a [non-streamed invocation](https://gitlab.com/gitlab-org/modelops/applied-ml/code-suggestions/ai-assist/-/blob/989ead63fae493efab255180a51786b69a403b49/ai_gateway/api/v1/prompts/invoke.py#L96).
1. If the tool answer is not final, the response is added to `agent_scratchpad` and the loop in `Gitlab::Duo::Chat::ReactExecutor` starts again, adding the additional context to the request. It loops up to 10 times until a final answer is reached. ([code](https://gitlab.com/gitlab-org/gitlab/-/blob/30817374f2feecdaedbd3a0efaad93feaed5e0a0/ee/lib/gitlab/duo/chat/react_executor.rb#L44))
## Interpreting GitLab Duo Chat error codes
GitLab Duo Chat has error codes with specified meanings to assist in debugging.
See the [GitLab Duo Chat troubleshooting documentation](../../user/gitlab_duo_chat/troubleshooting.md) for a list of all GitLab Duo Chat error codes.
When developing for GitLab Duo Chat, include these error codes when returning an error and [document them](../../user/gitlab_duo_chat/troubleshooting.md), especially for user-facing errors.
### Error Code Format
The error codes follow the format: `<Layer Identifier><Four-digit Series Number>`.
For example:
- `M1001`: A network communication error in the monolith layer.
- `G2005`: A data formatting/processing error in the AI gateway layer.
- `A3010`: An authentication or data access permissions error in a third-party API.
### Error Code Layer Identifier
| Code | Layer |
|------|-----------------|
| M | Monolith |
| G | AI gateway |
| A | Third-party API |
### Error Series
| Series | Type |
|--------|------------------------------------------------------------------------------|
| 1000 | Network communication errors |
| 2000 | Data formatting/processing errors |
| 3000 | Authentication and/or data access permission errors |
| 4000 | Code execution exceptions |
| 5000 | Bad configuration or bad parameters errors |
| 6000 | Semantic or inference errors (the model does not understand or hallucinates) |
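Putting the layer identifiers and series numbers together, a hypothetical sketch of composing such a code could look like the following. It is illustrative only and not the actual GitLab implementation:

```ruby
# Hypothetical sketch: compose an error code from a layer identifier and a
# series number, following the format documented above.
LAYER_IDENTIFIERS = { monolith: 'M', ai_gateway: 'G', third_party_api: 'A' }.freeze

def duo_chat_error_code(layer, number)
  "#{LAYER_IDENTIFIERS.fetch(layer)}#{number}"
end

duo_chat_error_code(:monolith, 3002)
# => "M3002" (an authentication or permission error raised in the monolith)
```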
|
https://docs.gitlab.com/development/local_models
|
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/development/local_models.md
|
2025-08-13
|
doc/development/ai_features
|
[
"doc",
"development",
"ai_features"
] |
local_models.md
|
AI-powered
|
Custom Models
|
To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
|
Serve Large Language Models APIs Locally
| null |
There are several ways to serve large language models (LLMs) for local or self-deployment purposes.
[MistralAI](https://docs.mistral.ai/deployment/self-deployment/overview/) recommends two different serving frameworks for their models:
- [vLLM](https://docs.vllm.ai/en/latest/): A Python-only serving framework which deploys an API matching OpenAI's spec. vLLM provides a paged attention kernel to improve serving throughput.
- Nvidia's [TensorRT-LLM](https://github.com/NVIDIA/TensorRT-LLM) served with Nvidia's Triton Inference Server: TensorRT-LLM provides a DSL to build fast inference engines with dedicated kernels for large language models. Triton Inference Server allows efficient serving of these inference engines.
These solutions require access to an Nvidia GPU because they rely on [CUDA](https://developer.nvidia.com/cuda-gpus) for computation. However, [Ollama](https://ollama.com/download) offers a low-configuration, cross-platform alternative, and it is the solution we explore here.
## Ollama
[Ollama](https://ollama.com/download) is an open-source framework that helps you get up and running with large language models locally. You can serve any [supported LLM](https://ollama.com/library). You can also make your own and push it to [Hugging Face](https://huggingface.co/).
Be aware that LLMs are usually resource-intensive to run.
Therefore, we focus on serving a single model, [`mistral:instruct`](https://ollama.com/library/mistral:instruct), because it is relatively lightweight to run for its accuracy.
### Setup Ollama
Install Ollama by following these [instructions](https://ollama.com/download) for your OS.
On macOS, you can alternatively use [Homebrew](https://brew.sh/) by running `brew install ollama` in your terminal.
Once installed, pull the model with `ollama pull mistral:instruct` in your terminal.
If the model was successfully pulled, give it a run with `ollama run mistral:instruct`. Exit the process once you've tested the model.
Now you can use the Ollama server. Visit [`http://localhost:11434/`](http://localhost:11434/); if you see `Ollama is running`, the server is already running. If not, run `ollama serve` in your terminal, or `brew services start ollama` if you installed Ollama with Homebrew.
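For reference, the setup described above condenses to a few commands (shown here for macOS with Homebrew; on other platforms, use the installer from the Ollama site):

```shell
brew install ollama            # or use the installer from ollama.com for your OS
ollama pull mistral:instruct   # download the model
ollama run mistral:instruct    # interactive sanity check; exit when done
ollama serve                   # start the server if it is not already running
# With Homebrew you can also run it as a service: brew services start ollama
```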
The Ollama server exposes its own HTTP API, as well as an OpenAI-compatible one. The native API reference is documented [here](https://github.com/ollama/ollama/blob/main/docs/api.md).
Here is a simple example you can try:
```shell
curl "http://localhost:11434/api/chat" \
--data '{
"model": "mistral:instruct",
"messages": [
{
"role": "user",
"content": "why is the sky blue?"
}
],
"stream": false
}'
```
The server listens on port `11434` by default. If this port is already in use by another application, follow [these instructions](https://github.com/ollama/ollama/blob/main/docs/faq.md#how-do-i-configure-ollama-server) to change it.
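For example, at the time of writing the linked FAQ describes configuring the server through environment variables, so a different port can be chosen like this:

```shell
# Serve Ollama on port 11435 instead of the default 11434.
OLLAMA_HOST=127.0.0.1:11435 ollama serve
```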
|
https://docs.gitlab.com/development/duo_agent_platform
|
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/development/duo_agent_platform.md
|
2025-08-13
|
doc/development/ai_features
|
[
"doc",
"development",
"ai_features"
] |
duo_agent_platform.md
|
AI-powered
|
Duo Agent Platform
|
Any user with at least the Maintainer role can merge updates to this content. For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
|
GitLab Duo Agent Platform
| null |
This guide explains how to work with the Duo Agent Platform.
## Overview
The Duo Agent Platform is a Single Page Application (SPA) built with Vue.js that provides a unified interface for AI-powered automation features. The platform uses a scoped routing system that allows multiple navigation items to coexist under the `/automate` path.
The platform is architected with a flexible namespace system that allows the same frontend infrastructure to be reused across different contexts (projects, groups, etc.) while providing context-specific functionality through a component mapping system.
This page is behind the `duo_workflow_in_ci` feature flag.
## Namespace Architecture
The namespace system is built around a central mapping mechanism that:
1. **Checks the namespace** - Determines which context the platform is running in
1. **Maps to Vue components** - Routes to the appropriate Vue component for that namespace
1. **Passes GraphQL queries as props** - Provides namespace-specific data through dependency injection (see the sketch after this list)
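The following minimal sketch illustrates that mapping. The file paths, component names, and query names are hypothetical; only `getNamespaceIndexComponent` matches a name used in the router example later on this page:

```javascript
// Hypothetical sketch of a namespace-to-component mapping; the imports below
// are illustrative, not the actual implementation.
import ProjectAgentsIndex from './project/agents_index.vue';
import GroupAgentsIndex from './group/agents_index.vue';
import projectWorkflowsQuery from './graphql/project_workflows.query.graphql';
import groupWorkflowsQuery from './graphql/group_workflows.query.graphql';

const NAMESPACE_COMPONENTS = {
  project: { component: ProjectAgentsIndex, query: projectWorkflowsQuery },
  group: { component: GroupAgentsIndex, query: groupWorkflowsQuery },
};

// The router resolves the component for the current namespace, and the
// namespace-specific GraphQL query is passed down to it as a prop.
export function getNamespaceIndexComponent(namespace) {
  return NAMESPACE_COMPONENTS[namespace]?.component;
}

export function getNamespaceIndexQuery(namespace) {
  return NAMESPACE_COMPONENTS[namespace]?.query;
}
```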
### Entry Point
The main entry point is located at:
```markdown
ee/app/assets/javascripts/pages/projects/duo_agents_platform/index.js
```
This file imports and initializes the platform:
```javascript
import { initDuoAgentsPlatformProjectPage } from 'ee/ai/duo_agents_platform/namespace/project';
initDuoAgentsPlatformProjectPage();
```
### App Structure
```mermaid
graph TD
A[Entry Point] --> B[initDuoAgentsPlatformPage]
B --> C[Extract Namespace Data]
C --> D[Create Router with Namespace]
D --> E[Component Mapping]
E --> F[Namespace-Specific Component]
F --> G[GraphQL Query with Props]
G --> H[Rendered UI]
I[Dataset Properties] --> C
J[Namespace Constant] --> E
K[Component Mappings] --> E
```
## Adding a New Navigation Item
The Duo Agent Platform uses a router-driven navigation system where the Vue Router configuration directly drives the breadcrumb navigation. The key insight is that **the router structure in `ee/app/assets/javascripts/ai/duo_agents_platform/router/index.js` determines both the URL structure and the breadcrumb hierarchy**.
### How Router-Driven Navigation Works
The system works through these interconnected components:
1. **Router Structure**: Nested routes with `meta.text` properties define breadcrumb labels
1. **Breadcrumb Component**: `duo_agents_platform_breadcrumbs.vue` automatically generates breadcrumbs from matched routes
1. **Route Injection**: `injectVueAppBreadcrumbs()` in `index.js` connects the router to the breadcrumb system
#### Router Analysis
Looking at the current router structure in `ee/app/assets/javascripts/ai/duo_agents_platform/router/index.js`:
```javascript
routes: [
{
component: NestedRouteApp, // Simple <router-view /> wrapper
path: '/agent-sessions',
meta: {
text: s__('DuoAgentsPlatform|Agent sessions'), // This becomes a breadcrumb
},
children: [
{
name: AGENTS_PLATFORM_INDEX_ROUTE,
path: '', // Matches /agent-sessions exactly
component: AgentsPlatformIndex,
},
{
name: AGENTS_PLATFORM_NEW_ROUTE,
path: 'new', // Matches /agent-sessions/new
component: AgentsPlatformNew,
meta: {
text: s__('DuoAgentsPlatform|New'), // This becomes a breadcrumb
},
},
// ...
],
},
]
```
#### Breadcrumb Generation
The breadcrumb component (`duo_agents_platform_breadcrumbs.vue`) works by:
1. Taking all matched routes from `this.$route.matched`
1. Extracting `meta.text` from each matched route
1. Creating breadcrumb items with the text and route path
1. Combining with static breadcrumbs (like "Automate")
```javascript
// From duo_agents_platform_breadcrumbs.vue
const matchedRoutes = (this.$route?.matched || [])
.map((route) => {
return {
text: route.meta?.text, // Uses meta.text for breadcrumb label
to: { path: route.path },
};
})
.filter((r) => r.text); // Only routes with meta.text become breadcrumbs
```
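For example, navigating to `/agent-sessions/new` with the routes shown earlier matches two route records, which map to two breadcrumb items. The following is a simplified, standalone illustration of that transformation (not part of the component):
```javascript
// Simplified illustration of the transformation above for /agent-sessions/new.
// In Vue Router, $route.matched contains the parent and the child route records:
const matched = [
  { path: '/agent-sessions', meta: { text: 'Agent sessions' } },
  { path: '/agent-sessions/new', meta: { text: 'New' } },
];

const breadcrumbs = matched
  .map((route) => ({ text: route.meta?.text, to: { path: route.path } }))
  .filter((item) => item.text);

// breadcrumbs === [
//   { text: 'Agent sessions', to: { path: '/agent-sessions' } },
//   { text: 'New', to: { path: '/agent-sessions/new' } },
// ]
```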
### Steps to Add a New Navigation Item
To add a new top-level navigation item (like "Your Feature"), you need to add a new route tree to the router:
#### Step 1: Add Route Constants
**File**: `ee/app/assets/javascripts/ai/duo_agents_platform/router/constants.js`
```javascript
// Add route name constants for your feature
export const AGENTS_PLATFORM_YOUR_FEATURE_INDEX = 'your_feature_index';
export const AGENTS_PLATFORM_YOUR_FEATURE_NEW = 'your_feature_new';
export const AGENTS_PLATFORM_YOUR_FEATURE_SHOW = 'your_feature_show';
```
#### Step 2: Add Routes to Router
**File**: `ee/app/assets/javascripts/ai/duo_agents_platform/router/index.js`
```javascript
import YourFeatureIndex from '../pages/your_feature/your_feature_index.vue';
import YourFeatureNew from '../pages/your_feature/your_feature_new.vue';
import YourFeatureShow from '../pages/your_feature/your_feature_show.vue';
export const createRouter = (base, namespace) => {
return new VueRouter({
base,
mode: 'history',
routes: [
// Existing agent-sessions routes
{
component: NestedRouteApp,
path: '/agent-sessions',
meta: {
text: s__('DuoAgentsPlatform|Agent sessions'),
},
children: [
{
name: AGENTS_PLATFORM_INDEX_ROUTE,
path: '',
component: getNamespaceIndexComponent(namespace),
},
// ... existing children
],
},
// NEW: Your feature routes
{
component: NestedRouteApp,
path: '/your-feature', // This becomes the URL path
meta: {
text: s__('DuoAgentsPlatform|Your Feature'), // This becomes the breadcrumb
},
children: [
{
name: AGENTS_PLATFORM_YOUR_FEATURE_INDEX,
path: '', // Matches /your-feature exactly
component: YourFeatureIndex,
},
{
name: AGENTS_PLATFORM_YOUR_FEATURE_NEW,
path: 'new', // Matches /your-feature/new
component: YourFeatureNew,
meta: {
text: s__('DuoAgentsPlatform|New'), // Breadcrumb: "Your Feature > New"
},
},
{
name: AGENTS_PLATFORM_YOUR_FEATURE_SHOW,
path: ':id(\\d+)', // Matches /your-feature/123
component: YourFeatureShow,
// No meta.text - will use route param as breadcrumb
},
],
},
{ path: '*', redirect: '/agent-sessions' },
],
});
};
```
#### Step 3: Add Backend Route (if needed)
The existing wildcard route in `ee/config/routes/project.rb` should handle your new paths:
```ruby
scope :automate do
get '/(*vueroute)' => 'duo_agents_platform#show', as: :automate, format: false
end
```
**Important**: If you're adding a sidebar menu item (Step 4), you must add a named route helper:
```ruby
scope :automate do
get '/(*vueroute)' => 'duo_agents_platform#show', as: :automate, format: false
# Named routes for sidebar menu helpers
get 'agent-sessions', to: 'duo_agents_platform#show', as: :automate_agent_sessions, format: false
get 'your-feature', to: 'duo_agents_platform#show', as: :automate_your_features, format: false
end
```
**Note**: Use plural form for the route name (e.g., `automate_your_features`) to match the existing pattern and ensure the Rails path helper is generated correctly.
#### Step 4: Add Sidebar Menu Item
**File**: `ee/lib/sidebars/projects/super_sidebar_menus/duo_agents_menu.rb`
Add the menu item to the `configure_menu_items` method and create the corresponding menu item method:
```ruby
override :configure_menu_items
def configure_menu_items
return false unless Feature.enabled?(:duo_workflow_in_ci, context.current_user)
add_item(duo_agents_runs_menu_item)
add_item(duo_agents_your_feature_menu_item) # Add your new menu item
true
end
private
def duo_agents_your_feature_menu_item
::Sidebars::MenuItem.new(
title: s_('Your Feature'),
link: project_automate_your_features_path(context.project), # Note: plural 'features'
active_routes: { controller: :duo_agents_platform },
item_id: :agents_your_feature
)
end
```
#### Step 5: Create Vue Components
**File**: `ee/app/assets/javascripts/ai/duo_agents_platform/pages/your_feature/your_feature_index.vue`
```vue
<script>
export default {
name: 'YourFeatureIndex',
};
</script>
<template>
<div>
<h1>Your Feature</h1>
</div>
</template>
```
## Adding New Namespaces
To add a new namespace (e.g., for groups):
### 1. Define the Namespace Constant
```javascript
// ee/app/assets/javascripts/ai/duo_agents_platform/constants.js
export const AGENT_PLATFORM_GROUP_PAGE = 'group';
```
### 2. Create Namespace Directory Structure
```plaintext
ee/app/assets/javascripts/ai/duo_agents_platform/namespace/group/
├── index.js
├── group_agents_platform_index.vue
└── graphql/
└── queries/
└── get_group_agent_flows.query.graphql
```
### 3. Implement Namespace Initialization
```javascript
// ee/app/assets/javascripts/ai/duo_agents_platform/namespace/group/index.js
import { initDuoAgentsPlatformPage } from '../../index';
import { AGENT_PLATFORM_GROUP_PAGE } from '../../constants';
export const initDuoAgentsPlatformGroupPage = () => {
initDuoAgentsPlatformPage({
namespace: AGENT_PLATFORM_GROUP_PAGE,
namespaceDatasetProperties: ['groupPath', 'groupId'],
});
};
```
### 4. Create Namespace-Specific Component
```vue
<!-- ee/app/assets/javascripts/ai/duo_agents_platform/namespace/group/group_agents_platform_index.vue -->
<script>
import getGroupAgentFlows from './graphql/queries/get_group_agent_flows.query.graphql';
import DuoAgentsPlatformIndex from '../../pages/index/duo_agents_platform_index.vue';
export default {
components: { DuoAgentsPlatformIndex },
inject: ['groupPath'], // Group-specific injection
apollo: {
workflows: {
query: getGroupAgentFlows, // Group-specific query
variables() {
return {
groupPath: this.groupPath,
// ...
};
},
// ...
},
},
};
</script>
<template>
<duo-agents-platform-index
:is-loading-workflows="isLoadingWorkflows"
:workflows="workflows"
:workflows-page-info="workflowsPageInfo"
:workflow-query="$apollo.queries.workflows"
/>
</template>
```
### 5. Update Component Mappings
```javascript
// ee/app/assets/javascripts/ai/duo_agents_platform/router/utils.js
import { AGENT_PLATFORM_PROJECT_PAGE, AGENT_PLATFORM_GROUP_PAGE } from '../constants';
import ProjectAgentsPlatformIndex from '../namespace/project/project_agents_platform_index.vue';
import GroupAgentsPlatformIndex from '../namespace/group/group_agents_platform_index.vue';
export const getNamespaceIndexComponent = (namespace) => {
const componentMappings = {
[AGENT_PLATFORM_PROJECT_PAGE]: ProjectAgentsPlatformIndex,
[AGENT_PLATFORM_GROUP_PAGE]: GroupAgentsPlatformIndex, // New mapping
};
return componentMappings[namespace];
};
```
### 6. Create Entry Point
```javascript
// ee/app/assets/javascripts/pages/groups/duo_agents_platform/index.js
import { initDuoAgentsPlatformGroupPage } from 'ee/ai/duo_agents_platform/namespace/group';
initDuoAgentsPlatformGroupPage();
```
## Benefits of This Architecture
1. **Code Reuse**: The same frontend infrastructure works across different contexts
1. **Separation of Concerns**: Each namespace handles its own data fetching and business logic
1. **Scalability**: Easy to add new namespaces without modifying existing code
1. **Type Safety**: Clear contracts through namespace constants and required properties
1. **Maintainability**: Isolated namespace implementations reduce coupling
### Key Implementation Details
#### Router-Driven Breadcrumbs
The breadcrumb system automatically generates navigation from the router structure:
- **Parent routes** with `meta.text` become breadcrumb segments
- **Child routes** with `meta.text` extend the breadcrumb chain
- **Routes without `meta.text`** use route parameters (like `:id`) as breadcrumb text
- **Static breadcrumbs** (like "Automate") are prepended to all routes
#### URL Structure
Each top-level navigation item gets its own URL namespace:
- Agent Sessions: `/automate/agent-sessions`, `/automate/agent-sessions/new`, `/automate/agent-sessions/123`
- Your Feature: `/automate/your-feature`, `/automate/your-feature/new`, `/automate/your-feature/456`
#### Nested Route Pattern
The platform uses a consistent nested route pattern:
1. **Parent route**: Defines the URL path and top-level breadcrumb
1. **NestedRouteApp component**: Simple `<router-view />` wrapper for child routes (see the sketch after this list)
1. **Child routes**: Define specific pages and additional breadcrumb segments
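The `NestedRouteApp` wrapper referenced in this pattern is intentionally minimal: conceptually it does nothing more than render whichever child route is matched. A sketch of what such a pass-through component amounts to, written as a plain JavaScript component definition for illustration (the actual component is a single-file component in the platform codebase):
```javascript
// Minimal sketch of a pass-through wrapper such as NestedRouteApp.
// It renders only whatever child route is currently matched.
export default {
  name: 'NestedRouteApp',
  render(createElement) {
    return createElement('router-view');
  },
};
```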
### Example: Current Agent Sessions Implementation
The existing agent sessions feature demonstrates this pattern:
```javascript
{
component: NestedRouteApp, // Renders child routes
path: '/agent-sessions', // URL: /automate/agent-sessions
meta: {
text: s__('DuoAgentsPlatform|Agent sessions'), // Breadcrumb: "Agent sessions"
},
children: [
{
name: AGENTS_PLATFORM_INDEX_ROUTE,
path: '', // URL: /automate/agent-sessions
component: AgentsPlatformIndex, // No additional breadcrumb
},
{
name: AGENTS_PLATFORM_NEW_ROUTE,
path: 'new', // URL: /automate/agent-sessions/new
component: AgentsPlatformNew,
meta: {
text: s__('DuoAgentsPlatform|New'), // Breadcrumb: "Agent sessions > New"
},
},
{
name: AGENTS_PLATFORM_SHOW_ROUTE,
path: ':id(\\d+)', // URL: /automate/agent-sessions/123
component: AgentsPlatformShow, // Breadcrumb: "Agent sessions > 123"
// No meta.text - uses :id parameter as breadcrumb
},
],
}
```
This creates the breadcrumb hierarchy:
- `/automate/agent-sessions` → "Automate > Agent sessions"
- `/automate/agent-sessions/new` → "Automate > Agent sessions > New"
- `/automate/agent-sessions/123` → "Automate > Agent sessions > 123"
## Key Files Reference
- **Entry Point**: `ee/app/assets/javascripts/pages/projects/duo_agents_platform/index.js`
- **Main Initialization**: `ee/app/assets/javascripts/ai/duo_agents_platform/index.js`
- **Router**: `ee/app/assets/javascripts/ai/duo_agents_platform/router/index.js`
- **Component Mapping**: `ee/app/assets/javascripts/ai/duo_agents_platform/router/utils.js`
- **Constants**: `ee/app/assets/javascripts/ai/duo_agents_platform/constants.js`
- **Utilities**: `ee/app/assets/javascripts/ai/duo_agents_platform/utils.js`
- **Project Namespace**: `ee/app/assets/javascripts/ai/duo_agents_platform/namespace/project/`
## Best Practices
1. **Router Structure**: Use nested routes with `meta.text` properties to define breadcrumb hierarchy
1. **URL Naming**: Use kebab-case for URL paths (`/your-feature`, not `/yourFeature`)
1. **Route Names**: Use descriptive constants for route names (`AGENTS_PLATFORM_YOUR_FEATURE_INDEX`)
1. **Component Organization**: Place feature components in `pages/[feature]/` directories
1. **Breadcrumb Text**: Use internationalized strings (`s__()`) for all `meta.text` values
1. **Consistent Patterns**: Follow the existing nested route pattern with `NestedRouteApp`
1. **Namespace Separation**: Keep namespace-specific logic in dedicated directories under `namespace/`
1. **Component Mapping**: Use the component mapping system to route to namespace-specific components
1. **Data Injection**: Use Vue's provide/inject pattern for namespace-specific data (see the sketch below)
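As a sketch of the last point, namespace data can be provided once when the app is mounted and injected wherever it is needed. The helper name and dataset properties below are hypothetical; the real initialization lives in `ee/app/assets/javascripts/ai/duo_agents_platform/index.js`.
```javascript
// Hypothetical sketch: providing namespace dataset properties to the component tree.
import Vue from 'vue';

export const mountPlatformApp = (el, router, AppComponent) => {
  // Assumed dataset properties; the real list depends on the namespace.
  const { projectPath, projectId } = el.dataset;

  return new Vue({
    el,
    router,
    // Components such as project_agents_platform_index.vue can `inject` these values.
    provide: { projectPath, projectId },
    render(h) {
      return h(AppComponent);
    },
  });
};
```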
## Troubleshooting
### Common Issues
1. **Breadcrumbs not showing**: Ensure parent routes have `meta.text` properties
1. **Routes not working**: Check that the wildcard route in `ee/config/routes/project.rb` exists
1. **Sidebar not highlighting**: Verify the sidebar menu item has correct `active_routes`
1. **404 errors**: Ensure your route paths don't conflict with existing routes
### Debugging Tips
1. **Vue DevTools**: Inspect `$route.matched` to see which routes are being matched
1. **Router State**: Check the router configuration in the Vue DevTools
1. **Breadcrumb Debug**: Add `console.log(this.$route.matched)` in the breadcrumb component (see the example after this list)
1. **URL Testing**: Test routes directly by navigating to URLs in the browser
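For example, a quick way to see which matched route records contribute breadcrumbs is to log their paths and `meta.text` values from a component method:
```javascript
// Temporary debugging snippet: list matched route paths and their breadcrumb text.
console.log(
  this.$route.matched.map(({ path, meta }) => ({ path, text: meta?.text })),
);
```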
## Generating fake flows
To generate fake flows for testing the platform, you can run the Rake task defined in
`ee/lib/tasks/gitlab/duo_workflow/duo_workflow.rake`.
For example, to create 50 flows, 20 of them created by the specified user in the specified project:
```shell
bundle exec rake "gitlab:duo_workflow:populate[50,20,user@example.com,gitlab-org/gitlab-test]"
```
|
https://docs.gitlab.com/development/vertex_model_enablement_process
|
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/development/vertex_model_enablement_process.md
|
2025-08-13
|
doc/development/ai_features
|
[
"doc",
"development",
"ai_features"
] |
vertex_model_enablement_process.md
|
AI-powered
|
AI Framework
|
Any user with at least the Maintainer role can merge updates to this content. For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
|
Vertex AI Model Enablement Process
| null |
## Production Environment Setup
### 1. Request Initiation
- Create an issue in the [GitLab project](https://gitlab.com/gitlab-org/gitlab/-/issues)
- Use the Model Enablement Request template - see below
- Specify the model(s) to be enabled (e.g., Codestral)
- Share the issue link in the `#ai-infrastructure` channel for visibility
### 2. Request Processing
- Request is handled by either:
- Infrastructure team (Infra)
- AI Framework team (AIF)
### 3. Model Enablement
- For Vertex AI managed models:
- Team enables the model via the Vertex AI console ("click on enable")
- For custom configurations:
- AIF team opens a ticket with Google for customization needs
### 4. Quota Management
- Monitoring for existing quota is available from the [AI-gateway dashboard](https://dashboards.gitlab.net/d/ai-gateway-main/ai-gateway3a-overview?from=now-6h%2Fm&orgId=1&timezone=utc&to=now%2Fm&var-PROMETHEUS_DS=mimir-runway&var-environment=gprd&viewPanel=panel-1217942947). Use the little arrow on the top left to drill down and see quota usage per model.
- Not all quotas are available in our monitoring; all visible quotas are available in the [GCP console for the `gitlab-ai-framework-prod` project](https://console.cloud.google.com/iam-admin/quotas?referrer=search&inv=1&invt=Abs5YQ&project=gitlab-ai-framework-prod)
- Quota capacity forecasting is available in [tamland](https://gitlab-com.gitlab.io/gl-infra/capacity-planning-trackers/gitlab-com/service_groups/ai-gateway/)
- Quota increases for shared resources must be requested from Google
- Provisioned throughput can be purchased from Google if the cost is justifiable.
- Even when quota is available, requests may be throttled during high-demand periods because of Anthropic's resource provisioning model. Unlike direct Google services, which over-provision resources, Anthropic provisions based on actual demand. To ensure consistent throughput without throttling, dedicated provisioned throughput can be purchased through Anthropic.
## Load Testing Environment Setup
### 1. Environment Selection
- Options include:
- ai-framework-dev
- ai-framework-stage
- Dedicated load test environment (e.g., sandbox project)
### 2. Access Request
- Create an access request using the [template](https://gitlab.com/gitlab-com/team-member-epics/access-requests/-/issues/new?description_template=Individual_Bulk_Access_Request)
- Request the `roles/writer` role for the project
### 3. Environment Configuration
- Replicate the exact same model configuration from production
- Ensure isolation from production to prevent:
- Load test interrupting production traffic
- External traffic skewing load test results
### 4. Model Verification
- Verify model specs match production environment
- Validate quotas and capacity before running tests
## Best Practices
- Test new models or model versions before deploying to production
- Use isolated environments for load testing to prevent impacting users
- Monitor for GPU capacity issues and rate limits during testing
- Document configuration changes for future reference
## Model Enablement Request Template
```markdown
### Model Details
- **Model Name**: [e.g., Codestral, Claude 3 Opus, etc.]
- **Provider**: [e.g., Google Vertex AI, Anthropic, etc.]
- **Model Version/Edition**: [e.g., v1, Sonnet, Haiku, etc.]
### Business Justification
- **Purpose**: [Brief description of how this model will be used]
- **Features/Capabilities Required**: [Specific capabilities needed from this model]
- **Expected Impact**: [How this model will improve GitLab features/services]
### Technical Requirements
- **Environment(s)**: [Production, Staging, Dev, etc.]
- **Expected Traffic/Usage**: [Estimated QPS, daily usage, etc.]
- **Required Quotas**: [TPU/GPU hours, tokens per minute, etc. if known]
- **Integration Point**: [Which GitLab service(s) will use this model]
### Timeline
- **Requested By Date**: [When you need this model to be available]
- **Testing Period**: [Planned testing dates before full deployment]
### Additional Information
- **Special Configuration Needs**: [Any custom settings needed]
- **Similar Models Already Enabled**: [For reference/comparison]
- **Links to Relevant Documentation**: [Model documentation, internal specs, etc.]
/label ~"group::ai framework"
```
|
https://docs.gitlab.com/development/amazon_q_integration
|
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/development/amazon_q_integration.md
|
2025-08-13
|
doc/development/ai_features
|
[
"doc",
"development",
"ai_features"
] |
amazon_q_integration.md
|
AI-powered
|
Custom Models
|
Any user with at least the Maintainer role can merge updates to this content. For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
|
Amazon Q integration for testing and evaluation
| null |
This guide combines and builds on the following guides and sources. It describes Amazon Q setup for testing and evaluation purposes:
- [Set up GitLab Duo with Amazon Q](../../user/duo_amazon_q/setup.md)
- [code-suggestions development guide](code_suggestions.md)
This guide describes how to set up Amazon Q in a GitLab Linux package running in a VM, using the staging AI Gateway. The reason we need a GitLab Linux package instance instead of GDK is that the GitLab instance needs an HTTPS URL that can be accessed by Amazon Q.
## Install and configure a GitLab Linux package on a virtual machine
1. Create a VM in AWS
1. Go to the [cloud sandbox](https://gitlabsandbox.cloud/cloud), and log in with Okta
1. Click "Create Individual Account", and choose `aws-***` (not `aws-services-***` or `aws-dedicated-***`). This creates an AWS sandbox and displays login credentials
1. Configure an EC2 machine
A few things to note:
- Enable both HTTP and HTTPS traffic in the firewall settings.
- Copy the external IP address of the VM instance you created.
1. Install GitLab
1. Follow this [guide](https://about.gitlab.com/install/#ubuntu) on how to install the GitLab Linux package.
We need to set up the external URL and an initial password. Install GitLab using the following command:
```shell
sudo GITLAB_ROOT_PASSWORD="your_password" EXTERNAL_URL="https://<vm-instance-external-ip>.nip.io" apt install gitlab-ee
```
This uses nip.io as the DNS service so that the GitLab instance can be accessed over HTTPS.
1. Configure the newly installed GitLab instance
1. SSH into the VM, and add the following configuration to `/etc/gitlab/gitlab.rb`
```ruby
gitlab_rails['env'] = {
"GITLAB_LICENSE_MODE" => "test",
"CUSTOMER_PORTAL_URL" => "https://customers.staging.gitlab.com",
"AI_GATEWAY_URL" => "https://cloud.staging.gitlab.com/ai"
}
```
1. Apply the configuration changes by running `sudo gitlab-ctl reconfigure`
1. Obtain and activate a self-managed Ultimate license
1. Go to the [staging customers portal](https://customers.staging.gitlab.com/) and select "Sign in with GitLab.com account".
1. Instead of clicking "Buy new subscription", go to the [product page](https://customers.staging.gitlab.com/subscriptions/new?plan_id=2c92a00c76f0c6c20176f2f9328b33c9) directly. For the reasoning behind this, see [buy_subscription](https://gitlab.com/gitlab-org/customers-gitlab-com/-/blob/8aa922840091ad5c5d96ada43d0065a1b6198841/doc/flows/buy_subscription.md).
1. Purchase the subscription using [a test credit card](https://gitlab.com/gitlab-org/customers-gitlab-com/#testing-credit-card-information). You will receive an activation code. Do not purchase a Duo Pro add-on, because Duo Pro and Amazon Q are currently mutually exclusive.
1. Go to the GitLab instance created earlier (`https://<vm-instance-external-ip>.nip.io`) and log in with the root account. Then, on the left sidebar, go to **Admin > Subscription**, and enter the activation code.
## Create and configure an AWS sandbox
1. Follow the [same steps](#install-and-configure-a-gitlab-linux-package-on-a-virtual-machine) described above to create an AWS sandbox if you don't already have one.
1. Log in to the newly created AWS account and create an **Identity Provider** by following these [instructions](../../user/duo_amazon_q/setup.md#create-an-iam-identity-provider) with slight modifications:
- Provider URL: `https://glgo.staging.runway.gitlab.net/cc/oidc/<your_gitlab_instance_id>`
- Audience: `gitlab-cc-<your_gitlab_instance_id>`
The GitLab instance ID can be found at `<gitlab_url>/admin/ai/amazon_q_settings`
1. Create a new role using the identity provider. For this, we can follow [this section](../../user/duo_amazon_q/setup.md#create-an-iam-role) exactly.
## Add Amazon Q to GitLab
1. Follow [Enter the ARN in GitLab and enable Amazon Q](../../user/duo_amazon_q/setup.md#enter-the-arn-in-gitlab-and-enable-amazon-q) exactly
1. Amazon Q should now be working. You can test it by following [these instructions](https://gitlab.com/gitlab-com/ops-sub-department/aws-gitlab-ai-integration/integration-motion-planning/-/wikis/integration-docs#testing-q)
|
https://docs.gitlab.com/development/glossary
|
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/development/glossary.md
|
2025-08-13
|
doc/development/ai_features
|
[
"doc",
"development",
"ai_features"
] |
glossary.md
|
AI-powered
|
AI Framework
|
To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
|
GitLab Duo Glossary
| null |
This is a list of terms that may have a general meaning but also may have a
specific meaning at GitLab. If you encounter a piece of technical jargon related
to AI that you think could benefit from being in this list, add it!
## General terminology
### Adapters
A variation on Fine Tuning. Instead of opening the model and adjusting the layer weights, new trained layers are added onto the model or hosted in an upstream standalone model. Also known as Adapter-based Models. By selectively fine-tuning these specific modules rather than the entire model, Adapters facilitate the customization of pre-trained models for distinct tasks, requiring only a minimal increase in parameters. This method enables precise, task-specific adjustments of the model without altering its foundational structure.
### AI catalog
The [Workflow Catalog Group](https://handbook.gitlab.com/handbook/engineering/ai/workflow-catalog/) is focused on developing Workflow Catalog, a catalog of Agents, tools, and flows that can be created, curated, and shared across organizations, groups, and projects.
### AI gateway
Standalone service used to give access to AI features to non-SaaS GitLab users. This logic will be moved to Cloud Connector when that service is ready. Eventually, the AI gateway will be used to host endpoints that proxy requests to AI providers, removing the need for the GitLab Rails monolith to integrate and communicate directly with third-party Large Language Models (LLMs). [Design document](https://handbook.gitlab.com/handbook/engineering/architecture/design-documents/ai_gateway/).
### AI gateway prompt
An encapsulation of prompt templates, model selection, and model parameters. As part of the [AI gateway as the Sole Access Point for Monolith to Access Models](https://gitlab.com/groups/gitlab-org/-/epics/13024) effort we're migrating these components from the GitLab Rails monolith into [the `prompts` package in the AI gateway](https://gitlab.com/gitlab-org/modelops/applied-ml/code-suggestions/ai-assist/-/tree/main/ai_gateway/prompts).
### AI gateway prompt registry
A component responsible for maintaining a list of AI gateway Prompts available to perform specific actions. Currently, we use a [`LocalPromptRegistry`](https://gitlab.com/gitlab-org/modelops/applied-ml/code-suggestions/ai-assist/-/blob/874e05281cab50012a53685e051583e620dac8c4/ai_gateway/prompts/registry.py#L18) that reads definitions from YAML files in the AI gateway.
### Air-Gapped Model
A hosted model that is internal to an organization's intranet only. In the context of GitLab AI features, this could be connected to an air-gapped GitLab instance.
### Bring Your Own Model (BYOM)
A third-party model to be connected to one or more GitLab Duo features. Could be an off-the-shelf Open Source (OS) model, a fine-tuned model, or a closed source model. GitLab is planning to support specific, validated BYOMs for GitLab Duo features, but does not plan to support general BYOM use for GitLab Duo features.
### Chat Evaluation
Automated mechanism for determining the helpfulness and accuracy of GitLab Duo Chat to various user questions. The MVC is an RSpec test run via GitLab CI that asks a set of questions to Chat and then has two different third-party LLMs determine if the generated answer is accurate or not. [MVC](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/134610). [Design doc for next iteration](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/136127).
### Cloud Connector
Cloud Connector is a way to access services common to multiple GitLab deployments, instances, and cells. We use it as an umbrella term to refer to the set of technical solutions and APIs used to make such services available to all GitLab customers. For more information, see the [Cloud Connector architecture](../cloud_connector/architecture.md).
### Closed Source Model
A private model fine-tuned or built from scratch by an organization. These may be hosted as cloud services, for example ChatGPT.
### Consensus Filtering
Consensus filtering is a method of LLM evaluation. An LLM judge is asked to rate and compare the output of multiple LLMs to sets of prompts. This is the method of evaluation being used for the Chat Evaluation MVC. [Issue from Model Validation team](https://gitlab.com/gitlab-org/modelops/applied-ml/code-suggestions/prompt-library/-/issues/91#metric-2-consensus-filtering-with-llm-based-evaluation).
### Context
Relevant information that surrounds a data point, an event, or a piece of information, which helps to clarify its meaning and implications. For GitLab Duo Chat, context is the attributes of the Issue or Epic being referenced in a user question.
### Custom Model
Any implementation of a GitLab Duo feature using a self-hosted model, BYOM, fine-tuned model, RAG-enhanced model, or adapter-based model.
### Embeddings
In the context of machine learning and large language models, embeddings refer to a technique used to represent words, phrases, or even entire documents as dense numerical vectors in a continuous vector space. At GitLab, [we use Vertex AI's Embeddings API](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/129930) to create a vector representation of GitLab documentation. These embeddings are stored in the `vertex_gitlab_docs` database table in the `embeddings` database. The embeddings search is done in Postgres using the `vector` extension. The vertex embeddings database is updated based on the latest version of GitLab documentation on a daily basis by running `Llm::Embedding::GitlabDocumentation::CreateEmbeddingsRecordsWorker` as a cronjob.
### Fine Tuning
Altering an existing model using a supervised learning process that utilizes a dataset of labeled examples to update the weights of the LLM, improving its output for specific tasks such as code completion or Chat.
### Foundational Model
A general purpose LLM trained using a generic objective, typically next token prediction. These models are capable and flexible, and can be adjusted to solve many domain-specific tasks (through fine-tuning or prompt engineering). This means that these general purpose models are ideal to serve as the foundation of many downstream models. Examples of foundational models are: GPT-4o, Claude 3.7 Sonnet.
### Frozen Model
An LLM that cannot be fine-tuned (also Frozen LLM).
### GitLab Duo
AI-assisted features across the GitLab DevSecOps platform. These features aim to help increase velocity and solve key pain points across the software development lifecycle. See also the [GitLab Duo](../../user/gitlab_duo/_index.md) features page.
### GitLab Managed Model
An LLM that is managed by GitLab. Currently all [GitLab Managed Models](https://gitlab.com/gitlab-com/gl-infra/scalability/-/issues/2864#note_1787040242) are hosted externally and accessed through the AI gateway. GitLab-owned API keys are used to access the models.
### Golden Questions
A small subset of the types of questions we think a user should be able to ask GitLab Duo Chat. Used to generate data for Chat evaluation. [Questions for Chat Beta](https://gitlab.com/groups/gitlab-org/-/epics/10550#what-the-user-can-ask).
### Ground Truth
Data that is determined to be the true output for a given input, representing the reality that the AI model aims to learn and predict. Ground truth data are often human-annotated, but may also be produced from a trusted source such as an LLM that has known good output for a given use case.
### Local Model
An LLM running on a user's workstation. [More information](https://gitlab.com/groups/gitlab-org/-/epics/12907).
### LLM
A Large Language Model, or LLM, is a very large-scale neural network trained to understand and generate human-like text. For [GitLab Duo features](../../user/gitlab_duo/_index.md), GitLab is currently working with frozen models hosted at [Google and Anthropic](https://gitlab.com/gitlab-com/gl-infra/scalability/-/issues/2864#note_1787040242)
### Model Validation
Group within the AI-powered Stage working on the Prompt Library, supporting AI Validation of GitLab Duo features, and researching AI/ML models to support other use-cases for AI at GitLab. [Team handbook section](https://handbook.gitlab.com/handbook/product/categories/features/index.html#ai-powered-ai-model-validation-group)
### Offline Model
A model that runs without internet or intranet connection (for example, you are running a model on your laptop on a plane).
### Open Source Model
Models that are published with their source code and weights and are available for modifications and re-distribution. Examples: Llama / Llama 2, BLOOM, Falcon, Mistral, Gemma.
### Prompt library
The ["Prompt Library"](https://gitlab.com/gitlab-org/modelops/applied-ml/code-suggestions/prompt-library) is a Python library that provides a CLI for testing different prompting techniques with LLMs. It enables data-driven improvements to LLM applications by facilitating hypothesis testing. Key features include the ability to manage and run dataflow pipelines using Apache Beam, and the execution of multiple evaluation experiments in a single pipeline run on prompts with various third-party AI Services. [Code](https://gitlab.com/gitlab-org/modelops/applied-ml/code-suggestions/prompt-library).
### Prompt Registry
Stored, versioned prompts used to interact with third-party AI Services. [Design document proposal MR (closed)](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/135872).
### Prompt
Natural language instructions sent to an LLM to perform certain tasks. [Prompt guidelines](ai_feature_development_playbook.md).
### RAG (Retrieval Augmented Generation)
RAG provides contextual data to an LLM as part of a query to personalize results. RAG is used to inject additional context into a prompt to decrease hallucinations and improve the quality of outputs.
### RAG Pipeline
A mechanism used to take an input (such as a user question) into a system, retrieve any relevant data for that input, augment the input with additional context, and then synthesize the information to generate a coherent, contextually relevant answer. This design pattern is helpful in open-domain question answering with LLMs, which is why we use this design pattern for answering questions in GitLab Duo Chat.
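The retrieve-augment-generate steps can be sketched in a few lines. The following is a minimal, illustrative Python sketch, not GitLab's actual implementation; `search_docs` and `call_llm` are hypothetical placeholders for a retriever and an LLM client.

```python
# Minimal RAG pipeline sketch: retrieve -> augment -> generate.
# `search_docs` and `call_llm` are hypothetical placeholders, not GitLab APIs.

def search_docs(question: str, top_k: int = 3) -> list[str]:
    """Pretend retriever: return the most relevant snippets for the question."""
    corpus = {
        "How do I create a merge request?": "Use the 'New merge request' button on the project page.",
        "What is a pipeline?": "A pipeline is a collection of CI/CD jobs organized into stages.",
    }
    hits = []
    for q, answer in corpus.items():
        # A real retriever would use embeddings or keyword search; here we fake it.
        if set(q.lower().split()) & set(question.lower().split()):
            hits.append(answer)
    return hits[:top_k]

def call_llm(prompt: str) -> str:
    """Pretend LLM call: a real implementation would hit a model API."""
    return f"[LLM answer grounded in the provided context]\n{prompt[:80]}..."

def rag_answer(question: str) -> str:
    context = "\n".join(search_docs(question))            # retrieve
    prompt = (
        "Answer the question using only the context below.\n"
        f"Context:\n{context}\n\nQuestion: {question}"    # augment
    )
    return call_llm(prompt)                               # generate (synthesize)

print(rag_answer("How do I create a merge request?"))
```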
### Self-hosted model
An LLM hosted externally to GitLab by an organization and interacting with GitLab AI features. See also the [style guide reference](../documentation/styleguide/word_list.md#self-hosted-model).
### Similarity Score
A mathematical method to determine the likeness between answers produced by an LLM and the reference ground truth answers. See also the [Model Validation direction page](https://about.gitlab.com/direction/ai-powered/ai_model_validation/ai_evaluation/metrics/#similarity-scores).
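One common example of a similarity score is cosine similarity between the embedding vectors of the two answers. The sketch below is a generic illustration with toy vectors, not the specific metric suite used by Model Validation.

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity: 1.0 means identical direction, 0.0 means orthogonal."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Embeddings of an LLM answer and a ground truth answer (toy 4-dimensional vectors).
llm_answer_vec = [0.2, 0.8, 0.1, 0.5]
ground_truth_vec = [0.25, 0.75, 0.05, 0.55]
print(round(cosine_similarity(llm_answer_vec, ground_truth_vec), 3))  # close to 1.0
```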
### Tool
Logic that performs a specific LLM-related task; each tool has a description and its own prompt. [How to add a new tool](duo_chat.md#adding-a-new-tool).
### Unit Primitive
GitLab-specific term that refers to the fundamental logical feature that a permission or access scope can control. Examples: [`duo_chat`](../../user/gitlab_duo_chat/_index.md) and [`code_suggestions`](../../api/code_suggestions.md). These features are both currently part of the GitLab Duo Pro license, but we are building the concept of a Unit Primitive around each Duo feature so that Duo features are easily composable into different groupings to accommodate potential future product packaging needs.
### Word-Level Metrics
Method for LLM evaluation that compares aspects of text at the granularity of individual words. [Issue from Model Validation team](https://gitlab.com/gitlab-org/modelops/applied-ml/code-suggestions/prompt-library/-/issues/98#metric-3-word-level-metrics).
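As a generic illustration (not the Model Validation team's exact metric), a word-level F1 score compares the individual words of a generated answer with those of a reference answer:

```python
from collections import Counter

def word_level_f1(prediction: str, reference: str) -> float:
    """F1 over the individual words shared between prediction and reference."""
    pred_words = Counter(prediction.lower().split())
    ref_words = Counter(reference.lower().split())
    overlap = sum((pred_words & ref_words).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(pred_words.values())
    recall = overlap / sum(ref_words.values())
    return 2 * precision * recall / (precision + recall)

print(word_level_f1("pipelines run CI jobs in stages",
                    "a pipeline runs CI jobs in stages"))  # ~0.62
```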
### Zero-shot agent
In the general world of AI, a learning model or system that can perform tasks without having seen any
examples of that task during training. At GitLab, we use this term to refer specifically to a piece of our code that serves
as a sort of LLM-powered air traffic controller for GitLab Duo Chat. The GitLab zero-shot agent has a
system prompt that explains how an LLM should interpret user input from GitLab Duo Chat as well as a
list of tool descriptions. Using this information, the agent determines which tool to use to answer a user's question.
The agent may decide that no tools are required and answer the question directly.
If a tool is used, the answer from the tool is fed back to the zero-shot agent to evaluate if the answer is
sufficient or if an additional tool must be used to answer the question.
[Code](https://gitlab.com/gitlab-org/gitlab/-/blob/6b747cbd7c6a71145a8bfb8201db3c857b5aed6a/ee/lib/gitlab/llm/chain/agents/zero_shot/executor.rb).
[Zero-shot agent in action](https://gitlab.com/gitlab-org/gitlab/-/issues/427979).
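The control flow described above can be summarized in a short, illustrative Python sketch. The function and tool names here are hypothetical; the actual implementation is the Ruby executor linked above.

```python
# Illustrative control flow of a zero-shot agent loop.
# `ask_llm`, `run_tool`, and the tool list are hypothetical placeholders.

TOOLS = {
    "issue_reader": "Fetches details of a GitLab issue",
    "documentation_search": "Searches GitLab documentation",
}

def ask_llm(question: str, observations: list[str]) -> dict:
    """Stand-in for the LLM: given tool descriptions, pick a tool or answer directly."""
    if not observations:
        return {"action": "documentation_search", "input": question}
    return {"final_answer": f"Answered using {len(observations)} tool observation(s)."}

def run_tool(name: str, tool_input: str) -> str:
    return f"[{TOOLS[name]}: output for '{tool_input}']"

def zero_shot_agent(question: str, max_steps: int = 5) -> str:
    observations: list[str] = []
    for _ in range(max_steps):
        decision = ask_llm(question, observations)
        if "final_answer" in decision:          # the agent may answer directly
            return decision["final_answer"]
        # Otherwise run the chosen tool and feed its output back to the agent.
        observations.append(run_tool(decision["action"], decision["input"]))
    return "Could not answer within the step limit."

print(zero_shot_agent("How do I revert a commit?"))
```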
## GitLab Duo Agent Platform terminology
## Core Layer Concepts (GitLab-specific)
### Flow
A **goal-oriented, structured graph** that orchestrates agents and tools to deliver a single, economically-valuable outcome (e.g., *create a code-review MR*, *triage issues*).
- **Structure** - Explicit phases: planning → execution → completion
- **Nodes** - Each node is an *Agent* (decision-maker) or a *Deterministic step* (for example, CRUD or a Boolean decision)
- **Trigger & Terminator** - Every flow has one or many defined start trigger(s) and a defined end state
- **Input** - Each Flow must have an input. Inputs set the context for the Flow session and differentiate flows by their outcomes. Inputs can be free text or entities (from GitLab or a third party)
- **Session** - One execution of a flow; sessions carry user-specific goals and data
{{< alert type="note" >}}
**Analogy:** *competency / job description* - the "what & when" of getting work done.
{{< /alert >}}
### Agent
A **specialized, LLM-powered decision-maker** that owns a single node inside a flow. Can be defined independently and reused across multiple flows.
- **Prompt (System)** - Sets the overall behavior, guardrails and persona for the agents
- **Prompt (Goal)** - Receives the session-specific objective from the flow
- **Tools** - May call only the tools granted by the flow node definition and the user/company definition of available tools
- **Agents / Flows** - Agents can invoke other agents or Flows to achieve their goal if these were made available
- **Reasoning** - Uses an LLM to decompose its goal into dynamic subtasks
- **Context awareness** - Gains project / repo / issue data through tool calls
GitLab agents are **specialists**, not generalists, to maximize reliability and UX.
### Tool
A **discrete, deterministic capability** an agent (or flow step) invokes to perform read/write actions. Tools can be used to perform these actions in GitLab or in third-party applications via MCP or other protocols.
*Examples:* read GitLab issues, clone a repository, commit & push changes, call a REST API.
Tools expose data or side-effects; they themselves perform **no reasoning**.
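To make the tool/agent/flow layering concrete, here is a minimal, hypothetical Python sketch: tools are plain deterministic functions, an agent reasons about how to use its tools, and a flow sequences agents toward one outcome. None of these class or function names come from the GitLab codebase.

```python
# Hypothetical layering sketch: Tool (deterministic) -> Agent (reasons) -> Flow (orchestrates).

def read_issue(issue_id: int) -> str:               # Tool: read action, no reasoning
    return f"Issue {issue_id}: users report a login timeout."

def post_comment(issue_id: int, body: str) -> str:  # Tool: write action, no reasoning
    return f"Commented on issue {issue_id}: {body}"

class TriageAgent:                                   # Agent: decides how to use its tools
    tools = {"read_issue": read_issue, "post_comment": post_comment}

    def run(self, goal: str, issue_id: int) -> str:
        details = self.tools["read_issue"](issue_id)            # gather context
        label = "bug" if "timeout" in details else "feature"    # stand-in for LLM reasoning
        return self.tools["post_comment"](issue_id, f"Suggested label: ~{label}")

class TriageFlow:                                    # Flow: trigger -> agents -> end state
    def __init__(self):
        self.agents = [TriageAgent()]

    def run(self, issue_id: int) -> list[str]:
        return [agent.run("triage the issue", issue_id) for agent in self.agents]

print(TriageFlow().run(issue_id=42))
```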
## Flow types
### Current implementation
- **Sequence** - The Flow executes agents that hand over their output to the next agent in a preset order
### Future implementations
- **Single Agent** - A single agent executes the entire flow to completion; suitable for small, well-defined tasks where latency matters
- **Multi Agent** - A pool of agents works to complete a task, where each agent gets a chance to solve it and/or a supervisor chooses the final solution. Can support different graph topologies
## Supporting Terminology
| Term | Definition |
| ---- | ---------- |
| **Node (Flow node)** | A single step in the flow graph. GitLab currently supports *Agent*, *Tool Executor*, *Agent Handover*, *Supervisor*, and *Terminator* nodes. |
| **Run** | One instantiation of a flow with concrete user input and data context. |
| **Task** | A formal object representing a unit of work inside a run. At present only the *Executor* agent persists tasks, but the concept is extensible. |
| **Trigger** | An event that starts a flow run (e.g., slash command, schedule, issue label). |
| **Agent Handover** | Node type that packages context from one agent and passes it to another. |
| **Supervisor Agent** | An agent node that monitors other agents' progress and enforces run-level constraints (timeout, max tokens, etc.). |
| **Subagent** | Shorthand for an agent that operates under a Supervisor within the same run. |
| **Autonomous Agent** | Historical term for an agent that can loop without human approval. In GitLab, autonomy level is governed by flow design, not by a separate agent type. |
| **Framework** | A platform for building multi-agent systems. GitLab Duo Agent Platform uses **LangGraph**, an extension to LangChain that natively models agent graphs. |
## Execution
Flows are executed in the following ways:
- **Local** - The Flow is executed in relation to a project or a folder (future)
- **Remote** - The Flow is executed in CI Runners in relation to a project, Group (future), Namespace (future)
## Quick Reference Matrix
| Layer | Human Analogy | Key Question Answered |
| ----- | ------------- | --------------------- |
| **Tool** | Capability | "What concrete action can I perform?" |
| **Agent** | Skill / Specialist | "How do I use my tools to reach my goal?" |
| **Flow** | Competency / Job | "When and in what order should skills be applied to deliver value?" |
## AI Context Terminology
### Advanced Context Resolver
Advanced context is a comprehensive set of code-related information extending
beyond a single file, including open file tabs, imports, dependencies,
cross-file symbols and definitions, and project-wide relevant code snippets.
Advanced context *resolver* is a system designed to gather the above advanced context.
By providing advanced context, the resolver provides the LLM with a more
holistic understanding of the project structure, enabling more accurate and
context-aware code suggestions and generation.
### AI Context Abstraction Layer
A [Ruby gem](https://gitlab.com/gitlab-org/gitlab/-/tree/master/gems/gitlab-active-context) that provides a unified interface for Retrieval Augmented Generation (RAG) across multiple vector databases within GitLab. The system abstracts away the differences between Elasticsearch, OpenSearch, and PostgreSQL with pgvector, enabling AI features to work regardless of the underlying storage solution.
Key components include collections that define data schemas and reference classes that handle serialization, migrations for schema management, and preprocessors for chunking and embedding generation. The layer supports automatic model migration between different LLMs without downtime, asynchronous processing through Redis-backed queues, and permission-aware search with automatic redaction.
This [architecture](https://handbook.gitlab.com/handbook/engineering/architecture/design-documents/ai_context_abstraction_layer/) prevents vendor lock-in and enables GitLab customers without Elasticsearch to access RAG-powered features through pgvector.
### AI Context Policies
A user-defined and user-managed mechanism allowing precise control over the
content that can be sent to LLMs as contextual information.
GitLab has an [architecture document](https://handbook.gitlab.com/handbook/engineering/architecture/design-documents/ai_context_management/)
that proposes a format for AI Context Policies.
### Codebase as Chat Context
This refers to a repository that the user explicitly provides using the `/include` command. The user may narrow the scope by choosing a directory within a repository.
This feature allows the user to ask questions about an entire repository, or a subset of that repository by selecting specific directories.
This is automatically enhanced by performing a semantic search of the user's question over the [Code Embeddings](#code-embeddings) of the included repository,
with the search results then added to the context sent to the LLM. This gives the LLM information about the included repository or directory that is specifically
targeted to the user's question, allowing the LLM to generate a more helpful response.
This [architecture document](https://handbook.gitlab.com/handbook/engineering/architecture/design-documents/codebase_as_chat_context/) proposes
Codebase as Chat Context enhanced by semantic search over Code Embeddings.
In the future, the repository or directory context may also be enhanced by a [Knowledge Graph](#knowledge-graph) search.
### Code Embeddings
The [Code Embeddings initiative](https://handbook.gitlab.com/handbook/engineering/architecture/design-documents/codebase_as_chat_context/code_embeddings/)
aims to build a vector embedding representation of the files in a repository. The file contents are chunked into logical segments, then embeddings are generated
for the chunked content and stored in a vector store.
With Code Embeddings, we can perform a semantic search over a given repository, with the search results then used as additional context for an LLM.
(See [Codebase as Chat Context](#codebase-as-chat-context) for how Code Embeddings will be used in Duo Chat.)
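A minimal sketch of the chunk-embed-store-search idea, assuming a hypothetical `embed` function; a real system would call an embedding model and a vector store rather than the toy stand-ins used here.

```python
import math

def embed(text: str) -> list[float]:
    """Hypothetical embedding function: a real system calls an embedding model."""
    # Toy 26-dimensional letter-frequency vector, just to make the example runnable.
    return [text.lower().count(chr(ord("a") + i)) / max(len(text), 1) for i in range(26)]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / ((math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))) or 1.0)

# 1. Chunk file contents into logical segments (here: one chunk per function).
chunks = {
    "auth.py:login": "def login(user, password): validate credentials and create a session",
    "billing.py:charge": "def charge(account, amount): create an invoice and capture payment",
}

# 2. Generate embeddings for each chunk and store them (here: an in-memory dict).
vector_store = {name: embed(text) for name, text in chunks.items()}

# 3. Semantic search: embed the question and rank chunks by similarity.
question_vec = embed("where do we validate user credentials?")
ranked = sorted(vector_store, key=lambda name: cosine(question_vec, vector_store[name]), reverse=True)
print(ranked)  # most similar chunks first; the top results are added to the LLM context
```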
### GitLab Zoekt
A scalable exact code search service and file-based database system, with a flexible architecture supporting various AI context use cases beyond traditional search. It's built on top of the open source code search engine Zoekt.
The system consists of a unified `gitlab-zoekt` binary that can operate in both indexer and webserver modes, managing index files on persistent storage for fast searches. Key features include bi-directional communication with GitLab and self-registering node [architecture](https://handbook.gitlab.com/handbook/engineering/architecture/design-documents/code_search_with_zoekt/) for easy scaling.
The system is designed to handle enterprise-scale deployments, with GitLab.com successfully operating over 48 TiB of indexed data.
Most likely, this distributed database system will be used to power [Knowledge Graph](#knowledge-graph). Also, we might leverage Exact Code Search to provide additional context and/or tools for GitLab Duo.
### Knowledge Graph
The [Knowledge Graph](https://gitlab.com/gitlab-org/rust/knowledge-graph) project aims to create a structured, queryable graph database from code repositories to power AI features and enhance developer productivity within GitLab.
Think of it like creating a detailed blueprint that shows which functions call other functions, how classes relate to each other, and where variables are used throughout the codebase. Instead of GitLab Duo having to read through thousands of files every time you ask it something, it can quickly navigate this pre-built map to give you better code suggestions, find related code snippets, or help debug issues. It gives Duo a much smarter way to understand your codebase so it can assist you more effectively with things like code reviews, refactoring, or finding where to make changes when you're working on a feature.
### One Parser (GitLab Code Parser)
The [GitLab Code Parser](https://gitlab.com/gitlab-org/code-creation/gitlab-code-parser#) establishes a single, efficient, and reliable static code analysis library. This library will serve as the foundation for diverse code intelligence features across GitLab, from server-side indexing (Knowledge Graph, Embeddings) to client-side analysis (Language Server, Web IDE). Initially scoped to AI and Editor Features.
### Supplementary User Context
Information, such as open tabs in their IDE, files, and folders,
that the user provides from their local environment to extend the default AI
Context. This is sometimes called "pinned context" internally. GitLab Duo Chat users
can provide supplementary user context with the `/include` command (IDE only).
---
stage: AI-powered
group: Custom Models
info: Any user with at least the Maintainer role can merge updates to this content.
For details, see https://docs.gitlab.com/ee/development/development_processes.html#development-guidelines-review.
title: Developing AI Features for Duo Self-Hosted
breadcrumbs:
- doc
- development
- ai_features
---
This document outlines the process for developing AI features for GitLab Duo Self-Hosted. The process is similar to developing AI features for Duo SaaS, but there are some differences.
## Gaining access to a hosted model
The following models are currently available to GitLab team members for development purposes as of July 2025:
- `Claude Sonnet 3.5` on AWS Bedrock
- `Claude Sonnet 3.5 v2` on AWS Bedrock
- `Claude Sonnet 3.7` on AWS Bedrock
- `Claude Sonnet 4` on AWS Bedrock
- `Claude Haiku 3.5` on AWS Bedrock
- `Llama 3.3 70b` on AWS Bedrock
- `Llama 3.1 8b` on AWS Bedrock
- `Llama 3.1 70b` on AWS Bedrock
- `Mistral Small` on FireworksAI
- `Mixtral 8x22b` on FireworksAI
- `Codestral 22b v0.1` on FireworksAI
- `Llama 3.1 70b` on FireworksAI
- `Llama 3.1 8b` on FireworksAI
- `Llama 3.3 70b` on FireworksAI
Development environments provide access to a limited set of models for cost optimization. The [complete model catalog](../../administration/gitlab_duo_self_hosted/supported_models_and_hardware_requirements.md#supported-models) is available in production deployments.
### Gaining access to models on FireworksAI
To gain access to FireworksAI, first create an [Access Request](https://gitlab.com/gitlab-com/team-member-epics/access-requests). See [this example access request](https://gitlab.com/gitlab-com/team-member-epics/access-requests/-/issues/37505) if you aren't sure what information to fill in.
Our FireworksAI account is managed by `Create::Code Creation`. Once access is granted, navigate to `https://fireworks.ai/` to create an API key.
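Once you have an API key, you can optionally sanity-check it from Python before wiring it into the AI gateway. The sketch below assumes FireworksAI's OpenAI-compatible chat completions endpoint and a model identifier your account can access; adjust both as needed.

```python
import os
import requests

# Assumption: FireworksAI exposes an OpenAI-compatible chat completions endpoint.
# Replace the model identifier with one your account has access to.
resp = requests.post(
    "https://api.fireworks.ai/inference/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['FIREWORKS_AI_API_KEY']}"},
    json={
        "model": "accounts/fireworks/models/llama-v3p1-8b-instruct",
        "messages": [{"role": "user", "content": "Say hello in one word."}],
        "max_tokens": 16,
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```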
### Gaining access to models on AWS Bedrock
To gain access to models in AWS Bedrock, create an [access request using the `aws_services_account_iam_update` template](https://gitlab.com/gitlab-com/gl-security/corp/issue-tracker/-/issues/new?description_template=aws_services_account_iam_update). See [this example access request](https://gitlab.com/gitlab-com/gl-security/corp/issue-tracker/-/issues/949) if you aren't sure what information to fill in.
Once your access request is approved, you can gain access to AWS credentials by visiting [https://gitlabsandbox.cloud/login](https://gitlabsandbox.cloud/login).
After logging into `gitlabsandbox.cloud`, perform the following steps:
1. Select the `cstm-mdls-dev-bedrock` AWS account.
1. On the top right corner of the page, select **View IAM Credentials**.
1. In the modal that opens, you should see `AWS Console URL`, `Username`, and `Password`. Visit the AWS Console URL and enter the username and password to sign in.
On AWS Bedrock, you must gain access to the models you want to use. To do this:
1. Visit the [AWS Model Catalog Console](https://us-east-1.console.aws.amazon.com/bedrock/home?region=us-east-1#/model-catalog).
1. Make sure your location is set to `us-east-1`.
1. From the list of models, find the model you want to use, and hover over the **Available to request** link. Then select **Request access**.
1. Complete the form to request access to the model.
Your access should be granted within a few minutes.
### Generating Access Key and Secret Key on AWS
To use AWS Bedrock models, you must generate access keys. To generate these access keys:
1. Visit the [IAM Console](https://us-east-1.console.aws.amazon.com/iam/home?region=us-east-1#/home).
1. Select the **Users** tab.
1. Select your username.
1. Select the **Security credentials** tab.
1. Select **Create access key**.
1. Select **Download .csv** to download the access keys.
Keep the access keys in a secure location. You will need them to configure the model.
Alternatively, to generate access keys on AWS, you can follow this [video on how to create access and secret keys in AWS](https://www.youtube.com/watch?v=d1e-2ToweXQ).
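After the keys are in place, you can optionally sanity-check Bedrock access from Python before configuring the AI gateway. This is an illustrative check using `boto3`, not part of the GDK setup; it assumes the keys are exported as the environment variables listed in the next section.

```python
import boto3

# Optional sanity check: list the Bedrock foundation models your credentials can see.
# Assumes AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_REGION are set in the environment.
bedrock = boto3.client("bedrock", region_name="us-east-1")
models = bedrock.list_foundation_models()["modelSummaries"]
for model in models:
    if "claude" in model["modelId"].lower():
        print(model["modelId"])
```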
## Setting up your GDK environment
GitLab Duo Self-Hosted requires that your GDK environment runs in Self-Managed mode. It does not work in Multi-Tenant/SaaS mode.
To set up your GDK environment to run GitLab Duo Self-Hosted, follow the steps in this [AI development documentation](_index.md#required-run-gitlabduosetup-script), under **GitLab Self-Managed / Dedicated mode**.
### Setting up Environment Variables
To use the hosted models, set the following environment variables on your AI gateway:
1. In the `GDK_ROOT/gitlab-ai-gateway/.env` file, set the following variables:
```plaintext
AWS_ACCESS_KEY_ID=your-access-key-id
AWS_SECRET_ACCESS_KEY=your-secret-access-key
AWS_REGION=us-east-1
FIREWORKS_AI_API_KEY=your-fireworks-api-key
AIGW_CUSTOM_MODELS__ENABLED=true
# useful for debugging
AIGW_LOGGING__ENABLE_REQUEST_LOGGING=true
AIGW_LOGGING__ENABLE_LITELLM_LOGGING=true
```
1. In the `GDK_ROOT/env.runit` file, set the following variables:
```plaintext
export GITLAB_SIMULATE_SAAS=0
```
1. Seed your Duo self-hosted models using `bundle exec rake gitlab:duo:seed_self_hosted_models`.
1. Run `bundle exec rake gitlab:duo:list_self_hosted_models` to verify that it outputs the list of created models.
1. Restart your GDK with `gdk restart` for the changes to take effect.
## Configuring the custom model in the UI
To enable the use of self-hosted models in the GitLab instance, follow these steps:
1. On your GDK instance, go to `/admin/gitlab_duo/configuration`.
1. Select the **Use beta models and features in GitLab Duo Self-Hosted** checkbox.
1. For **Local AI Gateway URL**, enter the URL of your AI gateway instance. In most cases, this will be `http://localhost:5052`.
1. To save your changes, select **Create self-hosted model**.
### Using the self-hosted model to power AI features
To use the created self-hosted model to power AI-native features:
1. On your GDK instance, go to `/admin/gitlab_duo/self_hosted`.
1. For each AI feature you want to use with your self-hosted model (for example, Code Generation, Code Completion, General Chat, Explain Code, and so on), select your newly created self-hosted model (for example, **Claude 3.5 Sonnet on Bedrock**) from the corresponding dropdown list.
1. Optional. To copy the configuration to all features under a specific category, select the copy icon next to it.
1. After making your selections, the changes are usually saved automatically.

With this, you have successfully configured the self-hosted model to power AI-native features in your GitLab instance. To test a feature, for example Chat, open Chat and say `Hello`. You should see a response powered by your self-hosted model in the chat.
## Moving a feature available in GitLab.com or GitLab Self-Managed to GitLab Duo Self-Hosted
To move a feature available in GitLab.com or GitLab Self-Managed to GitLab Duo Self-Hosted:
1. Make the feature configurable to use a self-hosted model.
1. Add prompts for the feature, for each model family you want to support.
### Making the feature configurable to use a self-hosted model
When a feature is available in GitLab.com or GitLab Self-Managed, it should be configurable to use a self-hosted model. To make the feature configurable:
- Add the feature's name to `ee/app/models/ai/feature_setting.rb` as a stable feature or a beta/experimental feature.
- Add the feature's name to the `ee/lib/gitlab/ai/feature_settings/feature_metadata.yml` file, including the list of model families it supports.
- Add the unit primitive to `config/services/self_hosted_models.yml` in the [`gitlab-cloud-connector` repository](https://gitlab.com/gitlab-org/cloud-connector/gitlab-cloud-connector). This [merge request](https://gitlab.com/gitlab-org/cloud-connector/gitlab-cloud-connector/-/merge_requests/134) can be used as a reference.
- Make the associated spec changes based on the changes above.
Refer to the following merge requests for examples:
- [Move Code Review Summary to Beta in Self-Hosted Duo](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/186662)
- [Move Vulnerability Explanation to Beta in Self-Hosted Duo](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/186500)
### Adding prompts for the feature
For each model family you want to support for the feature, you must add a prompt. Prompts are stored in the [AI Gateway repository](https://gitlab.com/gitlab-org/modelops/applied-ml/code-suggestions/ai-assist).
In most cases, the prompt that is used on GitLab.com is also used for Self-Hosted Duo.
Refer to the following merge requests for examples:
- [Add prompts for Code Review Summary](https://gitlab.com/gitlab-org/modelops/applied-ml/code-suggestions/ai-assist/-/merge_requests/2260)
- [Add prompts for Vulnerability Explanation](https://gitlab.com/gitlab-org/modelops/applied-ml/code-suggestions/ai-assist/-/merge_requests/2223)
Your feature should now be available in GitLab Duo Self-Hosted. Restart your GDK instance to apply the changes and test the feature.
---
stage: AI-powered
group: AI Framework
info: Any user with at least the Maintainer role can merge updates to this content. For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Evaluation runner
---
Evaluation runner (`evaluation-runner`) allows GitLab employees to run evaluations on specific GitLab AI features with one click.
- You can run the evaluation on GitLab.com and GitLab-supported self-hosted models.
- To view the AI features that are currently supported, see
[Evaluation pipelines](https://gitlab.com/gitlab-org/modelops/ai-model-validation-and-research/ai-evaluation/evaluation-runner#evaluation-pipelines).
Evaluation runner spins up a new GDK instance on a remote environment, runs an evaluation, and reports the result.
For more details, view the
[`evaluation-runner` repository](https://gitlab.com/gitlab-org/modelops/ai-model-validation-and-research/ai-evaluation/evaluation-runner).
## Architecture
```mermaid
flowchart LR
subgraph EV["Evaluators"]
PL(["PromptLibrary/ELI5"])
DSIN(["Input Dataset"])
end
subgraph ER["EvaluationRunner"]
CI["CI/CD pipelines"]
subgraph GDKS["Remote GDKs"]
subgraph GDKM["GDK-master"]
bl1["Duo features on master branch"]
fi1["fixtures (Issue,MR,etc)"]
end
subgraph GDKF["GDK-feature"]
bl2["Duo features on feature branch"]
fi2["fixtures (Issue,MR,etc)"]
end
end
end
subgraph MR["MergeRequests"]
GRMR["GitLab-Rails MR"]
GRAI["AI Gateway MR"]
end
MR -- [1] trigger --- CI
CI -- [2] spins up --- GDKS
PL -- [3] get responses and evaluate --- GDKS
```
---
stage: Security Risk Management
group: Security Insights
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Security report ingestion overview
---
{{< alert type="warning" >}}
The `Vulnerability::Feedback` model is currently undergoing deprecation and should be actively avoided in all further development. It is currently maintained with feature parity to enable revert should any issues arise, but is intended to be removed in 16.0. Any interactions relating to the Feedback model are superseded by the `StateTransition`, `IssueLink`, and `MergeRequestLink` models. You can find out more in [this epic](https://gitlab.com/groups/gitlab-org/-/epics/5629).
{{< /alert >}}
## Commonly used terms
### Feedback
An instance of `Vulnerabilities::Feedback` class. They are created to keep track of users' interactions with Vulnerability Findings before they are promoted to a Vulnerability. This model is deprecated and due to be removed by GitLab 16.0 as part of the [Deprecate and remove Vulnerabilities::Feedback epic](https://gitlab.com/groups/gitlab-org/-/epics/5629).
### Issue Link
An instance of `Vulnerabilities::IssueLink` class. They are used to link `Vulnerability` records to `Issue` records.
### Merge Request Link
An instance of `Vulnerabilities::MergeRequestLink` class. They are used to link `Vulnerability` records to `MergeRequest` records.
### Security Finding
An instance of `Security::Finding` class. These serve as a metadata store for a specific vulnerability detected in a specific `Security::Scan`. They currently store **partial** finding data to improve performance of the pipeline security report. This class has been extended to store almost all required scan information so we can stop relying on job artifacts and is [due to be used in favor of `Vulnerability::Findings` soon](https://gitlab.com/gitlab-org/gitlab/-/issues/393394).
### Security Scan
An instance of the `Security::Scan` class. A security scan represents a `Ci::Build` that produced a `Job Artifact` containing security scan results, which GitLab acknowledges and ingests as `Security::Finding` records.
### State Transition
An instance of the `Vulnerabilities::StateTransition` class. This model represents a state change of a respective Vulnerability record, for example the dismissal of a vulnerability which has been determined to be safe.
### Vulnerability
An instance of `Vulnerability` class. A `Vulnerability` is representative of a `Vulnerability::Finding` which has been detected in the default branch of the project, or if the `present_on_default_branch` flag is false, is representative of a finding which has been interacted with in some way outside of the default branch, such as if it is dismissed (`State Transition`), or linked to an `Issue` or `Merge Request`. They are created based on information available in `Vulnerabilities::Finding` class. Every `Vulnerability` **must have** a corresponding `Vulnerabilities::Finding` object to be valid, however this is not enforced at the database level.
### Finding
An instance of `Vulnerabilities::Finding` class. A `Vulnerability::Finding` is a database only representation of a security finding which has been merged into the default branch of a project, as the same `Vulnerability` may be present in multiple places within a project. This class was previously called `Vulnerabilities::Occurrence`; after renaming the class, we kept the associated table name `vulnerability_occurrences` due to the effort involved in renaming large tables.
### Identifier
An instance of the `Vulnerabilities::Identifier` class. Each vulnerability is given a unique identifier that can be derived from its finding, enabling multiple Findings of the same `Vulnerability` to be correlated accordingly.
### Vulnerability Read
An instance of the `Vulnerabilities::Read` class. This is a denormalized record of `Vulnerability` and `Vulnerability::Finding` data to improve performance of filtered querying of vulnerability data to the front end.
### Remediation
An instance of the `Vulnerabilities::Remediation` class. A remediation is representative of a known solution to a detected `Vulnerability`. These enable GitLab to recommend a change to resolve a specific `Vulnerability`.
## Vulnerability creation from Security Reports
Assumptions:
- Project uses GitLab CI
- Project uses [security scanning tools](../../user/application_security/_index.md)
- No Vulnerabilities are present in the database
- All pipelines perform security scans
### Scan runs in a pipeline for a non-default branch
1. Code is pushed to the branch.
1. GitLab CI runs a new pipeline for that branch.
1. Pipeline status transitions to any of [`::Ci::Pipeline.completed_statuses`](https://gitlab.com/gitlab-org/gitlab/-/blob/354261b2fe4fc5b86d1408467beadd90e466ce0a/app/models/concerns/ci/has_status.rb#L12).
1. `Security::StoreScansWorker` is called and it schedules `Security::StoreScansService`.
1. `Security::StoreScansService` calls `Security::StoreGroupedScansService` and schedules `ScanSecurityReportSecretsWorker`.
1. `Security::StoreGroupedScansService` calls `Security::StoreScanService`.
1. `Security::StoreScanService` calls `Security::StoreFindingsService`.
1. `ScanSecurityReportSecretsWorker` calls `Security::TokenRevocationService` to automatically revoke any leaked keys that were detected.
At this point we **only** have `Security::Finding` records, rather than `Vulnerability` records, as these findings are not present in the default branch of the project.
Some of the scenarios where these `Security::Finding` records may be promoted to `Vulnerability` records are described below.
### Scan runs in a pipeline for the default branch
If the pipeline ran on the default branch then the following steps, in addition to the steps in [Scan runs in a pipeline for a non-default branch](#scan-runs-in-a-pipeline-for-a-non-default-branch), are executed:
1. `Security::StoreScansService` gets called and schedules `StoreSecurityReportsByProjectWorker`.
1. `StoreSecurityReportsByProjectWorker` executes `Security::Ingestion::IngestReportsService`.
1. `Security::Ingestion::IngestReportsService` takes all reports from a given Pipeline and calls `Security::Ingestion::IngestReportService` and then calls `Security::Ingestion::MarkAsResolvedService`.
1. `Security::Ingestion::IngestReportService` calls `Security::Ingestion::IngestReportSliceService` which executes a number of tasks for a report slice.
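When debugging this flow locally, one option is to tail the GDK background job logs and watch for the workers named above (a sketch; assumes a standard GDK setup where Sidekiq runs as the `rails-background-jobs` service):

```shell
# Watch ingestion-related workers while a default-branch pipeline completes
gdk tail rails-background-jobs | grep -E 'StoreScansWorker|StoreSecurityReportsByProjectWorker'
```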
### Dismissal
If you change the state of a vulnerability, such as by selecting `Dismiss vulnerability`, the following currently happens:
- A `Feedback` record of `dismissal` type is created to record the current state.
- If they do not already exist, a `Vulnerability Finding` and a `Vulnerability` with `present_on_default_branch: false` attribute get created, to which a `State Transition` reflecting the state change is related.
You can optionally add a comment to the state change which is recorded on both the `Feedback` and the `State Transition`.
### Issue or Merge Request creation
If you select `Create issue` or `Create merge request` the following things currently happen:
- A `Vulnerabilities::Feedback` record is created. The Feedback record has a `feedback_type` of `issue` or `merge request`, and a non-`NULL` `issue_id` or `merge_request_id` respectively.
- If they do not already exist, a `Vulnerability Finding` and a `Vulnerability` with the `present_on_default_branch: false` attribute are created, to which an `Issue Link` or `Merge Request Link` is related, depending on the action taken.
## Vulnerabilities in the Default Branch
Security Findings detected in scans run on the default branch are saved as `Vulnerabilities` with the `present_on_default_branch: true` attribute and respective `Vulnerability Finding` records. `Vulnerability` records that already exist from interactions outside of the default branch are updated to `present_on_default_branch: true`.
`Vulnerabilities` which have already been interacted with will retain all existing `State Transitions`, `Merge Request Links` and `Issue Links`, as well as a corresponding `Vulnerability Feedback`.
## Vulnerability Read Creation
`Vulnerability::Read` records are created via a PostgreSQL database trigger upon the creation of a `Vulnerability::Finding` record and as such are part of our ingestion process, though they have no impact on it apart from the denormalization performance benefits on the report pages.
This style of creation was intended to be fast and seamless, but has proven difficult to debug and maintain and may be [migrated to the application layer later](https://gitlab.com/gitlab-org/gitlab/-/issues/393912).
## No longer detected
The "No longer detected" badge on the vulnerability report is displayed if the `Vulnerability` record has `resolved_on_default_branch: true`.
This is set by `Security::Ingestion::MarkAsResolvedService` when a pipeline runs on the default branch. Vulnerabilities which have
`resolved_on_default_branch: false` and _are not_ present in the pipeline scan results are marked as resolved.
[Secret Detection](../../user/application_security/secret_detection/_index.md) and [manual](../../user/application_security/vulnerability_report/_index.md#manually-add-a-vulnerability)
vulnerabilities are excluded from this process.
---
stage: Security Risk Management
group: Security Insights
info: Any user with at least the Maintainer role can merge updates to this content. For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Generate test vulnerabilities
---
You can generate test vulnerabilities for the [Vulnerability Report](../../user/application_security/vulnerability_report/_index.md) to test GitLab
vulnerability management features without running a pipeline.
1. Sign in to GitLab.
1. Go to `/-/user_settings/personal_access_tokens` and generate a personal access token with `api` permissions.
1. Go to your project page and find the project ID. You can find the project ID below the project title.
1. Clone the GitLab repository to your local machine.
1. Open a terminal and go to `gitlab/qa` directory.
1. Run `bundle install`.
1. Run the following command:
```shell
GITLAB_QA_ACCESS_TOKEN=<your_personal_access_token> GITLAB_URL="<address:port>" bundle exec rake vulnerabilities:setup\[<your_project_id>,<vulnerability_count>\] --trace
```
Make sure you do the following:
- Replace `<your_personal_access_token>` with the token you generated in step two.
- Double-check the `GITLAB_URL`. It should point to the address and port of your GitLab instance, for example `http://localhost:3000` if you are running the GDK.
- Replace `<your_project_id>` with the ID you obtained in step three above.
- Replace `<vulnerability_count>` with the number of vulnerabilities you'd like to generate.
The script creates the specified number of placeholder vulnerabilities in the project.
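For example, a run against a local GDK instance (project ID `24`, 50 vulnerabilities; values are illustrative) might look like this:

```shell
GITLAB_QA_ACCESS_TOKEN=<your_personal_access_token> GITLAB_URL="http://localhost:3000" \
  bundle exec rake vulnerabilities:setup\[24,50\] --trace
```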
---
stage: Application Security Testing
group: Static Analysis
info: Any user with at least the Maintainer role can merge updates to this content. For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Sec section analyzer development
---
Analyzers are shipped as Docker images to execute within a CI pipeline context. This guide describes development and testing
practices across analyzers.
## Shared modules
A number of Go modules are shared across analyzers for common behavior and interfaces:
- The [`command`](https://gitlab.com/gitlab-org/security-products/analyzers/command#how-to-use-the-library) Go package implements a CLI interface.
- The [`common`](https://gitlab.com/gitlab-org/security-products/analyzers/common) project provides miscellaneous shared modules for logging, certificate handling, and directory search capabilities.
- The [`report`](https://gitlab.com/gitlab-org/security-products/analyzers/report) Go package's `Report` and `Finding` structs marshal JSON reports.
- The [`template`](https://gitlab.com/gitlab-org/security-products/analyzers/template) project scaffolds new analyzers.
## How to use the analyzers
Analyzers are shipped as Docker images. For example, to run the
[Semgrep](https://gitlab.com/gitlab-org/security-products/analyzers/semgrep) Docker image to scan the working directory:
1. `cd` into the directory of the source code you want to scan.
1. Run `docker login registry.gitlab.com` and provide your username plus a
[personal](../../user/profile/personal_access_tokens.md#create-a-personal-access-token)
or [project](../../user/project/settings/project_access_tokens.md#create-a-project-access-token)
access token with at least the `read_registry` scope.
1. Run the Docker image:
```shell
docker run \
--interactive --tty --rm \
--volume "$PWD":/tmp/app \
--env CI_PROJECT_DIR=/tmp/app \
-w /tmp/app \
registry.gitlab.com/gitlab-org/security-products/analyzers/semgrep:latest /analyzer run
```
1. The Docker container generates a report in the mounted project directory with a report filename corresponding to the analyzer category. For example, [SAST](../../user/application_security/sast/_index.md) generates a file named `gl-sast-report.json`.
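To quickly sanity-check the generated report, you can, for example, count its findings with `jq` (assuming `jq` is installed locally; reports following the security report schema expose a top-level `vulnerabilities` array):

```shell
jq '.vulnerabilities | length' gl-sast-report.json
```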
## Analyzers development
To update the analyzer:
1. Modify the Go source code.
1. Build a new Docker image.
1. Run the analyzer against its test project.
1. Compare the generated report with what's expected.
Here's how to create a Docker image named `analyzer`:
```shell
docker build -t analyzer .
```
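You can then run the freshly built image against a test project in the same way as the released images, for example (run from the test project's directory):

```shell
docker run \
  --interactive --tty --rm \
  --volume "$PWD":/tmp/app \
  --env CI_PROJECT_DIR=/tmp/app \
  -w /tmp/app \
  analyzer /analyzer run
```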
For example, to test Secret Detection run the following:
```shell
wget https://gitlab.com/gitlab-org/security-products/ci-templates/-/raw/master/scripts/compare_reports.sh
sh ./compare_reports.sh sd test/fixtures/gl-secret-detection-report.json test/expect/gl-secret-detection-report.json \
| patch -Np1 test/expect/gl-secret-detection-report.json && git commit -m 'Update expectation' test/expect/gl-secret-detection-report.json
rm compare_reports.sh
```
You can also compile the binary for your own environment and run it locally,
but the `analyze` and `run` commands probably won't work
because the analyzer's runtime dependencies are missing.
Here's an example based on
[SpotBugs](https://gitlab.com/gitlab-org/security-products/analyzers/spotbugs):
```shell
go build -o analyzer
./analyzer search test/fixtures
./analyzer convert test/fixtures/app/spotbugsXml.Xml > ./gl-sast-report.json
```
### Secure stage CI/CD Templates and components
The secure stage is responsible for maintaining the following CI/CD Templates and Components:
- [Composition Analysis](https://handbook.gitlab.com/handbook/engineering/development/sec/secure/composition-analysis)
- CI/CD Templates
- [`Dependency-Scanning.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab/blob/master/lib/gitlab/ci/templates/Jobs/Dependency-Scanning.gitlab-ci.yml)
- [`Dependency-Scanning.latest.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab/blob/master/lib/gitlab/ci/templates/Jobs/Dependency-Scanning.latest.gitlab-ci.yml)
- [`Container-Scanning.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab/blob/master/lib/gitlab/ci/templates/Jobs/Container-Scanning.gitlab-ci.yml)
- [`Container-Scanning.latest.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab/blob/master/lib/gitlab/ci/templates/Jobs/Container-Scanning.latest.gitlab-ci.yml)
- CI/CD Components
- [Dependency Scanning](https://gitlab.com/components/dependency-scanning/-/blob/main/templates/main/template.yml)
- [Container Scanning](https://gitlab.com/components/container-scanning/-/blob/main/templates/container-scanning.yml)
- [Static Analysis (SAST)](https://handbook.gitlab.com/handbook/engineering/development/sec/secure/static-analysis)
- CI/CD Templates
- [`SAST.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab/blob/master/lib/gitlab/ci/templates/Jobs/SAST.gitlab-ci.yml)
- [`SAST.latest.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab/blob/master/lib/gitlab/ci/templates/Jobs/SAST.latest.gitlab-ci.yml)
- [`SAST-IaC.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab/blob/master/lib/gitlab/ci/templates/Jobs/SAST-IaC.gitlab-ci.yml)
- [`SAST-IaC.latest.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab/blob/master/lib/gitlab/ci/templates/Jobs/SAST-IaC.latest.gitlab-ci.yml#L1-1)
- CI/CD Components
- [SAST](https://gitlab.com/components/sast/-/blob/main/templates/sast.yml)
- [Secret Detection](https://handbook.gitlab.com/handbook/engineering/development/sec/secure/secret-detection)
- CI/CD Templates
- [`Secret-Detection.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab/blob/master/lib/gitlab/ci/templates/Jobs/Secret-Detection.gitlab-ci.yml)
- [`Secret-Detection.latest.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab/blob/master/lib/gitlab/ci/templates/Jobs/Secret-Detection.latest.gitlab-ci.yml)
- CI/CD Components
- [Secret Detection](https://gitlab.com/components/secret-detection/-/blob/main/templates/secret-detection.yml)
Changes must always be made to both the CI/CD template and component for your group, and you must also determine if the changes need to be applied to the latest CI/CD template.
Analyzers are also referenced in the [`Secure-Binaries.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/Security/Secure-Binaries.gitlab-ci.yml) file for [offline environments](../../user/application_security/offline_deployments/_index.md#using-the-official-gitlab-template). Ensure this file is also kept in sync when doing changes.
### Execution criteria
[Enabling SAST](../../user/application_security/sast/_index.md#configure-sast-in-your-cicd-yaml) requires including a pre-defined [template](https://gitlab.com/gitlab-org/gitlab/-/blob/ee4d473eb9a39f2f84b719aa0ca13d2b8e11dc7e/lib/gitlab/ci/templates/Jobs/SAST.gitlab-ci.yml) to your GitLab CI/CD configuration.
The following independent criteria determine which analyzer needs to be run on a project:
1. The SAST template uses [`rules:exists`](../../ci/yaml/_index.md#rulesexists) to determine which analyzer will be run based on the presence of certain files. For example, the Brakeman analyzer [runs when there are](https://gitlab.com/gitlab-org/gitlab/-/blob/ee4d473eb9a39f2f84b719aa0ca13d2b8e11dc7e/lib/gitlab/ci/templates/Jobs/SAST.gitlab-ci.yml#L60) `.rb` files and a `Gemfile`.
1. Each analyzer runs a customizable [match interface](https://gitlab.com/gitlab-org/security-products/analyzers/common/-/blob/master/search/search.go) before it performs the actual analysis. For example: [Flawfinder checks for C/C++ files](https://gitlab.com/gitlab-org/security-products/analyzers/flawfinder/-/blob/f972ac786268fb649553056a94cda05cdc1248b2/plugin/plugin.go#L14).
1. For some analyzers that run on generic file extensions, there is a check based on a CI/CD variable. For example: Kubernetes manifests are written in YAML, so [Kubesec](https://gitlab.com/gitlab-org/security-products/analyzers/kubesec) runs only when [`SCAN_KUBERNETES_MANIFESTS` is set to true](../../user/application_security/sast/_index.md#enabling-kubesec-analyzer).
Step 1 helps avoid wasting compute quota on analyzers that are not suitable for the project. However, due to [technical limitations](https://gitlab.com/gitlab-org/gitlab/-/issues/227632), it cannot be used for large projects. Therefore, step 2 acts as a final check to ensure a mismatched analyzer can exit early.
## How to test the analyzers
Video walkthrough of how Dependency Scanning analyzers use the [downstream pipeline](../../ci/pipelines/downstream_pipelines.md) feature to test analyzers against test projects:
<i class="fa-youtube-play" aria-hidden="true"></i>
[How Sec leverages the downstream pipeline feature of GitLab to test analyzers end to end](https://www.youtube.com/watch?v=KauRBlfUbDE)
<!-- Video published on 2019-10-09 -->
### Testing local changes
To test local changes in the shared modules (such as `command` or `report`) for an analyzer
you can use the
[`go mod replace`](https://github.com/golang/go/wiki/Modules#when-should-i-use-the-replace-directive)
directive to load `command` with your local changes instead of using the version of command that has been
tagged remotely. For example:
```shell
go mod edit -replace gitlab.com/gitlab-org/security-products/analyzers/command/v3=/local/path/to/command
```
Alternatively you can achieve the same result by manually updating the `go.mod` file:
```plaintext
module gitlab.com/gitlab-org/security-products/analyzers/awesome-analyzer/v2
replace gitlab.com/gitlab-org/security-products/analyzers/command/v3 => /path/to/command
require (
...
gitlab.com/gitlab-org/security-products/analyzers/command/v3 v2.19.0
)
```
#### Testing local changes in Docker
To use Docker with `replace` in the `go.mod` file:
1. Copy the contents of `command` into the directory of the analyzer. `cp -r /path/to/command path/to/analyzer/command`.
1. Add a copy statement in the analyzer's `Dockerfile`: `COPY command /command`.
1. Update the `replace` statement to make sure it matches the destination of the `COPY` statement in the step above:
`replace gitlab.com/gitlab-org/security-products/analyzers/command/v3 => /command`
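Putting these steps together, a local build might look like the following sketch (paths are illustrative, and the `COPY command /command` line is assumed to already be in the `Dockerfile`):

```shell
# Vendor your local copy of the shared module next to the analyzer sources
cp -r /path/to/command ./command
# Point the Go module at the location the module will have inside the image
go mod edit -replace gitlab.com/gitlab-org/security-products/analyzers/command/v3=/command
# Build the analyzer image with the local module baked in
docker build -t analyzer .
```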
### Testing container orchestration compatibility
Users may use tools other than Docker to orchestrate their containers and run their analyzers,
such as [containerd](https://containerd.io/), [Podman](https://podman.io/), or [skopeo](https://github.com/containers/skopeo).
To ensure compatibility with these tools, we [periodically test](https://gitlab.com/gitlab-org/security-products/tests/analyzer-containerization-support/-/blob/main/.gitlab-ci.yml?ref_type=heads)
all analyzers using a scheduled pipeline. A Slack alert is raised if a test fails.
To avoid compatibility issues when building analyzer Docker images, use the [OCI media types](https://docs.docker.com/build/exporters/#oci-media-types) instead of the default proprietary Docker media types.
In addition to the periodic test, we ensure compatibility for users of the [`ci-templates` repo](https://gitlab.com/gitlab-org/security-products/ci-templates):
1. Analyzers using the [`ci-templates` `docker-test.yml` template](https://gitlab.com/gitlab-org/security-products/ci-templates/-/blob/master/includes-dev/docker-test.yml)
include [`tests`](https://gitlab.com/gitlab-org/security-products/ci-templates/-/blob/08319f7586fd9cc66f58ca894525ab54a2b7d831/includes-dev/docker-test.yml#L155-179) to ensure our Docker images function correctly with supported Docker tools.
These tests are executed in Merge Request pipelines and scheduled pipelines, and prevent images from being released if they break the supported Docker tools.
1. The [`ci-templates` `docker.yml` template](https://gitlab.com/gitlab-org/security-products/ci-templates/-/blob/master/includes-dev/docker.yml)
specifies [`oci-mediatypes=true`](https://docs.docker.com/build/exporters/#oci-media-types) for the `docker buildx` command when building analyzer images.
This builds images using [OCI](https://opencontainers.org/) media types rather than Docker proprietary media types.
When creating a new analyzer, or changing the location of existing analyzer images,
add it to the periodic test, or consider using the shared [`ci-templates`](https://gitlab.com/gitlab-org/security-products/ci-templates/) which includes an automated test.
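For a quick local spot check with another container tool, the Docker invocation shown in the earlier usage example can usually be replayed as-is with Podman, for example:

```shell
podman run \
  --interactive --tty --rm \
  --volume "$PWD":/tmp/app \
  --env CI_PROJECT_DIR=/tmp/app \
  -w /tmp/app \
  registry.gitlab.com/gitlab-org/security-products/analyzers/semgrep:latest /analyzer run
```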
## Analyzer scripts
The [analyzer-scripts](https://gitlab.com/gitlab-org/secure/tools/analyzer-scripts) repository contains scripts that can be used to interact with most analyzers. They enable you to build, run, and debug analyzers in a GitLab CI-like environment, and are particularly useful for locally validating changes to an analyzer.
For more information, refer to the [project README](https://gitlab.com/gitlab-org/secure/tools/analyzer-scripts/-/blob/master/README.md).
## Versioning and release process
GitLab Security Products use an independent versioning system from GitLab `MAJOR.MINOR`. All products use a variation of [Semantic Versioning](https://semver.org) and are available as Docker images.
`Major` is bumped with every new major release of GitLab, when [breaking changes are allowed](../deprecation_guidelines/_index.md). `Minor` is bumped for new functionality, and `Patch` is reserved for bugfixes.
The analyzers are released as Docker images following this scheme:
- each push to the default branch will override the `edge` image tag
- each push to any `awesome-feature` branch will generate a matching `awesome-feature` image tag
- each Git tag will generate the corresponding `Major.Minor.Patch` image tag. A manual job allows you to override the corresponding `Major` and the `latest` image tags to point to this `Major.Minor.Patch`.
In most circumstances it is preferable to rely on the `MAJOR` image,
which is automatically kept up to date with the latest advisories or patches to our tools.
Our [included CI templates](https://gitlab.com/gitlab-org/gitlab/-/tree/master/lib/gitlab/ci/templates/Security) pin to the major version, but users can override the version directly if preferred.
To release a new analyzer Docker image, there are two different options:
- [Manual release process](#manual-release-process)
- [Automatic release process](#automatic-release-process)
The following diagram describes the Docker tags that are created when a new analyzer version is released:
```mermaid
graph LR
A1[git tag v1.1.0]--> B1(run CI pipeline)
B1 -->|build and tag patch| D1[1.1.0]
B1 -->|tag minor| E1[1.1]
B1 -->|retag major| F1[1]
B1 -->|retag latest| G1[latest]
A2[git tag v1.1.1]--> B2(run CI pipeline)
B2 -->|build and tag patch| D2[1.1.1]
B2 -->|retag minor| E2[1.1]
B2 -->|retag major| F2[1]
B2 -->|retag latest| G2[latest]
A3[push to default branch]--> B3(run CI pipeline)
B3 -->|build and tag edge| D3[edge]
```
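Whether the tag is created manually, through a release, or by the automatic release job, it is the Git tag itself that drives the tag pipelines in the diagram above. Pushed from the command line this would look like (version number illustrative):

```shell
git tag v1.1.0
git push origin v1.1.0
```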
Per our Continuous Deployment flow, for new components that do not have a counterpart in the GitLab
Rails application, the component can be released at any time. Until the components
are integrated with the existing application, iteration should not be blocked by
[our standard release cycle and process](https://handbook.gitlab.com/handbook/product/product-processes/).
### Manual release process
1. Ensure that the `CHANGELOG.md` entry for the new analyzer is correct.
1. Ensure that the release source (typically the `master` or `main` branch) has a passing pipeline.
1. Create a new release for the analyzer project by selecting the **Deployments** menu on the left-hand side of the project window, then selecting the **Releases** sub-menu.
1. Select **New release** to open the **New Release** page.
1. In the **Tag name** drop down, enter the same version used in the `CHANGELOG.md`, for example `v2.4.2`, and select the option to create the tag (`Create tag v2.4.2` here).
1. In the **Release title** text box enter the same version used above, for example `v2.4.2`.
1. In the `Release notes` text box, copy and paste the notes from the corresponding version in the `CHANGELOG.md`.
1. Leave all other settings as the default values.
1. Select **Create release**.
After following the above process and creating a new release, a new Git tag is created with the `Tag name` provided above. This triggers a new pipeline with the given tag version and a new analyzer Docker image is built.
If the analyzer uses the [`analyzer.yml` template](https://gitlab.com/gitlab-org/security-products/ci-templates/blob/b446fd3/includes-dev/analyzer.yml#L209-217), then the pipeline triggered as part of the **New release** process above automatically tags and deploys a new version of the analyzer Docker image.
If the analyzer does not use the `analyzer.yml` template, you'll need to manually tag and deploy a new version of the analyzer Docker image:
1. Select the **CI/CD** menu on the left-hand side of the project window, then select the **Pipelines** sub-menu.
1. A new pipeline should currently be running with the same tag used previously, for example `v2.4.2`.
1. After the pipeline has completed, it will be in a `blocked` state.
1. Select the `Manual job` play button on the right hand side of the window and select `tag version` to tag and deploy a new version of the analyzer Docker image.
Use your best judgment to decide when to create a Git tag, which will then trigger the release job. If you
can't decide, then ask for others' input.
### Automatic release process
The following must be performed before the automatic release process can be used:
1. Configure `CREATE_GIT_TAG: true` as a [`CI/CD` environment variable](../../ci/variables/_index.md).
1. Check the `Variables` in the CI/CD project settings:
- If the project is located under the `gitlab-org/security-products/analyzers` namespace, then it automatically inherits the `GITLAB_TOKEN` environment variable and nothing else needs to be done.
- If the project is not located under the `gitlab-org/security-products/analyzers` namespace, then you'll need to create a new [masked and hidden](../../ci/variables/_index.md#hide-a-cicd-variable) `GITLAB_TOKEN` [`CI/CD` environment variable](../../ci/variables/_index.md) and set its value to the Personal Access Token for the [@gl-service-dev-secure-analyzers-automation](https://gitlab.com/gl-service-dev-secure-analyzers-automation) account described in the [Service account used in the automatic release process](#service-account-used-in-the-automatic-release-process) section below.
After the above steps have been completed, the automatic release process executes as follows:
1. A project maintainer merges an MR into the default branch.
1. The default pipeline is triggered, and the `upsert git tag` job is executed.
- If the most recent version in the `CHANGELOG.md` matches one of the Git tags, the job is a no-op.
- Otherwise, this job automatically creates a new release and Git tag using the [releases API](../../api/releases/_index.md#create-a-release). The version and message are obtained from the most recent entry in the `CHANGELOG.md` file for the project.
1. A pipeline is automatically triggered for the new Git tag. This pipeline releases the `latest`, `major`, `minor` and `patch` Docker images of the analyzer.
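For reference, the release creation performed by the `upsert git tag` job is roughly equivalent to the following [releases API](../../api/releases/_index.md#create-a-release) call (values are illustrative):

```shell
curl --request POST \
  --header "PRIVATE-TOKEN: <GITLAB_TOKEN>" \
  --data "tag_name=v2.4.2" \
  --data "ref=main" \
  --data "name=v2.4.2" \
  --data "description=Changelog entry for v2.4.2" \
  "https://gitlab.com/api/v4/projects/<project-id>/releases"
```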
### Service account used in the automatic release process
| Key | Value |
|----------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------|
| Account name | [@gl-service-dev-secure-analyzers-automation](https://gitlab.com/gl-service-dev-secure-analyzers-automation) |
| Purpose | Used for creating releases/tags |
| Member of | [`gitlab-org/security-products`](https://gitlab.com/groups/gitlab-org/security-products/-/group_members?search=gl-service-dev-secure-analyzers-automation) |
| Maximum role | `Developer` |
| Scope of the associated `GITLAB_TOKEN` | `api` |
| Expiry date of `GITLAB_TOKEN` | `December 3, 2025` |
### Token rotation for service account
The `GITLAB_TOKEN` for the [@gl-service-dev-secure-analyzers-automation](https://gitlab.com/gl-service-dev-secure-analyzers-automation) service account **must** be rotated before the `Expiry Date` listed [above](#service-account-used-in-the-automatic-release-process) by doing the following:
1. Log in as the `gl-service-dev-secure-analyzers-automation` user.
The list of administrators who have credentials for this account can be found in the [service account access request](https://gitlab.com/gitlab-com/team-member-epics/access-requests/-/issues/29538#admin-users).
Administrators can find the login credentials in the shared GitLab `1password` vault.
1. Create a new [Personal Access Token](../../user/profile/personal_access_tokens.md) with `api` scope for the `gl-service-dev-secure-analyzers-automation` service account.
1. Update the `password` field of the `GitLab API Token - gl-service-dev-secure-analyzers-automation` account in the shared GitLab `1password` vault to the new Personal Access Token created in step 2 (above), and set the `Expires at` field to indicate when the token expires.
1. Update the expiry date of the `GITLAB_TOKEN` field in the [Service account used in the automatic release process](#service-account-used-in-the-automatic-release-process) table.
1. Set the following variables to the new Personal Access Token created in step 2 above:
{{< alert type="note" >}}
It's crucial to [mask and hide](../../ci/variables/_index.md#hide-a-cicd-variable) the following variables.
{{< /alert >}}
1. `GITLAB_TOKEN` CI/CD variable for the [`gitlab-org/security-products/analyzers`](https://gitlab.com/groups/gitlab-org/security-products/analyzers/-/settings/ci_cd#js-cicd-variables-settings) group.
This allows all projects under the `gitlab-org/security-products/analyzers` namespace to inherit this `GITLAB_TOKEN` value.
1. `GITLAB_TOKEN` CI/CD variable for the [`gitlab-org/security-products/ci-templates`](https://gitlab.com/gitlab-org/security-products/ci-templates/-/settings/ci_cd#js-cicd-variables-settings) project.
This must be explicitly configured because the `ci-templates` project is not nested under the `gitlab-org/security-products/analyzers` namespace, and therefore _does not inherit_ the `GITLAB_TOKEN` value.
The `ci-templates` project requires the `GITLAB_TOKEN` to allow certain scripts to execute API calls. This step can be removed after [allow JOB-TOKEN access to CI/lint endpoint](https://gitlab.com/gitlab-org/gitlab/-/issues/438781) has been completed.
1. `GITLAB_TOKEN` CI/CD variable for the [`gitlab-org/secure/tools/security-triage-automation`](https://gitlab.com/gitlab-org/secure/tools/security-triage-automation) project.
This must be explicitly configured because the `security-triage-automation` project is not nested under the `gitlab-org/security-products/analyzers` namespace, and therefore _does not inherit_ the `GITLAB_TOKEN` value.
1. `SEC_REGISTRY_PASSWORD` CI/CD variable for [`gitlab-advanced-sast`](https://gitlab.com/gitlab-org/security-products/analyzers/gitlab-advanced-sast/-/settings/ci_cd#js-cicd-variables-settings).
This allows our [tagging script](https://gitlab.com/gitlab-org/security-products/ci-templates/blob/cfe285a/scripts/tag_image.sh) to pull from the private container registry in the development project `registry.gitlab.com/gitlab-org/security-products/analyzers/<analyzer-name>/tmp`, and push to the publicly accessible container registry `registry.gitlab.com/security-products/<analyzer-name>`.
### Steps to perform after releasing an analyzer
1. After a new version of the analyzer Docker image has been tagged and deployed, test it with the corresponding test project.
1. Announce the release on the relevant group Slack channel. Example message:
> FYI I've just released `ANALYZER_NAME` `ANALYZER_VERSION`. `LINK_TO_RELEASE`
**Never delete a Git tag that has been pushed** as there is a good
chance that the tag will be used and/or cached by the Go package registry.
### Backporting a critical fix or patch
To backport a critical fix or patch to an earlier version, follow the steps below.
1. Create a new branch from the tag you are backporting the fix to, if it doesn't exist.
- For example, if the latest stable tag is `v4` and you are backporting a fix to `v3`, create a new branch called `v3`.
1. Submit a merge request targeting the branch you just created.
1. After it's approved, merge the merge request into the branch.
1. Create a new tag for the branch.
1. If the analyzer has the [automatic release process](#automatic-release-process) enabled, a new version will be released.
1. If not, you have to follow the [manual release process](#manual-release-process) to release a new version.
1. NOTE: The release pipeline overrides the latest `edge` tag, so the most recent release pipeline's `tag edge` job may need to be re-run to avoid a regression for that tag.
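The Git side of a backport might look like the following sketch (branch and tag names are illustrative):

```shell
# Latest stable tag is v4; backport a fix to the v3 line
git fetch origin --tags
git checkout -b v3 v3.2.1          # create the backport branch from the last v3 tag
git push origin v3                 # then open and merge the fix MR against this branch
# After the fix and CHANGELOG.md update are merged into the v3 branch:
git tag v3.2.2
git push origin v3.2.2             # triggers the release pipeline for the backported version
```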
### Preparing analyzers for a major version release
This process applies to the following groups:
- [Composition Analysis](https://handbook.gitlab.com/handbook/engineering/development/sec/secure/composition-analysis)
- [Static Analysis (SAST)](https://handbook.gitlab.com/handbook/engineering/development/sec/secure/static-analysis)
- [Secret Detection](https://handbook.gitlab.com/handbook/engineering/development/sec/secure/secret-detection)
Other groups are responsible for documenting their own major version release process.
Choose one of the following scenarios based on whether the major version release contains breaking changes:
1. [Major version release without breaking changes](#major-version-release-without-breaking-changes)
1. [Major version release with breaking changes](#major-version-release-with-breaking-changes)
#### Major version release without breaking changes
Assuming the current analyzer release is `v{N}`:
1. [Configure protected tags and branches](#configure-protected-tags-and-branches).
1. During the milestone of the major release, when there are no more changes to be merged into the `default` branch:
1. Create a `v{N}` branch from the `default` branch.
1. Create and merge a new Merge Request in the `default` branch containing only the following change to the `CHANGELOG.md` file:
```markdown
## v{N+1}.0.0
- Major version release (!<MR-ID>)
```
1. [Configure scheduled pipelines](#configure-scheduled-pipelines).
1. [Bump major analyzer versions in the CI/CD templates and components](#bump-major-analyzer-versions-in-the-cicd-templates-and-components)
#### Major version release with breaking changes
Assuming the current analyzer release is `v{N}`:
1. [Configure protected tags and branches](#configure-protected-tags-and-branches).
1. Create a new branch `v{N+1}` to "stage" breaking changes.
1. In the milestones leading up to the major release milestone:
- Merge non-breaking changes to the `default` branch (aka `master` or `main`)
- Merge breaking changes to the `v{N+1}` branch, and create a separate `release candidate` entry in the `CHANGELOG.md` file for each change:
```markdown
## v{N+1}.0.0-rc.0
- some breaking change (!123)
```
Using `release candidates` allows us to release **all breaking changes in a single major version bump**, which follows the [semver guidance](https://semver.org) of only making breaking changes in a major version update.
1. During the milestone of the major release, when there are no more changes to be merged into the `default` or `v{N+1}` branches:
1. Create a `v{N}` branch from the `default` branch.
1. Create a Merge Request in the `v{N+1}` branch to squash all the `release candidate` changelog entries into a single entry for `v{N+1}`.
For example, if the `CHANGELOG.md` contains the following 3 `release candidate` entries for version `v{N+1}`:
```markdown
## v{N+1}.0.0-rc.2
- yet another breaking change (!125)
## v{N+1}.0.0-rc.1
- another breaking change (!124)
## v{N+1}.0.0-rc.0
- some breaking change (!123)
```
Then the new Merge Request should update the `CHANGELOG.md` to have a single major release for `v{N+1}`, by combining all the `release candidate` entries into a single entry:
```markdown
## v{N+1}.0.0
- yet another breaking change (!125)
- another breaking change (!124)
- some breaking change (!123)
```
1. Create a Merge Request to merge all the breaking changes from the `v{N+1}` branch into the `default` branch.
1. Delete the `v{N+1}` branch, since it's no longer needed, as the `default` branch now contains all the changes from the `v{N+1}` branch.
1. [Configure scheduled pipelines](#configure-scheduled-pipelines).
1. [Bump major analyzer versions in the CI/CD templates and components](#bump-major-analyzer-versions-in-the-cicd-templates-and-components).
##### Configure protected tags and branches
1. Ensure the wildcard `v*` is set as both a [Protected Tag](../../user/project/protected_tags.md) and [Protected Branch](../../user/project/repository/branches/protected.md) for the project.
1. Verify the [gl-service-dev-secure-analyzers-automation](https://gitlab.com/gl-service-dev-secure-analyzers-automation) service account is `Allowed to create` protected tags.
See step `3.1` of the [Officially supported images](#officially-supported-images) section for more details.
##### Configure scheduled pipelines
1. Ensure three scheduled pipelines exist, creating them if necessary, and set `PUBLISH_IMAGES: true` for all of them:
- `Republish images v{N}` (against the `v{N}` branch)
This scheduled pipeline needs to be created
- `Daily build` (against the `default` branch)
This scheduled pipeline should already exist
- `Republish images v{N-1}` (against the `v{N-1}` branch)
This scheduled pipeline should already exist
1. Delete the scheduled pipeline for the `v{N-2}` branch (if it exists), since we only support [two previous major versions](https://about.gitlab.com/support/statement-of-support/#version-support).
##### Bump major analyzer versions in the CI/CD templates and components
When images for all the `v{N+1}` analyzers are available under `registry.gitlab.com/security-products/<ANALYZER-NAME>:<TAG>`, create a new merge request to bump the major version for each analyzer in the [Secure stage CI/CD templates and components](#secure-stage-cicd-templates-and-components) belonging to your group.
## Development of new analyzers
We occasionally need to build out new analyzer projects to support new frameworks and tools.
In doing so we should follow [our engineering Open Source guidelines](https://handbook.gitlab.com/handbook/engineering/open-source/),
including licensing and [code standards](../go_guide/_index.md).
In addition, to write a custom analyzer that will integrate into the GitLab application,
a minimal feature set is required:
### Checklist
Verify whether the underlying tool has:
- A [permissive software license](https://handbook.gitlab.com/handbook/engineering/open-source/#using-open-source-software).
- Headless execution (CLI tool).
- Bundle-able dependencies to be packaged as a Docker image, to be executed using GitLab Runner's [Linux or Windows Docker executor](https://docs.gitlab.com/runner/executors/docker.html).
- Compatible projects that can be detected based on filenames or extensions.
- Offline execution (no internet access) or can be configured to use custom proxies and/or CA certificates.
- An image that is compatible with other container orchestration tools (see [testing container orchestration compatibility](#testing-container-orchestration-compatibility)).
#### Dockerfile
The `Dockerfile` should use an unprivileged user with the name `GitLab`.
This is necessary to provide compatibility with Red Hat OpenShift instances,
which don't allow containers to run as an admin (root) user.
There are certain limitations to keep in mind when running a container as an unprivileged user,
such as the fact that any files that need to be written on the Docker filesystem will require the appropriate permissions for the `GitLab` user.
See the following merge request for more details:
[Use GitLab user instead of root in Docker image](https://gitlab.com/gitlab-org/security-products/analyzers/gemnasium/-/merge_requests/130).
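A quick way to confirm an image meets this requirement is to check which user it runs as; the image below is only an example, and any locally built analyzer image works the same way.
```shell
# Both commands should report a non-root user (a user name such as "gitlab" and a non-zero UID).
docker run --rm --entrypoint "" registry.gitlab.com/security-products/semgrep:5 id -un
docker run --rm --entrypoint "" registry.gitlab.com/security-products/semgrep:5 id -u
```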
#### Minimal vulnerability data
See [our security-report-schemas](https://gitlab.com/gitlab-org/security-products/security-report-schemas/-/blob/master/src/security-report-format.json) for a full list of required fields.
The [security-report-schema](https://gitlab.com/gitlab-org/security-products/security-report-schemas) repository contains JSON schemas that list the required fields for each report type:
- [Container Scanning](https://gitlab.com/gitlab-org/security-products/security-report-schemas/-/blob/master/dist/container-scanning-report-format.json)
- [DAST](https://gitlab.com/gitlab-org/security-products/security-report-schemas/-/blob/master/dist/dast-report-format.json)
- [Dependency Scanning](https://gitlab.com/gitlab-org/security-products/security-report-schemas/-/blob/master/dist/dependency-scanning-report-format.json)
- [SAST](https://gitlab.com/gitlab-org/security-products/security-report-schemas/-/blob/master/dist/sast-report-format.json)
- [Secret Detection](https://gitlab.com/gitlab-org/security-products/security-report-schemas/-/blob/master/dist/secret-detection-report-format.json)
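While developing the report output, it can help to validate a generated report against the published schema locally. The snippet below is one possible approach using the third-party `check-jsonschema` tool, which is an assumption rather than a GitLab requirement; any JSON Schema validator works.
```shell
# Validate a locally generated SAST report against the published schema.
wget https://gitlab.com/gitlab-org/security-products/security-report-schemas/-/raw/master/dist/sast-report-format.json
pip install check-jsonschema
check-jsonschema --schemafile sast-report-format.json gl-sast-report.json
```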
#### Compatibility with report schema
Security reports uploaded as artifacts to
GitLab are [validated](../integrations/secure.md#report-validation) before being
[ingested](security_report_ingestion_overview.md).
Security report schemas are versioned using SchemaVer: `MODEL-REVISION-ADDITION`. The Sec Section
is responsible for the
[`security-report-schemas` project](https://gitlab.com/gitlab-org/security-products/security-report-schemas),
including the compatibility of GitLab and the schema versions. Schema changes must follow the
product-wide [deprecation guidelines](../deprecation_guidelines/_index.md).
When a new `MODEL` version is introduced, analyzers that adopt the new schema are responsible for
ensuring that GitLab deployments that do not vendor this new schema version continue to ingest
security reports without errors or warnings.
This can be accomplished in different ways:
1. Implement support for multiple schema versions in the analyzer. Based on the GitLab version, the
analyzer emits a security report using the latest schema version supported by GitLab.
- Pro: the analyzer can decide at runtime which schema version is best to use.
- Con: implementation effort and increased complexity.
1. Release a new analyzer major version. Instances that don't vendor the latest `MODEL` schema
version continue to use an analyzer version that emits reports using version `MODEL-1`.
- Pro: keeps analyzer code simple.
- Con: extra analyzer version to maintain.
1. Delay use of new schema. This relies on `additionalProperties=true`, which allows a report to
include properties that are not present in the schema. A new analyzer major version would be
released at the usual cadence.
- Pro: no extra analyzer to maintain; keeps analyzer code simple.
- Con: increased risk and/or effort to mitigate the risk of not having the schema validated.
If you are unsure which path to follow, reach out to the
[`security-report-schemas` maintainers](https://gitlab.com/groups/gitlab-org/maintainers/security-report-schemas/-/group_members?with_inherited_permissions=exclude).
### Location of Container Images
Container images for secure analyzers are published in two places:
- [Officially supported images](#officially-supported-images) in the `registry.gitlab.com/security-products` namespace, for example:
```shell
registry.gitlab.com/security-products/semgrep:5
```
- [Temporary development images](#temporary-development-images) in the project namespace, for example:
```shell
registry.gitlab.com/gitlab-org/security-products/analyzers/semgrep/tmp:d27d44a9b33cacff0c54870a40515ec5f2698475
```
#### Officially supported images
The location of officially supported images, as referenced by our secure templates, is:
```shell
registry.gitlab.com/security-products/<ANALYZER-NAME>:<TAG>
```
For example, the [`semgrep-sast`](https://gitlab.com/gitlab-org/gitlab/blob/v17.7.0-ee/lib/gitlab/ci/templates/Jobs/SAST.gitlab-ci.yml#L172) job in the `SAST.gitlab-ci.yml` template references the container image `registry.gitlab.com/security-products/semgrep:5`.
In order to push images to this location:
1. Create a new project in `https://gitlab.com/security-products/<ANALYZER-NAME>`.
For example: `https://gitlab.com/security-products/semgrep`
Images for this project will be published to `registry.gitlab.com/security-products/<ANALYZER-NAME>:<TAG>`.
For example: `registry.gitlab.com/security-products/semgrep:5`
1. Configure the project `https://gitlab.com/security-products/<ANALYZER-NAME>` as follows:
1. Add the following permissions:
- Maintainer: `@gitlab-org/secure/managers`, `@gitlab-org/govern/managers`
- Developer: [@gl-service-dev-secure-analyzers-automation](https://gitlab.com/gl-service-dev-secure-analyzers-automation)
This is necessary to allow the [service account used in the automatic release process](#service-account-used-in-the-automatic-release-process) to push images to `registry.gitlab.com/security-products/<ANALYZER-NAME>:<TAG>`.
1. Configure the following project settings:
- `Settings -> General -> Visibility, project features, permissions`
- `Project visibility`
- `Public`
- `Additional options`
- `Users can request access`
- `Disabled`
- `Issues`
- `Disabled`
- `Repository`
- `Only Project Members`
- `Merge Requests`
- `Disabled`
- `Forks`
- `Disabled`
- `Git Large File Storage (LFS)`
- `Disabled`
- `CI/CD`
- `Disabled`
- `Container Registry`
- `Everyone with access`
- `Analytics`, `Requirements`, `Security and compliance`, `Wiki`, `Snippets`, `Package registry`, `Model experiments`, `Model registry`, `Pages`, `Monitor`, `Environments`, `Feature flags`, `Infrastructure`, `Releases`, `GitLab Duo`
- `Disabled`
1. Configure the following options for the _analyzer project_, located at `https://gitlab.com/gitlab-org/security-products/analyzers/<ANALYZER_NAME>`:
1. Add the wildcard `v*` as a [Protected Tag](../../user/project/protected_tags.md).
Ensure the [gl-service-dev-secure-analyzers-automation](https://gitlab.com/gl-service-dev-secure-analyzers-automation) service account has been explicitly added to the list of accounts `Allowed to create` protected tags. This is required to allow the [`upsert git tag`](https://gitlab.com/gitlab-org/security-products/ci-templates/blob/2a3519d/includes-dev/upsert-git-tag.yml#L35-44) job to create new releases for the analyzer project.
1. Add the wildcard `v*` as a [Protected Branch](../../user/project/repository/branches/protected.md).
1. [`CI/CD` environment variables](../../ci/variables/_index.md)
{{< alert type="note" >}}
It's crucial to [mask and hide](../../ci/variables/_index.md#hide-a-cicd-variable) the `SEC_REGISTRY_PASSWORD` variable.
{{< /alert >}}
| Key | Value |
|-------------------------|-----------------------------------------------------------------------------|
| `SEC_REGISTRY_IMAGE` | `registry.gitlab.com/security-products/$CI_PROJECT_NAME` |
| `SEC_REGISTRY_USER` | `gl-service-dev-secure-analyzers-automation` |
| `SEC_REGISTRY_PASSWORD` | Personal Access Token for `gl-service-dev-secure-analyzers-automation` user. Request an [administrator](https://gitlab.com/gitlab-com/team-member-epics/access-requests/-/issues/29538#admin-users) to configure this token value. |
The above variables are used by the [tag_image.sh](https://gitlab.com/gitlab-org/security-products/ci-templates/blob/a784f5d/scripts/tag_image.sh#L21-26) script in the `ci-templates` project to push the container image to `registry.gitlab.com/security-products/<ANALYZER-NAME>:<TAG>`.
See the [semgrep CI/CD Variables](https://gitlab.com/gitlab-org/security-products/analyzers/semgrep/-/settings/ci_cd#js-cicd-variables-settings) for an example.
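Conceptually, the job that uses these variables re-tags the image built in the development project and pushes it to the official location. The following is a simplified sketch of that step, not the actual script; the source tag and the `1.2.3` target tag are placeholders.
```shell
# Simplified sketch of the retag-and-push step performed by tag_image.sh.
echo "$SEC_REGISTRY_PASSWORD" | docker login --username "$SEC_REGISTRY_USER" --password-stdin registry.gitlab.com
docker pull "registry.gitlab.com/gitlab-org/security-products/analyzers/$CI_PROJECT_NAME/tmp:$CI_COMMIT_SHA"
docker tag "registry.gitlab.com/gitlab-org/security-products/analyzers/$CI_PROJECT_NAME/tmp:$CI_COMMIT_SHA" "$SEC_REGISTRY_IMAGE:1.2.3"
docker push "$SEC_REGISTRY_IMAGE:1.2.3"
```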
#### Temporary development images
The location of temporary development images is:
```shell
registry.gitlab.com/gitlab-org/security-products/analyzers/<ANALYZER-NAME>/tmp:<TAG>
```
For example, one of the development images for the [`semgrep`](https://gitlab.com/gitlab-org/security-products/analyzers/semgrep) analyzer is:
```shell
registry.gitlab.com/gitlab-org/security-products/analyzers/semgrep/tmp:7580d6b037d93646774de601be5f39c46707bf04
```
In order to
[restrict the number of people who have write access to the container registry](https://gitlab.com/gitlab-org/gitlab/-/issues/297525),
the container registry in the development project must be [made private](https://gitlab.com/gitlab-org/gitlab/-/issues/470641) by configuring the following [project features and permissions](../../user/project/settings/_index.md) settings for the project located at `https://gitlab.com/gitlab-org/security-products/analyzers/<ANALYZER-NAME>`:
- `Settings -> General -> Visibility, project features, permissions`
- `Container Registry`
- `Only Project Members`
Each group in the Sec Section is responsible for:
1. Managing the deprecation and removal schedule for their artifacts, and creating issues for this purpose.
1. Creating and configuring projects under the new location.
1. Configuring builds to push release artifacts to the new location.
1. Removing or keeping images in old locations according to their own support agreements.
### Daily rebuild of Container Images
The analyzer images are rebuilt on a daily basis to ensure that we frequently and automatically pull patches provided by vendors of the base images we rely on.
This process only applies to the images used in versions of GitLab matching the current MAJOR release. The intent is not to release a newer version each day but rather to rebuild each active variant of an image and overwrite the corresponding tags:
- the `MAJOR.MINOR.PATCH` image tag (for example: `4.1.7`)
- the `MAJOR.MINOR` image tag (for example: `4.1`)
- the `MAJOR` image tag (for example: `4`)
- the `latest` image tag
The implementation of the rebuild process may vary [depending on the project](../../user/application_security/detect/vulnerability_scanner_maintenance.md), though a shared CI configuration is available in our [development ci-templates project](https://gitlab.com/gitlab-org/security-products/ci-templates/-/blob/master/includes-dev/docker.yml) to help achieve this.
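In practice, a successful daily rebuild re-pushes the same set of tags with freshly patched base layers. A stripped-down sketch of such a job, assuming a Docker-in-Docker environment and placeholder version numbers, looks like this:
```shell
# Rebuild the current release and overwrite every active tag of the current MAJOR version.
docker build --pull -t "$CI_REGISTRY_IMAGE:4.1.7" .
docker push "$CI_REGISTRY_IMAGE:4.1.7"
for tag in 4.1 4 latest; do
  docker tag "$CI_REGISTRY_IMAGE:4.1.7" "$CI_REGISTRY_IMAGE:$tag"
  docker push "$CI_REGISTRY_IMAGE:$tag"
done
```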
## Adding new language support to GitLab Advanced SAST (GLAS)
This guide helps engineers evaluate and add new language support to GLAS. These guidelines ensure consistent quality when expanding language coverage, rather than serving as strict requirements.
### Language support readiness criteria
Adapt these guidelines to your specific language while maintaining our analyzer quality standards.
These guidelines come from our experience adding PHP support to GLAS (see [issue #514210](https://gitlab.com/gitlab-org/gitlab/-/issues/514210)) and help determine when new language support is ready for production.
#### Quality readiness
##### Cross-file analysis capability
- Support the most common dependency management patterns in the target language
- Support common inclusion mechanisms specific to the language
##### Detection quality
- Precision Rate ≥ 80% across supported CWEs
- Comprehensive test corpus for each supported CWE
- Testing against popular frameworks in the language ecosystem
#### Coverage readiness
##### Priority-based coverage
- Must cover critical injection vulnerabilities relevant to the language
- Must cover common security misconfigurations
- Must align with industry standards (OWASP Top 10, SANS CWE Top 25)
- Focus on high-impact vulnerabilities commonly found in the language
#### Support readiness
##### Documentation requirements
- Language listed and described in supported languages documentation
- CWE coverage table updated with new language column
- All supported CWEs properly marked
- Known limitations clearly documented
#### Performance readiness
##### Standard performance criteria
- Medium-sized applications: < 10 minutes
- Very large applications: < 30 minutes with multi-core options
##### Benchmark definition
- Define representative codebases for benchmarking
- Include common frameworks and libraries
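To sanity-check these targets, time a scan of a representative project locally. The image reference and `/analyzer run` invocation below are assumptions that follow the convention used by the other analyzers; substitute the actual GLAS image you are benchmarking.
```shell
# Rough local benchmark against a representative codebase (paths and image tag are placeholders).
cd /path/to/representative-project
time docker run --rm \
  --volume "$PWD":/tmp/app \
  --env CI_PROJECT_DIR=/tmp/app \
  registry.gitlab.com/security-products/gitlab-advanced-sast:1 /analyzer run
```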
## Security and Build fixes of Go
The `Dockerfile` of the Secure analyzers implemented in Go must reference a `MAJOR` release of Go, and not a `MINOR` revision.
This ensures that the version of Go used to compile the analyzer includes all the security fixes available at a given time.
For example, the multi-stage Dockerfile of an analyzer must use the `golang:1.15-alpine` image
to build the analyzer CLI, but not `golang:1.15.4-alpine`.
When a `MINOR` revision of Go is released, and when it includes security fixes,
project maintainers must check whether the Secure analyzers need to be re-built.
The version of Go used for the build should appear in the log of the `build` job corresponding to the release,
and it can also be extracted from the Go binary using the [strings](https://en.wikipedia.org/wiki/Strings_(Unix)) command.
If the latest image of the analyzer was built with the affected version of Go, then it needs to be rebuilt.
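For example, to check which Go toolchain a released image was built with, copy the binary out of the image and inspect it locally. The `/analyzer` binary path and the `semgrep:5` image are assumptions used for illustration.
```shell
# Extract the analyzer binary from the released image and print the Go version it was built with.
cid=$(docker create registry.gitlab.com/security-products/semgrep:5)
docker cp "$cid":/analyzer ./analyzer-bin
docker rm "$cid"
go version ./analyzer-bin                             # requires a local Go toolchain
strings ./analyzer-bin | grep -m 1 -o 'go1\.[0-9.]*'  # alternative without Go installed
```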
To rebuild the image, maintainers can do one of the following:
- Trigger a new pipeline for the Git tag that corresponds to the stable release.
- Create a new Git tag where the `BUILD` number is incremented.
- Trigger a pipeline for the default branch with the `PUBLISH_IMAGES` variable set to a non-empty value.
In all cases, a new Docker image is built and published with the same image tags: `MAJOR.MINOR.PATCH` and `MAJOR`.
This workflow assumes full compatibility between `MINOR` revisions of the same `MAJOR` release of Go.
If there's a compatibility issue, the project pipeline will fail when running the tests.
In that case, it might be necessary to reference a `MINOR` revision of Go in the Dockerfile
and document that exception until the compatibility issue has been resolved.
Since it is NOT referenced in the `Dockerfile`, the `MINOR` revision of Go is NOT mentioned in the project changelog.
There may be times when it makes sense to use a build tag because the changes are build-related and don't
require a changelog entry. For example, pushing Docker images to a new registry location.
### Git tag to rebuild
When creating a new Git tag to rebuild the analyzer,
the new tag has the same `MAJOR.MINOR.PATCH` version as before,
but the `BUILD` number (as defined in [semver](https://semver.org/)) is incremented.
For instance, if the latest release of the analyzer is `v1.2.3`,
and if the corresponding Docker image was built using an affected version of Go,
then maintainers create the Git tag `v1.2.3+1` to rebuild the image.
If the latest release is `v1.2.3+1`, then they create `v1.2.3+2`.
The build number is automatically removed from the image tag.
To illustrate, creating a Git tag `v1.2.3+1` in the `gemnasium` project
makes the pipeline rebuild the image, and push it as `gemnasium:1.2.3`.
The Git tag created to rebuild has a simple message that explains why the new build is needed.
Example: `Rebuild with Go 1.15.6`.
The tag has no release notes, and no release is created.
To create a new Git tag to rebuild the analyzer, follow these steps:
1. Create a new Git tag and provide a message
```shell
git tag -a v1.2.3+1 -m "Rebuild with Go 1.15.6"
```
1. Push the tags to the repo
```shell
git push origin --tags
```
1. A new pipeline for the Git tag will be triggered and a new image will be built and tagged.
1. Run a new pipeline for the `master` branch to run the full suite of tests and generate a new vulnerability report for the newly tagged image. This is necessary because the release pipeline triggered in step `3.` above runs only a subset of tests; for example, it does not perform a `Container Scanning` analysis.
### Monthly release process
This should be done on the **18th of each month**. However, this is a soft deadline, and there is no harm in completing it within a few days after.
First, create a new issue for a release with a script from this repo: `./scripts/release_issue.rb MAJOR.MINOR`.
This issue will guide you through the whole release process. In general, you have to perform the following tasks:
- Check the list of supported technologies in GitLab documentation.
- [Supported languages in SAST](../../user/application_security/sast/_index.md#supported-languages-and-frameworks)
- [Supported languages in DS](../../user/application_security/dependency_scanning/_index.md#supported-languages-and-package-managers)
- [Supported languages in LS](../../user/compliance/license_scanning_of_cyclonedx_files/_index.md#supported-languages-and-package-managers)
- Check that CI **_job definitions are still accurate_** in vendored CI/CD templates and **_all of the ENV vars are propagated_** to the Docker containers upon `docker run` per tool.
- [SAST vendored CI/CD template](https://gitlab.com/gitlab-org/gitlab/blob/master/lib/gitlab/ci/templates/Security/SAST.gitlab-ci.yml)
- [Dependency Scanning vendored CI/CD template](https://gitlab.com/gitlab-org/gitlab/blob/master/lib/gitlab/ci/templates/Security/Dependency-Scanning.gitlab-ci.yml)
- [Container Scanning CI/CD template](https://gitlab.com/gitlab-org/gitlab/blob/master/lib/gitlab/ci/templates/Security/Container-Scanning.gitlab-ci.yml)
If needed, go to the pipeline corresponding to the last Git tag,
and trigger the manual job that controls the build of this image.
#### Dependency updates
All dependencies and upstream scanners (if any) used in the analyzer source are updated on a monthly cadence; these updates primarily include security fixes and non-breaking changes.
##### SAST and Secret Detection
SAST and Secret Detection teams use an internal tool ([SastBot](https://gitlab.com/gitlab-org/security-products/analyzers/sast-analyzer-deps-bot#dependency-update-automation)) to automate dependency management of SAST and Pipeline-based Secret Detection analyzers. SastBot generates MRs on the **8th of each month** and distributes their assignment among team members to take them forward for review. For details on the process, see [Dependency Update Automation](https://gitlab.com/gitlab-org/security-products/analyzers/sast-analyzer-deps-bot#dependency-update-automation).
SastBot requires different access tokens for each job. It uses the `DEP_GITLAB_TOKEN` environment variable to retrieve the token when running scheduled pipeline jobs.
| Scheduled Pipeline | Token Source | Role | Scope | `DEP_GITLAB_TOKEN` Token Configuration Location | Token Expiry |
|--------------------|--------------|------|-------|-------------------------------------------------|--------------|
| `Merge Request Metadata Update` | [`security-products/analyzers`](https://gitlab.com/gitlab-org/security-products/analyzers) group | `developer` | `api` | Settings \> CI/CD Variables section (Masked, Protected, Hidden) | `Jul 25, 2026` |
| `Release Issue Creation` | [`security-products/release`](https://gitlab.com/gitlab-org/security-products/release) project | `planner` | `api` | Configuration section of the scheduled pipeline job | `Jul 28, 2026` |
| Analyzers | [`sast-bot`](https://gitlab.com/gitlab-org/security-products/analyzers/sast-bot) group | `developer` | `api` | Configuration section of the scheduled pipeline job | `Jul 28, 2026` |
```
For example, one of the development images for the [`semgrep`](https://gitlab.com/gitlab-org/security-products/analyzers/semgrep) analyzer is:
```shell
registry.gitlab.com/gitlab-org/security-products/analyzers/semgrep/tmp:7580d6b037d93646774de601be5f39c46707bf04
```
To
[restrict the number of people who have write access to the container registry](https://gitlab.com/gitlab-org/gitlab/-/issues/297525),
the container registry in the development project must be [made private](https://gitlab.com/gitlab-org/gitlab/-/issues/470641) by configuring the following [project features and permissions](../../user/project/settings/_index.md) settings for the project located at `https://gitlab.com/gitlab-org/security-products/analyzers/<ANALYZER-NAME>`:
- `Settings -> General -> Visibility, project features, permissions`
- `Container Registry`
- `Only Project Members`
Each group in the Sec Section is responsible for:
1. Managing the deprecation and removal schedule for their artifacts, and creating issues for this purpose.
1. Creating and configuring projects under the new location.
1. Configuring builds to push release artifacts to the new location.
1. Removing or keeping images in old locations according to their own support agreements.
### Daily rebuild of Container Images
The analyzer images are rebuilt on a daily basis to ensure that we frequently and automatically pull patches provided by vendors of the base images we rely on.
This process only applies to the images used in versions of GitLab matching the current MAJOR release. The intent is not to release a newer version each day, but rather to rebuild each active variant of an image and overwrite the corresponding tags:
- the `MAJOR.MINOR.PATCH` image tag (for example: `4.1.7`)
- the `MAJOR.MINOR` image tag (for example: `4.1`)
- the `MAJOR` image tag (for example: `4`)
- the `latest` image tag
The implementation of the rebuild process may vary [depending on the project](../../user/application_security/detect/vulnerability_scanner_maintenance.md), though a shared CI configuration is available in our [development ci-templates project](https://gitlab.com/gitlab-org/security-products/ci-templates/-/blob/master/includes-dev/docker.yml) to help achieve this.
## Adding new language support to GitLab Advanced SAST (GLAS)
This guide helps engineers evaluate and add new language support to GLAS. The guidelines are intended to ensure consistent quality when expanding language coverage, rather than to serve as strict requirements.
### Language support readiness criteria
Adapt these guidelines to your specific language while maintaining our analyzer quality standards.
These guidelines come from our experience adding PHP support to GLAS (see [issue #514210](https://gitlab.com/gitlab-org/gitlab/-/issues/514210)) and help determine when new language support is ready for production.
#### Quality readiness
##### Cross-file analysis capability
- Support the most common dependency management patterns in the target language
- Support common inclusion mechanisms specific to the language
##### Detection quality
- Precision Rate ≥ 80% across supported CWEs
- Comprehensive test corpus for each supported CWE
- Testing against popular frameworks in the language ecosystem
#### Coverage readiness
##### Priority-based coverage
- Must cover critical injection vulnerabilities relevant to the language
- Must cover common security misconfigurations
- Must align with industry standards (OWASP Top 10, SANS CWE Top 25)
- Focus on high-impact vulnerabilities commonly found in the language
#### Support readiness
##### Documentation requirements
- Language listed and described in supported languages documentation
- CWE coverage table updated with new language column
- All supported CWEs properly marked
- Known limitations clearly documented
#### Performance readiness
##### Standard performance criteria
- Medium-sized applications: < 10 minutes
- Very large applications: < 30 minutes with multi-core options
##### Benchmark definition
- Define representative codebases for benchmarking
- Include common frameworks and libraries
## Security and Build fixes of Go
The `Dockerfile` of the Secure analyzers implemented in Go must reference a `MAJOR` release of Go, and not a `MINOR` revision.
This ensures that the version of Go used to compile the analyzer includes all the security fixes available at a given time.
For example, the multi-stage Dockerfile of an analyzer must use the `golang:1.15-alpine` image
to build the analyzer CLI, but not `golang:1.15.4-alpine`.
When a `MINOR` revision of Go is released, and when it includes security fixes,
project maintainers must check whether the Secure analyzers need to be re-built.
The version of Go used for the build should appear in the log of the `build` job corresponding to the release,
and it can also be extracted from the Go binary using the [strings](https://en.wikipedia.org/wiki/Strings_(Unix)) command.
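For binaries built with Go 1.18 or later, the toolchain version is also embedded in the binary's build information, so it can be read with `go version -m <binary>` or programmatically. A minimal sketch, assuming the analyzer binary has already been copied out of the container image:

```go
package main

import (
	"debug/buildinfo"
	"fmt"
	"log"
	"os"
)

func main() {
	if len(os.Args) != 2 {
		log.Fatal("usage: goversion <path-to-analyzer-binary>")
	}
	// The binary is assumed to have been copied out of the image beforehand,
	// for example with `docker cp`.
	info, err := buildinfo.ReadFile(os.Args[1])
	if err != nil {
		log.Fatalf("reading build info: %v", err)
	}
	// GoVersion reports the toolchain used to build the binary, such as "go1.15.6".
	fmt.Println(info.GoVersion)
}
```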
If the latest image of the analyzer was built with the affected version of Go, then it needs to be rebuilt.
To rebuild the image, maintainers can do one of the following:
- trigger a new pipeline for the Git tag that corresponds to the stable release
- create a new Git tag where the `BUILD` number is incremented
- trigger a pipeline for the default branch, with the `PUBLISH_IMAGES` variable set to a non-empty value
Either way, a new Docker image is built and published with the same image tags: `MAJOR.MINOR.PATCH` and `MAJOR`.
This workflow assumes full compatibility between `MINOR` revisions of the same `MAJOR` release of Go.
If there's a compatibility issue, the project pipeline will fail when running the tests.
In that case, it might be necessary to reference a `MINOR` revision of Go in the Dockerfile
and document that exception until the compatibility issue has been resolved.
Since it is NOT referenced in the `Dockerfile`, the `MINOR` revision of Go is NOT mentioned in the project changelog.
There may be times when it makes sense to use a build tag because the changes are build-related and don't
require a changelog entry. For example, pushing Docker images to a new registry location.
### Git tag to rebuild
When creating a new Git tag to rebuild the analyzer,
the new tag has the same `MAJOR.MINOR.PATCH` version as before,
but the `BUILD` number (as defined in [semver](https://semver.org/)) is incremented.
For instance, if the latest release of the analyzer is `v1.2.3`,
and if the corresponding Docker image was built using an affected version of Go,
then maintainers create the Git tag `v1.2.3+1` to rebuild the image.
If the latest release is `v1.2.3+1`, then they create `v1.2.3+2`.
The build number is automatically removed from the image tag.
To illustrate, creating a Git tag `v1.2.3+1` in the `gemnasium` project
makes the pipeline rebuild the image, and push it as `gemnasium:1.2.3`.
The Git tag created to rebuild has a simple message that explains why the new build is needed.
Example: `Rebuild with Go 1.15.6`.
The tag has no release notes, and no release is created.
To create a new Git tag to rebuild the analyzer, follow these steps:
1. Create a new Git tag and provide a message
```shell
git tag -a v1.2.3+1 -m "Rebuild with Go 1.15.6"
```
1. Push the tags to the repo
```shell
git push origin --tags
```
1. A new pipeline for the Git tag will be triggered and a new image will be built and tagged.
1. Run a new pipeline for the `master` branch to run the full suite of tests and generate a new vulnerability report for the newly tagged image. This is necessary because the release pipeline triggered in step 3 above runs only a subset of tests; for example, it does not perform a `Container Scanning` analysis.
### Monthly release process
This should be done on the **18th of each month**. However, this is a soft deadline, and there is no harm in completing it within a few days after.
First, create a new issue for a release with a script from this repo: `./scripts/release_issue.rb MAJOR.MINOR`.
This issue will guide you through the whole release process. In general, you have to perform the following tasks:
- Check the list of supported technologies in GitLab documentation.
- [Supported languages in SAST](../../user/application_security/sast/_index.md#supported-languages-and-frameworks)
- [Supported languages in DS](../../user/application_security/dependency_scanning/_index.md#supported-languages-and-package-managers)
- [Supported languages in LS](../../user/compliance/license_scanning_of_cyclonedx_files/_index.md#supported-languages-and-package-managers)
- Check that the CI **_job definitions are still accurate_** in the vendored CI/CD templates and that **_all of the environment variables are propagated_** to the Docker containers upon `docker run`, for each tool.
- [SAST vendored CI/CD template](https://gitlab.com/gitlab-org/gitlab/blob/master/lib/gitlab/ci/templates/Security/SAST.gitlab-ci.yml)
- [Dependency Scanning vendored CI/CD template](https://gitlab.com/gitlab-org/gitlab/blob/master/lib/gitlab/ci/templates/Security/Dependency-Scanning.gitlab-ci.yml)
- [Container Scanning CI/CD template](https://gitlab.com/gitlab-org/gitlab/blob/master/lib/gitlab/ci/templates/Security/Container-Scanning.gitlab-ci.yml)
If needed, go to the pipeline corresponding to the last Git tag,
and trigger the manual job that controls the build of this image.
#### Dependency updates
All dependencies and upstream scanners (if any) used in the analyzer source are updated on a monthly cadence; these updates primarily include security fixes and non-breaking changes.
##### SAST and Secret Detection
SAST and Secret Detection teams use an internal tool ([SastBot](https://gitlab.com/gitlab-org/security-products/analyzers/sast-analyzer-deps-bot#dependency-update-automation)) to automate dependency management of SAST and Pipeline-based Secret Detection analyzers. SastBot generates MRs on the **8th of each month** and distributes their assignment among team members to take them forward for review. For details on the process, see [Dependency Update Automation](https://gitlab.com/gitlab-org/security-products/analyzers/sast-analyzer-deps-bot#dependency-update-automation).
SastBot requires different access tokens for each job. It uses the `DEP_GITLAB_TOKEN` environment variable to retrieve the token when running scheduled pipeline jobs.
| Scheduled Pipeline | Token Source | Role | Scope | `DEP_GITLAB_TOKEN` Token Configuration Location | Token Expiry |
|--------------------|--------------|------|-------|-------------------------------------------------|--------------|
| `Merge Request Metadata Update` | [`security-products/analyzers`](https://gitlab.com/gitlab-org/security-products/analyzers) group | `developer` | `api` | Settings \> CI/CD Variables section (Masked, Protected, Hidden) | `Jul 25, 2026` |
| `Release Issue Creation` | [`security-products/release`](https://gitlab.com/gitlab-org/security-products/release) project | `planner` | `api` | Configuration section of the scheduled pipeline job | `Jul 28, 2026` |
| Analyzers | [`sast-bot`](https://gitlab.com/gitlab-org/security-products/analyzers/sast-bot) group | `developer` | `api` | Configuration section of the scheduled pipeline job | `Jul 28, 2026` |
---
stage: Security Risk Management
group: Security Insights
info: To determine the technical writer assigned to the Stage/Group associated with
this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: GitLab CycloneDX property taxonomy
---
This document defines the namespaces and properties used by the `gitlab` namespace
in the [CycloneDX Property Taxonomy](https://github.com/CycloneDX/cyclonedx-property-taxonomy).
{{< alert type="note" >}}
Before making changes to this file, reach out to the threat insights engineering team,
`@gitlab-org/govern/threat-insights`.
{{< /alert >}}
## Where properties should be located
The `Property of` column describes what object a property may be attached to.
- Properties attached to the `metadata` apply to all objects in the document.
- Properties attached to an individual object apply to that object and any others nested underneath it.
- Objects which may nest themselves (such as `components`) may only have properties applied to the top-level object.
## `gitlab` namespace taxonomy
| Namespace | Description |
| --------------------- | ----------- |
| `meta` | Namespace for data about the property schema. |
| `dependency_scanning` | Namespace for data related to dependency scanning. |
| `container_scanning` | Namespace for data related to container scanning. |
## `gitlab:meta` namespace taxonomy
| Property | Description | Property of |
| ---------------------------- | ----------- | ----------- |
| `gitlab:meta:schema_version` | Used by GitLab to determine how to parse the properties in a report. Must be `1`. | `metadata` |
## `gitlab:dependency_scanning` namespace taxonomy
### Properties
| Property | Description | Example values | Property of |
| ---------------------------------------- | ----------- | -------------- | ----------- |
| `gitlab:dependency_scanning:category` | The name of the category or dependency group that the dependency belongs to. If no category is specified, `production` is used by default. | `production`, `development`, `test` | `components` |
### Namespaces
| Namespace | Description |
| -------------------------------------------- | ----------- |
| `gitlab:dependency_scanning:input_file` | Namespace for information about the input file analyzed to produce the dependency. |
| `gitlab:dependency_scanning:source_file` | Namespace for information about the file you can edit to manage the dependency. |
| `gitlab:dependency_scanning:package_manager` | Namespace for information about the package manager associated with the dependency. |
| `gitlab:dependency_scanning:language` | Namespace for information about the programming language associated with the dependency. |
## `gitlab:dependency_scanning:input_file` namespace taxonomy
| Property | Description | Example values | Property of |
| --------------------------------------------- | ----------- | -------------- | ----------- |
| `gitlab:dependency_scanning:input_file:path` | The path, relative to the root of the repository, to the file analyzed to produce the dependency. Usually, the lock file. | `package-lock.json`, `Gemfile.lock`, `go.sum` | `metadata`, `component` |
## `gitlab:dependency_scanning:source_file` namespace taxonomy
| Property | Description | Example values | Property of |
| -------------------------------------------- | ----------- | -------------- | ----------- |
| `gitlab:dependency_scanning:source_file:path` | The path, relative to the root of the repository, to the file you can edit to manage the dependency. | `package.json`, `Gemfile`, `go.mod` | `metadata`, `component` |
## `gitlab:dependency_scanning:package_manager` namespace taxonomy
| Property | Description | Example values | Property of |
| ------------------------------------------------- | ----------- | -------------- | ----------- |
| `gitlab:dependency_scanning:package_manager:name` | The name of the package manager associated with the dependency. | `npm`, `bundler`, `go` | `metadata`, `component` |
## `gitlab:dependency_scanning:language` namespace taxonomy
| Property | Description | Example values | Property of |
| ------------------------------------------ | ----------- | -------------- | ----------- |
| `gitlab:dependency_scanning:language:name` | The name of the programming language associated with the dependency. | `JavaScript`, `Ruby`, `Go` | `metadata`, `component` |
## `gitlab:dependency_scanning_component` namespace taxonomy
| Property | Description | Example values | Property of |
| ------------------------------------------ | ----------- | -------------- | ----------- |
| `gitlab:dependency_scanning_component:reachability` | Identifies whether a component is in use. | `in_use`, `not_found` | `component` |
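As an illustration of how these dependency scanning properties are attached, the following Go sketch places the schema version on the report `metadata` and dependency scanning properties on a single component before serializing it. The struct definitions are simplified stand-ins for the CycloneDX document format, and the component shown is only an example:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// property and component are simplified stand-ins for the corresponding
// CycloneDX structures; only the fields needed for this illustration are shown.
type property struct {
	Name  string `json:"name"`
	Value string `json:"value"`
}

type component struct {
	Name       string     `json:"name"`
	Version    string     `json:"version"`
	Properties []property `json:"properties"`
}

type report struct {
	Metadata struct {
		Properties []property `json:"properties"`
	} `json:"metadata"`
	Components []component `json:"components"`
}

func main() {
	var r report
	// Attached to metadata: applies to all objects in the document.
	r.Metadata.Properties = []property{{Name: "gitlab:meta:schema_version", Value: "1"}}
	// Attached to an individual component: applies to that component only.
	r.Components = []component{{
		Name:    "lodash",
		Version: "4.17.21",
		Properties: []property{
			{Name: "gitlab:dependency_scanning:input_file:path", Value: "package-lock.json"},
			{Name: "gitlab:dependency_scanning:package_manager:name", Value: "npm"},
		},
	}}
	out, _ := json.MarshalIndent(r, "", "  ")
	fmt.Println(string(out))
}
```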
## `gitlab:container_scanning` namespace taxonomy
### Namespaces
| Namespace | Description |
| -------------------------------------------- | ----------- |
| `gitlab:container_scanning:image` | Namespace for information about the scanned image. |
| `gitlab:container_scanning:operating_system` | Namespace for information about the operating system associated with the scanned image. |
## `gitlab:container_scanning:image` namespace taxonomy
| Property | Description | Example values | Property of |
| ---------------------------------------| ----------- | -------------- | ----------- |
| `gitlab:container_scanning:image:name` | The name of the scanned image. | `registry.gitlab.com/gitlab-org/security-products/analyzers/gemnasium/tmp/main` | `metadata`, `component` |
| `gitlab:container_scanning:image:tag` | The tag of the scanned image. | `91d61f07e0a4b3dd34b39d77f47f6f9bf48cde0a` | `metadata`, `component` |
## `gitlab:container_scanning:operating_system` namespace taxonomy
| Property | Description | Example values | Property of |
| ---------------------------------------| ----------- | -------------- | ----------- |
| `gitlab:container_scanning:operating_system:name` | The name of the operating system. | `alpine` | `metadata`, `component` |
| `gitlab:container_scanning:operating_system:version` | The version of the operating system. | `3.1.8` | `metadata`, `component` |
---
stage: Application Security Testing
group: Static Analysis
info: To determine the technical writer assigned to the Stage/Group associated with
this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Token Revocation API
---
The Token Revocation API is an externally-deployed HTTP API that interfaces with GitLab
to receive and revoke API tokens and other secrets detected by GitLab Secret Detection.
See the [high-level architecture](../../user/application_security/secret_detection/automatic_response.md)
to understand the Secret Detection post-processing and revocation flow.
GitLab.com uses the internally-maintained [Secret Revocation Service](https://gitlab.com/gitlab-com/gl-security/engineering-and-research/automation-team/secret-revocation-service)
(team-members only) as its Token Revocation API. For GitLab Self-Managed, you can create
your own API and configure GitLab to use it.
## Implement a Token Revocation API for self-managed
GitLab Self-Managed instances interested in using the revocation capabilities must:
- Implement and deploy your own Token Revocation API.
- Configure the GitLab instance to use the Token Revocation API.
Your service must:
- Match the API specification below.
- Provide two endpoints:
- Fetching revocable token types.
- Revoking leaked tokens.
- Be rate-limited and idempotent.
Requests to the documented endpoints are authenticated using API tokens passed in
the `Authorization` header. Request and response bodies, if present, are
expected to have the content type `application/json`.
All endpoints may return these responses:
- `401 Unauthorized`
- `405 Method Not Allowed`
- `500 Internal Server Error`
### `GET /v1/revocable_token_types`
Returns the valid `type` values for use in the `revoke_tokens` endpoint.
{{< alert type="note" >}}
These values are derived from [the `secrets` analyzer's](../../user/application_security/secret_detection/pipeline/_index.md)
[primary identifier](../integrations/secure.md#identifiers) by concatenating
`primary_identifier.type` and `primary_identifier.value`.
For example, the value `gitleaks_rule_id_gitlab_personal_access_token` matches the following finding identifier:
{{< /alert >}}
```json
{"type": "gitleaks_rule_id", "name": "Gitleaks rule ID GitLab Personal Access Token", "value": "GitLab Personal Access Token"}
```
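As an illustration, a value like this could be derived from a primary identifier as in the sketch below. This is consistent with the example above but is not the exact GitLab implementation:

```go
package main

import (
	"fmt"
	"strings"
)

// tokenTypeFor derives a revocable token type from a finding's primary
// identifier by joining its type and value and normalizing to lower_snake_case.
// Illustrative sketch only.
func tokenTypeFor(idType, idValue string) string {
	joined := strings.ToLower(idType + " " + idValue)
	return strings.ReplaceAll(joined, " ", "_")
}

func main() {
	fmt.Println(tokenTypeFor("gitleaks_rule_id", "GitLab Personal Access Token"))
	// Output: gitleaks_rule_id_gitlab_personal_access_token
}
```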
| Status Code | Description |
| ----- | --- |
| `200` | The response body contains the valid token `type` values. |
Example response body:
```json
{
"types": ["gitleaks_rule_id_gitlab_personal_access_token"]
}
```
### `POST /v1/revoke_tokens`
Accepts a list of tokens to be revoked by the appropriate provider. Your service is responsible for communicating
with each provider to revoke the token.
| Status Code | Description |
| ----- | --- |
| `204` | All submitted tokens have been accepted for eventual revocation. |
| `400` | The request body is invalid or one of the submitted token types is not supported. The request should not be retried. |
| `429` | The provider has received too many requests. The request should be retried later. |
Example request body (space characters added to `token` value to prevent secret detection warnings):
```json
[{
"type": "gitleaks_rule_id_gitlab_personal_access_token",
"token": "glpat - 8GMtG8Mf4EnMJzmAWDU",
"location": "https://example.com/some-repo/blob/abcdefghijklmnop/compromisedfile1.java"
},
{
"type": "gitleaks_rule_id_gitlab_personal_access_token",
"token": "glpat - tG84EGK33nMLLDE70zU",
"location": "https://example.com/some-repo/blob/abcdefghijklmnop/compromisedfile2.java"
}]
```
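The following Go sketch outlines what a minimal self-managed Token Revocation API could look like, using only the standard library. It follows the endpoints and status codes described above, authenticates requests against a pre-shared token read from a hypothetical `REVOCATION_API_TOKEN` environment variable, and stubs out the provider-specific revocation. Rate limiting and idempotency, which your service must provide, are omitted for brevity:

```go
// Package main sketches a minimal Token Revocation API; the revocation call to
// the upstream provider is stubbed out and must be implemented per provider.
package main

import (
	"encoding/json"
	"log"
	"net/http"
	"os"
)

// revocableTypes lists the token types this service can revoke.
var revocableTypes = []string{"gitleaks_rule_id_gitlab_personal_access_token"}

// leakedToken mirrors the items GitLab sends in the revoke_tokens request body.
type leakedToken struct {
	Type     string `json:"type"`
	Token    string `json:"token"`
	Location string `json:"location"`
}

// authorized compares the Authorization header against a pre-shared token,
// read here from an environment variable (an assumption of this sketch).
func authorized(r *http.Request) bool {
	return r.Header.Get("Authorization") == os.Getenv("REVOCATION_API_TOKEN")
}

func supported(tokenType string) bool {
	for _, t := range revocableTypes {
		if t == tokenType {
			return true
		}
	}
	return false
}

func revocableTokenTypes(w http.ResponseWriter, r *http.Request) {
	switch {
	case !authorized(r):
		w.WriteHeader(http.StatusUnauthorized)
	case r.Method != http.MethodGet:
		w.WriteHeader(http.StatusMethodNotAllowed)
	default:
		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(map[string][]string{"types": revocableTypes})
	}
}

func revokeTokens(w http.ResponseWriter, r *http.Request) {
	if !authorized(r) {
		w.WriteHeader(http.StatusUnauthorized)
		return
	}
	if r.Method != http.MethodPost {
		w.WriteHeader(http.StatusMethodNotAllowed)
		return
	}
	var tokens []leakedToken
	if err := json.NewDecoder(r.Body).Decode(&tokens); err != nil {
		w.WriteHeader(http.StatusBadRequest)
		return
	}
	for _, t := range tokens {
		if !supported(t.Type) {
			w.WriteHeader(http.StatusBadRequest) // unsupported token type: do not retry
			return
		}
		// Stub: queue the token for revocation with the relevant provider.
		log.Printf("queueing %s token found at %s for revocation", t.Type, t.Location)
	}
	w.WriteHeader(http.StatusNoContent)
}

func main() {
	http.HandleFunc("/v1/revocable_token_types", revocableTokenTypes)
	http.HandleFunc("/v1/revoke_tokens", revokeTokens)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```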
### Configure GitLab to interface with the Token Revocation API
You must configure the following database settings in the GitLab instance:
| Setting | Type | Description |
| ------- | ---- | ----------- |
| `secret_detection_token_revocation_enabled` | Boolean | Whether automatic token revocation is enabled |
| `secret_detection_token_revocation_url` | String | A fully-qualified URL to the `/v1/revoke_tokens` endpoint of the Token Revocation API |
| `secret_detection_revocation_token_types_url` | String | A fully-qualified URL to the `/v1/revocable_token_types` endpoint of the Token Revocation API |
| `secret_detection_token_revocation_token` | String | A pre-shared token to authenticate requests to the Token Revocation API |
For example, to configure these values in the
[Rails console](../../administration/operations/rails_console.md#starting-a-rails-console-session):
```ruby
::Gitlab::CurrentSettings.update!(secret_detection_token_revocation_token: 'MYSECRETTOKEN')
::Gitlab::CurrentSettings.update!(secret_detection_token_revocation_url: 'https://gitlab.example.com/revocation_service/v1/revoke_tokens')
::Gitlab::CurrentSettings.update!(secret_detection_revocation_token_types_url: 'https://gitlab.example.com/revocation_service/v1/revocable_token_types')
::Gitlab::CurrentSettings.update!(secret_detection_token_revocation_enabled: true)
```
After you configure these values, the Token Revocation API will be called according to the
[high-level architecture](../../user/application_security/secret_detection/automatic_response.md#high-level-architecture)
diagram.
---
stage: Security Risk Management
group: Security Insights
info: To determine the technical writer assigned to the Stage/Group associated with
this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: SBoM dependency graph ingestion overview
---
## Overview
The process starts after all `Sbom::Occurrence` models have been ingested, because occurrences are ingested in slices and it would be tricky to build the graph in slices as well.
All work happens in a background worker, which will be added in a subsequent MR, so that we do not increase the time it takes to ingest an SBoM report. This means there is a delay between when the SBoM report is ingested and when the dependency graph is updated.
All records pertaining to dependency graphs are stored in the `sbom_graph_paths` database table, which has foreign keys to `sbom_occurrences` as well as `projects` for easier filtering.
## Implementation details
{{< alert type="note" >}}
This feature is a work in progress, so this document can get out of date.
{{< /alert >}}
1. [Sbom::Ingestion::IngestReportService](https://gitlab.com/gitlab-org/gitlab/-/blob/master/ee/app/services/sbom/ingestion/ingest_report_service.rb#L5) is responsible for consuming the SBoM report.
1. After it's done, we fire off [Sbom::BuildDependencyGraphWorker](https://gitlab.com/gitlab-org/gitlab/-/blob/master/ee/app/workers/sbom/build_dependency_graph_worker.rb), which performs the dependency graph calculation in a background worker.
1. [Sbom::BuildDependencyGraph](https://gitlab.com/gitlab-org/gitlab/-/blob/master/ee/app/services/sbom/build_dependency_graph.rb) does the actual heavy lifting for us. The class is documented so the details are omitted here.
1. We will [skip calculation](https://gitlab.com/groups/gitlab-org/-/epics/17340) of the dependency graph if the SBoM report did not change.
1. [Sbom::PathFinder](https://gitlab.com/gitlab-org/gitlab/-/blob/master/ee/app/finders/sbom/path_finder.rb) returns all possible paths to reach the target dependency. Note that this accepts an `Sbom::Occurrence` because a `(name, version)` pair is not precise enough when working with monorepos.
## Details
1. The database table is designed as a [closure table](https://www.slideshare.net/slideshow/models-for-hierarchical-data/4179181) (see the sketch after this list).
1. The database table structure is available [here](https://gitlab.com/gitlab-org/gitlab/-/blob/master/db/structure.sql#L22509).
1. When a dependency is transitive, the corresponding `Sbom::Occurrence#ancestors` contains entries.
1. When a dependency is a direct dependency, the corresponding `Sbom::Occurrence#ancestors` contains an empty object (`{}`).
1. Dependencies can be both direct and transitive.
1. There can be more than one version of a given dependency in a project (for example Node allows that).
1. There can be more than one `Sbom::Occurrence` for a given dependency version, for example in monorepos. These `Sbom::Occurrence` rows should have a different `input_file_path` and `source_id` (however, we do not use `source_id` when building the dependency tree, to avoid an SQL JOIN).
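As a rough illustration of the closure-table idea mentioned in the list above, the following Go sketch expands direct dependency edges into the ancestor/descendant pairs that a table like `sbom_graph_paths` stores. This is conceptual only; the real logic lives in `Sbom::BuildDependencyGraph` (Ruby), and all names here are illustrative. The sketch assumes an acyclic graph:

```go
package main

import "fmt"

// edge records that parent directly requires child, with both identified here
// by name only for simplicity.
type edge struct{ child, parent string }

// closurePaths expands direct edges into (ancestor, descendant) pairs with a
// path length, which is roughly what a closure table stores.
func closurePaths(edges []edge) map[[2]string]int {
	parents := map[string][]string{}
	for _, e := range edges {
		parents[e.child] = append(parents[e.child], e.parent)
	}

	paths := map[[2]string]int{}
	var walk func(node, descendant string, depth int)
	walk = func(node, descendant string, depth int) {
		for _, p := range parents[node] {
			key := [2]string{p, descendant}
			if current, ok := paths[key]; !ok || depth+1 < current {
				paths[key] = depth + 1
			}
			walk(p, descendant, depth+1)
		}
	}
	for child := range parents {
		walk(child, child, 0)
	}
	return paths
}

func main() {
	// my-app requires express directly; qs is transitive through express.
	edges := []edge{{child: "express", parent: "my-app"}, {child: "qs", parent: "express"}}
	for pair, length := range closurePaths(edges) {
		fmt.Printf("ancestor=%s descendant=%s path_length=%d\n", pair[0], pair[1], length)
	}
}
```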
---
stage: Application Security Testing
group: Composition Analysis
info: Any user with at least the Maintainer role can merge updates to this content.
For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Gemnasium analyzer data
---
The following table lists the data available for the Gemnasium analyzer.
| Property \ Tool | Gemnasium |
|:----------------------------------------------|:---------:|
| Severity | {{< icon name="check-circle" >}} Yes |
| Title | {{< icon name="check-circle" >}} Yes |
| File | {{< icon name="check-circle" >}} Yes |
| Start line | {{< icon name="dotted-circle" >}} No |
| End line | {{< icon name="dotted-circle" >}} No |
| External ID (for example, CVE) | {{< icon name="check-circle" >}} Yes |
| URLs | {{< icon name="check-circle" >}} Yes |
| Internal doc/explanation | {{< icon name="check-circle" >}} Yes |
| Solution | {{< icon name="check-circle" >}} Yes |
| Confidence | {{< icon name="dotted-circle" >}} No |
| Affected item (for example, class or package) | {{< icon name="check-circle" >}} Yes |
| Source code extract | {{< icon name="dotted-circle" >}} No |
| Internal ID | {{< icon name="check-circle" >}} Yes |
| Date | {{< icon name="check-circle" >}} Yes |
| Credits | {{< icon name="check-circle" >}} Yes |
- {{< icon name="check-circle" >}} Yes => we have that data
- {{< icon name="dotted-circle" >}} No => we don't have that data, or obtaining it would require specific or inefficient/unreliable logic.
The values provided by these tools are heterogeneous, so they are sometimes normalized into common
values (for example, `severity` and `confidence`).
---
stage: Application Security Testing
group: Static Analysis
info: Any user with at least the Maintainer role can merge updates to this content.
For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Sec section development guidelines
---
The Sec section is responsible for GitLab application security features, the "Sec" part of
DevSecOps. Development guides that are specific to the Sec section are listed here.
See [Terminology](../../user/application_security/terminology/_index.md) for an overview of our shared terminology.
## Architecture
- [Overview](#overview)
- [Scanning](#scanning)
- [Processing, visualization, and management](#processing-visualization-and-management)
- [Severity Levels](../../user/application_security/vulnerabilities/severities.md)
- [Analyzer Development](analyzer_development_guide.md)
## Overview
The architecture supporting the Secure features is split into two main parts:
- Scanning
- Processing, visualization, and management
```mermaid
flowchart LR
subgraph G1[Scanning]
Scanner
Analyzer
CI[CI Jobs]
end
subgraph G2[Processing, visualization, and management]
Parsers
Database
Views
Interactions
end
G1 --Report Artifact--> G2
```
### Scanning
The scanning part is responsible for finding vulnerabilities in given resources, and exporting results.
The scans are executed in CI/CD jobs via several small projects called [Analyzers](../../user/application_security/terminology/_index.md#analyzer), which can be found in our [Analyzers subgroup](https://gitlab.com/gitlab-org/security-products/analyzers).
The Analyzers are wrappers around security tools called [Scanners](../../user/application_security/terminology/_index.md#scanner), developed internally or externally, to integrate them into GitLab.
The Analyzers are mainly written in Go.
Some 3rd party integrators also make additional Scanners available by following our [integration documentation](../integrations/secure.md), which leverages the same architecture.
The results of the scans are exported as JSON reports that must comply with the [Secure report format](../../user/application_security/terminology/_index.md#secure-report-format) and are uploaded as [CI/CD Job Report artifacts](../../ci/jobs/job_artifacts.md) to make them available for processing after the pipelines completes.
### Processing, visualization, and management
After the data is available as a Report Artifact it can be processed by the GitLab Rails application to enable our security features, including:
- [Security Dashboards](../../user/application_security/security_dashboard/_index.md), Merge Request widget, Pipeline view, and so on.
- [Security scan results](../../user/application_security/detect/security_scan_results.md).
- [Approval rules](../../user/application_security/policies/merge_request_approval_policies.md).
Depending on the context, the security reports may be stored either in the database or stay as Report Artifacts for on-demand access.
#### Security report ingestion overview
For details on how GitLab processes the reports generated by the scanners, see
[Security report ingestion overview](security_report_ingestion_overview.md).
## CI/CD template development
While CI/CD templates are the responsibility of the Verify section, many are critical to the Sec Section's feature usage.
If you are working with CI/CD templates, read the [development guide for GitLab CI/CD templates](../cicd/templates.md).
## Importance of the primary identifier
Within analyzer JSON reports, the [`identifiers` field](../integrations/secure.md#identifiers) contains a collection of types and categories by which
a vulnerability can be described (that is, a CWE family).
The first item in the `identifiers` collection is known as the [primary identifier](../../user/application_security/terminology/_index.md#primary-identifier),
a critical component to both describing and tracking vulnerabilities.
In most other cases, the `identifiers` collection is unordered, where the remaining secondary identifiers act as metadata for grouping vulnerabilities
(see [Analyzer vulnerability translation](#analyzer-vulnerability-translation) below for the exception).
Any time the primary identifier changes and a project pipeline is re-run, ingestion of the new report will "orphan" the previous DB record.
Because our processing logic relies on generating a delta of two different vulnerabilities, it can end up looking rather confusing. For example:

After being [merged](../integrations/secure.md#tracking-and-merging-vulnerabilities), the previous vulnerability is listed as "remediated" and the introduced as ["detected"](../../user/application_security/vulnerabilities/_index.md#vulnerability-status-values).
### Guiding principles for ensuring primary identifier stability
- A primary identifier should never change unless we have a compelling reason.
- Analyzer supporting vulnerability translation must include the legacy primary identifiers in a secondary position to prevent "orphaning" of results.
- Beyond the primary identifier, the order of secondary identifiers does not matter.
- The identifier is unique based on a combination of the `Type` and `Value` fields (see [identifier fingerprint](https://gitlab.com/gitlab-org/gitlab/-/blob/v15.5.1-ee/lib/gitlab/ci/reports/security/identifier.rb#L63)).
- If we change the primary identifier, rolling back analyzers to previous versions will not fix the orphaned results. The data previously ingested into our database is an artifact of previous jobs with few ways of automating data migrations.
### Analyzer vulnerability translation
In the case of the SAST Semgrep analyzer, there is a secondary identifier of particular importance: the identifier linking the report's vulnerability
to the legacy analyzer (that is, bandit or ESLint).
To [enable vulnerability translation](../../user/application_security/sast/analyzers.md#vulnerability-translation)
the Semgrep analyzer relies on a secondary identifier exactly matching the primary identifier of the legacy analyzer.
For example, when [`eslint`](https://gitlab.com/gitlab-org/security-products/analyzers/eslint) was previously used to generate vulnerability records,
the [`semgrep`](https://gitlab.com/gitlab-org/security-products/analyzers/semgrep) analyzer must produce an identifier collection containing the
original ESLint primary identifier.
Given the original `eslint` report:
```json
{
"version": "14.0.4",
"vulnerabilities": [
{
"identifiers": [
{
"type": "eslint_rule_id",
"name": "ESLint rule ID security/detect-eval-with-expression",
"value": "security/detect-eval-with-expression"
}
]
}
]
}
```
The corresponding Semgrep report must contain the `eslint_rule_id`:
```json
{
"version": "14.0.4",
"vulnerabilities": [
{
"identifiers": [
{
"type": "semgrep_id",
"name": "eslint.detect-eval-with-expression",
"value": "eslint.detect-eval-with-expression",
"url": "https://semgrep.dev/r/gitlab.eslint.detect-eval-with-expression"
},
{
"type": "eslint_rule_id",
"name": "ESLint rule ID security/detect-eval-with-expression",
"value": "security/detect-eval-with-expression"
}
]
}
]
}
```
[Tracking of vulnerabilities](../integrations/secure.md#tracking-and-merging-vulnerabilities) relies on a combination of the two identifiers
to remap DB records previously generated with the legacy analyzers to those generated with the new `semgrep` ones.
## Development Setup: Package Metadata Database synchronization
For security scanning and license compliance features that use the Package Metadata Database (PMDB), you need to set up PMDB synchronization in your development environment.
See the [Package Metadata Synchronization guide](https://gitlab.com/gitlab-org/gitlab-development-kit/-/blob/main/doc/howto/package_metadata_synchronization.md) in the GDK documentation for detailed setup instructions.
---
stage: Security Risk Management
group: Security Insights
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Vulnerability tracking overview
---
At GitLab we run Git combined with automated security testing in Continuous
Integration and Continuous Delivery (CI/CD) processes. These processes
continuously monitor code changes to detect security vulnerabilities as early
as possible. Security testing often involves multiple Static Application
Security Testing (SAST) tools, each specialized in detecting specific
vulnerabilities, such as hardcoded passwords or insecure data flows. A
heterogeneous SAST setup, using multiple tools, helps minimize the software's
attack surface. The security findings from these tools undergo Vulnerability
Management, a semi-manual process of understanding, categorizing, storing, and
acting on them.
Code volatility (the constant change of the project's source code) and double reporting
(the overlap of findings reported by multiple tools) are potential sources of duplication,
imposing futile auditing effort on the analyst.
Vulnerability tracking is an automated process that helps deduplicate and
track vulnerabilities throughout the lifetime of a software project.
Our Vulnerability tracking method is based on [Scope+Offset](https://gitlab.com/gitlab-org/security-products/post-analyzers/tracking-calculator/-/blob/master/README.md) (internal).
The predecessor to the `Scope+Offset` method was line-based fingerprinting, which is more fragile and resulted in many already-detected vulnerabilities being re-introduced.
Avoiding this duplication was the motivation for implementing the `Scope+Offset` method.
[See the corresponding research issue for more background](https://gitlab.com/groups/gitlab-org/-/epics/4626) (internal).
## Components
At a very high level, the vulnerability tracking flow is depicted below. For the remainder of this section, we assume that the SAST analyzer and the Tracking Calculator together represent the tracking signature *producer* component, and that the Rails backend represents the tracking signature *consumer* component for the purposes of vulnerability tracking. The components are explained in more detail below.
``` mermaid
flowchart LR
R["Repository"]
S("SAST Analyzer [CI]")
T("tracking-calculator [CI]")
B("Rails backend")
R --code--> S --gl-sast-report.json--> T --augmented gl-sast-report.json--> B
R --code --> T
```
### Tracking signature producer
The SAST Analyzer runs in a CI context, analyzes the source code, and produces a `gl-sast-report.json` file. The [Tracking Calculator](https://gitlab.com/gitlab-org/security-products/post-analyzers/tracking-calculator) computes scopes from the source code and matches them with the vulnerabilities listed in `gl-sast-report.json`. If there is a match, the Tracking Calculator computes signatures (using Scope+Offset) and includes each of them in the original report (augmenting `gl-sast-report.json`) in the `tracking` object (depicted below).
``` json
"tracking": {
  "type": "source",
  "items": [
    {
      "file": "test.c",
      "line_start": 12,
      "line_end": 12,
      "signatures": [
        {
          "algorithm": "scope_offset_compressed",
          "value": "test.c|main()[0]:5"
        },
        {
          "algorithm": "scope_offset",
          "value": "test.c|main()[0]:8"
        }
      ]
    }
  ]
}
```
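Conceptually, a Scope+Offset signature anchors a finding to its file, its enclosing scope (for example, a function), and its offset within that scope, so that unrelated line insertions or deletions elsewhere in the file do not change the signature. A rough, hypothetical illustration of how such a value could be assembled (this is not the Tracking Calculator implementation):
```ruby
# Hypothetical illustration: build a Scope+Offset style signature value like the
# ones shown in the tracking object above.
def scope_offset_signature(file:, scope:, scope_start_line:, finding_line:)
  "#{file}|#{scope}:#{finding_line - scope_start_line}"
end

# Assuming main() starts at line 7 and the finding is on line 12:
scope_offset_signature(file: 'test.c', scope: 'main()[0]', scope_start_line: 7, finding_line: 12)
# => "test.c|main()[0]:5"
```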
Tracking Calculator is directly embedded into the [Docker image of the SAST Analyzer](https://gitlab.com/gitlab-org/security-products/analyzers/semgrep/-/blob/52bedd15745ddb6124662e0dcda331e2e64b000b/Dockerfile#L5) (internal)
and invoked by means of [this script](https://gitlab.com/gitlab-org/security-products/post-analyzers/scripts/-/blob/474cfd78054d97291155045eaef66aa3b7919368/start.sh).
The Tracking Calculator already [performs deduplication](https://gitlab.com/gitlab-org/security-products/post-analyzers/tracking-calculator/-/blob/c7b6f255ad030e6b9da58c12fa87204b8df71129/trackinginfo/sast.go#L127), which is enabled by default. In the example above, we have two different algorithms, `scope_offset_compressed` and `scope_offset`, where `scope_offset_compressed` is considered an improvement over `scope_offset` and is therefore assigned a higher priority.
If `scope_offset` and `scope_offset_compressed` agree on the same fingerprint, only the result from `scope_offset_compressed` is added, because it is the algorithm with the higher priority.
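A minimal sketch of that priority-based deduplication (illustrative only; the algorithm names and priorities follow the example above):
```ruby
# Illustrative only: keep a single signature per fingerprint value, preferring the
# algorithm with the higher priority.
SIGNATURE_PRIORITY = { 'scope_offset' => 3, 'scope_offset_compressed' => 4 }.freeze

def deduplicate_signatures(signatures)
  signatures
    .group_by { |sig| sig[:value] }
    .map { |_value, group| group.max_by { |sig| SIGNATURE_PRIORITY.fetch(sig[:algorithm], 0) } }
end

deduplicate_signatures([
  { algorithm: 'scope_offset',            value: 'test.c|main()[0]:5' },
  { algorithm: 'scope_offset_compressed', value: 'test.c|main()[0]:5' }
])
# => [{ algorithm: "scope_offset_compressed", value: "test.c|main()[0]:5" }]
```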
The report is then ingested into the consumer component where these signatures
are used to generate vulnerability fingerprints by means of the vulnerability
UUID.
---
### Tracking signature consumer
In the Rails code, we differentiate between security findings (findings that originate from the report) and vulnerability findings (persisted in the DB).
Security findings are generated when the [report is parsed](https://gitlab.com/gitlab-org/gitlab/-/blob/e2f0c25d56d7ee5e85e00093331e55197fe66151/lib/gitlab/ci/parsers/security/common.rb#L98);
this is also the place where the [UUID is generated](https://gitlab.com/gitlab-org/gitlab/-/blob/415453f3bf788579f47fb8b471629beb1e063d56/app/services/security/vulnerability_uuid.rb#L6).
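The UUID is a deterministic, name-based (version 5) UUID, so re-parsing the same finding yields the same UUID. A rough sketch of the idea (the exact name components are defined in the linked `vulnerability_uuid.rb`; the ones below are an assumption for illustration):
```ruby
require 'active_support/core_ext/digest/uuid'

# Illustrative only: derive a stable UUID from attributes that identify a finding,
# so re-parsing the same report produces the same UUID.
def finding_uuid(report_type:, primary_identifier_fingerprint:, location_fingerprint:, project_id:)
  name = [report_type, primary_identifier_fingerprint, location_fingerprint, project_id].join('-')
  Digest::UUID.uuid_v5(Digest::UUID::DNS_NAMESPACE, name)
end
```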
#### Scenario 1: Storing security findings temporarily
The diagram below depicts the flow that is executed on all pipelines for storing security findings temporarily. One of the most interesting components from the vulnerability tracking perspective is the `OverrideUuidsService`.
The `OverrideUuidsService` matches security findings against vulnerability findings on the signature level. If
there is a match, the UUID of the security finding is overwritten
accordingly. The `StoreFindingsService` stores the re-calibrated findings in
the `security_findings` table. Detailed documentation about how
vulnerabilities are created, starting from the security report, is available
[here](security_report_ingestion_overview.md#vulnerability-creation-from-security-reports).
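A simplified sketch of that re-calibration (illustrative only; the real service operates on ActiveRecord models and processes findings in batches):
```ruby
# Illustrative only: when a security finding shares a tracking signature with an
# existing vulnerability finding, adopt the existing UUID and remember the one we
# replaced in overridden_uuid.
def override_uuids!(security_findings, vulnerability_findings_by_signature)
  security_findings.each do |finding|
    existing = finding[:signatures]
      .map { |signature| vulnerability_findings_by_signature[signature[:value]] }
      .compact
      .first
    next unless existing
    finding[:overridden_uuid] = finding[:uuid]
    finding[:uuid] = existing[:uuid]
  end
end
```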
Source Code References:
- [StoreScansWorker](https://gitlab.com/gitlab-org/gitlab/-/blob/308529403c2d5ec0049b223cf444163bede4672e/ee/app/workers/security/store_scans_worker.rb#L19)
- [StoreScansService](https://gitlab.com/gitlab-org/gitlab/-/blob/308529403c2d5ec0049b223cf444163bede4672e/ee/app/services/security/store_scans_service.rb#L19)
- [StoreGroupedScansService](https://gitlab.com/gitlab-org/gitlab/-/blob/308529403c2d5ec0049b223cf444163bede4672e/ee/app/services/security/store_grouped_scans_service.rb#L60)
- [StoreScanService](https://gitlab.com/gitlab-org/gitlab/-/blob/master/ee/app/services/security/store_scan_service.rb#L47)
- [OverrideUuidsService](https://gitlab.com/gitlab-org/gitlab/-/blob/1b2cc434e43b533c0b393b8c319797e69745498e/ee/app/services/security/override_uuids_service.rb)
- [StoreFindingsService](https://gitlab.com/gitlab-org/gitlab/-/blob/308529403c2d5ec0049b223cf444163bede4672e/ee/app/services/security/store_findings_service.rb)
``` mermaid
sequenceDiagram
  Producer->>Sidekiq: gl-sast-report.json
  Sidekiq->>StoreScansWorker: <<start>>
  StoreScansWorker->>StoreScansService: pipeline id
  loop for all artifacts in "grouped" artifacts
    StoreScansService->>StoreGroupedScansService: artifacts
    loop for every artifact in artifacts
      StoreGroupedScansService->>StoreScanService: artifact
      StoreScanService->>OverrideUuidsService: security-report
      StoreScanService->>StoreFindingsService: store findings
    end
  end
```
#### Scenario 2: Merge request security widget
The second scenario relates to the merge request security widget.
Source code references:
- [MergeRequest](https://gitlab.com/gitlab-org/gitlab/-/blob/1172e63f2485b8f3690895a3798f067429d98732/app/models/merge_request.rb?page=2#L1975)
- [CompareSecurityReportsService](https://gitlab.com/gitlab-org/gitlab/-/blob/1172e63f2485b8f3690895a3798f067429d98732/ee/app/services/ci/compare_security_reports_service.rb#L10)
- [VulnerabilityReportsComparer](https://gitlab.com/gitlab-org/gitlab/-/blob/da6e2037cd494ac8b73bc3ee9e69009c4cdcf124/ee/lib/gitlab/ci/reports/security/vulnerability_reports_comparer.rb#L96)
The `VulnerabilityReportsComparer` compares the security findings of the default and non-default branches to compute the number of newly added and fixed findings.
To avoid re-displaying security findings that already correspond to existing vulnerability findings, this component [recalibrates the security finding UUIDs](https://gitlab.com/gitlab-org/gitlab/-/blob/master/ee/lib/gitlab/ci/reports/security/vulnerability_reports_comparer.rb#L70).
The logic implemented in the
[`UUIDOverrider`](https://gitlab.com/gitlab-org/gitlab/-/blob/1172e63f2485b8f3690895a3798f067429d98732/ee/lib/gitlab/ci/reports/security/vulnerability_reports_comparer.rb#L161)
is very similar to
[OverrideUuidsService](https://gitlab.com/gitlab-org/gitlab/-/blob/308529403c2d5ec0049b223cf444163bede4672e/ee/app/services/security/store_scan_service.rb#L47).
``` mermaid
sequenceDiagram
MergeRequestModel->>CompareSecurityReportsService: compare_sast_reports
CompareSecurityReportsService->>VulnerabilityReportsComparer: calculate_changes
```
#### Scenario 3: Report ingestion
This is the point where either a security finding becomes a vulnerability or the
vulnerability that corresponds to a security finding is updated. This scenario
becomes relevant when a pipeline triggered on the default branch upon merging a
non-default branch into the default branch. In our context, we are most
interested in those cases where we have security findings with
`overridden_uuid` set which implies that there was a clash with an already
existing vulnerability; `overridden_uuid` holds the UUID of the security
finding that was overridden by the corresponding vulnerability UUID.
The sequence below is executed to update the UUID of a vulnerability
(fingerprint). The recomputation takes place in the
`UpdateVulnerabilityUuids`, ultimately invoking a database update by means of
[`UpdateVulnerabilityUuidsVulnerabilityFinding` class](https://gitlab.com/gitlab-org/gitlab/-/blob/1b2cc434e43b533c0b393b8c319797e69745498e/ee/app/services/security/ingestion/tasks/update_vulnerability_uuids/vulnerability_findings.rb).
Source Code References:
- [IngestReportsService](https://gitlab.com/gitlab-org/gitlab/-/blob/1b2cc434e43b533c0b393b8c319797e69745498e/ee/app/services/security/ingestion/ingest_reports_service.rb#L55)
- [IngestReportService](https://gitlab.com/gitlab-org/gitlab/-/blob/1b2cc434e43b533c0b393b8c319797e69745498e/ee/app/services/security/ingestion/ingest_report_service.rb#L41)
- [IngestReportSliceService](https://gitlab.com/gitlab-org/gitlab/-/blob/1b2cc434e43b533c0b393b8c319797e69745498e/ee/app/services/security/ingestion/ingest_report_slice_service.rb#L37)
- [UpdateVulnerabilityUuids](https://gitlab.com/gitlab-org/gitlab/-/blob/1b2cc434e43b533c0b393b8c319797e69745498e/ee/app/services/security/ingestion/tasks/update_vulnerability_uuids.rb#L67)
- [FindingMap](https://gitlab.com/gitlab-org/gitlab/-/blob/1b2cc434e43b533c0b393b8c319797e69745498e/ee/app/services/security/ingestion/finding_map.rb)
``` mermaid
sequenceDiagram
IngestReportsService->>IngestReportService: security_scan
IngestReportService->>IngestReportSliceService: sliced security_scan
IngestReportSliceService->>UpdateVulnerabilityUuids: findings map
```
## Hierarchy: Why are algorithms prioritized and what is the impact of this prioritization?
The supported algorithms are defined in [`VulnerabilityFindingSignatureHelpers`](https://gitlab.com/gitlab-org/gitlab/-/blob/1172e63f2485b8f3690895a3798f067429d98732/app/models/concerns/vulnerability_finding_signature_helpers.rb). Algorithms are assigned priorities (the integer values in the map below). A higher priority indicates that an algorithm is considered better than a lower-priority algorithm. In other words, going from a lower-priority to a higher-priority algorithm corresponds to a `coarsening` (better deduplication performance), and going from a higher-priority to a lower-priority algorithm corresponds to a `refinement` (weaker deduplication performance).
``` ruby
ALGORITHM_TYPES = {
  hash: 1,
  location: 2,
  scope_offset: 3,
  scope_offset_compressed: 4,
  rule_value: 5
}.with_indifferent_access.freeze
```
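A minimal sketch of how this hierarchy can be applied when a finding carries signatures from several algorithms (illustrative only; it reuses the `ALGORITHM_TYPES` map above and assumes plain hashes rather than ActiveRecord models):
```ruby
# Illustrative only: prefer the signature produced by the highest-priority
# algorithm; matching on it gives the coarsest comparison the finding supports.
def highest_priority_signature(signatures)
  signatures.max_by { |signature| ALGORITHM_TYPES[signature[:algorithm]] }
end

highest_priority_signature([
  { algorithm: 'scope_offset', value: 'test.c|main()[0]:8' },
  { algorithm: 'scope_offset_compressed', value: 'test.c|main()[0]:5' }
])
# => { algorithm: "scope_offset_compressed", value: "test.c|main()[0]:5" }
```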
---
stage: none
group: unassigned
info: Any user with at least the Maintainer role can merge updates to this content. For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Testing levels
---

_This diagram demonstrates the relative priority of each test type we use. `e2e` stands for end-to-end._
As of 2025-02-03, we have the following estimated distribution of tests per level:
| Test level | Community Edition | Enterprise Edition | Community + Enterprise Edition |
|-------------------------------------------------------------------|-------------------|--------------------|--------------------------------|
| Black-box tests at the system level (aka end-to-end or QA tests) | 401 (0.14%) | 303 (0.10%) | 704 (0.24%) |
| White-box tests at the system level (aka system or feature tests) | 8,362 (2.90%) | 4,082 (1.41%) | 12,444 (4.31%) |
| Integration tests | 39,716 (13.76%) | 17,411 (6.03%) | 57,127 (19.79%) |
| Unit tests | 139,504 (48.32%) | 78,955 (27.35%) | 218,459 (75.66%) |
## Unit tests
Formal definition: <https://en.wikipedia.org/wiki/Unit_testing>
These kinds of tests ensure that a single unit of code (a method) works as
expected (given an input, it has a predictable output). These tests should be
as isolated as possible. For example, model methods that don't do anything
with the database shouldn't need a DB record. Classes that don't need database
records should use stubs/doubles as much as possible.
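For example, a minimal isolated unit test might look like this (the `Address` model and its `#to_s` method are hypothetical, used only for illustration):
```ruby
# Hypothetical example: an isolated unit test that needs no database record,
# because the method under test only formats in-memory attributes.
require 'spec_helper'

RSpec.describe Address do
  describe '#to_s' do
    it 'joins the street and the city' do
      address = described_class.new(street: '1 Main St', city: 'Springfield')

      expect(address.to_s).to eq('1 Main St, Springfield')
    end
  end
end
```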
| Code path | Tests path | Testing engine | Notes |
| --------- | ---------- | -------------- | ----- |
| `app/assets/javascripts/` | `spec/frontend/` | Jest | More details in the [Frontend Testing guide](frontend_testing.md) section. |
| `app/finders/` | `spec/finders/` | RSpec | |
| `app/graphql/` | `spec/graphql/` | RSpec | |
| `app/helpers/` | `spec/helpers/` | RSpec | |
| `app/models/` | `spec/models/` | RSpec | |
| `app/policies/` | `spec/policies/` | RSpec | |
| `app/presenters/` | `spec/presenters/` | RSpec | |
| `app/serializers/` | `spec/serializers/` | RSpec | |
| `app/services/` | `spec/services/` | RSpec | |
| `app/uploaders/` | `spec/uploaders/` | RSpec | |
| `app/validators/` | `spec/validators/` | RSpec | |
| `app/views/` | `spec/views/` | RSpec | |
| `app/workers/` | `spec/workers/` | RSpec | |
| `bin/` | `spec/bin/` | RSpec | |
| `config/` | `spec/config/` | RSpec | |
| `config/initializers/` | `spec/initializers/` | RSpec | |
| `config/routes.rb`, `config/routes/` | `spec/routing/` | RSpec | |
| `config/puma.example.development.rb` | `spec/rack_servers/` | RSpec | |
| `db/` | `spec/db/` | RSpec | |
| `db/{post_,}migrate/` | `spec/migrations/` | RSpec | More details in the [Testing Rails migrations guide](testing_migrations_guide.md). |
| `Gemfile` | `spec/dependencies/`, `spec/sidekiq/` | RSpec | |
| `lib/` | `spec/lib/` | RSpec | |
| `lib/tasks/` | `spec/tasks/` | RSpec | |
| `rubocop/` | `spec/rubocop/` | RSpec | |
| `spec/support/` | `spec/support_specs/` | RSpec | |
### Frontend unit tests
Unit tests are on the lowest abstraction level and typically test functionality
that is not directly perceivable by a user.
```mermaid
graph RL
plain[Plain JavaScript];
Vue[Vue Components];
feature-flags[Feature flags];
license-checks[License Checks];
plain---Vuex;
plain---GraphQL;
Vue---plain;
Vue---Vuex;
Vue---GraphQL;
browser---plain;
browser---Vue;
plain---backend;
Vuex---backend;
GraphQL---backend;
Vue---backend;
backend---database;
backend---feature-flags;
backend---license-checks;
class plain tested;
class Vuex tested;
classDef node color:#909090,fill:#f0f0f0,stroke-width:2px,stroke:#909090
classDef label stroke-width:0;
classDef tested color:#000000,fill:#a0c0ff,stroke:#6666ff,stroke-width:2px,stroke-dasharray: 5, 5;
subgraph " "
tested;
mocked;
class tested tested;
end
```
#### When to use unit tests
- **Exported functions and classes**:
Anything exported can be reused at various places in ways you have no control over.
You should document the expected behavior of the public interface with tests.
- **Vuex actions**:
Any Vuex action must work in a consistent way, independent of the component it is triggered from.
- **Vuex mutations**:
For complex Vuex mutations, you should separate the tests from other parts of the Vuex store to simplify problem-solving.
#### When not to use unit tests
- **Non-exported functions or classes**:
Anything not exported from a module can be considered private or an implementation detail, and doesn't need to be tested.
- **Constants**:
Testing the value of a constant means copying it, resulting in extra effort without additional confidence that the value is correct.
- **Vue components**:
Computed properties, methods, and lifecycle hooks can be considered an implementation detail of components, are implicitly covered by component tests, and don't need to be tested.
For more information, see the [official Vue guidelines](https://v1.test-utils.vuejs.org/guides/#getting-started).
#### What to mock in unit tests
- **State of the class under test**:
Modifying the state of the class under test directly rather than using methods of the class avoids side effects in test setup.
- **Other exported classes**:
Every class must be tested in isolation to prevent test scenarios from growing exponentially.
- **Single DOM elements if passed as parameters**:
For tests only operating on single DOM elements, rather than a whole page, creating these elements is cheaper than loading an entire HTML fixture.
- **All server requests**:
When running frontend unit tests, the backend may not be reachable, so all outgoing requests need to be mocked.
- **Asynchronous background operations**:
Background operations cannot be stopped or waited on, so they continue running in the following tests and cause side effects.
#### What not to mock in unit tests
- **Non-exported functions or classes**:
Everything that is not exported can be considered private to the module, and is implicitly tested through the exported classes and functions.
- **Methods of the class under test**:
By mocking methods of the class under test, the mocks are tested and not the real methods.
- **Utility functions (pure functions, or those that only modify parameters)**:
If a function has no side effects because it has no state, it is safe to not mock it in tests.
- **Full HTML pages**:
Avoid loading the HTML of a full page in unit tests, as it slows down tests.
### Frontend component tests
Component tests cover the state of a single component that is perceivable by a user depending on external signals such as user input, events fired from other components, or application state.
```mermaid
graph RL
plain[Plain JavaScript];
Vue[Vue Components];
feature-flags[Feature flags];
license-checks[License Checks];
plain---Vuex;
plain---GraphQL;
Vue---plain;
Vue---Vuex;
Vue---GraphQL;
browser---plain;
browser---Vue;
plain---backend;
Vuex---backend;
GraphQL---backend;
Vue---backend;
backend---database;
backend---feature-flags;
backend---license-checks;
class Vue tested;
classDef node color:#909090,fill:#f0f0f0,stroke-width:2px,stroke:#909090
classDef label stroke-width:0;
classDef tested color:#000000,fill:#a0c0ff,stroke:#6666ff,stroke-width:2px,stroke-dasharray: 5, 5;
subgraph " "
tested;
mocked;
class tested tested;
end
```
#### When to use component tests
- **Vue components**
#### When not to use component tests
- **Vue applications**:
Vue applications may contain many components.
Testing them at the component level requires too much effort.
Therefore they are tested at the frontend integration level.
- **HAML templates**:
HAML templates contain only markup and no frontend-side logic.
Therefore they are not complete components.
#### What to mock in component tests
- **Side effects**:
Anything that can change external state (for example, a network request) should be mocked.
- **Child components**:
Every component is tested individually, so child components are mocked.
See also [`shallowMount()`](https://v1.test-utils.vuejs.org/api/#shallowmount)
#### What not to mock in component tests
- **Methods or computed properties of the component under test**:
By mocking part of the component under test, the mocks are tested and not the real component.
- **Vuex**:
Keep Vuex unmocked to avoid fragile and false-positive tests.
Set the Vuex to a proper state using mutations.
Mock the side-effects, not the Vuex actions.
## Integration tests
Formal definition: <https://en.wikipedia.org/wiki/Integration_testing>
These kinds of tests ensure that individual parts of the application work well
together, without the overhead of the actual app environment (such as the browser).
These tests should assert at the request/response level: status code, headers,
body.
They're useful, for example, to test permissions, redirections, API endpoints, which view is rendered, and so forth.
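For example, a minimal request spec could assert on the response without inspecting internal state (the path and expected redirect are illustrative):
```ruby
# Illustrative request spec: assertions stay at the request/response level
# (status code, redirect target, body).
require 'spec_helper'

RSpec.describe 'Dashboard projects', type: :request do
  it 'redirects anonymous users to the sign-in page' do
    get '/dashboard/projects'

    expect(response).to redirect_to(new_user_session_path)
  end
end
```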
| Code path | Tests path | Testing engine | Notes |
| --------- | ---------- | -------------- | ----- |
| `app/controllers/` | `spec/requests/`, `spec/controllers` | RSpec | Request specs are preferred over legacy controller specs. Request specs are encouraged for API endpoints. |
| `app/mailers/` | `spec/mailers/` | RSpec | |
| `lib/api/` | `spec/requests/api/` | RSpec | |
| `app/assets/javascripts/` | `spec/frontend/` | Jest | [More details below](#frontend-integration-tests) |
### Frontend integration tests
Integration tests cover the interaction between all components on a single page.
Their abstraction level is comparable to how a user would interact with the UI.
```mermaid
graph RL
plain[Plain JavaScript];
Vue[Vue Components];
feature-flags[Feature flags];
license-checks[License Checks];
plain---Vuex;
plain---GraphQL;
Vue---plain;
Vue---Vuex;
Vue---GraphQL;
browser---plain;
browser---Vue;
plain---backend;
Vuex---backend;
GraphQL---backend;
Vue---backend;
backend---database;
backend---feature-flags;
backend---license-checks;
class plain tested;
class Vue tested;
class Vuex tested;
class GraphQL tested;
class browser tested;
linkStyle 0,1,2,3,4,5,6 stroke:#6666ff,stroke-width:2px,stroke-dasharray: 5, 5;
classDef node color:#909090,fill:#f0f0f0,stroke-width:2px,stroke:#909090
classDef label stroke-width:0;
classDef tested color:#000000,fill:#a0c0ff,stroke:#6666ff,stroke-width:2px,stroke-dasharray: 5, 5;
subgraph " "
tested;
mocked;
class tested tested;
end
```
#### When to use integration tests
- **Page bundles (`index.js` files in `app/assets/javascripts/pages/`)**:
Testing the page bundles ensures the corresponding frontend components integrate well.
- **Vue applications outside of page bundles**:
Testing Vue applications as a whole ensures the corresponding frontend components integrate well.
#### What to mock in integration tests
- **HAML views (use fixtures instead)**:
Rendering HAML views requires a Rails environment including a running database, which you cannot rely on in frontend tests.
- **All server requests**:
Similar to unit and component tests, when running component tests, the backend may not be reachable, so all outgoing requests must be mocked.
- **Asynchronous background operations that are not perceivable on the page**:
Background operations that affect the page must be tested on this level.
All other background operations cannot be stopped or waited on, so they continue running in the following tests and cause side effects.
#### What not to mock in integration tests
- **DOM**:
Testing on the real DOM ensures your components work in the intended environment.
Part of DOM testing is delegated to [cross-browser testing](https://gitlab.com/gitlab-org/quality/quality-engineering/team-tasks/-/issues/45).
- **Properties or state of components**:
On this level, all tests can only perform actions a user would do.
For example: to change the state of a component, a click event would be fired.
- **Vuex stores**:
When testing the frontend code of a page as a whole, the interaction between Vue components and Vuex stores is covered as well.
### About controller tests
GitLab is [transitioning from controller specs to request specs](https://gitlab.com/groups/gitlab-org/-/epics/5076).
In an ideal world, controllers should be thin. However, when this is not the
case, it's acceptable to write a system or feature test without JavaScript instead
of a controller test. Testing a fat controller usually involves a lot of stubbing, such as:
```ruby
controller.instance_variable_set(:@user, user)
```
and the use of methods [deprecated in Rails 5](https://gitlab.com/gitlab-org/gitlab/-/issues/16260).
## White-box tests at the system level (formerly known as System / Feature tests)
Formal definitions:
- <https://en.wikipedia.org/wiki/System_testing>
- <https://en.wikipedia.org/wiki/White-box_testing>
These kinds of tests ensure the GitLab Rails application (for example,
`gitlab-foss`/`gitlab`) works as expected from a browser point of view.
Note that:
- knowledge of the internals of the application is still required
- data needed for the tests is usually created directly using RSpec factories
- expectations are often set on the database or object state
These tests should only be used when:
- the functionality/component being tested is small
- the internal state of the objects/database needs to be tested
- it cannot be tested at a lower level
For instance, to test the breadcrumbs on a given page, writing a system test
makes sense since it's a small component, which cannot be tested at the unit or
controller level.
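A minimal sketch of such a feature spec (the CSS selector and factory traits are illustrative):
```ruby
# Illustrative feature spec: exercises the rendered page like a user would and
# only checks the small piece of UI under test.
require 'spec_helper'

RSpec.describe 'Project breadcrumbs' do
  let_it_be(:project) { create(:project, :public) }

  it 'shows the project name in the breadcrumbs' do
    visit project_path(project)

    page.within('.breadcrumbs') do
      expect(page).to have_link(project.name)
    end
  end
end
```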
Only test the happy path, but make sure to add a test case for any regression
that couldn't have been caught at lower levels with better tests (for example, if a
regression is found, regression tests should be added at the lowest level
possible).
| Tests path | Testing engine | Notes |
| ---------- | -------------- | ----- |
| `spec/features/` | [Capybara](https://github.com/teamcapybara/capybara) + [RSpec](https://github.com/rspec/rspec-rails#feature-specs) | If your test has the `:js` metadata, the browser driver is [Selenium](https://github.com/teamcapybara/capybara#selenium), otherwise it's using [RackTest](https://github.com/teamcapybara/capybara#racktest). |
### Frontend feature tests
In contrast to [frontend integration tests](#frontend-integration-tests), feature
tests make requests against the real backend instead of using fixtures.
This also implies that database queries are executed, which makes this category significantly slower.
See also:
- The [RSpec testing guidelines](best_practices.md#rspec).
- System / Feature tests in the [Testing Best Practices](best_practices.md#system--feature-tests).
```mermaid
graph RL
plain[Plain JavaScript];
Vue[Vue Components];
feature-flags[Feature flags];
license-checks[License Checks];
plain---Vuex;
plain---GraphQL;
Vue---plain;
Vue---Vuex;
Vue---GraphQL;
browser---plain;
browser---Vue;
plain---backend;
Vuex---backend;
GraphQL---backend;
Vue---backend;
backend---database;
backend---feature-flags;
backend---license-checks;
class backend tested;
class plain tested;
class Vue tested;
class Vuex tested;
class GraphQL tested;
class browser tested;
linkStyle 0,1,2,3,4,5,6,7,8,9,10 stroke:#6666ff,stroke-width:2px,stroke-dasharray: 5, 5;
classDef node color:#909090,fill:#f0f0f0,stroke-width:2px,stroke:#909090
classDef label stroke-width:0;
classDef tested color:#000000,fill:#a0c0ff,stroke:#6666ff,stroke-width:2px,stroke-dasharray: 5, 5;
subgraph " "
tested;
mocked;
class tested tested;
end
```
#### When to use feature tests
- Use cases that require a backend, and cannot be tested using fixtures.
- Behavior that is not part of a page bundle, but defined globally.
#### Relevant notes
A `:js` flag is added to the test to make sure the full environment is loaded:
```ruby
scenario 'successfully', :js do
  sign_in(create(:admin))
end
```
The steps of each test are written using [Capybara methods](https://www.rubydoc.info/gems/capybara).
XHR (XMLHttpRequest) calls might require you to use `wait_for_requests` between steps, such as:
```ruby
find('.form-control').native.send_keys(:enter)
wait_for_requests
expect(page).not_to have_selector('.card')
```
### Consider **not** writing a system test
If we're confident that the low-level components work well (and we should be if
we have enough Unit & Integration tests), we shouldn't need to duplicate their
thorough testing at the System test level.
It's very easy to add tests, but a lot harder to remove or improve them, so take
care not to introduce too many (slow and duplicated) tests.
The reasons why we should follow these best practices are as follows:
- System tests are slow to run because they spin up the entire application stack
in a headless browser, and even slower when they integrate a JS driver
- When system tests run with a JavaScript driver, the tests are run in a
different thread than the application. This means it does not share a
database connection and your test must commit the transactions in
order for the running application to see the data (and vice-versa). In that
case we need to truncate the database after each spec instead of
rolling back a transaction (the faster strategy that's in use for other kind
of tests). This is slower than transactions, however, so we want to use
truncation only when necessary.
## Black-box tests at the system level, aka end-to-end tests
Formal definitions:
- <https://en.wikipedia.org/wiki/System_testing>
- <https://en.wikipedia.org/wiki/Black-box_testing>
GitLab consists of [multiple pieces](../architecture.md#components) such as [GitLab Shell](https://gitlab.com/gitlab-org/gitlab-shell), [GitLab Workhorse](https://gitlab.com/gitlab-org/gitlab-workhorse),
[Gitaly](https://gitlab.com/gitlab-org/gitaly), [GitLab Pages](https://gitlab.com/gitlab-org/gitlab-pages), [GitLab Runner](https://gitlab.com/gitlab-org/gitlab-runner), and GitLab Rails. All these pieces
are configured and packaged by [Omnibus GitLab](https://gitlab.com/gitlab-org/omnibus-gitlab).
The QA framework and instance-level scenarios are [part of GitLab Rails](https://gitlab.com/gitlab-org/gitlab-foss/tree/master/qa) so that
they're always in-sync with the codebase (especially the views).
Note that:
- knowledge of the internals of the application is not required
- data needed for the tests can only be created using the GUI or the API
- expectations can only be made against the browser page and API responses
Every new feature should come with a [test plan](https://gitlab.com/gitlab-org/gitlab/-/tree/master/.gitlab/issue_templates/Test%20plan.md).
| Tests path | Testing engine | Notes |
| ---------- | -------------- | ----- |
| `qa/qa/specs/features/` | [Capybara](https://github.com/teamcapybara/capybara) + [RSpec](https://github.com/rspec/rspec-rails#feature-specs) + Custom QA framework | Tests should be placed under their corresponding [Product category](https://handbook.gitlab.com/handbook/product/categories/) |
> See [end-to-end tests](end_to_end/_index.md) for more information.
Note that `qa/spec` contains unit tests of the QA framework itself, not to be
confused with the application's [unit tests](#unit-tests) or
[end-to-end tests](#black-box-tests-at-the-system-level-aka-end-to-end-tests).
### Smoke tests
Smoke tests are quick tests that may be run at any time (especially after the
pre-deployment migrations).
These tests run against the UI and ensure that basic functionality is working.
> See [Smoke Tests](smoke.md) for more information.
### GitLab QA orchestrator
[GitLab QA orchestrator](https://gitlab.com/gitlab-org/gitlab-qa) is a tool that allows you to test that all these pieces integrate well together by building a Docker image for a given version of GitLab Rails and running end-to-end tests (using Capybara) against it.
Learn more in the [GitLab QA orchestrator README](https://gitlab.com/gitlab-org/gitlab-qa/tree/master/README.md).
## EE-specific tests
EE-specific tests follow the same organization, but under the `ee/spec` folder.
## How to test at the correct level?
As with many things in life, deciding what to test at each level of testing is a
trade-off:
- Unit tests are usually cheap, and you should consider them like the basement
of your house: you need them to be confident that your code is behaving
correctly. However, if you run only unit tests without integration / system
tests, you might [miss](https://twitter.com/ThePracticalDev/status/850748070698651649) the
[big](https://twitter.com/timbray/status/822470746773409794) /
[picture](https://twitter.com/withzombies/status/829716565834752000)!
- Integration tests are a bit more expensive, but don't abuse them. A system test
is often better than an integration test that is stubbing a lot of internals.
- System tests are expensive (compared to unit tests), even more if they require
a JavaScript driver. Make sure to follow the guidelines in the [Speed](best_practices.md#test-slowness)
section.
Another way to see it is to think about the "cost of tests". This is well
explained [in this article](https://medium.com/table-xi/high-cost-tests-and-high-value-tests-a86e27a54df#.2ulyh3a4e),
and the basic idea is that the cost of a test includes:
- The time it takes to write the test
- The time it takes to run the test every time the suite runs
- The time it takes to understand the test
- The time it takes to fix the test if it breaks and the underlying code is OK
- Possibly, the time it takes to change the code to make it testable.
### Frontend-related tests
There are cases where the behavior you are testing is not worth the time spent running the full application. For example, if you are testing styling, animation, edge cases, or small actions that don't involve the backend, you should write a [frontend integration test](https://gitlab.com/gitlab-org/gitlab/-/blob/master/spec/frontend_integration/README.md) instead.
---
[Return to Testing documentation](_index.md)
```mermaid
graph RL
plain[Plain JavaScript];
Vue[Vue Components];
feature-flags[Feature flags];
license-checks[License Checks];
plain---Vuex;
plain---GraphQL;
Vue---plain;
Vue---Vuex;
Vue---GraphQL;
browser---plain;
browser---Vue;
plain---backend;
Vuex---backend;
GraphQL---backend;
Vue---backend;
backend---database;
backend---feature-flags;
backend---license-checks;
class Vue tested;
classDef node color:#909090,fill:#f0f0f0,stroke-width:2px,stroke:#909090
classDef label stroke-width:0;
classDef tested color:#000000,fill:#a0c0ff,stroke:#6666ff,stroke-width:2px,stroke-dasharray: 5, 5;
subgraph " "
tested;
mocked;
class tested tested;
end
```
#### When to use component tests
- **Vue components**
#### When not to use component tests
- **Vue applications**:
Vue applications may contain many components.
Testing them at the component level requires too much effort.
Therefore, they are tested at the frontend integration level.
- **HAML templates**:
HAML templates contain only markup and no frontend logic.
Therefore, they are not complete components.
#### What to mock in component tests
- **Side effects**:
Anything that can change external state (for example, a network request) should be mocked.
- **Child components**:
Every component is tested individually, so child components are mocked.
See also [`shallowMount()`](https://v1.test-utils.vuejs.org/api/#shallowmount).
#### What not to mock in component tests
- **Methods or computed properties of the component under test**:
By mocking part of the component under test, the mocks are tested and not the real component.
- **Vuex**:
Keep Vuex unmocked to avoid fragile and false-positive tests.
Set the Vuex to a proper state using mutations.
Mock the side-effects, not the Vuex actions.
## Integration tests
Formal definition: <https://en.wikipedia.org/wiki/Integration_testing>
These kinds of tests ensure that individual parts of the application work well
together, without the overhead of the actual app environment (such as the browser).
These tests should assert at the request/response level: status code, headers,
body.
They're useful, for example, to test permissions, redirections, API endpoints, what view is rendered, and so forth.
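For example, a request spec can assert permissions at the request/response level without driving a browser. The following is an illustrative sketch only; the factories, route helper, and expected status are assumptions made for the example:

```ruby
# frozen_string_literal: true

require 'spec_helper'

RSpec.describe 'Project access', type: :request do
  let_it_be(:user) { create(:user) }
  let_it_be(:project) { create(:project, :private) }

  describe 'GET /:namespace/:project' do
    it 'hides private projects from users without access' do
      sign_in(user)

      get project_path(project)

      # Assert at the request/response level: status code, headers, body.
      expect(response).to have_gitlab_http_status(:not_found)
    end
  end
end
```

Because the spec asserts only on the response, it stays fast and does not depend on the rendered page structure.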
| Code path | Tests path | Testing engine | Notes |
| --------- | ---------- | -------------- | ----- |
| `app/controllers/` | `spec/requests/`, `spec/controllers` | RSpec | Request specs are preferred over legacy controller specs. Request specs are encouraged for API endpoints. |
| `app/mailers/` | `spec/mailers/` | RSpec | |
| `lib/api/` | `spec/requests/api/` | RSpec | |
| `app/assets/javascripts/` | `spec/frontend/` | Jest | [More details below](#frontend-integration-tests) |
### Frontend integration tests
Integration tests cover the interaction between all components on a single page.
Their abstraction level is comparable to how a user would interact with the UI.
```mermaid
graph RL
plain[Plain JavaScript];
Vue[Vue Components];
feature-flags[Feature flags];
license-checks[License Checks];
plain---Vuex;
plain---GraphQL;
Vue---plain;
Vue---Vuex;
Vue---GraphQL;
browser---plain;
browser---Vue;
plain---backend;
Vuex---backend;
GraphQL---backend;
Vue---backend;
backend---database;
backend---feature-flags;
backend---license-checks;
class plain tested;
class Vue tested;
class Vuex tested;
class GraphQL tested;
class browser tested;
linkStyle 0,1,2,3,4,5,6 stroke:#6666ff,stroke-width:2px,stroke-dasharray: 5, 5;
classDef node color:#909090,fill:#f0f0f0,stroke-width:2px,stroke:#909090
classDef label stroke-width:0;
classDef tested color:#000000,fill:#a0c0ff,stroke:#6666ff,stroke-width:2px,stroke-dasharray: 5, 5;
subgraph " "
tested;
mocked;
class tested tested;
end
```
#### When to use integration tests
- **Page bundles (`index.js` files in `app/assets/javascripts/pages/`)**:
Testing the page bundles ensures the corresponding frontend components integrate well.
- **Vue applications outside of page bundles**:
Testing Vue applications as a whole ensures the corresponding frontend components integrate well.
#### What to mock in integration tests
- **HAML views (use fixtures instead)**:
Rendering HAML views requires a Rails environment including a running database, which you cannot rely on in frontend tests.
- **All server requests**:
  Similar to unit and component tests, when running integration tests, the backend may not be reachable, so all outgoing requests must be mocked.
- **Asynchronous background operations that are not perceivable on the page**:
Background operations that affect the page must be tested on this level.
All other background operations cannot be stopped or waited on, so they continue running in the following tests and cause side effects.
#### What not to mock in integration tests
- **DOM**:
Testing on the real DOM ensures your components work in the intended environment.
Part of DOM testing is delegated to [cross-browser testing](https://gitlab.com/gitlab-org/quality/quality-engineering/team-tasks/-/issues/45).
- **Properties or state of components**:
On this level, all tests can only perform actions a user would do.
For example: to change the state of a component, a click event would be fired.
- **Vuex stores**:
When testing the frontend code of a page as a whole, the interaction between Vue components and Vuex stores is covered as well.
### About controller tests
GitLab is [transitioning from controller specs to request specs](https://gitlab.com/groups/gitlab-org/-/epics/5076).
In an ideal world, controllers should be thin. However, when this is not the
case, it's acceptable to write a system or feature test without JavaScript instead
of a controller test. Testing a fat controller usually involves a lot of stubbing, such as:
```ruby
controller.instance_variable_set(:@user, user)
```
and the use of methods [deprecated in Rails 5](https://gitlab.com/gitlab-org/gitlab/-/issues/16260).
## White-box tests at the system level (formerly known as System / Feature tests)
Formal definitions:
- <https://en.wikipedia.org/wiki/System_testing>
- <https://en.wikipedia.org/wiki/White-box_testing>
These kinds of tests ensure the GitLab Rails application (for example,
`gitlab-foss`/`gitlab`) works as expected from a browser point of view.
Note that:
- knowledge of the internals of the application is still required
- data needed for the tests is usually created directly using RSpec factories
- expectations are often set on the database or object state
These tests should only be used when:
- the functionality/component being tested is small
- the internal state of the objects/database needs to be tested
- it cannot be tested at a lower level
For instance, to test the breadcrumbs on a given page, writing a system test
makes sense since it's a small component, which cannot be tested at the unit or
controller level.
Only test the happy path, but make sure to add a test case for any regression
that couldn't have been caught at lower levels with better tests (for example, if a
regression is found, regression tests should be added at the lowest level
possible).
| Tests path | Testing engine | Notes |
| ---------- | -------------- | ----- |
| `spec/features/` | [Capybara](https://github.com/teamcapybara/capybara) + [RSpec](https://github.com/rspec/rspec-rails#feature-specs) | If your test has the `:js` metadata, the browser driver is [Selenium](https://github.com/teamcapybara/capybara#selenium), otherwise it's using [RackTest](https://github.com/teamcapybara/capybara#racktest). |
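A minimal feature spec following this pattern might look like the sketch below. The selector and page content are assumptions made for the example; real specs should use the helpers and selectors of the page under test:

```ruby
# frozen_string_literal: true

require 'spec_helper'

RSpec.describe 'Project breadcrumbs', :js do
  let_it_be(:user) { create(:user) }
  let_it_be(:project) { create(:project, namespace: user.namespace) }

  before do
    sign_in(user)
  end

  it 'shows the project name in the breadcrumbs' do
    visit project_path(project)

    # '.breadcrumbs' is an assumed selector for the sake of the example.
    within('.breadcrumbs') do
      expect(page).to have_link(project.name)
    end
  end
end
```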
### Frontend feature tests
In contrast to [frontend integration tests](#frontend-integration-tests), feature
tests make requests against the real backend instead of using fixtures.
This also implies that database queries are executed, which makes this category significantly slower.
See also:
- The [RSpec testing guidelines](best_practices.md#rspec).
- System / Feature tests in the [Testing Best Practices](best_practices.md#system--feature-tests).
```mermaid
graph RL
plain[Plain JavaScript];
Vue[Vue Components];
feature-flags[Feature flags];
license-checks[License Checks];
plain---Vuex;
plain---GraphQL;
Vue---plain;
Vue---Vuex;
Vue---GraphQL;
browser---plain;
browser---Vue;
plain---backend;
Vuex---backend;
GraphQL---backend;
Vue---backend;
backend---database;
backend---feature-flags;
backend---license-checks;
class backend tested;
class plain tested;
class Vue tested;
class Vuex tested;
class GraphQL tested;
class browser tested;
linkStyle 0,1,2,3,4,5,6,7,8,9,10 stroke:#6666ff,stroke-width:2px,stroke-dasharray: 5, 5;
classDef node color:#909090,fill:#f0f0f0,stroke-width:2px,stroke:#909090
classDef label stroke-width:0;
classDef tested color:#000000,fill:#a0c0ff,stroke:#6666ff,stroke-width:2px,stroke-dasharray: 5, 5;
subgraph " "
tested;
mocked;
class tested tested;
end
```
#### When to use feature tests
- Use cases that require a backend, and cannot be tested using fixtures.
- Behavior that is not part of a page bundle, but defined globally.
#### Relevant notes
A `:js` flag is added to the test to make sure the full environment is loaded:
```ruby
scenario 'successfully', :js do
sign_in(create(:admin))
end
```
The steps of each test are written using [Capybara methods](https://www.rubydoc.info/gems/capybara).
XHR (XMLHttpRequest) calls might require you to use `wait_for_requests` in between steps, such as:
```ruby
find('.form-control').native.send_keys(:enter)
wait_for_requests
expect(page).not_to have_selector('.card')
```
### Consider **not** writing a system test
If we're confident that the low-level components work well (and we should be if
we have enough Unit & Integration tests), we shouldn't need to duplicate their
thorough testing at the System test level.
It's very easy to add tests, but a lot harder to remove or improve them, so take
care not to introduce too many (slow and duplicated) tests.
The reasons why we should follow these best practices are as follows:
- System tests are slow to run because they spin up the entire application stack
  in a headless browser, and even slower when they integrate a JavaScript driver.
- When system tests run with a JavaScript driver, the tests are run in a
different thread than the application. This means it does not share a
database connection and your test must commit the transactions in
order for the running application to see the data (and vice-versa). In that
case we need to truncate the database after each spec instead of
rolling back a transaction (the faster strategy that's in use for other kind
of tests). This is slower than transactions, however, so we want to use
truncation only when necessary.
## Black-box tests at the system level, aka end-to-end tests
Formal definitions:
- <https://en.wikipedia.org/wiki/System_testing>
- <https://en.wikipedia.org/wiki/Black-box_testing>
GitLab consists of [multiple pieces](../architecture.md#components) such as [GitLab Shell](https://gitlab.com/gitlab-org/gitlab-shell), [GitLab Workhorse](https://gitlab.com/gitlab-org/gitlab-workhorse),
[Gitaly](https://gitlab.com/gitlab-org/gitaly), [GitLab Pages](https://gitlab.com/gitlab-org/gitlab-pages), [GitLab Runner](https://gitlab.com/gitlab-org/gitlab-runner), and GitLab Rails. All these pieces
are configured and packaged by [Omnibus GitLab](https://gitlab.com/gitlab-org/omnibus-gitlab).
The QA framework and instance-level scenarios are [part of GitLab Rails](https://gitlab.com/gitlab-org/gitlab-foss/tree/master/qa) so that
they're always in-sync with the codebase (especially the views).
Note that:
- knowledge of the internals of the application is not required
- data needed for the tests can only be created using the GUI or the API
- expectations can only be made against the browser page and API responses
Every new feature should come with a [test plan](https://gitlab.com/gitlab-org/gitlab/-/tree/master/.gitlab/issue_templates/Test%20plan.md).
| Tests path | Testing engine | Notes |
| ---------- | -------------- | ----- |
| `qa/qa/specs/features/` | [Capybara](https://github.com/teamcapybara/capybara) + [RSpec](https://github.com/rspec/rspec-rails#feature-specs) + Custom QA framework | Tests should be placed under their corresponding [Product category](https://handbook.gitlab.com/handbook/product/categories/) |
> See [end-to-end tests](end_to_end/_index.md) for more information.
Note that `qa/spec` contains unit tests of the QA framework itself, not to be
confused with the application's [unit tests](#unit-tests) or
[end-to-end tests](#black-box-tests-at-the-system-level-aka-end-to-end-tests).
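To give a feel for the difference in style, an end-to-end spec drives the GUI or API of a running GitLab instance rather than creating records with factories. The following is only a rough sketch; the exact page objects, resource classes, and metadata live under `qa/` and may differ from the names used here:

```ruby
# frozen_string_literal: true

module QA
  RSpec.describe 'Plan' do
    describe 'Issue creation' do
      it 'creates an issue through the API and views it in the UI' do
        Flow::Login.sign_in

        # Test data is fabricated through the GUI or API, never with factories.
        issue = Resource::Issue.fabricate_via_api! do |resource|
          resource.title = 'My end-to-end test issue'
        end

        issue.visit!

        # Expectations are made only against the browser page.
        Page::Project::Issue::Show.perform do |show|
          expect(show).to have_content('My end-to-end test issue')
        end
      end
    end
  end
end
```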
### Smoke tests
Smoke tests are quick tests that may be run at any time (especially after the
pre-deployment migrations).
These tests run against the UI and ensure that basic functionality is working.
> See [Smoke Tests](smoke.md) for more information.
### GitLab QA orchestrator
[GitLab QA orchestrator](https://gitlab.com/gitlab-org/gitlab-qa) is a tool that allows you to test that all these pieces integrate well together by building a Docker image for a given version of GitLab Rails and running end-to-end tests (using Capybara) against it.
Learn more in the [GitLab QA orchestrator README](https://gitlab.com/gitlab-org/gitlab-qa/tree/master/README.md).
## EE-specific tests
EE-specific tests follow the same organization, but under the `ee/spec` folder.
## How to test at the correct level?
As with many things in life, deciding what to test at each level of testing is a
trade-off:
- Unit tests are usually cheap, and you should consider them like the basement
of your house: you need them to be confident that your code is behaving
  correctly. However, if you run only unit tests without integration / system
tests, you might [miss](https://twitter.com/ThePracticalDev/status/850748070698651649) the
[big](https://twitter.com/timbray/status/822470746773409794) /
  [picture](https://twitter.com/withzombies/status/829716565834752000)!
- Integration tests are a bit more expensive, but don't abuse them. A system test
is often better than an integration test that is stubbing a lot of internals.
- System tests are expensive (compared to unit tests), even more if they require
a JavaScript driver. Make sure to follow the guidelines in the [Speed](best_practices.md#test-slowness)
section.
Another way to see it is to think about the "cost of tests". This is well
explained [in this article](https://medium.com/table-xi/high-cost-tests-and-high-value-tests-a86e27a54df#.2ulyh3a4e),
and the basic idea is that the cost of a test includes:
- The time it takes to write the test
- The time it takes to run the test every time the suite runs
- The time it takes to understand the test
- The time it takes to fix the test if it breaks and the underlying code is OK
- Sometimes, the time it takes to change the code to make it testable.
### Frontend-related tests
There are cases where the behavior you are testing is not worth the time spent
running the full application. For example, if you are testing styling, animations,
edge cases, or small actions that don't involve the backend,
you should write a test using [Frontend integration tests](https://gitlab.com/gitlab-org/gitlab/-/blob/master/spec/frontend_integration/README.md) instead.
---
[Return to Testing documentation](_index.md)