---
stage: GitLab Delivery
group: Self Managed
info: Any user with at least the Maintainer role can merge updates to this content. For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Application limits development
breadcrumbs:
- doc
- development
---
This document provides a development guide for contributors to add application
limits to GitLab.
## Documentation
First, gather information and decide which limits should be set for the different
GitLab tiers. Coordinate with others to [document](../administration/instance_limits.md)
and communicate those limits.
There is a guide about [introducing application limits](https://handbook.gitlab.com/handbook/product/product-processes/#introducing-application-limits).
## Implement plan limits
### Insert database plan limits
In the `plan_limits` table, create a new column and insert the limit values.
It's recommended to create two separate migration script files.
1. Add a new column to the `plan_limits` table with a non-null default value that
represents the desired limit, such as:
```ruby
add_column(:plan_limits, :project_hooks, :integer, default: 100, null: false)
```
Plan limits entries set to `0` mean that limits are not enabled. You should
use this setting only in special and documented circumstances.
1. (Optionally) Create the database migration that fine-tunes each level with a
desired limit using the `create_or_update_plan_limit` migration helper.
The plans in this migration should match the [plans on GitLab.com](#subscription-plans).
If a plan is missed, customers on that plan would receive the default limit, which might be
`0` (unlimited).
For example:
```ruby
class InsertProjectHooksPlanLimits < Gitlab::Database::Migration[2.1]
def up
create_or_update_plan_limit('project_hooks', 'default', 0)
create_or_update_plan_limit('project_hooks', 'free', 10)
create_or_update_plan_limit('project_hooks', 'premium', 30)
create_or_update_plan_limit('project_hooks', 'premium_trial', 30)
create_or_update_plan_limit('project_hooks', 'ultimate', 100)
create_or_update_plan_limit('project_hooks', 'ultimate_trial', 100)
create_or_update_plan_limit('project_hooks', 'ultimate_trial_paid_customer', 100)
create_or_update_plan_limit('project_hooks', 'opensource', 100)
end
def down
create_or_update_plan_limit('project_hooks', 'default', 0)
create_or_update_plan_limit('project_hooks', 'free', 0)
create_or_update_plan_limit('project_hooks', 'premium', 0)
create_or_update_plan_limit('project_hooks', 'premium_trial', 0)
create_or_update_plan_limit('project_hooks', 'ultimate', 0)
create_or_update_plan_limit('project_hooks', 'ultimate_trial', 0)
create_or_update_plan_limit('project_hooks', 'ultimate_trial_paid_customer', 0)
create_or_update_plan_limit('project_hooks', 'opensource', 0)
end
end
```
Some plans exist only on GitLab.com. This is a no-op for plans
that do not exist.
To set limits in your migration only for GitLab.com and allow other instances
to use the default limits, add `return unless Gitlab.com?` to the start of
the `#up` and `#down` methods to make the migration a no-op for other instances.
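For illustration, a trimmed-down sketch of the earlier migration with that guard in place (only two of the plans shown):
```ruby
class InsertProjectHooksPlanLimits < Gitlab::Database::Migration[2.1]
  def up
    # Other instances keep the default limit from the column definition.
    return unless Gitlab.com?

    create_or_update_plan_limit('project_hooks', 'free', 10)
    create_or_update_plan_limit('project_hooks', 'ultimate', 100)
  end

  def down
    return unless Gitlab.com?

    create_or_update_plan_limit('project_hooks', 'free', 0)
    create_or_update_plan_limit('project_hooks', 'ultimate', 0)
  end
end
```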
### Plan limits validation
#### Get current limit
You can access the current limit through the project or the namespace, for example:
```ruby
project.actual_limits.project_hooks
```
#### Check current limit
There is one method, `PlanLimits#exceeded?`, to check whether the current limit is being
exceeded. You can pass either an `ActiveRecord` object or an `Integer`.
With an `ActiveRecord` object, it ensures that the count of the records does not exceed the defined limit, such as:
```ruby
project.actual_limits.exceeded?(:project_hooks, ProjectHook.where(project: project))
```
With an `Integer`, it ensures that the number does not exceed the defined limit, such as:
```ruby
project.actual_limits.exceeded?(:project_hooks, 10)
```
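For context, a sketch of how this check might be used as a guard in a hypothetical service object (this exact service does not exist; it only illustrates calling `exceeded?` before creating a record):
```ruby
def execute
  limit_exceeded = project.actual_limits.exceeded?(
    :project_hooks, ProjectHook.where(project: project)
  )
  # Refuse to create another record once the plan limit is reached.
  return ServiceResponse.error(message: 'Project hook limit exceeded') if limit_exceeded

  ServiceResponse.success
end
```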
#### `Limitable` concern
The [`Limitable` concern](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/models/concerns/limitable.rb)
can be used to validate that a model does not exceed the limits. It ensures
that the count of the records for the current model does not exceed the defined
limit.
You must specify the limit scope of the object being validated
and the limit name if it's different from the pluralized model name.
```ruby
class ProjectHook
include Limitable
self.limit_name = 'project_hooks' # Optional as ProjectHook corresponds with project_hooks
self.limit_scope = :project
end
```
To test the model, you can include the shared examples.
```ruby
it_behaves_like 'includes Limitable concern' do
subject { build(:project_hook, project: create(:project)) }
end
```
### Testing instance-wide limits
Instance-wide features always use the `default` plan, because instance-wide features
do not have a license assigned.
```ruby
class InstanceVariable
include Limitable
self.limit_name = 'instance_variables' # Optional as InstanceVariable corresponds with instance_variables
self.limit_scope = Limitable::GLOBAL_SCOPE
end
```
### Subscription Plans
GitLab Self-Managed:
- `default`: Everyone.
GitLab.com:
- `default`: Any system-wide feature.
- `free`: Namespaces and projects with a Free subscription.
- `premium`: Namespaces and projects with a Premium subscription.
- `premium_trial`: Namespaces and projects with a Premium Trial subscription.
- `ultimate`: Namespaces and projects with an Ultimate subscription.
- `ultimate_trial`: Namespaces and projects with an Ultimate Trial subscription.
- `ultimate_trial_paid_customer`: Namespaces and projects on a Premium subscription that are trialling Ultimate for 30 days.
- `opensource`: Namespaces and projects that are members of the GitLab Open Source program.
There is an `early_adopter` plan on GitLab.com that has no subscriptions.
The `test` environment doesn't have any plans.
## Implement rate limits using `Rack::Attack`
We use the [`Rack::Attack`](https://github.com/rack/rack-attack) middleware to throttle Rack requests.
This applies to Rails controllers, Grape endpoints, and any other Rack requests.
The process for adding a new throttle is loosely:
1. Add new fields to the [rate_limits JSONB column](https://gitlab.com/gitlab-org/gitlab/-/blob/63b37287ae028842fcdcf56d311e6bb0c7e09e79/app/models/application_setting.rb#L603)
in the `ApplicationSetting` model.
1. Update the JSON schema validator for the [rate_limits column](https://gitlab.com/gitlab-org/gitlab/-/blob/63b37287ae028842fcdcf56d311e6bb0c7e09e79/app/validators/json_schemas/application_setting_rate_limits.json).
1. Extend `Gitlab::RackAttack` and `Gitlab::RackAttack::Request` to configure the new rate limit,
and apply it to the desired requests (a generic throttle sketch follows this list).
1. Add the new settings to the **Admin** area form in `app/views/admin/application_settings/_ip_limits.html.haml`.
1. Document the new settings in [User and IP rate limits](../administration/settings/user_and_ip_rate_limits.md) and [Application settings API](../api/settings.md).
1. Configure the rate limit for GitLab.com and document it in [rate limits on GitLab.com](../user/gitlab_com/_index.md#rate-limits-on-gitlabcom).
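For orientation, the throttle from step 3 is ultimately expressed through the standard `Rack::Attack` API. A generic sketch, not the GitLab-specific wiring (which lives in `Gitlab::RackAttack`), with a made-up throttle name and path:
```ruby
# Allow at most 10 matching requests per 60 seconds per client IP.
Rack::Attack.throttle('throttle_my_new_endpoint', limit: 10, period: 60) do |request|
  # Return a discriminator (here, the client IP) to count the request,
  # or nil to leave the request unthrottled.
  request.ip if request.path.start_with?('/api/v4/my_endpoint')
end
```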
Refer to these past issues for implementation details:
- [Create a separate rate limit for the Files API](https://gitlab.com/gitlab-org/gitlab/-/issues/335075).
- [Create a separate rate limit for unauthenticated API traffic](https://gitlab.com/gitlab-org/gitlab/-/issues/335300).
## Implement rate limits using `Gitlab::ApplicationRateLimiter`
This module implements a custom rate limiter that can be used to throttle
certain actions. Unlike `Rack::Attack` and `Rack::Throttle`, which operate at
the middleware level, this can be used at the controller or API level.
See the `CheckRateLimit` concern for use in controllers. In other parts of the code
the `Gitlab::ApplicationRateLimiter` module can be called directly.
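As a rough sketch of direct usage (the `:my_action` key, its scope, and the response handling are hypothetical; the key would need to be registered in the rate limiter's definitions):
```ruby
if Gitlab::ApplicationRateLimiter.throttled?(:my_action, scope: [current_user, project])
  head :too_many_requests # respond with HTTP 429
else
  perform_my_action # placeholder for the throttled work
end
```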
## Next rate limiting architecture
In May 2022, we started working on the next iteration of our application
limits framework, using a forward-looking rate limiting architecture.
We are still defining new requirements and designing the next
architecture, so if you need new functionality to add new limits, instead of
building it right now, consider contributing to the
[Rate Limiting Architecture Working Group](https://handbook.gitlab.com/handbook/company/working-groups/rate-limit-architecture/).
Examples of features we might want to build into the next iteration of the
rate limiting architecture:
1. Making it possible to define and override limits per namespace / per plan.
1. Automatically generating documentation about what limits are implemented and
what the defaults are.
1. Defining limits in a single place that can be found and explored.
1. Soft and hard limits, with support for notifying users when a limit is
approaching.
---
stage: Tenant Scale
group: Organizations
info: Any user with at least the Maintainer role can merge updates to this content. For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Image scaling guide
breadcrumbs:
- doc
- development
---
This section contains a brief overview of the GitLab image scaler and how to work with it.
For a general introduction to the history of image scaling at GitLab, you might be interested in
[this Unfiltered blog post](https://about.gitlab.com/blog/2020/11/02/scaling-down-how-we-prototyped-an-image-scaler-at-gitlab/).
## Why image scaling?
Since version 13.6, GitLab scales down images on demand to reduce the page data footprint.
This both reduces the amount of data "on the wire", but also helps with rendering performance,
since the browser has less work to do.
## When do we scale images?
Generally, the image scaler is triggered whenever a client requests an image resource by adding
the `width` parameter to the query string. However, we only scale images of certain kinds and formats.
Whether we allow an image to be rescaled or not is decided by a combination of hard-coded rules and configuration settings.
The hard-coded rules only permit:
- [Project, group and user avatars](https://gitlab.com/gitlab-org/gitlab/-/blob/fd08748862a5fe5c25b919079858146ea85843ae/app/controllers/concerns/send_file_upload.rb#L65-67)
- [PNG or JPEG images](https://gitlab.com/gitlab-org/gitlab/-/blob/5dff8fa3814f2a683d8884f468cba1ec06a60972/lib/gitlab/file_type_detection.rb#L23)
- [Specific dimensions](https://gitlab.com/gitlab-org/gitlab/-/blob/5dff8fa3814f2a683d8884f468cba1ec06a60972/app/models/concerns/avatarable.rb#L6)
Furthermore, configuration in Workhorse can lead to the image scaler rejecting a request if:
- The image file is too large. We only rescale images that do not exceed a configured size in bytes (controlled by [`max_filesize`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/workhorse/config.toml.example#L22)).
- Too many image scalers are already running (controlled by [`max_scaler_procs`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/workhorse/config.toml.example#L21)).
For instance, here are two different URLs that serve the GitLab project avatar both in its
original size and scaled down to 64 pixels. Only the second request will trigger the image scaler:
- [`https://gitlab.com/gitlab-com/www-gitlab-com/-/blob/master/source/images/gitlab-logo-extra-whitespace.png`](https://gitlab.com/gitlab-com/www-gitlab-com/-/blob/master/source/images/gitlab-logo-extra-whitespace.png)
- [`https://gitlab.com/gitlab-com/www-gitlab-com/-/blob/master/source/images/gitlab-logo-extra-whitespace.png?width=64`](https://gitlab.com/gitlab-com/www-gitlab-com/-/blob/master/source/images/gitlab-logo-extra-whitespace.png?width=64)
## Where do we scale images?
Rails and Workhorse currently collaborate to rescale images. This is a common implementation and performance
pattern in GitLab: important business logic such as request authentication and validation
happens in Rails, whereas the "heavy lifting", scaling and serving the binary data, happens in Workhorse.
The overall request flow is as follows:
```mermaid
sequenceDiagram
Client->>+Workhorse: GET /uploads/-/system/project/avatar/278964/logo-extra-whitespace.png?width=64
Workhorse->>+Rails: forward request
Rails->>+Rails: validate request
Rails->>+Rails: resolve image location
Rails-->>-Workhorse: Gitlab-Workhorse-Send-Data: send-scaled-image
Workhorse->>+Workhorse: invoke image scaler
Workhorse-->>-Client: 200 OK
```
### Rails
Currently, image scaling is limited to `Upload` entities, specifically avatars as mentioned above.
Therefore, all image scaling related logic in Rails is currently found in the
[`send_file_upload`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/controllers/concerns/send_file_upload.rb)
controller mixin. Upon receiving a request coming from a client through Workhorse, we check whether
it should trigger the image scaler as per the criteria mentioned above, and if so, render a special response
header field (`Gitlab-Workhorse-Send-Data`) with the necessary parameters for Workhorse to carry
out the scaling request. If Rails decides the request does not constitute a valid image scaling request,
we follow the path we take to serve any ordinary upload.
### Workhorse
Assuming Rails decides the request is valid, Workhorse takes over. Upon receiving the `send-scaled-image`
instruction through the Rails response, a [special response injector](https://gitlab.com/gitlab-org/gitlab/-/blob/master/workhorse/internal/imageresizer/image_resizer.go)
will be invoked that knows how to rescale images. The only inputs it requires are the location of the image
(a path if the image resides in block storage, or a URL to remote storage otherwise) and the desired width.
Workhorse will handle the location transparently so Rails does not need to be concerned with where the image
actually resides.
In addition to request validation in Rails, Workhorse runs several pre-condition checks to ensure that
we can actually rescale the image, such as making sure we would not exceed the scaler process budget and
that the file meets the configured maximum allowed size (to keep memory consumption in check).
To actually scale the image, Workhorse finally forks into a child process that performs the
scaling work and streams the result back to the client.
#### Caching rescaled images
We currently do not store rescaled images anywhere; the scaler runs every time a smaller version is requested.
However, Workhorse implements standard conditional HTTP request strategies that allow us to skip the scaler
if the image in the client cache is up-to-date.
To that end we transmit a `Last-Modified` header field carrying the UTC
timestamp of the original image file and match it against the `If-Modified-Since` header field in client requests.
Only if the original image has changed and rescaling becomes necessary do we run the scaler again.
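As an illustration of that strategy (header values made up), a client revalidating a previously scaled avatar might see an exchange like this, where the `304` response skips the scaler entirely:
```plaintext
GET /uploads/-/system/project/avatar/278964/logo-extra-whitespace.png?width=64 HTTP/1.1
If-Modified-Since: Wed, 21 Oct 2020 07:28:00 GMT

HTTP/1.1 304 Not Modified
Last-Modified: Wed, 21 Oct 2020 07:28:00 GMT
```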
---
stage: Monitor
group: Platform Insights
info: Any user with at least the Maintainer role can merge updates to this content. For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: GitLab Prometheus metrics development guidelines
breadcrumbs:
- doc
- development
---
GitLab provides [Prometheus metrics](../administration/monitoring/prometheus/gitlab_metrics.md)
to monitor itself.
## Adding a new metric
This section describes how to add new metrics for self-monitoring
([example](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/15440)).
1. Select the [type of metric](https://gitlab.com/gitlab-org/ruby/gems/prometheus-client-mmap#metrics):
- `Gitlab::Metrics.counter`
- `Gitlab::Metrics.gauge`
- `Gitlab::Metrics.histogram`
- `Gitlab::Metrics.summary`
1. Select the appropriate name for your metric. Refer to the guidelines
for [Prometheus metric names](https://prometheus.io/docs/practices/naming/#metric-names).
1. Update the list of [GitLab Prometheus metrics](../administration/monitoring/prometheus/gitlab_metrics.md).
1. Carefully choose the labels you want to add to your metric. Values with high cardinality,
such as `project_path` or `project_id`, are strongly discouraged because each set of labels
is exposed as a new entry in the `/metrics` endpoint, which can affect service availability.
For example, a histogram with 10 buckets and a label with 100 values would generate 1000
entries in the export endpoint.
1. Trigger the relevant page or code that records the new metric (a minimal registration sketch follows this list).
1. Check that the new metric appears at `/-/metrics`.
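For example, a minimal sketch of registering and recording a counter (the metric name is made up and not an existing GitLab metric):
```ruby
# Hypothetical metric, for illustration only.
counter = Gitlab::Metrics.counter(
  :gitlab_amazing_feature_actions_total,
  'Count of amazing feature actions performed'
)

# Increment the counter wherever the measured action happens.
counter.increment
```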
For metrics that are not bound to a specific context (`request`, `process`, `machine`, `namespace`, and so on),
generate them from a cron-based Sidekiq job:
- For Geo related metrics, check `Geo::MetricsUpdateService`.
- For other global, instance-wide metrics, check `Metrics::GlobalMetricsUpdateService`.
{{< alert type="warning" >}}
When exporting metrics from Sidekiq in a multi-instance deployment:
- The same exporter is not guaranteed to be queried consistently.
- This is especially problematic for gauge metrics, as each Sidekiq worker will continue reporting the last recorded value
until that specific worker runs the metric collection code again.
- This can lead to inconsistent or stale metrics data across your monitoring system.
For more reliable metrics collection, consider creating the exporter as a custom exporter
in [`gitlab-exporter`](https://gitlab.com/gitlab-org/ruby/gems/gitlab-exporter/).
{{< /alert >}}
For more details, see [issue 406583](https://gitlab.com/gitlab-org/gitlab/-/issues/406583),
where we also discuss a possible solution using a push-gateway.
---
stage: none
group: unassigned
info: Any user with at least the Maintainer role can merge updates to this content. For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Callouts
breadcrumbs:
- doc
- development
---
Callouts are a mechanism for presenting notifications to users. Users can dismiss the notifications, and the notifications can stay dismissed for a predefined duration. Notification dismissal is persistent across page loads and different user devices.
## Callout contexts
**Global context**: Callouts can be displayed to a user regardless of where they are in the application. For example, we can show a notification that reminds the user to have two-factor authentication recovery codes stored in a safe place. Dismissing this type of callout is effective for the particular user across the whole GitLab instance, no matter where they encountered the callout.
**Group and project contexts**: Callouts can also be displayed to a specific user and have a particular context binding, like a group or a project context. For example, group owners can be notified that their group is running out of available seats. Dismissing that callout would be effective for the particular user only in this particular group, while they would still see the same callout in other groups, if applicable.
Regardless of the context, dismissing a callout is only effective for the given user. Other users still see their relevant callouts.
## Callout IDs
Callouts use unique names to identify them, and a unique value to store dismissals data. For example:
```ruby
amazing_alert: 42,
```
Here `amazing_alert` is the callout ID, and `42` is a unique number to be used to register dismissals in the database. Here's how a group callout would be saved:
```plaintext
id | user_id | group_id | feature_name | dismissed_at
----+---------+----------+--------------+-------------------------------
0 | 1 | 4 | 42 | 2025-05-21 00:00:00.000000+00
```
To create a new callout ID, add a new key to the `feature_name` enum in the relevant context type registry file, using a unique name and a sequential value:
- Global context: `app/models/users/callout.rb`. Callouts are dismissed by a user globally. Related notifications would not be displayed anywhere in the GitLab instance for that user.
- Group context: `app/models/users/group_callout.rb`. Callouts are dismissed by a user in a given group. Related notifications are still shown to the user in other groups.
- Project context: `app/models/users/project_callout.rb`. Callouts dismissed by a user in a given project. Related notifications are still shown to the user in other projects.
**NOTE**: do not reuse old enum values, as it may lead to false-positive dismissals. Instead, create a new sequential number.
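For illustration, registering the `amazing_alert` ID from the example above in the group context might look like the following sketch (the surrounding enum contents are elided):
```ruby
# app/models/users/group_callout.rb
enum feature_name: {
  # ...existing callout IDs...
  amazing_alert: 42 # next unused sequential value
}
```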
### Deprecating a callout
When we no longer need a callout, we can remove it from the callout ID enums. However, because dismissal records in the database use the numerical value of the enum, we need to explicitly prevent the deprecated ID from being reused, so that old dismissals don't affect new callouts. To remove a callout ID:
1. Remove the key/value pair from the enum hash
1. Leave an inline comment, mentioning the deprecated ID and the MR removing the callout
For example:
```diff
- amazing_alert: 42,
+ # 42 removed in https://gitlab.com/gitlab-org/gitlab/-/merge_requests/121920
```
## Server-side rendered callouts
This section describes using callouts when they are rendered on the server in `.haml` views, partials, or components.
### Dismissing the callouts on the client side
JavaScript helpers for callouts rely on certain selectors and data attributes being present in the notification's HTML to properly call the dismissal API endpoints and hide the notification at runtime. The notification wrapper needs to have a `.js-persistent-callout` CSS class with the following data attributes:
```javascript
{
featureId, // Unique callout ID
dismissEndpoint, // Dismiss endpoint, unique for each callout context type
groupId, // optional, required for the group context
projectId, // optional, required for the project context
deferLinks, // optional, allows executing certain action alongside the dismissal
}
```
For the dismissal trigger, the wrapper needs to contain at least one `.js-close` element and optionally `.deferred-link` links (if `deferLinks` is `true`). See `app/assets/javascripts/persistent_user_callout.js` for more details.
#### Defining the dismissal endpoint
For the JavaScript to properly register the dismissal, apart from the `featureId` we need to provide the `dismissEndpoint` URL, which is different for each context. Here are the path helpers to use for each context:
- Global context: `callouts_path`
- Group context: `group_callouts_path`
- Project context: `project_callouts_path`
### Detecting the dismissal on the server side
Usually, before rendering the callout, we check whether it has been dismissed. The `User` model on the backend has helpers to detect dismissals in different contexts:
- Global context: `user.dismissed_callout?(feature_name:, ignore_dismissal_earlier_than: nil)`
- Group context: `user.dismissed_callout_for_group?(feature_name:, group:, ignore_dismissal_earlier_than: nil)`
- Project context: `user.dismissed_callout_for_project?(feature_name:, project:, ignore_dismissal_earlier_than: nil)`
**NOTE**: `feature_name` is the callout ID, described above. In our example, it would be `amazing_alert`.
#### Setting expiration for dismissals using `ignore_dismissal_earlier_than` parameter
Some callouts should be displayed once and, after dismissal, never appear again. Others need to pop up repeatedly, even if dismissed.
Without the `ignore_dismissal_earlier_than` parameter, callout dismissals stay effective indefinitely: once the user has dismissed the callout, it stays dismissed.
If we pass `ignore_dismissal_earlier_than` a value, for example `30.days.ago`, the dismissed callout re-appears after that duration.
**NOTE**: expired or deprecated dismissals are not automatically removed from the database. This parameter only checks if the callout has been dismissed within the defined period.
### Example usage
Here's an example `.haml` file:
```haml
- return if amazing_alert_callout_dismissed?(group)
= render Pajamas::AlertComponent.new(title: s_('AmazingAlert|Amazing title'),
variant: :warning,
alert_options: { class: 'js-persistent-callout', data: amazing_alert_callout_data(group) }) do |c|
- c.with_body do
= s_('AmazingAlert|This is an amazing alert body.')
```
With a corresponding `.rb` helper:
```ruby
# frozen_string_literal: true
module AmazingAlertHelper
def amazing_alert_callout_dismissed?(group)
user_dismissed_for_group("amazing_alert", group.root_ancestor, 30.days.ago)
end
def amazing_alert_callout_data(group)
{
feature_id: "amazing_alert",
dismiss_endpoint: group_callouts_path,
group_id: group.root_ancestor.id
}
end
end
```
## Client-side rendered callouts
This section describes using callouts when they are rendered on the client in `.vue` components.
### Dismissing the callouts on the client side
For Vue components, we have a `<user-callout-dismisser>` wrapper, that integrates with GraphQL API to simplify dismissing and checking the dismissed state of a callout. Here's an example usage:
```vue
<user-callout-dismisser feature-name="my_user_callout">
<template #default="{ dismiss, shouldShowCallout }">
<my-callout-component
v-if="shouldShowCallout"
@close="dismiss"
/>
</template>
</user-callout-dismisser>
```
See `app/assets/javascripts/vue_shared/components/user_callout_dismisser.vue` for more details.
---
stage: Software Supply Chain Security
group: Authorization
info: Any user with at least the Maintainer role can merge updates to this content. For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Identity verification development
breadcrumbs:
- doc
- development
---
For information on this feature that is not development-specific, see the [feature documentation](../security/identity_verification.md).
## Logging
You can triage and debug issues raised by identity verification with the [GitLab production logs](https://log.gprd.gitlab.net).
### View logs associated to a user and email verification
To view logs associated to the [email stage](../security/identity_verification.md#email-verification) for a user:
- Query the GitLab production logs with the following KQL:
```plaintext
json.controller:"RegistrationsIdentityVerificationController" AND json.username:replace_username_here
```
Valuable debugging information can be found in the `json.action` and `json.location` columns.
### View logs associated to a user and phone verification
To view logs associated to the [phone stage](../security/identity_verification.md#phone-number-verification) for a user:
- Query the GitLab production logs with the following KQL:
```plaintext
json.message: "IdentityVerification::Phone" AND json.username:replace_username_here
```
On rows where `json.event` is `Failed Attempt`, you can find valuable debugging information in the `json.reason` column such as:
| Reason | Description |
|---------|-------------|
| `invalid_phone_number` | Either there was a typo in the phone number, or the user used a VOIP number. GitLab does not allow users to sign up with non-mobile phone numbers. |
| `invalid_code` | The user entered an incorrect verification code. |
| `rate_limited` | The user had 10 or more failed attempts, so they were rate-limited for one hour. |
| `related_to_banned_user` | The user tried a phone number already related to a banned user. |
#### View Telesign SMS status update logs
To view Telesign status update logs for SMS messages sent to a user, query the GitLab production logs with:
```plaintext
json.message: "IdentityVerification::Phone" AND json.event: "Telesign transaction status update" AND json.username:<username>
```
Status update logs include the following fields:
| Field | Description |
|---------|-------------|
| `telesign_status` | Delivery status of the SMS. See the [Telesign documentation](https://developer.telesign.com/enterprise/reference/smsdeliveryreports#status-codes) for possible status codes and their descriptions. |
| `telesign_status_updated_on` | A timestamp indicating when the SMS delivery status was last updated. |
| `telesign_errors` | Errors that occurred during delivery. See the [Telesign documentation](https://developer.telesign.com/enterprise/reference/smsdeliveryreports#status-codes) for possible error codes and their descriptions. |
### View logs associated to a user and credit card verification
To view logs associated to the [credit card stage](../security/identity_verification.md#credit-card-verification) for a user:
- Query the GitLab production logs with the following KQL:
```plaintext
json.message: "IdentityVerification::CreditCard" AND json.username:replace_username_here
```
On rows where `json.event` is `Failed Attempt`, you can find valuable debugging information in the `json.reason` column such as:
| Reason | Description |
|---------|-------------|
| `rate_limited` | The user had 10 or more failed attempts, so they were rate-limited for one hour. |
| `related_to_banned_user` | The user tried a credit card number already related to a banned user. |
### View logs associated with high-risk users
To view logs associated with the [credit card stage](../security/identity_verification.md#credit-card-verification) for high-risk users:
- Query the GitLab production logs with the following KQL:
```plaintext
json.controller:"GitlabSubscriptions::SubscriptionsController" AND json.action:"payment_form" AND json.params.value:"cc_registration_validation"
```
## Code walkthrough
<i class="fa fa-youtube-play youtube" aria-hidden="true"></i>
For a walkthrough and high level explanation of the code, see [Identity Verification - Code walkthrough](https://www.youtube.com/watch?v=DIsnMiNzND8).
## QA Integration
For end-to-end production and staging tests to function properly, GitLab allows QA users to bypass [account email verification](../security/email_verification.md) when:
- The `User-Agent` for the request matches the configured `GITLAB_QA_USER_AGENT`.
- [Email verification](testing_guide/end_to_end/best_practices/users.md#disable-email-verification) is disabled.
## Additional resources
<!-- markdownlint-disable MD044 -->
The [Anti-abuse team](https://handbook.gitlab.com/handbook/engineering/development/sec/software-supply-chain-security/anti-abuse/#group-members) owns identity verification. You can join our channel on Slack: [#g_anti-abuse](https://gitlab.slack.com/archives/C03EH5HCLPR).
<!-- markdownlint-enable MD044 -->
For help with Telesign:
<!-- markdownlint-disable MD044 -->
- Telesign/GitLab collaboration channel on Slack: [#gitlab-telesign-support](https://gitlab.slack.com/archives/C052EAXB6BY)
<!-- markdownlint-enable MD044 -->
- Telesign support contact: `support@telesign.com`
- [Telesign portal](https://teleportal.telesign.com/)
- [Telesign documentation](https://developer.telesign.com/enterprise/docs/get-started-with-docs)
---
stage: none
group: unassigned
title: Features inside the `.gitlab/` directory
---
We have implemented standard features that depend on configuration files in the `.gitlab/` directory. You can find `.gitlab/` in various GitLab repositories.
When implementing new features, refer to these existing features to avoid conflicts:
- [Description templates](../user/project/description_templates.md#create-a-description-template): `.gitlab/issue_templates/`.
- [Merge request templates](../user/project/description_templates.md#create-a-merge-request-template): `.gitlab/merge_request_templates/`.
- [GitLab agent for Kubernetes](https://gitlab.com/gitlab-org/cluster-integration/gitlab-agent): `.gitlab/agents/`.
- [CODEOWNERS](../user/project/codeowners/_index.md#set-up-code-owners): `.gitlab/CODEOWNERS`.
- [Route Maps](../ci/review_apps/_index.md#route-maps): `.gitlab/route-map.yml`.
- [Customize Auto DevOps Helm Values](../topics/autodevops/customize.md#customize-helm-chart-values): `.gitlab/auto-deploy-values.yaml`.
- [Insights](../user/project/insights/_index.md#for-projects): `.gitlab/insights.yml`.
- [Service Desk Templates](../user/project/service_desk/configure.md#customize-emails-sent-to-external-participants): `.gitlab/service_desk_templates/`.
- [Secret Detection Custom Rulesets](../user/application_security/secret_detection/pipeline/configure.md#customize-analyzer-rulesets): `.gitlab/secret-detection-ruleset.toml`.
- [Static Analysis Custom Rulesets](../user/application_security/sast/customize_rulesets.md#create-the-configuration-file): `.gitlab/sast-ruleset.toml`.
---
stage: none
group: unassigned
title: Ruby 3 gotchas
---
This section documents several problems we found while working on [Ruby 3 support](https://gitlab.com/groups/gitlab-org/-/epics/5149)
and which led to subtle bugs or test failures that were difficult to understand. We encourage every GitLab contributor
who writes Ruby code on a regular basis to familiarize themselves with these issues.
To find the complete list of changes to the Ruby 3 language and standard library, see
[Ruby Changes](https://rubyreferences.github.io/rubychanges/3.0.html).
## `Hash#each` consistently yields a 2-element array to lambdas
Consider the following code snippet:
```ruby
def foo(a, b)
p [a, b]
end
def bar(a, b = 2)
p [a, b]
end
foo_lambda = method(:foo).to_proc
bar_lambda = method(:bar).to_proc
{ a: 1 }.each(&foo_lambda)
{ a: 1 }.each(&bar_lambda)
```
In Ruby 2.7, the output of this program suggests that yielding hash entries to lambdas behaves
differently depending on how many required arguments there are:
```ruby
# Ruby 2.7
{ a: 1 }.each(&foo_lambda) # prints [:a, 1]
{ a: 1 }.each(&bar_lambda) # prints [[:a, 1], 2]
```
Ruby 3 makes this behavior consistent and always attempts to yield hash entries as a single `[key, value]` array:
```ruby
# Ruby 3.0
{ a: 1 }.each(&foo_lambda) # `foo': wrong number of arguments (given 1, expected 2) (ArgumentError)
{ a: 1 }.each(&bar_lambda) # prints [[:a, 1], 2]
```
To write code that works under both 2.7 and 3.0, consider the following options:
- Always pass the lambda body as a block: `{ a: 1 }.each { |a, b| p [a, b] }`.
- Deconstruct the lambda arguments: `{ a: 1 }.each(&->((a, b)) { p [a, b] })`.
We recommend always passing the block explicitly, and prefer two required arguments as block parameters.
For more information, see [Ruby issue 12706](https://bugs.ruby-lang.org/issues/12706).
## `Symbol#to_proc` returns signature metadata consistent with lambdas
A common idiom in Ruby is to obtain `Proc` objects using the `&:<symbol>` shorthand and
pass them to higher-order functions:
```ruby
[1, 2, 3].each(&:to_s)
```
Ruby desugars `&:<symbol>` to `Symbol#to_proc`. We can call it with
the method _receiver_ as its first argument (here: `Integer`), and all method _arguments_
(here: none) as its remaining arguments.
This behaves the same in both Ruby 2.7 and Ruby 3. Where Ruby 3 diverges is when capturing
this `Proc` object and inspecting its call signature.
This is often done when writing DSLs or using other forms of meta-programming:
```ruby
p = :foo.to_proc # This usually happens via a conversion through `&:foo`
# Ruby 2.7: prints [[:rest]] (-1)
# Ruby 3.0: prints [[:req], [:rest]] (-2)
puts "#{p.parameters} (#{p.arity})"
```
Ruby 2.7 reports zero required and one optional parameter for this `Proc` object, while Ruby 3 reports one required
and one optional parameter. Ruby 2.7 is incorrect: the first argument must
always be passed, as it is the receiver of the method the `Proc` object represents, and methods cannot be
called without a receiver.
Ruby 3 corrects this: code that tests `Proc` object arity or parameter lists might now break and
must be updated.
For more information, see [Ruby issue 16260](https://bugs.ruby-lang.org/issues/16260).
## `OpenStruct` does not evaluate fields lazily
The `OpenStruct` implementation has undergone a partial rewrite in Ruby 3, resulting in
behavioral changes. In Ruby 2.7, `OpenStruct` defines methods lazily, when the method is first accessed.
In Ruby 3.0, it defines these methods eagerly in the initializer, which can break classes that inherit from `OpenStruct`
and override these methods.
Don't inherit from `OpenStruct` for these reasons; ideally, don't use it at all.
`OpenStruct` is [considered problematic](https://ruby-doc.org/stdlib-3.0.2/libdoc/ostruct/rdoc/OpenStruct.html#class-OpenStruct-label-Caveats).
When writing new code, prefer a `Struct` instead, which is simpler in implementation, although less flexible.
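For example, a minimal sketch of replacing an `OpenStruct` with an equivalent `Struct`:
```ruby
# OpenStruct: fields are defined dynamically, which is slower and now eager in Ruby 3.
require 'ostruct'
point = OpenStruct.new(x: 1, y: 2)

# Struct: fields are declared up front; `keyword_init: true` allows named arguments.
Point = Struct.new(:x, :y, keyword_init: true)
point = Point.new(x: 1, y: 2)
point.x # => 1
```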
## `Regexp` and `Range` instances are frozen
It is not necessary anymore to explicitly freeze `Regexp` or `Range` instances because Ruby 3 freezes
them automatically upon creation.
This has a subtle side-effect: Tests that stub method calls on these types now fail with an error because
RSpec cannot stub frozen objects:
```ruby
# Ruby 2.7: works
# Ruby 3.0: error: "can't modify frozen object"
allow(subject.function_returning_range).to receive(:max).and_return(42)
```
Rewrite affected tests by not stubbing method calls on frozen objects. The example above can be rewritten as:
```ruby
# Works with any Ruby version
allow(subject).to receive(:function_returning_range).and_return(1..42)
```
## Table tests fail with Ruby 3.0.2
Ruby 3.0.2 has a known bug that causes [table tests](testing_guide/best_practices.md#table-based--parameterized-tests)
to fail when table values consist of integer values.
The reasons are documented in [issue 337614](https://gitlab.com/gitlab-org/gitlab/-/issues/337614).
This problem has been fixed in Ruby and the fix is expected to be included in Ruby 3.0.3.
The problem only affects users who run an unpatched Ruby 3.0.2. This is likely the case when you
installed Ruby manually or via tools like `asdf`. Users of the `gitlab-development-kit` (GDK)
are also affected by this problem.
Build images are not affected because they include the patch set addressing this bug.
## Deprecations are not caught in DeprecationToolkit if the method is stubbed
We rely on `deprecation_toolkit` to fail fast when using functionality that is deprecated in Ruby 2 and removed in Ruby 3.
A common issue caught during the transition from Ruby 2 to Ruby 3 relates to
the [separation of positional and keyword arguments in Ruby 3.0](https://www.ruby-lang.org/en/news/2019/12/12/separation-of-positional-and-keyword-arguments-in-ruby-3-0/).
Unfortunately, if the author has stubbed such methods in tests, the deprecations are not caught.
We run automated detection for this warning in tests via `deprecation_toolkit`, but it relies on `Kernel#warn`
emitting the warning. Stubbing out the deprecated method (or the call to `warn` itself) removes that warning,
so `deprecation_toolkit` never sees it and the build stays green.
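As a contrived sketch (the class and method names are hypothetical), the warning disappears as soon as the method is stubbed:
```ruby
# Hypothetical class whose method takes keyword arguments.
class IssueCreator
  def create_issue(title:, description: nil); end
end

creator = IssueCreator.new

# Ruby 2.7 emits a kwargs deprecation warning here, which deprecation_toolkit
# turns into a test failure:
creator.create_issue({ title: 'Foo' })

# With the method stubbed, the real implementation never runs, so no warning
# is emitted and the deprecation goes unnoticed:
allow(creator).to receive(:create_issue)
creator.create_issue({ title: 'Foo' }) # no warning; the build stays green
```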
Refer to [issue 364099](https://gitlab.com/gitlab-org/gitlab/-/issues/364099) for more context.
## Testing in `irb` and `rails console`
Another pitfall is that testing in `irb`/`rails c` silences the deprecation warning,
since `irb` in Ruby 2.7.x has a [bug](https://bugs.ruby-lang.org/issues/17377) that prevents deprecation warnings from showing.
When writing code and performing code reviews, pay extra attention to method calls of the form `f({k: v})`.
This is valid in Ruby 2 when `f` takes either a `Hash` or keyword arguments, but Ruby 3 only considers this valid if `f` takes a `Hash`.
For Ruby 3 compliance, this should be changed to one of the following invocations if `f` takes keyword arguments:
- `f(**{k: v})`
- `f(k: v)`
## RSpec `with` argument matcher fails for shorthand Hash syntax
Because keyword arguments ("kwargs") are a first-class concept in Ruby 3, keyword arguments are not
converted into internal `Hash` instances anymore. This leads to RSpec method argument matchers failing
when the receiver takes a positional options hash instead of kwargs:
```ruby
def m(options={}); end
```
```ruby
expect(subject).to receive(:m).with(a: 42)
```
In Ruby 3, this expectation fails with the following error:
```plaintext
Failure/Error:
#<subject> received :m with unexpected arguments
expected: ({:a=>42})
got: ({:a=>42})
```
This happens because RSpec uses a kwargs argument matcher here, but the method takes a hash.
It works in Ruby 2, because `a: 42` is converted to a hash first and RSpec will use a hash argument matcher.
A workaround is to not use the shorthand syntax and pass an actual `Hash` instead whenever we know a method
to take an options hash:
```ruby
# Note the braces around the key-value pair.
expect(subject).to receive(:m).with({ a: 42 })
```
For more information, see [the official issue report for RSpec](https://github.com/rspec/rspec-mocks/issues/1460).
---
stage: Deploy
group: Environments
title: Kubernetes integration development guidelines
---
This document provides various guidelines when developing for the GitLab
[Kubernetes integration](../user/infrastructure/clusters/_index.md).
## Development
### Architecture
Some Kubernetes operations, such as creating restricted project
namespaces, are performed on the GitLab Rails application. These
operations are performed using a [client library](#client-library),
and carry an element of risk. The operations are
run as the same user running the GitLab Rails application. For more information,
read the [security](#security) section below.
Some Kubernetes operations, such as installing cluster applications, are
performed on one-off pods on the Kubernetes cluster itself. These
installation pods are named `install-<application_name>` and
are created within the `gitlab-managed-apps` namespace.
In terms of code organization, we generally add objects that represent
Kubernetes resources in
[`lib/gitlab/kubernetes`](https://gitlab.com/gitlab-org/gitlab-foss/tree/master/lib/gitlab/kubernetes).
### Client library
We use the [`kubeclient`](https://rubygems.org/gems/kubeclient) gem to
perform Kubernetes API calls. As the `kubeclient` gem does not support
different API Groups (such as `apis/rbac.authorization.k8s.io`) from a
single client, we have created a wrapper class,
[`Gitlab::Kubernetes::KubeClient`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/kubernetes/kube_client.rb)
that enables you to achieve this.
Only selected Kubernetes API groups are supported. If you need to use
other API groups or methods, add support for them to
[`Gitlab::Kubernetes::KubeClient`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/kubernetes/kube_client.rb).
New API groups or API group versions can be added to `SUPPORTED_API_GROUPS`,
which creates an internal client for that group. New methods can then be added
as a delegation to the relevant internal client.
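For illustration only, the delegation might look roughly like this (the API group and method are hypothetical examples, not the actual class contents):
```ruby
module Gitlab
  module Kubernetes
    class KubeClient
      # Hypothetical sketch: expose a method from the `apps` API group by
      # delegating it to the internal client created for that group.
      delegate :get_deployments, to: :apps_client
    end
  end
end
```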
### Performance considerations
All calls to the Kubernetes API must be made in a background process. Don't
perform Kubernetes API calls within a web request. This blocks the
web server, and can lead to a denial-of-service (DoS) attack in GitLab because
the Kubernetes cluster response times are outside of our control.
The easiest way to ensure your calls happen in a background process is to
delegate any such work to a [Sidekiq worker](sidekiq/_index.md).
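For example, a minimal sketch of such a worker (the worker name and logging are illustrative):
```ruby
# Illustrative worker: the Kubernetes API call happens outside the web request.
class ClusterPodCountWorker
  include ApplicationWorker

  def perform(cluster_id)
    cluster = Clusters::Cluster.find_by_id(cluster_id)
    return unless cluster

    pods = cluster.platform_kubernetes.kubeclient.get_pods
    Rails.logger.info("cluster #{cluster_id} has #{pods.size} pods")
  end
end
```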
If you need to make calls to Kubernetes and return the response, and a background
worker isn't a good fit, consider using
[reactive caching](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/models/concerns/reactive_caching.rb).
For example:
```ruby
def calculate_reactive_cache!
{ pods: cluster.platform_kubernetes.kubeclient.get_pods }
end
def pods
with_reactive_cache do |data|
data[:pods]
end
end
```
### Testing
We have some WebMock stubs in
[`KubernetesHelpers`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/spec/support/helpers/kubernetes_helpers.rb)
which can help with mocking out calls to the Kubernetes API in your tests.
### Amazon EKS integration
This section outlines the process for allowing a GitLab instance to create EKS clusters.
The following prerequisites are required:
- A `Customer` AWS account. The EKS cluster is created in this account. The following
  resources must be present:
  - A provisioning role that has permissions to create the cluster
    and associated resources. It must list the `GitLab` AWS account
    as a trusted entity.
  - A VPC, management role, security group, and subnets for use by the cluster.
- A `GitLab` AWS account. This is the account which performs
  the provisioning actions. The following resources must be present:
  - A service account with permissions to assume the provisioning
    role in the `Customer` account above.
  - Credentials for this service account configured in GitLab via
    the `kubernetes` section of `gitlab.yml`.
The process for creating a cluster is as follows:
1. Using the `:provision_role_external_id`, GitLab assumes the role provided
by `:provision_role_arn` and stores a set of temporary credentials on the
provider record. By default these credentials are valid for one hour.
1. A CloudFormation stack is created, based on the
[`AWS CloudFormation EKS template`](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/17036/diffs#diff-content-b79f1d78113a9b1ab02b37ca4a756c3a9b8c2ae8).
This triggers creation of all resources required for an EKS cluster.
1. GitLab polls the status of the stack until all resources are ready,
which takes somewhere between 10 and 15 minutes in most cases.
1. When the stack is ready, GitLab stores the cluster details and generates
another set of temporary credentials, this time to allow connecting to
the cluster via `kubeclient`. These credentials are valid for one minute.
1. GitLab configures the worker nodes so that they are able to authenticate
to the cluster, and creates a service account for itself for future operations.
1. Credentials that are no longer required are removed. This deletes the following
attributes:
- `access_key_id`
- `secret_access_key`
- `session_token`
## Security
### Server Side Request Forgery (SSRF) attacks
Because URLs for Kubernetes clusters are user controlled, the integration is
susceptible to Server Side Request Forgery (SSRF) attacks. You should
understand the mitigation strategies if you are adding more API calls to
a cluster.
Mitigation strategies include:
1. Not allowing redirects to attacker-controlled resources:
[`Kubeclient::KubeClient`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/kubernetes/kube_client.rb)
can be configured to prevent any redirects by passing in
`http_max_redirects: 0` as an option (see the sketch after this list).
1. Not exposing error messages: by doing so, we
prevent attackers from triggering errors to expose results from
attacker-controlled requests. For example, we do not expose (or store)
raw error messages:
```ruby
rescue Kubernetes::HttpError => e
# bad
# app.make_errored!("Kubernetes error: #{e.message}")
# good
app.make_errored!("Kubernetes error: #{e.error_code}")
```
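A rough sketch of the first mitigation, disabling redirects when the client is built (the cluster URL is a placeholder):
```ruby
# With zero allowed redirects, redirect responses are not followed.
Kubeclient::Client.new(
  'https://kubernetes.example.com/api',
  'v1',
  http_max_redirects: 0
)
```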
## Debugging Kubernetes integrations
Logs related to the Kubernetes integration can be found in
[`kubernetes.log`](../administration/logs/_index.md#kuberneteslog-deprecated). On a local
GDK install, these logs are present in `log/kubernetes.log`.
You can also follow the installation logs to debug issues related to
installation. Once the installation/upgrade is underway, wait for the
pod to be created. Then run the following to obtain the pod's logs as
they are written:
```shell
kubectl logs <pod_name> --follow -n gitlab-managed-apps
```
---
stage: AI-powered
group: Global Search
title: Advanced search development guidelines
---
This page includes information about developing and working with Advanced search, which is powered by Elasticsearch.
Information on how to enable Advanced search and perform the initial indexing is in
the [Elasticsearch integration documentation](../integration/advanced_search/elasticsearch.md#enable-advanced-search).
## Deep dive resources
These recordings and presentations provide in-depth knowledge about the Advanced search implementation:
| Date | Topic | Presenter | Resources | GitLab Version |
|:-----------:|-----------------------------------------------------------------------------------------------------------|:----------------:|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:--------------:|
| July 2024 | Advanced search basics, integration, indexing, and search | Terri Chu | <i class="fa fa-youtube-play youtube" aria-hidden="true"></i>[Recording on YouTube](https://youtu.be/5OXK1isDaks) (GitLab team members only)<br>[Google slides](https://docs.google.com/presentation/d/1Fy3pfFIGK_2ZCoB93EksRKhaS7uuNp81I3L5_joWa04/edit?usp=sharing_) (GitLab team members only) | GitLab 17.0 |
| June 2021 | GitLab's data migration process for Advanced search | Dmitry Gruzd | [Blog post](https://about.gitlab.com/blog/2021/06/01/advanced-search-data-migrations/) | GitLab 13.12 |
| August 2020 | [GitLab-specific architecture for multi-indices support](#zero-downtime-reindexing-with-multiple-indices) | Mark Chao | [Recording on YouTube](https://www.youtube.com/watch?v=0WdPR9oB2fg)<br>[Google slides](https://lulalala.gitlab.io/gitlab-elasticsearch-deepdive/) | GitLab 13.3 |
| June 2019 | GitLab [Elasticsearch integration](../integration/advanced_search/elasticsearch.md) | Mario de la Ossa | <i class="fa fa-youtube-play youtube" aria-hidden="true"></i>[Recording on YouTube](https://www.youtube.com/watch?v=vrvl-tN2EaA)<br>[Google slides](https://docs.google.com/presentation/d/1H-pCzI_LNrgrL5pJAIQgvLX8Ji0-jIKOg1QeJQzChug/edit)<br>[PDF](https://gitlab.com/gitlab-org/create-stage/uploads/c5aa32b6b07476fa8b597004899ec538/Elasticsearch_Deep_Dive.pdf) | GitLab 12.0 |
## Elasticsearch configuration
### Supported versions
See [Version Requirements](../integration/advanced_search/elasticsearch.md#version-requirements).
Developers making significant changes to Elasticsearch queries should test their features against all our supported versions.
### Setting up your development environment
- See the [Elasticsearch GDK setup instructions](https://gitlab.com/gitlab-org/gitlab-development-kit/blob/main/doc/howto/elasticsearch.md)
- Ensure [Elasticsearch is running](#setting-up-your-development-environment):
```shell
curl "http://localhost:9200"
```
<!-- vale gitlab_base.Spelling = NO -->
- [Run Kibana](https://www.elastic.co/guide/en/kibana/current/install.html#_install_kibana_yourself) to interact
with your local Elasticsearch cluster. Alternatively, you can use [Cerebro](https://github.com/lmenezes/cerebro) or a
similar tool.
<!-- vale gitlab_base.Spelling = YES -->
- To tail the logs for Elasticsearch, run this command:
```shell
tail -f log/elasticsearch.log
```
### Helpful Rake tasks
- `gitlab:elastic:test:index_size`: Tells you how much space the current index is using, as well as how many documents
are in the index.
- `gitlab:elastic:test:index_size_change`: Outputs index size, reindexes, and outputs index size again. Useful when
testing improvements to indexing size.
Additionally, if you need large repositories or multiple forks for testing,
consider [following these instructions](rake_tasks.md#extra-project-seed-options).
## Development workflow
### Development tips
- [Creating indices from scratch](advanced_search/tips.md#creating-all-indices-from-scratch-and-populating-with-local-data)
- [Index data](advanced_search/tips.md#index-data)
- [Updating dependent associations in the index](advanced_search/tips.md#dependent-association-index-updates)
- [Kibana](advanced_search/tips.md#kibana)
- [Running tests with Elasticsearch](advanced_search/tips.md#testing)
- [Testing migrations](advanced_search/tips.md#advanced-search-migrations)
- [Viewing index status](advanced_search/tips.md#viewing-index-status)
### Debugging & troubleshooting
#### Debugging Elasticsearch queries
The `ELASTIC_CLIENT_DEBUG` environment variable enables
the [debug option for the Elasticsearch client](https://gitlab.com/gitlab-org/gitlab/-/blob/76bd885119795096611cb94e364149d1ef006fef/ee/lib/gitlab/elastic/client.rb#L50)
in development or test environments. If you need to debug Elasticsearch HTTP queries generated from
code or tests, it can be enabled before running specs or starting the Rails console:
```console
ELASTIC_CLIENT_DEBUG=1 bundle exec rspec ee/spec/workers/search/elastic/trigger_indexing_worker_spec.rb
export ELASTIC_CLIENT_DEBUG=1
rails console
```
#### Getting `flood stage disk watermark [95%] exceeded`
You might get an error such as:
```plaintext
[2018-10-31T15:54:19,762][WARN ][o.e.c.r.a.DiskThresholdMonitor] [pval5Ct]
flood stage disk watermark [95%] exceeded on
[pval5Ct7SieH90t5MykM5w][pval5Ct][/usr/local/var/lib/elasticsearch/nodes/0] free: 56.2gb[3%],
all indices on this node will be marked read-only
```
This is because you've exceeded the disk space threshold - Elasticsearch thinks you don't have enough disk space left, based on the
default 95% threshold.
In addition, the `read_only_allow_delete` setting is set to `true`, which blocks indexing, `forcemerge`, and similar operations.
You can confirm this by checking the index settings:
```shell
curl "http://localhost:9200/gitlab-development/_settings?pretty"
```
Add this to your `elasticsearch.yml` file:
```yaml
# turn off the disk allocator
cluster.routing.allocation.disk.threshold_enabled: false
```
_or_
```yaml
# set your own limits
cluster.routing.allocation.disk.threshold_enabled: true
cluster.routing.allocation.disk.watermark.flood_stage: 5gb # ES 6.x only
cluster.routing.allocation.disk.watermark.low: 15gb
cluster.routing.allocation.disk.watermark.high: 10gb
```
Restart Elasticsearch, and the `read_only_allow_delete` setting clears on its own.
_from "Disk-based Shard Allocation | Elasticsearch
Reference" [5.6](https://www.elastic.co/guide/en/elasticsearch/reference/5.6/disk-allocator.html#disk-allocator)
and [6.x](https://www.elastic.co/guide/en/elasticsearch/reference/6.7/disk-allocator.html)_
### Performance monitoring
#### Prometheus
GitLab exports [Prometheus metrics](../administration/monitoring/prometheus/gitlab_metrics.md)
relating to the number of requests and timing for all web/API requests and Sidekiq jobs,
which can help diagnose performance trends and compare how Elasticsearch timing
is impacting overall performance relative to the time spent doing other things.
##### Indexing queues
GitLab also exports [Prometheus metrics](../administration/monitoring/prometheus/gitlab_metrics.md)
for indexing queues, which can help diagnose performance bottlenecks and determine
whether your GitLab instance or Elasticsearch server can keep up with
the volume of updates.
#### Logs
All indexing happens in Sidekiq, so much of the relevant logs for the
Elasticsearch integration can be found in
[`sidekiq.log`](../administration/logs/_index.md#sidekiqlog). In particular, all
Sidekiq workers that make requests to Elasticsearch in any way will log the
number of requests and time taken querying/writing to Elasticsearch. This can
be useful to understand whether or not your cluster is keeping up with
indexing.
Searching Elasticsearch is done via ordinary web workers handling requests. Any
requests to load a page or make an API request, which then make requests to
Elasticsearch, will log the number of requests and the time taken to
[`production_json.log`](../administration/logs/_index.md#production_jsonlog). These
logs will also include the time spent on Database and Gitaly requests, which
may help to diagnose which part of the search is performing poorly.
There are additional logs specific to Elasticsearch that are sent to
[`elasticsearch.log`](../administration/logs/_index.md#elasticsearchlog)
that may contain information to help diagnose performance issues.
#### Performance Bar
Elasticsearch requests will be displayed in the
[`Performance Bar`](../administration/monitoring/performance/performance_bar.md), which can
be used both locally in development and on any deployed GitLab instance to
diagnose poor search performance. This will show the exact queries being made,
which is useful to diagnose why a search might be slow.
#### Correlation ID and `X-Opaque-Id`
Our [correlation ID](distributed_tracing.md#developer-guidelines-for-working-with-correlation-ids)
is forwarded by all requests from Rails to Elasticsearch as the
[`X-Opaque-Id`](https://www.elastic.co/guide/en/elasticsearch/reference/current/tasks.html#_identifying_running_tasks)
header which allows us to track any
[tasks](https://www.elastic.co/guide/en/elasticsearch/reference/current/tasks.html)
in the cluster back to the request in GitLab.
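For example, a rough way to see which running Elasticsearch tasks carry which correlation ID (the cluster URL is a placeholder):
```shell
# Tasks started by GitLab requests include the correlation ID in their
# X-Opaque-Id header, which the tasks API reports back.
curl "http://localhost:9200/_tasks?detailed=true&pretty" | grep -B 5 "X-Opaque-Id"
```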
## Architecture
The framework used to communicate with Elasticsearch is being refactored; the work is tracked in [this epic](https://gitlab.com/groups/gitlab-org/-/epics/13873).
### Indexing Overview
Advanced search selectively indexes data. Each data type follows a specific indexing pipeline:
| Data type | How is it queued | Where is it queued | Where does indexing occur |
|---------------------|------------------------------------------------------------------------|--------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Database records | Record changes through ActiveRecord callbacks and `Gitlab::EventStore` | Redis ZSET | [`ElasticIndexInitialBulkCronWorker`](https://gitlab.com/gitlab-org/gitlab/-/blob/409b55d072b0008baca42dc53bda3e3dc56f588a/ee/app/workers/elastic_index_initial_bulk_cron_worker.rb), [`ElasticIndexBulkCronWorker`](https://gitlab.com/gitlab-org/gitlab/-/blob/409b55d072b0008baca42dc53bda3e3dc56f588a/ee/app/workers/elastic_index_bulk_cron_worker.rb) |
| Git repository data | Branch push service and default branch change worker | Sidekiq | [`Search::Elastic::CommitIndexerWorker`](https://gitlab.com/gitlab-org/gitlab/-/blob/409b55d072b0008baca42dc53bda3e3dc56f588a/ee/app/workers/search/elastic/commit_indexer_worker.rb), [`ElasticWikiIndexerWorker`](https://gitlab.com/gitlab-org/gitlab/-/blob/409b55d072b0008baca42dc53bda3e3dc56f588a/ee/app/workers/elastic_wiki_indexer_worker.rb) |
| Embeddings | Record changes through ActiveRecord callbacks and `Gitlab::EventStore` | Redis ZSET | [`ElasticEmbeddingBulkCronWorker`](https://gitlab.com/gitlab-org/gitlab/-/blob/409b55d072b0008baca42dc53bda3e3dc56f588a/ee/app/workers/search/elastic_index_embedding_bulk_cron_worker.rb) |
### Indexing Components
#### External Indexer
For repository content, GitLab uses a
dedicated [indexer written in Go](https://gitlab.com/gitlab-org/gitlab-elasticsearch-indexer) to efficiently process
files.
#### Rails Indexing Lifecycle
1. **Initial Indexing**: Administrators trigger the first complete index via the Admin UI or a Rake task.
1. **Ongoing Updates**: After initial setup, GitLab keeps the index up to date through:
- Model callbacks (`after_create`, `after_update`, `after_destroy`) defined in [`/ee/app/models/concerns/elastic/application_versioned_search.rb`](https://gitlab.com/gitlab-org/gitlab/-/blob/409b55d072b0008baca42dc53bda3e3dc56f588a/ee/app/models/concerns/elastic/application_versioned_search.rb)
- A Redis [`ZSET`](https://redis.io/docs/latest/develop/data-types/#sorted-sets) that tracks all pending changes
- Scheduled [Sidekiq workers](https://gitlab.com/gitlab-org/gitlab/-/blob/409b55d072b0008baca42dc53bda3e3dc56f588a/ee/app/workers/concerns/elastic/bulk_cron_worker.rb) that process these queues in batches using
Elasticsearch's [Bulk Request API](https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-bulk.html)
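A simplified sketch (hypothetical model) of how a model opts into this lifecycle by including the concern:
```ruby
class MyModel < ApplicationRecord
  # Adds the ActiveRecord callbacks that queue this record for indexing
  # whenever it is created, updated, or destroyed.
  include Elastic::ApplicationVersionedSearch
end
```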
### Search and Security
The [query builder framework](#query-builder-framework) generates search queries and handles access control logic. This
portion of the codebase requires particular attention during development and code review, as it has historically been a
source of security vulnerabilities.
The final step in returning search results is
to [redact unauthorized results](https://gitlab.com/gitlab-org/gitlab/-/blob/409b55d072b0008baca42dc53bda3e3dc56f588a/app/services/search_service.rb#L147)
for the current user to catch problems with the queries or race conditions.
### Migration framework
GitLab's Advanced search includes a robust migration framework that streamlines index maintenance and updates. This
system provides significant benefits:
- **Selective Reindexing**: Only updates specific document types when needed, avoiding full re-indexes
- **Automated Maintenance**: Updates proceed without requiring human intervention
- **Consistent Experience**: Provides the same migration path for both GitLab.com and GitLab Self-Managed instances
#### Framework Components
The migration system consists of:
- **Migration Runner**: A [cron worker](https://gitlab.com/gitlab-org/gitlab/-/blob/409b55d072b0008baca42dc53bda3e3dc56f588a/ee/app/workers/elastic/migration_worker.rb) that executes every 5 minutes to check for and process pending migrations.
- **Migration Files**: Similar to database migrations, these Ruby files define the migration steps with accompanying
  YAML documentation.
- **Migration Status Tracking**: All migration states are stored in a dedicated Elasticsearch index.
- **Migration Lifecycle States**: Each migration progresses through stages: pending → in progress → complete (or halted
  if issues arise).
#### Configuration Options
Migrations can be fine-tuned with various parameters:
- **Batching**: Control the document batch size for optimal performance
- **Throttling**: Adjust indexing speed to balance between migration speed and system load
- **Space Requirements**: Verify sufficient disk space before migrations begin to prevent interruptions
- **Skip condition**: Define a condition for skipping the migration
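For illustration, a migration using a few of these options might look roughly like the following (the class name, option names, and helper calls are assumptions based on the description above, not a verified example from the current framework):
```ruby
class BackfillSchemaVersionOnIssues < Elastic::Migration
  batched!                 # assumed option: process documents in batches
  throttle_delay 1.minute  # assumed option: wait between batches to limit load

  def migrate
    # Update a batch of documents here.
  end

  def completed?
    # Return true once every document has been updated.
    false
  end
end
```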
This framework makes index schema changes, field updates, and data migrations reliable and unobtrusive for all GitLab
installations.
### Search DSL
This section covers the Search DSL (Domain Specific Language) supported by GitLab, which is compatible with both
Elasticsearch and OpenSearch implementations.
#### Custom routing
[Custom routing](https://www.elastic.co/guide/en/elasticsearch/reference/current/mapping-routing-field.html#_searching_with_custom_routing)
is used in Elasticsearch for document types. The routing format is usually `project_<project_id>` for project associated data
and `group_<root_namespace_id>` for group associated data. Routing is set during indexing and searching operations and tells
Elasticsearch what shards to put the data into. Some of the benefits and tradeoffs to using custom routing are:
- Project and group scoped searches are much faster since not all shards have to be hit.
- Routing is not used if too many shards would be hit for global and group scoped searches.
- Shard size imbalance might occur.
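For example, a project-scoped search can pass the routing value so that only the shards holding that project's documents are queried (the index name and project ID are illustrative):
```plaintext
GET /gitlab-production-issues/_search?routing=project_1234
{
  "query": { "match": { "title": "flaky spec" } }
}
```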
<!-- vale gitlab_base.Spelling = NO -->
#### Existing analyzers and tokenizers
The following analyzers and tokenizers are defined in
[`ee/lib/elastic/latest/config.rb`](https://gitlab.com/gitlab-org/gitlab/-/blob/0105b56d6ad86e04ef46492dcf5537553505b678/ee/lib/elastic/latest/config.rb).
<!-- vale gitlab_base.Spelling = YES -->
##### Analyzers
###### `path_analyzer`
Used when indexing blobs' paths. Uses the `path_tokenizer` and the `lowercase` and `asciifolding` filters.
See the `path_tokenizer` explanation below for an example.
###### `sha_analyzer`
Used in blobs and commits. Uses the `sha_tokenizer` and the `lowercase` and `asciifolding` filters.
See the `sha_tokenizer` explanation below for an example.
###### `code_analyzer`
Used when indexing a blob's filename and content. Uses the `whitespace` tokenizer
and the [`word_delimiter_graph`](https://www.elastic.co/guide/en/elasticsearch/reference/current/analysis-word-delimiter-graph-tokenfilter.html),
`lowercase`, and `asciifolding` filters.
The `whitespace` tokenizer was selected to have more control over how tokens are split. For example, the string `Foo::bar(4)` needs to generate tokens like `Foo` and `bar(4)` to be properly searched.
See the `code` filter for an explanation of how tokens are split.
##### Tokenizers
###### `sha_tokenizer`
This is a custom tokenizer that uses the
[`edgeNGram` tokenizer](https://www.elastic.co/guide/en/elasticsearch/reference/5.5/analysis-edgengram-tokenizer.html)
to allow SHAs to be searchable by any subset of the SHA (minimum of 5 characters).
Example:
`240c29dc7e` becomes:
- `240c2`
- `240c29`
- `240c29d`
- `240c29dc`
- `240c29dc7`
- `240c29dc7e`
###### `path_tokenizer`
This is a custom tokenizer that uses the
[`path_hierarchy` tokenizer](https://www.elastic.co/guide/en/elasticsearch/reference/5.5/analysis-pathhierarchy-tokenizer.html)
with `reverse: true` to allow searches to find paths no matter how much or how little of the path is given as input.
Example:
`'/some/path/application.js'` becomes:
- `'/some/path/application.js'`
- `'some/path/application.js'`
- `'path/application.js'`
- `'application.js'`
#### Common gotchas
- Searches can specify their own analyzers. Remember to check which analyzers are used by searches when editing analyzers.
- `Character` filters (as opposed to token filters) always replace the original character. These filters can hinder exact searches.
## Implementation guide
### Add a new document type to Elasticsearch
If data cannot be added to one of the [existing indices in Elasticsearch](../integration/advanced_search/elasticsearch.md#advanced-search-index-scopes), follow these instructions to set up a new index and populate it.
#### Recommended process for adding a new document type
Have any MRs reviewed by a member of the Global Search team:
1. [Set up your development environment](#setting-up-your-development-environment).
1. [Create the index](#create-the-index).
1. [Validate expected queries](#validate-expected-queries).
1. [Create a new Elastic reference](#create-a-new-elastic-reference).
1. Perform [continuous updates](#continuous-updates) behind a feature flag. Enable the flag fully before the backfill.
1. [Backfill the data](#backfilling-data).
After indexing is done, the index is ready for search.
#### Create the index
All new indexes must have:
- `project_id` and `namespace_id` fields (if available). One of the fields must be used for [custom routing](#custom-routing).
- A `traversal_ids` field for efficient global and group search. Populate the field with `object.namespace.elastic_namespace_ancestry`
- Fields for authorization:
- For project data - `visibility_level`
- For group data - `namespace_visibility_level`
- Any required access level fields. These correspond to project feature access levels such as `issues_access_level` or `repository_access_level`
- A `schema_version` integer field in a `YYWW` (year/week) format. This field is used for data migrations.
1. Create a `Search::Elastic::Types::` class in `ee/lib/search/elastic/types/`.
1. Define the following class methods:
- `index_name`: in the format `gitlab-<env>-<type>` (for example, `gitlab-production-work_items`).
- `mappings`: a hash containing the index schema such as fields, data types, and analyzers.
- `settings`: a hash containing the index settings such as replicas and tokenizers.
The default is good enough for most cases.
1. Add a new [advanced search migration](search/advanced_search_migration_styleguide.md) to create the index
by executing `scripts/elastic-migration` and following the instructions.
The migration name must be in the format `Create<Name>Index`.
1. Use the [`Search::Elastic::MigrationCreateIndexHelper`](search/advanced_search_migration_styleguide.md#searchelasticmigrationcreateindexhelper)
helper and the `'migration creates a new index'` shared example for the specification file created.
1. Add the target class to `Gitlab::Elastic::Helper::ES_SEPARATE_CLASSES`.
1. To test the index creation, run `Elastic::MigrationWorker.new.perform` in a console and check that the index
has been created with the correct mappings and settings:
```shell
curl "http://localhost:9200/gitlab-development-<type>/_mappings" | jq .`
```
```shell
curl "http://localhost:9200/gitlab-development-<type>/_settings" | jq .`
```
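For orientation, the type class from the first two steps might look like the following minimal sketch. The `Notes` scope, field list, and settings are illustrative assumptions, not copied from the real code:

```ruby
# ee/lib/search/elastic/types/notes.rb (illustrative sketch only)
module Search
  module Elastic
    module Types
      class Notes
        def self.index_name
          "gitlab-#{Rails.env}-notes"
        end

        def self.mappings
          {
            dynamic: 'strict',
            properties: {
              id: { type: 'long' },
              project_id: { type: 'long' },
              traversal_ids: { type: 'keyword' },
              visibility_level: { type: 'integer' },
              issues_access_level: { type: 'integer' },
              note: { type: 'text' },
              schema_version: { type: 'integer' }
            }
          }
        end

        def self.settings
          { index: { number_of_shards: 5, number_of_replicas: 1 } }
        end
      end
    end
  end
end
```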
##### PostgreSQL to Elasticsearch mappings
Data types for primary and foreign keys must match the column type in the database. For example, the database column
type `integer` maps to `integer` and `bigint` maps to `long` in the mapping.
{{< alert type="warning" >}}
[Nested fields](https://www.elastic.co/guide/en/elasticsearch/reference/current/nested.html#_limits_on_nested_mappings_and_objects) introduce significant overhead. A flattened multi-value approach is recommended instead.
{{< /alert >}}
| PostgreSQL type | Elasticsearch mapping |
|-------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| bigint | long |
| smallint | short |
| integer | integer |
| boolean | boolean |
| array | keyword |
| timestamp | date |
| character varying, text | Depends on query requirements. Use [`text`](https://www.elastic.co/docs/reference/elasticsearch/mapping-reference/text) for full-text search and [`keyword`](https://www.elastic.co/docs/reference/elasticsearch/mapping-reference/keyword) for term queries, sorting, or aggregations |
##### Validate expected queries
Before creating a new index, it's crucial to validate that the planned mappings will support your expected queries.
Verifying mapping compatibility upfront helps avoid issues that would require index rebuilding later.
#### Create a new Elastic reference
Create a `Search::Elastic::References::` class in `ee/lib/search/elastic/references/`.
The reference is used to perform bulk operations in Elasticsearch.
The file must inherit from `Search::Elastic::Reference` and define the following constant and methods:
```ruby
include Search::Elastic::Concerns::DatabaseReference # if there is a corresponding database record for every document
SCHEMA_VERSION = 24_46 # integer in YYWW format
override :serialize
def self.serialize(record)
# a string representation of the reference
end
override :instantiate
def self.instantiate(string)
# deserialize the string and call initialize
end
override :preload_indexing_data
def self.preload_indexing_data(refs)
# remove this method if `Search::Elastic::Concerns::DatabaseReference` is included
# otherwise return refs
end
def initialize
# initialize with instance variables
end
override :identifier
def identifier
# a way to identify the reference
end
override :routing
def routing
# Optional: an identifier to route the document in Elasticsearch
end
override :operation
def operation
# one of `:index`, `:upsert` or `:delete`
end
override :serialize
def serialize
# a string representation of the reference
end
override :as_indexed_json
def as_indexed_json
# a hash containing the document representation for this reference
end
override :index_name
def index_name
# index name
end
def model_klass
# set to the model class if `Search::Elastic::Concerns::DatabaseReference` is included
end
```
To add data to the index, an instance of the new reference class is called in
`Elastic::ProcessBookkeepingService.track!()` to add the data to a queue of
references for indexing.
A cron worker pulls queued references and bulk-indexes the items into Elasticsearch.
To test that the indexing operation works, call `Elastic::ProcessBookkeepingService.track!()`
with an instance of the reference class and run `Elastic::ProcessBookkeepingService.new.execute`.
The logs show the updates. To check the document in the index, run this command:
```shell
curl "http://localhost:9200/gitlab-development-<type>/_search"
```
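For example, a Rails console session for this test might look like the following sketch. The `Search::Elastic::References::Note` class and its constructor arguments are assumptions; use whatever your reference's `initialize` expects:

```ruby
note = Note.last # any record of the new document type

ref = ::Search::Elastic::References::Note.new(note.id, "project_#{note.project_id}")
::Elastic::ProcessBookkeepingService.track!(ref)
::Elastic::ProcessBookkeepingService.new.execute
```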
##### Common gotchas
- Index operations actually perform an upsert. If the document exists, Elasticsearch performs a partial update by merging the fields sent
with the existing document fields. To explicitly remove fields or set them to empty, `as_indexed_json`
must send `nil` or an empty array for those fields.
#### Data consistency
Now that there is an index and a way to bulk index the new document type into Elasticsearch, data must be added to the index. This consists of a backfill and continuous updates to keep the index data up to date.
The backfill is done by calling `Elastic::ProcessInitialBookkeepingService.track!()` with an instance of `Search::Elastic::Reference` for every document that should be indexed.
The continuous update is done by calling `Elastic::ProcessBookkeepingService.track!()` with an instance of `Search::Elastic::Reference` for every document that should be created/updated/deleted.
##### Backfilling data
Add a new [Advanced Search migration](search/advanced_search_migration_styleguide.md) to backfill data by executing `scripts/elastic-migration` and following the instructions.
Use the [`MigrationDatabaseBackfillHelper`](search/advanced_search_migration_styleguide.md#searchelasticmigrationdatabasebackfillhelper). The [`BackfillWorkItems` migration](https://gitlab.com/gitlab-org/search-team/migration-graveyard/-/blob/09354f497698037fc21f5a65e5c2d0a70edd81eb/lib/migrate/20240816132114_backfill_work_items.rb) can be used as an example.
To test the backfill, run `Elastic::MigrationWorker.new.perform` in a console a couple of times and see that the index was populated.
Tail the logs to see the progress of the migration:
```shell
tail -f log/elasticsearch.log
```
##### Continuous updates
For `ActiveRecord` objects, the `ApplicationVersionedSearch` concern can be included on the model to index data based on callbacks. If that's not suitable, call `Elastic::ProcessBookkeepingService.track!()` with an instance of `Search::Elastic::Reference` whenever a document should be indexed.
Always check for `Gitlab::CurrentSettings.elasticsearch_indexing?` and `use_elasticsearch?` because some GitLab Self-Managed instances do not have Elasticsearch enabled and [namespace limiting](../integration/advanced_search/elasticsearch.md#limit-the-amount-of-namespace-and-project-data-to-index) can be enabled.
Also check that the index is able to handle the index request. For example, check that the index exists if it was added in the current major release by verifying that the migration to add the index was completed: `Elastic::DataMigrationService.migration_has_finished?`.
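A sketch of such a guard is shown below. The reference class and the migration name are assumptions for illustration:

```ruby
def track_note_for_indexing(note)
  return unless Gitlab::CurrentSettings.elasticsearch_indexing?
  return unless note.project.use_elasticsearch?
  # Skip if the index was added in the current major release and its migration has not finished yet.
  return unless ::Elastic::DataMigrationService.migration_has_finished?(:create_notes_index)

  ref = ::Search::Elastic::References::Note.new(note.id, "project_#{note.project_id}")
  ::Elastic::ProcessBookkeepingService.track!(ref)
end
```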
##### Transfers and deletes
Project and group transfers and deletes must make updates to the index to avoid orphaned data. Orphaned data may occur
when [custom routing](#custom-routing) changes due to a transfer. Data in the old shard must be cleaned up. Elasticsearch
updates for transfers are handled in the [`Projects::TransferService`](https://gitlab.com/gitlab-org/gitlab/-/blob/4d2a86ed035d3c2a960f5b89f2424bee990dc8ab/ee/app/services/ee/projects/transfer_service.rb)
and [`Groups::TransferService`](https://gitlab.com/gitlab-org/gitlab/-/blob/4d2a86ed035d3c2a960f5b89f2424bee990dc8ab/ee/app/services/ee/groups/transfer_service.rb).
Indexes that contain a `project_id` field must use the [`Search::Elastic::DeleteWorker`](https://gitlab.com/gitlab-org/gitlab/-/blob/0105b56d6ad86e04ef46492dcf5537553505b678/ee/app/workers/search/elastic/delete_worker.rb).
Indexes that contain a `namespace_id` field and no `project_id` field must use [`Search::ElasticGroupAssociationDeletionWorker`](https://gitlab.com/gitlab-org/gitlab/-/blob/0105b56d6ad86e04ef46492dcf5537553505b678/ee/app/workers/search/elastic_group_association_deletion_worker.rb).
1. Add the indexed class to `excluded_classes` in [`ElasticDeleteProjectWorker`](https://gitlab.com/gitlab-org/gitlab/-/blob/0105b56d6ad86e04ef46492dcf5537553505b678/ee/app/workers/elastic_delete_project_worker.rb)
1. Create a new service in the `::Search::Elastic::Delete` namespace to delete documents from the index
1. Update the worker to use the new service
### Implementing search for a new document type
Search data is available in [`SearchController`](https://gitlab.com/gitlab-org/gitlab/-/blob/0105b56d6ad86e04ef46492dcf5537553505b678/app/controllers/search_controller.rb) and
[Search API](https://gitlab.com/gitlab-org/gitlab/-/blob/0105b56d6ad86e04ef46492dcf5537553505b678/lib/api/search.rb). Both use the [`SearchService`](https://gitlab.com/gitlab-org/gitlab/-/blob/0105b56d6ad86e04ef46492dcf5537553505b678/app/services/search_service.rb) to return results.
The `SearchService` can be used to return results outside the `SearchController` and `Search API`.
#### Recommended process for implementing search for a new document type
Create the following MRs and have them reviewed by a member of the Global Search team:
1. [Enable the new scope](#search-scopes).
1. Create a [query builder](#creating-a-query).
1. Implement all [model requirements](#model-requirements).
1. [Add the new scope to `Gitlab::Elastic::SearchResults`](#results-classes) behind a feature flag.
1. Add support for the scope in [`Search::API`](https://gitlab.com/gitlab-org/gitlab/-/blob/bc063cd323323a7b27b7c9c9ddfc19591f49100c/lib/api/search.rb) (if applicable)
1. Add specs which must include [permissions tests](#permissions-tests)
1. [Test the new scope](#testing-scopes)
1. Update documentation for [Advanced search](../user/search/advanced_search.md), [Search API](../api/search.md), and
[Roles and permissions](../user/permissions.md) (if applicable)
#### Search scopes
The `SearchService` exposes searching at [global](https://gitlab.com/gitlab-org/gitlab/-/blob/0105b56d6ad86e04ef46492dcf5537553505b678/app/services/search/global_service.rb),
[group](https://gitlab.com/gitlab-org/gitlab/-/blob/0105b56d6ad86e04ef46492dcf5537553505b678/app/services/search/group_service.rb), and [project](https://gitlab.com/gitlab-org/gitlab/-/blob/0105b56d6ad86e04ef46492dcf5537553505b678/app/services/search/project_service.rb) levels.
New scopes must be added to the following constants:
- `ALLOWED_SCOPES` (or override `allowed_scopes` method) in each EE `SearchService` file
- `ALLOWED_SCOPES` in `Gitlab::Search::AbuseDetection`
- `search_tab_ability_map` method in `Search::Navigation`. Override in the EE version if needed
{{< alert type="note" >}}
Global search can be disabled for a scope. To support disabling global search for a new scope, make the following changes:
{{< /alert >}}
1. Add an application setting named `global_search_SCOPE_enabled` that defaults to `true` under the `search` jsonb accessor in [`app/models/application_setting.rb`](https://gitlab.com/gitlab-org/gitlab/-/blob/d52af9fafd5016ea25a665a9d5cb797b37a39b10/app/models/application_setting.rb#L738).
1. Add an entry in JSON schema validator file [`application_setting_search.json`](https://gitlab.com/gitlab-org/gitlab/-/blob/d52af9fafd5016ea25a665a9d5cb797b37a39b10/app/validators/json_schemas/application_setting_search.json)
1. Add the setting checkbox in the Admin UI by creating an entry in the `global_search_settings_checkboxes` method in [`ApplicationSettingsHelper`](https://gitlab.com/gitlab-org/gitlab/-/blob/0105b56d6ad86e04ef46492dcf5537553505b678/app/helpers/application_settings_helper.rb#L75).
1. Add it to the `global_search_enabled_for_scope?` method in [`SearchService`](https://gitlab.com/gitlab-org/gitlab/-/blob/0105b56d6ad86e04ef46492dcf5537553505b678/app/services/search_service.rb#L106).
1. Remember that EE-only settings should be added in the EE versions of the files
#### Results classes
The available search results classes are:
| Search type | Search level | Class |
|-------------------|--------------|-----------------------------------------|
| Basic search | global | `Gitlab::SearchResults` |
| Basic search | group | `Gitlab::GroupSearchResults` |
| Basic search | project | `Gitlab::ProjectSearchResults` |
| Advanced search | global | `Gitlab::Elastic::SearchResults` |
| Advanced search | group | `Gitlab::Elastic::GroupSearchResults` |
| Advanced search | project | `Gitlab::Elastic::ProjectSearchResults` |
| Exact code search | global | `Search::Zoekt::SearchResults` |
| Exact code search | group | `Search::Zoekt::SearchResults` |
| Exact code search | project | `Search::Zoekt::SearchResults` |
| All search types | All levels | `Search::EmptySearchResults` |
The result class returns the following data:
1. `objects` - paginated results from Elasticsearch, transformed into database records or POROs
1. `formatted_count` - document count returned from Elasticsearch
1. `highlight_map` - map of highlighted fields from Elasticsearch
1. `failed?` - if a failure occurred
1. `error` - error message returned from Elasticsearch
1. `aggregations` - (optional) aggregations from Elasticsearch
New scopes must add support to these methods within `Gitlab::Elastic::SearchResults` class:
- `objects`
- `formatted_count`
- `highlight_map`
- `failed?`
- `error`
### Updating an existing scope
Updates may include adding and removing document fields or changes to authorization. To update an existing
scope, find the code used to generate queries and JSON for indexing.
- Queries are generated in `QueryBuilder` classes
- Indexed documents are built in `Reference` classes
We also support a legacy `Proxy` framework:
- Queries are generated in `ClassProxy` classes
- Indexed documents are built in `InstanceProxy` classes
Always aim to create new search filters in the `QueryBuilder` framework, even if they are used in the legacy framework.
#### Adding a field
##### Add the field to the index
1. Add the field to the index mapping so it is added to newly created indices, and create a migration in the same MR to add the field to existing indices to avoid mapping schema drift. Use the [`MigrationUpdateMappingsHelper`](search/advanced_search_migration_styleguide.md#searchelasticmigrationupdatemappingshelper).
1. Populate the new field in the document JSON. The code must check that the migration is complete using
`::Elastic::DataMigrationService.migration_has_finished?` (see the sketch after this list).
1. Bump the `SCHEMA_VERSION` for the document JSON. The format is year and week number: `YYYYWW`.
1. Create a migration to backfill the field in the index. Use [`MigrationBackfillHelper`](search/advanced_search_migration_styleguide.md#searchelasticmigrationbackfillhelper) if the field is not nullable, or [`MigrationReindexBasedOnSchemaVersion`](search/advanced_search_migration_styleguide.md#searchelasticmigrationreindexbasedonschemaversion) if it is nullable.
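The following sketch shows the check from step 2. The `label_ids` field, the migration name, and the `target` record are illustrative assumptions:

```ruby
def as_indexed_json
  json = { id: target.id, title: target.title, schema_version: SCHEMA_VERSION }

  # Only emit the new field once the mapping migration has completed.
  if ::Elastic::DataMigrationService.migration_has_finished?(:add_label_ids_to_notes)
    json[:label_ids] = target.label_ids
  end

  json
end
```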
##### If the new field is an associated record
1. Update specs for [`Elastic::ProcessBookkeepingService`](https://gitlab.com/gitlab-org/gitlab/blob/8ce9add3bc412a32e655322bfcd9dcc996670f82/ee/spec/services/elastic/process_bookkeeping_service_spec.rb)
to create associated records
1. Update N+1 specs for `preload_search_data` to create associated data records
1. Review [Updating dependent associations in the index](advanced_search/tips.md#dependent-association-index-updates)
##### Expose the field to the search service
1. Add the filter to the [`Search::Filter` concern](https://gitlab.com/gitlab-org/gitlab/-/blob/21bc3a986d27194c2387f4856ec1c5d5ef6fb4ff/app/services/concerns/search/filter.rb).
The concern is used in the `Search::GlobalService`, `Search::GroupService` and `Search::ProjectService`.
1. Pass the field for the scope by updating the `scope_options` method. The method is defined in
`Gitlab::Elastic::SearchResults` with overrides in `Gitlab::Elastic::GroupSearchResults` and
`Gitlab::Elastic::ProjectSearchResults`.
1. Use the field in the [query builder](#creating-a-query) by adding [an existing filter](#available-filters)
or [creating a new one](#creating-a-filter).
1. Track the filter usage in searches in the [`SearchController`](https://gitlab.com/gitlab-org/gitlab/-/blob/21bc3a986d27194c2387f4856ec1c5d5ef6fb4ff/app/controllers/search_controller.rb#L277)
#### Changing mapping of an existing field
1. Update the field type in the index mapping to change it for newly created indices
1. Bump the `SCHEMA_VERSION` for the document JSON. The format is year and week number: `YYYYWW`
1. Create a migration to reindex all documents
using [Zero downtime reindexing](search/advanced_search_migration_styleguide.md#zero-downtime-reindex-migration).
Use the [`Search::Elastic::MigrationReindexTaskHelper`](search/advanced_search_migration_styleguide.md#searchelasticmigrationreindextaskhelper)
#### Changing field content
1. Update the field content in the document JSON
1. Bump the `SCHEMA_VERSION` for the document JSON. The format is year and week number: `YYYYWW`
1. Create a migration to update documents. Use the [`MigrationReindexBasedOnSchemaVersion`](search/advanced_search_migration_styleguide.md#searchelasticmigrationreindexbasedonschemaversion)
#### Cleaning up documents from an index
This may be used if documents are split from one index into separate indices or to remove data left in the index due to
bugs.
1. Bump the `SCHEMA_VERSION` for the document JSON. The format is year and week number: `YYYYWW`
1. Create a migration to index all records. Use the [`MigrationDatabaseBackfillHelper`](search/advanced_search_migration_styleguide.md#searchelasticmigrationdatabasebackfillhelper)
1. Create a migration to remove all documents with the previous `SCHEMA_VERSION`. Use the [`MigrationDeleteBasedOnSchemaVersion`](search/advanced_search_migration_styleguide.md#searchelasticmigrationdeletebasedonschemaversion)
#### Removing a field
The removal must be split across multiple milestones to
support [multi-version compatibility](search/advanced_search_migration_styleguide.md#multi-version-compatibility).
To avoid dynamic mapping errors, the field must be removed from all documents before
a [Zero downtime reindexing](search/advanced_search_migration_styleguide.md#zero-downtime-reindex-migration).
Milestone `M`:
1. Remove the field from the index mapping to remove it from newly created indices
1. Stop populating the field in the document JSON
1. Bump the `SCHEMA_VERSION` for the document JSON. The format is year and week number: `YYYYWW`
1. Remove any [filters which use the field](#available-filters) from the [query builder](#creating-a-query)
1. Update the `scope_options` method to remove the field for the scope you are updating. The method is defined in
`Gitlab::Elastic::SearchResults` with overrides in `Gitlab::Elastic::GroupSearchResults` and
`Gitlab::Elastic::ProjectSearchResults`.
If the field is not used by other scopes:
1. Remove the field from the [`Search::Filter` concern](https://gitlab.com/gitlab-org/gitlab/-/blob/21bc3a986d27194c2387f4856ec1c5d5ef6fb4ff/app/services/concerns/search/filter.rb).
The concern is used in the `Search::GlobalService`, `Search::GroupService`, and `Search::ProjectService`.
1. Remove filter tracking in searches in the [`SearchController`](https://gitlab.com/gitlab-org/gitlab/-/blob/21bc3a986d27194c2387f4856ec1c5d5ef6fb4ff/app/controllers/search_controller.rb#L277)
Milestone `M+1`:
1. Create a migration to remove the field from all documents in the index. Use the [`MigrationRemoveFieldsHelper`](search/advanced_search_migration_styleguide.md#searchelasticmigrationremovefieldshelper)
1. Create a migration to reindex all documents with the field removed
using [Zero downtime reindexing](search/advanced_search_migration_styleguide.md#zero-downtime-reindex-migration).
Use the [`Search::Elastic::MigrationReindexTaskHelper`](search/advanced_search_migration_styleguide.md#searchelasticmigrationreindextaskhelper)
#### Updating authorization
In the `QueryBuilder` framework, authorization is handled at the project level with the
[`by_search_level_and_membership` filter](#by_search_level_and_membership) and at the group level
with the [`by_search_level_and_group_membership` filter](#by_search_level_and_group_membership).
In the legacy `Proxy` framework, the authorization is handled inside the class.
Both frameworks use `Search::GroupsFinder` and `Search::ProjectsFinder` to query the groups and projects a user
has direct access to search. Search relies upon group and project visibility level and feature access level settings
for each scope. See [roles and permissions documentation](../user/permissions.md) for more information.
## Query builder framework
The query builder framework is used to build Elasticsearch queries. We also support a legacy query framework implemented
in the `Elastic::Latest::ApplicationClassProxy` class and classes that inherit from it.
{{< alert type="note" >}}
New document types must use the query builder framework.
{{< /alert >}}
### Creating a query
A query is built using:
- a query from `Search::Elastic::Queries`
- one or more filters from `::Search::Elastic::Filters`
- (optional) aggregations from `::Search::Elastic::Aggregations`
- one or more formats from `::Search::Elastic::Formats`
New scopes must create a new query builder class that inherits from `Search::Elastic::QueryBuilder`.
The query builder framework provides a collection of pre-built filters to handle common search scenarios. These filters
simplify the process of constructing complex query conditions without having to write raw Elasticsearch query DSL.
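For orientation, a query builder's `build` method typically composes one query with several filters. The following sketch uses the documented filter signatures; the exact `Queries` method signature and the filters a real scope needs vary:

```ruby
def build
  query_hash = ::Search::Elastic::Queries.by_full_text(query: query, options: options)
  query_hash = ::Search::Elastic::Filters.by_not_hidden(query_hash: query_hash, options: options)
  query_hash = ::Search::Elastic::Filters.by_state(query_hash: query_hash, options: options)

  ::Search::Elastic::Filters.by_archived(query_hash: query_hash, options: options)
end
```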
### Creating a filter
Filters are essential components in building effective Elasticsearch queries. They help narrow down search results
without affecting the relevance scoring.
- All filters must be documented.
- Filters are created as class-level methods in `Search::Elastic::Filters`.
- The method name should start with `by_`.
- The method must take `query_hash` and `options` parameters only.
- `query_hash` is expected to contain a hash with this format:
```json
{ "query":
{ "bool":
{
"must": [],
"must_not": [],
"should": [],
"filters": [],
"minimum_should_match": null
}
}
}
```
- Use `add_filter` to add the filter to the query hash. Filters should add to `filters` to avoid score calculation.
The score calculation is done by the query itself.
- Use `context.name(:filters)` around the filter to give the filter a name. The name helps identify which parts of a query
and filter allowed a result to be returned by the search.
```ruby
def by_new_filter_type(query_hash:, options:)
  filter_selected_value = options[:field_value]

  # Wrap the clause in a named context so matching results can be traced back
  # to this filter, and add it to the `filter` section so it does not affect scoring.
  context.name(:filters) do
    add_filter(query_hash, :query, :bool, :filter) do
      { term: { field_name: { _name: context.name(:field_name), value: filter_selected_value } } }
    end
  end
end
```
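In the scope's [query builder](#creating-a-query), the new filter then composes like any other filter. A sketch, where `query_hash` and `options` come from the builder:

```ruby
query_hash = ::Search::Elastic::Filters.by_new_filter_type(query_hash: query_hash, options: options)
```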
### Understanding queries vs filters
Queries in Elasticsearch serve two key purposes: filtering documents and calculating relevance scores. When building
search functionality:
- **Queries** are essential when relevance scoring is required to rank results by how well they match search criteria.
They use the Boolean query's `must`, `should`, and `must_not` clauses, all of which influence the document's final
relevance score.
- **Filters** (within query context) determine whether documents appear in search results without affecting their score.
For search operations where results only need to be included/excluded without ranking by relevance, using filters
alone is more efficient and performs better at scale.
Choose the appropriate approach based on your search requirements - use queries with scoring clauses for ranked results,
and rely on filters for simple inclusion/exclusion logic.
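For example, the same term clause behaves differently depending on where it is placed in the Boolean query (a sketch in Ruby hash form):

```ruby
# The clause under `must` contributes to the document's `_score`.
scored = { query: { bool: { must: [{ term: { state: 'opened' } }] } } }

# The same clause under `filter` only includes or excludes documents and is cacheable.
unscored = { query: { bool: { filter: [{ term: { state: 'opened' } }] } } }
```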
### Filter requirements and usage
To use any filter:
1. The index mapping must include all required fields specified in each filter's documentation
1. Pass the appropriate parameters via the `options` hash when calling the filter
1. Each filter generates the appropriate JSON structure and adds it to your `query_hash`
Filters can be composed together to create sophisticated search queries while maintaining readable and maintainable
code.
### Sending queries to Elasticsearch
The queries are sent to `::Gitlab::Search::Client` from `Gitlab::Elastic::SearchResults`.
Results are parsed through a `Search::Elastic::ResponseMapper` to translate
the response from Elasticsearch.
#### Model requirements
The model must respond to the `to_ability_name` method so that the redaction logic can check
`Ability.allowed?(current_user, :"read_#{object.to_ability_name}", object)`. The method must be added if
it does not exist.
The model must define a `preload_search_data` scope to avoid N+1s.
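A minimal sketch of these model requirements, assuming a hypothetical `Epic`-like model with illustrative associations:

```ruby
class Epic < ApplicationRecord
  # Preload associations used to build result JSON and check permissions,
  # so that rendering search results does not trigger N+1 queries.
  scope :preload_search_data, -> { includes(:author, :group, :labels) }

  def to_ability_name
    'epic'
  end
end
```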
### Available queries
All query builders must return a standardized `query_hash` structure that conforms to Elasticsearch's Boolean query
syntax. The `Search::Elastic::BoolExpr` class provides an interface for constructing Boolean queries.
The required query hash structure is:
```json
{
"query": {
"bool": {
"must": [],
"must_not": [],
"should": [],
"filters": [],
"minimum_should_match": null
}
}
}
```
#### `by_iid`
Query by `iid` field and document type. Requires `type` and `iid` fields.
```json
{
"query": {
"bool": {
"filter": [
{
"term": {
"iid": {
"_name": "milestone:related:iid",
"value": 1
}
}
},
{
"term": {
"type": {
"_name": "doc:is_a:milestone",
"value": "milestone"
}
}
}
]
}
}
}
```
#### `by_full_text`
Performs a full text search. This query uses `by_simple_query_string` if Advanced search syntax is used in the query string, and `by_multi_match_query` otherwise.
#### `by_multi_match_query`
Uses `multi_match` Elasticsearch API. Can be customized with the following options:
- `count_only` - uses the Boolean query clause `filter`. Scoring and highlighting are not performed.
- `query` - if no query is passed, uses `match_all` Elasticsearch API
- `keyword_match_clause` - if `:should` is passed, uses the Boolean query clause `should`. Default: `must` clause
```json
{
"query": {
"bool": {
"must": [
{
"bool": {
"must": [],
"must_not": [],
"should": [
{
"multi_match": {
"_name": "project:multi_match:and:search_terms",
"fields": [
"name^10",
"name_with_namespace^2",
"path_with_namespace",
"path^9",
"description"
],
"query": "search",
"operator": "and",
"lenient": true
}
},
{
"multi_match": {
"_name": "project:multi_match_phrase:search_terms",
"type": "phrase",
"fields": [
"name^10",
"name_with_namespace^2",
"path_with_namespace",
"path^9",
"description"
],
"query": "search",
"lenient": true
}
}
],
"filter": [],
"minimum_should_match": 1
}
}
],
"must_not": [],
"should": [],
"filter": [],
"minimum_should_match": null
}
}
}
```
#### `by_simple_query_string`
Uses `simple_query_string` Elasticsearch API. Can be customized with the following options:
- `count_only` - uses the Boolean query clause `filter`. Scoring and highlighting are not performed.
- `query` - if no query is passed, uses `match_all` Elasticsearch API
- `keyword_match_clause` - if `:should` is passed, uses the Boolean query clause `should`. Default: `must` clause
```json
{
"query": {
"bool": {
"must": [
{
"simple_query_string": {
"_name": "project:match:search_terms",
"fields": [
"name^10",
"name_with_namespace^2",
"path_with_namespace",
"path^9",
"description"
],
"query": "search",
"lenient": true,
"default_operator": "and"
}
}
],
"must_not": [],
"should": [],
"filter": [],
"minimum_should_match": null
}
}
}
```
#### `by_knn`
Requires the `vectors_supported` (set to `:elasticsearch` or `:opensearch`) and `embedding_field` options. Callers may optionally provide the `embeddings` option.
Performs a hybrid search using embeddings. Falls back to `full_text_search` if embeddings are not supported.
{{< alert type="warning" >}}
Elasticsearch and OpenSearch DSL for `knn` queries is different. To support both, this query must be used with the `by_knn` filter.
{{< /alert >}}
The example below is for Elasticsearch.
```json
{
"query": {
"bool": {
"must": [
{
"bool": {
"must": [],
"must_not": [],
"should": [
{
"multi_match": {
"_name": "work_item:multi_match:and:search_terms",
"fields": [
"iid^50",
"title^2",
"description"
],
"query": "test",
"operator": "and",
"lenient": true
}
},
{
"multi_match": {
"_name": "work_item:multi_match_phrase:search_terms",
"type": "phrase",
"fields": [
"iid^50",
"title^2",
"description"
],
"query": "test",
"lenient": true
}
}
],
"filter": [],
"minimum_should_match": 1
}
}
],
"must_not": [],
"should": [],
"filter": [],
"minimum_should_match": null
}
},
"knn": {
"field": "embedding_0",
"query_vector": [
0.030752448365092278,
-0.05360432341694832
],
"boost": 5,
"k": 25,
"num_candidates": 100,
"similarity": 0.6,
"filter": []
}
}
```
### Available filters
The following sections detail each available filter, its required fields, supported options, and example output.
#### `by_type`
Requires `type` field. Query with `doc_type` in options.
```json
{
"term": {
"type": {
"_name": "filters:doc:is_a:milestone",
"value": "milestone"
}
}
}
```
#### `by_group_level_confidentiality`
Requires `current_user` and `group_ids` fields. Query based on the user's permission to read confidential group entities.
```json
{
"bool": {
"must": [
{
"term": {
"confidential": {
"value": true,
"_name": "confidential:true"
}
}
},
{
"terms": {
"namespace_id": [
1
],
"_name": "groups:can:read_confidential_work_items"
}
}
]
},
"should": {
"term": {
"confidential": {
"value": false,
"_name": "confidential:false"
}
}
}
}
```
#### `by_project_confidentiality`
Requires `confidential`, `author_id`, `assignee_id`, `project_id` fields. Query with `confidential` in options.
```json
{
"bool": {
"should": [
{
"term": {
"confidential": {
"_name": "filters:non_confidential",
"value": false
}
}
},
{
"bool": {
"must": [
{
"term": {
"confidential": {
"_name": "filters:confidential",
"value": true
}
}
},
{
"bool": {
"should": [
{
"term": {
"author_id": {
"_name": "filters:confidential:as_author",
"value": 1
}
}
},
{
"term": {
"assignee_id": {
"_name": "filters:confidential:as_assignee",
"value": 1
}
}
},
{
"terms": {
"_name": "filters:confidential:project:membership:id",
"project_id": [
12345
]
}
}
]
}
}
]
}
}
]
}
}
```
#### `by_label_ids`
Requires `label_ids` field. Query with `label_names` in options.
```json
{
"bool": {
"must": [
{
"terms": {
"_name": "filters:label_ids",
"label_ids": [
1
]
}
}
]
}
}
```
#### `by_archived`
Requires `archived` field. Query with `search_level` and `include_archived` in options.
```json
{
"bool": {
"_name": "filters:non_archived",
"should": [
{
"bool": {
"filter": {
"term": {
"archived": {
"value": false
}
}
}
}
},
{
"bool": {
"must_not": {
"exists": {
"field": "archived"
}
}
}
}
]
}
}
```
#### `by_state`
Requires `state` field. Supports values: `all`, `opened`, `closed`, and `merged`. Query with `state` in options.
```json
{
"match": {
"state": {
"_name": "filters:state",
"query": "opened"
}
}
}
```
#### `by_not_hidden`
Requires `hidden` field. Not applied for admins.
```json
{
"term": {
"hidden": {
"_name": "filters:not_hidden",
"value": false
}
}
}
```
#### `by_work_item_type_ids`
Requires `work_item_type_id` field. Query with `work_item_type_ids` or `not_work_item_type_ids` in options.
```json
{
"bool": {
"must_not": {
"terms": {
"_name": "filters:not_work_item_type_ids",
"work_item_type_id": [
8
]
}
}
}
}
```
#### `by_author`
Requires `author_id` field. Query with `author_username` or `not_author_username` in options.
```json
{
"bool": {
"should": [
{
"term": {
"author_id": {
"_name": "filters:author",
"value": 1
}
}
}
],
"minimum_should_match": 1
}
}
```
#### `by_target_branch`
Requires `target_branch` field. Query with `target_branch` or `not_target_branch` in options.
```json
{
"bool": {
"should": [
{
"term": {
"target_branch": {
"_name": "filters:target_branch",
"value": "master"
}
}
}
],
"minimum_should_match": 1
}
}
```
#### `by_source_branch`
Requires `source_branch` field. Query with `source_branch` or `not_source_branch` in options.
```json
{
"bool": {
"should": [
{
"term": {
"source_branch": {
"_name": "filters:source_branch",
"value": "master"
}
}
}
],
"minimum_should_match": 1
}
}
```
#### `by_search_level_and_group_membership`
Requires `current_user`, `group_ids`, `traversal_ids`, and `search_level` fields. Query with `search_level` in options and
filter on `namespace_visibility_level` based on the permissions the user has for each group.
{{< alert type="note" >}}
This filter can be used in place of `by_search_level_and_membership` if the data being searched does not contain the `project_id` field.
{{< /alert >}}
{{< alert type="note" >}}
Examples are shown for an authenticated user. The JSON may be different for users with authorizations, administrators, external users, or anonymous users.
{{< /alert >}}
##### global
```json
{
"bool": {
"should": [
{
"bool": {
"filter": [
{
"term": {
"namespace_visibility_level": {
"value": 20,
"_name": "filters:namespace_visibility_level:public"
}
}
}
]
}
},
{
"bool": {
"filter": [
{
"term": {
"namespace_visibility_level": {
"value": 10,
"_name": "filters:namespace_visibility_level:internal"
}
}
}
]
}
},
{
"bool": {
"filter": [
{
"term": {
"namespace_visibility_level": {
"value": 0,
"_name": "filters:namespace_visibility_level:private"
}
}
},
{
"terms": {
"namespace_id": [
33,
22
]
}
}
]
}
}
],
"minimum_should_match": 1
}
}
```
##### group
```json
[
{
"bool": {
"_name": "filters:level:group",
"minimum_should_match": 1,
"should": [
{
"prefix": {
"traversal_ids": {
"_name": "filters:level:group:ancestry_filter:descendants",
"value": "22-"
}
}
}
]
}
},
{
"bool": {
"should": [
{
"bool": {
"filter": [
{
"term": {
"namespace_visibility_level": {
"value": 20,
"_name": "filters:namespace_visibility_level:public"
}
}
}
]
}
},
{
"bool": {
"filter": [
{
"term": {
"namespace_visibility_level": {
"value": 10,
"_name": "filters:namespace_visibility_level:internal"
}
}
}
]
}
},
{
"bool": {
"filter": [
{
"term": {
"namespace_visibility_level": {
"value": 0,
"_name": "filters:namespace_visibility_level:private"
}
}
},
{
"terms": {
"namespace_id": [
22
]
}
}
]
}
}
],
"minimum_should_match": 1
}
},
{
"bool": {
"_name": "filters:level:group",
"minimum_should_match": 1,
"should": [
{
"prefix": {
"traversal_ids": {
"_name": "filters:level:group:ancestry_filter:descendants",
"value": "22-"
}
}
}
]
}
}
]
```
#### `by_search_level_and_membership`
Requires `project_id`, `traversal_ids`, and project visibility (defaults to `visibility_level` but can be set with the `project_visibility_level_field` option) fields. Supports feature `*_access_level` fields. Query with `search_level`
and optionally `project_ids`, `group_ids`, `features`, and `current_user` in options.
Filtering is applied for:
- search level for global, group, or project
- membership for direct membership to groups and projects or shared membership through direct access to a group
- any feature access levels passed through `features`
{{< alert type="note" >}}
Examples are shown for an authenticated user. The JSON may be different for users with authorizations, administrators, external users, or anonymous users.
{{< /alert >}}
##### global
```json
{
"bool": {
"_name": "filters:permissions:global",
"should": [
{
"bool": {
"must": [
{
"terms": {
"_name": "filters:permissions:global:visibility_level:public_and_internal",
"visibility_level": [
20,
10
]
}
}
],
"should": [
{
"terms": {
"_name": "filters:permissions:global:repository_access_level:enabled",
"repository_access_level": [
20
]
}
}
],
"minimum_should_match": 1
}
},
{
"bool": {
"must": [
{
"bool": {
"should": [
{
"terms": {
"_name": "filters:permissions:global:repository_access_level:enabled_or_private",
"repository_access_level": [
20,
10
]
}
}
],
"minimum_should_match": 1
}
}
],
"should": [
{
"prefix": {
"traversal_ids": {
"_name": "filters:permissions:global:ancestry_filter:descendants",
"value": "123-"
}
}
},
{
"terms": {
"_name": "filters:permissions:global:project:member",
"project_id": [
456
]
}
}
],
"minimum_should_match": 1
}
}
],
"minimum_should_match": 1
}
}
```
##### group
```json
[
{
"bool": {
"_name": "filters:level:group",
"minimum_should_match": 1,
"should": [
{
"prefix": {
"traversal_ids": {
"_name": "filters:level:group:ancestry_filter:descendants",
"value": "123-"
}
}
}
]
}
},
{
"bool": {
"_name": "filters:permissions:group",
"should": [
{
"bool": {
"must": [
{
"terms": {
"_name": "filters:permissions:group:visibility_level:public_and_internal",
"visibility_level": [
20,
10
]
}
}
],
"should": [
{
"terms": {
"_name": "filters:permissions:group:repository_access_level:enabled",
"repository_access_level": [
20
]
}
}
],
"minimum_should_match": 1
}
},
{
"bool": {
"must": [
{
"bool": {
"should": [
{
"terms": {
"_name": "filters:permissions:group:repository_access_level:enabled_or_private",
"repository_access_level": [
20,
10
]
}
}
],
"minimum_should_match": 1
}
}
],
"should": [
{
"prefix": {
"traversal_ids": {
"_name": "filters:permissions:group:ancestry_filter:descendants",
"value": "123-"
}
}
}
],
"minimum_should_match": 1
}
}
],
"minimum_should_match": 1
}
}
]
```
##### project
```json
[
{
"bool": {
"_name": "filters:level:project",
"must": {
"terms": {
"project_id": [
456
]
}
}
}
},
{
"bool": {
"_name": "filters:permissions:project",
"should": [
{
"bool": {
"must": [
{
"terms": {
"_name": "filters:permissions:project:visibility_level:public_and_internal",
"visibility_level": [
20,
10
]
}
}
],
"should": [
{
"terms": {
"_name": "filters:permissions:project:repository_access_level:enabled",
"repository_access_level": [
20
]
}
}
],
"minimum_should_match": 1
}
},
{
"bool": {
"must": [
{
"bool": {
"should": [
{
"terms": {
"_name": "filters:permissions:project:repository_access_level:enabled_or_private",
"repository_access_level": [
20,
10
]
}
}
],
"minimum_should_match": 1
}
}
],
"should": [
{
"prefix": {
"traversal_ids": {
"_name": "filters:permissions:project:ancestry_filter:descendants",
"value": "123-"
}
}
}
],
"minimum_should_match": 1
}
}
],
"minimum_should_match": 1
}
}
]
```
#### `by_knn`
Requires the `vectors_supported` (set to `:elasticsearch` or `:opensearch`) and `embedding_field` options. Callers may optionally provide the `embeddings` option.
{{< alert type="warning" >}}
Elasticsearch and OpenSearch DSL for `knn` queries is different. To support both, this filter must be used with the
`by_knn` query.
{{< /alert >}}
#### `by_noteable_type`
Requires `noteable_type` field. Query with `noteable_type` in options. Sets `_source` to only return `noteable_id` field.
```json
{
"term": {
"noteable_type": {
"_name": "filters:related:issue",
"value": "Issue"
}
}
}
```
## Testing scopes
Test any scope in the Rails console:
```ruby
search_service = ::SearchService.new(User.first, { search: 'foo', scope: 'SCOPE_NAME' })
search_service.search_objects
```
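Group- and project-scoped searches can be tested the same way. The IDs below are placeholders:

```ruby
::SearchService.new(User.first, { search: 'foo', scope: 'SCOPE_NAME', group_id: 22 }).search_objects
::SearchService.new(User.first, { search: 'foo', scope: 'SCOPE_NAME', project_id: 456 }).search_objects
```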
### Permissions tests
Search code has a final security check in `SearchService#redact_unauthorized_results`. This prevents
unauthorized results from being returned to users who don't have permission to view them. The check is
done in Ruby to handle inconsistencies in Elasticsearch permissions data due to bugs or indexing delays.
New scopes must add visibility specs to ensure proper access control.
To test that permissions are properly enforced, add tests using the [`'search respects visibility'` shared example](https://gitlab.com/gitlab-org/gitlab/-/blob/a489ad0fe4b4d1e392272736b020cf9bd43646da/ee/spec/support/shared_examples/services/search_service_shared_examples.rb)
in the EE specs:
- `ee/spec/services/ee/search/global_service_spec.rb`
- `ee/spec/services/ee/search/group_service_spec.rb`
- `ee/spec/services/ee/search/project_service_spec.rb`
## Zero-downtime reindexing with multiple indices
{{< alert type="note" >}}
This is not applicable yet as multiple indices functionality is not fully implemented.
{{< /alert >}}
Currently, GitLab can only handle a single version of settings. Any setting or schema change requires reindexing everything from scratch. Because reindexing can take a long time, this can cause downtime for search functionality.
To avoid downtime, GitLab is working to support multiple indices that
can function at the same time. Whenever the schema changes, the administrator
will be able to create a new index and reindex to it, while searches
continue to go to the older, stable index. Any data updates will be
forwarded to both indices. Once the new index is ready, an administrator can
mark it active, which will direct all searches to it, and remove the old
index.
This is also helpful for migrating to new servers, for example, moving to/from AWS.
Currently, we are in the process of migrating to this new design. Everything is hardwired to work with a single version for now.
The final step in returning search results is
to [redact unauthorized results](https://gitlab.com/gitlab-org/gitlab/-/blob/409b55d072b0008baca42dc53bda3e3dc56f588a/app/services/search_service.rb#L147)
for the current user to catch problems with the queries or race conditions.
### Migration framework
GitLab Advanced search includes a robust migration framework that streamlines index maintenance and updates. This
system provides significant benefits:
- **Selective Reindexing**: Only updates specific document types when needed, avoiding full re-indexes
- **Automated Maintenance**: Updates proceed without requiring human intervention
- **Consistent Experience**: Provides the same migration path for both GitLab.com and GitLab Self-Managed instances
#### Framework Components
The migration system consists of:
- **Migration Runner**: A [cron worker](https://gitlab.com/gitlab-org/gitlab/-/blob/409b55d072b0008baca42dc53bda3e3dc56f588a/ee/app/workers/elastic/migration_worker.rb) that executes every 5 minutes to check for and process pending migrations.
- **Migration Files**: Similar to database migrations, these Ruby files define the migration steps with accompanying
YAML documentation
- **Migration Status Tracking**: All migration states are stored in a dedicated Elasticsearch index
- **Migration Lifecycle States**: Each migration progresses through stages: pending → in progress → complete (or halted
if issues arise)
#### Configuration Options
Migrations can be fine-tuned with various parameters:
- **Batching**: Control the document batch size for optimal performance
- **Throttling**: Adjust indexing speed to balance between migration speed and system load
- **Space Requirements**: Verify sufficient disk space before migrations begin to prevent interruptions
- **Skip condition**: Define a condition for skipping the migration
This framework makes index schema changes, field updates, and data migrations reliable and unobtrusive for all GitLab
installations.
### Search DSL
This section covers the Search DSL (Domain Specific Language) supported by GitLab, which is compatible with both
Elasticsearch and OpenSearch implementations.
#### Custom routing
[Custom routing](https://www.elastic.co/guide/en/elasticsearch/reference/current/mapping-routing-field.html#_searching_with_custom_routing)
is used in Elasticsearch for document types. The routing format is usually `project_<project_id>` for project associated data
and `group_<root_namespace_id>` for group associated data. Routing is set during indexing and searching operations and tells
Elasticsearch which shards to store the data in. Some of the benefits and tradeoffs of using custom routing are:
- Project and group scoped searches are much faster since not all shards have to be hit.
- For global and group scoped searches, routing is not used if too many shards would be hit.
- Shard size imbalance might occur.
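For illustration, routing values following the formats described above could be built like this (a sketch only; `project` and `group` are placeholder records, and `root_ancestor` is assumed here as the way to reach the root namespace):

```ruby
# Illustrative only: routing values follow the documented formats.
routing_for_project_data = "project_#{project.id}"           # project-associated documents
routing_for_group_data   = "group_#{group.root_ancestor.id}" # group-associated documents (root namespace)
```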
<!-- vale gitlab_base.Spelling = NO -->
#### Existing analyzers and tokenizers
The following analyzers and tokenizers are defined in
[`ee/lib/elastic/latest/config.rb`](https://gitlab.com/gitlab-org/gitlab/-/blob/0105b56d6ad86e04ef46492dcf5537553505b678/ee/lib/elastic/latest/config.rb).
<!-- vale gitlab_base.Spelling = YES -->
##### Analyzers
###### `path_analyzer`
Used when indexing blobs' paths. Uses the `path_tokenizer` and the `lowercase` and `asciifolding` filters.
See the `path_tokenizer` explanation below for an example.
###### `sha_analyzer`
Used in blobs and commits. Uses the `sha_tokenizer` and the `lowercase` and `asciifolding` filters.
See the `sha_tokenizer` explanation below for an example.
###### `code_analyzer`
Used when indexing a blob's filename and content. Uses the `whitespace` tokenizer
and the [`word_delimiter_graph`](https://www.elastic.co/guide/en/elasticsearch/reference/current/analysis-word-delimiter-graph-tokenfilter.html),
`lowercase`, and `asciifolding` filters.
The `whitespace` tokenizer was selected to have more control over how tokens are split. For example, the string `Foo::bar(4)` needs to generate tokens like `Foo` and `bar(4)` to be properly searched.
See the `code` filter for an explanation on how tokens are split.
##### Tokenizers
###### `sha_tokenizer`
This is a custom tokenizer that uses the
[`edgeNGram` tokenizer](https://www.elastic.co/guide/en/elasticsearch/reference/5.5/analysis-edgengram-tokenizer.html)
to allow SHAs to be searchable by any subset (minimum of 5 characters).
Example:
`240c29dc7e` becomes:
- `240c2`
- `240c29`
- `240c29d`
- `240c29dc`
- `240c29dc7`
- `240c29dc7e`
###### `path_tokenizer`
This is a custom tokenizer that uses the
[`path_hierarchy` tokenizer](https://www.elastic.co/guide/en/elasticsearch/reference/5.5/analysis-pathhierarchy-tokenizer.html)
with `reverse: true` to allow searches to find paths no matter how much or how little of the path is given as input.
Example:
`'/some/path/application.js'` becomes:
- `'/some/path/application.js'`
- `'some/path/application.js'`
- `'path/application.js'`
- `'application.js'`
#### Common gotchas
- Searches can have their own analyzers. Remember to check when editing analyzers.
- `Character` filters (as opposed to token filters) always replace the original character. These filters can hinder exact searches.
## Implementation guide
### Add a new document type to Elasticsearch
If data cannot be added to one of the [existing indices in Elasticsearch](../integration/advanced_search/elasticsearch.md#advanced-search-index-scopes), follow these instructions to set up a new index and populate it.
#### Recommended process for adding a new document type
Have any MRs reviewed by a member of the Global Search team:
1. [Set up your development environment](#setting-up-your-development-environment)
1. [Create the index](#create-the-index).
1. [Validate expected queries](#validate-expected-queries)
1. [Create a new Elasticsearch reference](#create-a-new-elastic-reference).
1. Perform [continuous updates](#continuous-updates) behind a feature flag. Enable the flag fully before the backfill.
1. [Backfill the data](#backfilling-data).
After indexing is done, the index is ready for search.
#### Create the index
All new indexes must have:
- `project_id` and `namespace_id` fields (if available). One of the fields must be used for [custom routing](#custom-routing).
- A `traversal_ids` field for efficient global and group search. Populate the field with `object.namespace.elastic_namespace_ancestry`.
- Fields for authorization:
- For project data - `visibility_level`
- For group data - `namespace_visibility_level`
- Any required access level fields. These correspond to project feature access levels such as `issues_access_level` or `repository_access_level`
- A `schema_version` integer field in a `YYWW` (year/week) format. This field is used for data migrations.
1. Create a `Search::Elastic::Types::` class in `ee/lib/search/elastic/types/`.
1. Define the following class methods (a minimal sketch of a complete class follows this list):
- `index_name`: in the format `gitlab-<env>-<type>` (for example, `gitlab-production-work_items`).
- `mappings`: a hash containing the index schema such as fields, data types, and analyzers.
- `settings`: a hash containing the index settings such as replicas and tokenizers.
The default is good enough for most cases.
1. Add a new [advanced search migration](search/advanced_search_migration_styleguide.md) to create the index
by executing `scripts/elastic-migration` and following the instructions.
The migration name must be in the format `Create<Name>Index`.
1. Use the [`Search::Elastic::MigrationCreateIndexHelper`](search/advanced_search_migration_styleguide.md#searchelasticmigrationcreateindexhelper)
helper and the `'migration creates a new index'` shared example for the specification file created.
1. Add the target class to `Gitlab::Elastic::Helper::ES_SEPARATE_CLASSES`.
1. To test the index creation, run `Elastic::MigrationWorker.new.perform` in a console and check that the index
has been created with the correct mappings and settings:
```shell
curl "http://localhost:9200/gitlab-development-<type>/_mappings" | jq .
```
```shell
curl "http://localhost:9200/gitlab-development-<type>/_settings" | jq .
```
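A minimal sketch of such a type class, assuming an illustrative `Example` document type; the field options and settings shown here are placeholders, not the canonical implementation:

```ruby
# ee/lib/search/elastic/types/example.rb (sketch)
module Search
  module Elastic
    module Types
      class Example
        def self.index_name
          "gitlab-#{Rails.env}-examples" # gitlab-<env>-<type>
        end

        def self.mappings
          {
            properties: {
              id: { type: 'long' },
              project_id: { type: 'long' },             # also used for custom routing
              traversal_ids: { type: 'keyword' },       # efficient global and group search
              visibility_level: { type: 'integer' },    # authorization
              issues_access_level: { type: 'integer' }, # feature access level, if needed
              schema_version: { type: 'integer' }       # YYWW, used for data migrations
            }
          }
        end

        def self.settings
          { index: { number_of_shards: 1, number_of_replicas: 0 } } # defaults are usually enough
        end
      end
    end
  end
end
```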
##### PostgreSQL to Elasticsearch mappings
Data types for primary and foreign keys must match the column type in the database. For example, the database column
type `integer` maps to `integer` and `bigint` maps to `long` in the mapping.
{{< alert type="warning" >}}
[Nested fields](https://www.elastic.co/guide/en/elasticsearch/reference/current/nested.html#_limits_on_nested_mappings_and_objects) introduce significant overhead. A flattened multi-value approach is recommended instead.
{{< /alert >}}
| PostgreSQL type | Elasticsearch mapping |
|-------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| bigint | long |
| smallint | short |
| integer | integer |
| boolean | boolean |
| array | keyword |
| timestamp | date |
| character varying, text | Depends on query requirements. Use [`text`](https://www.elastic.co/docs/reference/elasticsearch/mapping-reference/text) for full-text search and [`keyword`](https://www.elastic.co/docs/reference/elasticsearch/mapping-reference/keyword) for term queries, sorting, or aggregations |
##### Validate expected queries
Before creating a new index, it's crucial to validate that the planned mappings will support your expected queries.
Verifying mapping compatibility upfront helps avoid issues that would require index rebuilding later.
#### Create a new Elastic Reference
Create a `Search::Elastic::References::` class in `ee/lib/search/elastic/references/`.
The reference is used to perform bulk operations in Elasticsearch.
The file must inherit from `Search::Elastic::Reference` and define the following constant and methods:
```ruby
include Search::Elastic::Concerns::DatabaseReference # if there is a corresponding database record for every document
SCHEMA_VERSION = 24_46 # integer in YYWW format
override :serialize
def self.serialize(record)
# a string representation of the reference
end
override :instantiate
def self.instantiate(string)
# deserialize the string and call initialize
end
override :preload_indexing_data
def self.preload_indexing_data(refs)
# remove this method if `Search::Elastic::Concerns::DatabaseReference` is included
# otherwise return refs
end
def initialize
# initialize with instance variables
end
override :identifier
def identifier
# a way to identify the reference
end
override :routing
def routing
# Optional: an identifier to route the document in Elasticsearch
end
override :operation
def operation
# one of `:index`, `:upsert` or `:delete`
end
override :serialize
def serialize
# a string representation of the reference
end
override :as_indexed_json
def as_indexed_json
# a hash containing the document representation for this reference
end
override :index_name
def index_name
# index name
end
def model_klass
# set to the model class if `Search::Elastic::Concerns::DatabaseReference` is included
end
```
To add data to the index, pass an instance of the new reference class to
`Elastic::ProcessBookkeepingService.track!()`, which adds the data to a queue of
references for indexing.
A cron worker pulls queued references and bulk-indexes the items into Elasticsearch.
To test that the indexing operation works, call `Elastic::ProcessBookkeepingService.track!()`
with an instance of the reference class and run `Elastic::ProcessBookkeepingService.new.execute`.
The logs show the updates. To check the document in the index, run this command:
```shell
curl "http://localhost:9200/gitlab-development-<type>/_search"
```
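Putting the steps above together in a Rails console (a sketch; `Search::Elastic::References::Example`, its constructor argument, and `record` are placeholders for your own reference class and data):

```ruby
# Queue a reference and process the queue immediately instead of waiting for the cron worker.
ref = ::Search::Elastic::References::Example.new(record.id) # hypothetical reference class
::Elastic::ProcessBookkeepingService.track!(ref)
::Elastic::ProcessBookkeepingService.new.execute
```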
##### Common gotchas
- Index operations actually perform an upsert. If the document exists, it performs a partial update by merging fields sent
with the existing document fields. If you want to explicitly remove fields or set them to empty, the `as_indexed_json`
must send `nil` or an empty array.
#### Data consistency
Now that we have an index and a way to bulk index the new document type into Elasticsearch, we need to add data into the index. This consists of doing a backfill and doing continuous updates to ensure the index data is up to date.
The backfill is done by calling `Elastic::ProcessInitialBookkeepingService.track!()` with an instance of `Search::Elastic::Reference` for every document that should be indexed.
The continuous update is done by calling `Elastic::ProcessBookkeepingService.track!()` with an instance of `Search::Elastic::Reference` for every document that should be created/updated/deleted.
##### Backfilling data
Add a new [Advanced Search migration](search/advanced_search_migration_styleguide.md) to backfill data by executing `scripts/elastic-migration` and following the instructions.
Use the [`MigrationDatabaseBackfillHelper`](search/advanced_search_migration_styleguide.md#searchelasticmigrationdatabasebackfillhelper). The [`BackfillWorkItems` migration](https://gitlab.com/gitlab-org/search-team/migration-graveyard/-/blob/09354f497698037fc21f5a65e5c2d0a70edd81eb/lib/migrate/20240816132114_backfill_work_items.rb) can be used as an example.
To test the backfill, run `Elastic::MigrationWorker.new.perform` in a console a couple of times and see that the index was populated.
Tail the logs to see the progress of the migration:
```shell
tail -f log/elasticsearch.log
```
##### Continuous updates
For `ActiveRecord` objects, the `ApplicationVersionedSearch` concern can be included on the model to index data based on callbacks. If that's not suitable, call `Elastic::ProcessBookkeepingService.track!()` with an instance of `Search::Elastic::Reference` whenever a document should be indexed.
Always check for `Gitlab::CurrentSettings.elasticsearch_indexing?` and `use_elasticsearch?` because some GitLab Self-Managed instances do not have Elasticsearch enabled and [namespace limiting](../integration/advanced_search/elasticsearch.md#limit-the-amount-of-namespace-and-project-data-to-index) can be enabled.
Also check that the index is able to handle the index request. For example, check that the index exists if it was added in the current major release by verifying that the migration to add the index was completed: `Elastic::DataMigrationService.migration_has_finished?`.
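A sketch of such a guard before queuing an update (the `record`, the reference class, and the migration name are placeholders):

```ruby
# Skip indexing when Advanced search is disabled, the data is outside the indexed
# namespaces, or the index does not exist yet on this instance.
return unless ::Gitlab::CurrentSettings.elasticsearch_indexing?
return unless record.project.use_elasticsearch?
return unless ::Elastic::DataMigrationService.migration_has_finished?(:create_example_index) # hypothetical migration name

::Elastic::ProcessBookkeepingService.track!(::Search::Elastic::References::Example.new(record.id))
```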
##### Transfers and deletes
Project and group transfers and deletes must make updates to the index to avoid orphaned data. Orphaned data may occur
when [custom routing](#custom-routing) changes due to a transfer. Data in the old shard must be cleaned up. Elasticsearch
updates for transfers are handled in the [`Projects::TransferService`](https://gitlab.com/gitlab-org/gitlab/-/blob/4d2a86ed035d3c2a960f5b89f2424bee990dc8ab/ee/app/services/ee/projects/transfer_service.rb)
and [`Groups::TransferService`](https://gitlab.com/gitlab-org/gitlab/-/blob/4d2a86ed035d3c2a960f5b89f2424bee990dc8ab/ee/app/services/ee/groups/transfer_service.rb).
Indexes that contain a `project_id` field must use the [`Search::Elastic::DeleteWorker`](https://gitlab.com/gitlab-org/gitlab/-/blob/0105b56d6ad86e04ef46492dcf5537553505b678/ee/app/workers/search/elastic/delete_worker.rb).
Indexes that contain a `namespace_id` field and no `project_id` field must use [`Search::ElasticGroupAssociationDeletionWorker`](https://gitlab.com/gitlab-org/gitlab/-/blob/0105b56d6ad86e04ef46492dcf5537553505b678/ee/app/workers/search/elastic_group_association_deletion_worker.rb).
1. Add the indexed class to `excluded_classes` in [`ElasticDeleteProjectWorker`](https://gitlab.com/gitlab-org/gitlab/-/blob/0105b56d6ad86e04ef46492dcf5537553505b678/ee/app/workers/elastic_delete_project_worker.rb)
1. Create a new service in the `::Search::Elastic::Delete` namespace to delete documents from the index
1. Update the worker to use the new service
### Implementing search for a new document type
Search data is available in [`SearchController`](https://gitlab.com/gitlab-org/gitlab/-/blob/0105b56d6ad86e04ef46492dcf5537553505b678/app/controllers/search_controller.rb) and
[Search API](https://gitlab.com/gitlab-org/gitlab/-/blob/0105b56d6ad86e04ef46492dcf5537553505b678/lib/api/search.rb). Both use the [`SearchService`](https://gitlab.com/gitlab-org/gitlab/-/blob/0105b56d6ad86e04ef46492dcf5537553505b678/app/services/search_service.rb) to return results.
The `SearchService` can be used to return results outside the `SearchController` and `Search API`.
#### Recommended process for implementing search for a new document type
Create the following MRs and have them reviewed by a member of the Global Search team:
1. [Enable the new scope](#search-scopes).
1. Create a [query builder](#creating-a-query).
1. Implement all [model requirements](#model-requirements).
1. [Add the new scope to `Gitlab::Elastic::SearchResults`](#results-classes) behind a feature flag.
1. Add support for the scope in [`Search::API`](https://gitlab.com/gitlab-org/gitlab/-/blob/bc063cd323323a7b27b7c9c9ddfc19591f49100c/lib/api/search.rb) (if applicable)
1. Add specs which must include [permissions tests](#permissions-tests)
1. [Test the new scope](#testing-scopes)
1. Update documentation for [Advanced search](../user/search/advanced_search.md), [Search API](../api/search.md) and,
[Roles and permissions](../user/permissions.md) (if applicable)
#### Search scopes
The `SearchService` exposes searching at [global](https://gitlab.com/gitlab-org/gitlab/-/blob/0105b56d6ad86e04ef46492dcf5537553505b678/app/services/search/global_service.rb),
[group](https://gitlab.com/gitlab-org/gitlab/-/blob/0105b56d6ad86e04ef46492dcf5537553505b678/app/services/search/group_service.rb), and [project](https://gitlab.com/gitlab-org/gitlab/-/blob/0105b56d6ad86e04ef46492dcf5537553505b678/app/services/search/project_service.rb) levels.
New scopes must be added to the following constants:
- `ALLOWED_SCOPES` (or override `allowed_scopes` method) in each EE `SearchService` file
- `ALLOWED_SCOPES` in `Gitlab::Search::AbuseDetection`
- `search_tab_ability_map` method in `Search::Navigation`. Override in the EE version if needed
{{< alert type="note" >}}
Global search can be disabled for a scope. To disable global search for a scope, make the following changes:
{{< /alert >}}
1. Add an application setting named `global_search_SCOPE_enabled` that defaults to `true` under the `search` jsonb accessor in [`app/models/application_setting.rb`](https://gitlab.com/gitlab-org/gitlab/-/blob/d52af9fafd5016ea25a665a9d5cb797b37a39b10/app/models/application_setting.rb#L738). A sketch follows this list.
1. Add an entry in JSON schema validator file [`application_setting_search.json`](https://gitlab.com/gitlab-org/gitlab/-/blob/d52af9fafd5016ea25a665a9d5cb797b37a39b10/app/validators/json_schemas/application_setting_search.json)
1. Add the setting checkbox in the Admin UI by creating an entry in `global_search_settings_checkboxes` method in [`ApplicationSettingsHelper`](https://gitlab.com/gitlab-org/gitlab/-/blob/0105b56d6ad86e04ef46492dcf5537553505b678/app/helpers/application_settings_helper.rb#L75).
1. Add it to the `global_search_enabled_for_scope?` method in [`SearchService`](https://gitlab.com/gitlab-org/gitlab/-/blob/0105b56d6ad86e04ef46492dcf5537553505b678/app/services/search_service.rb#L106).
1. Remember that EE-only settings should be added in the EE versions of the files
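A sketch for step 1 above, assuming a hypothetical `epics` scope; the exact `jsonb_accessor` options are illustrative:

```ruby
# app/models/application_setting.rb (or the EE version for EE-only scopes):
# add the new key to the existing `jsonb_accessor :search` call.
jsonb_accessor :search,
  global_search_epics_enabled: [:boolean, { default: true }]
```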
#### Results classes
The available search results classes are:
| Search type | Search level | Class |
|-------------------|--------------|-----------------------------------------|
| Basic search | global | `Gitlab::SearchResults` |
| Basic search | group | `Gitlab::GroupSearchResults` |
| Basic search | project | `Gitlab::ProjectSearchResults` |
| Advanced search | global | `Gitlab::Elastic::SearchResults` |
| Advanced search | group | `Gitlab::Elastic::GroupSearchResults` |
| Advanced search | project | `Gitlab::Elastic::ProjectSearchResults` |
| Exact code search | global | `Search::Zoekt::SearchResults` |
| Exact code search | group | `Search::Zoekt::SearchResults` |
| Exact code search | project | `Search::Zoekt::SearchResults` |
| All search types | All levels | `Search::EmptySearchResults` |
The result class returns the following data:
1. `objects` - paginated from Elasticsearch transformed into database records or POROs
1. `formatted_count` - document count returned from Elasticsearch
1. `highlight_map` - map of highlighted fields from Elasticsearch
1. `failed?` - if a failure occurred
1. `error` - error message returned from Elasticsearch
1. `aggregations` - (optional) aggregations from Elasticsearch
New scopes must add support to these methods within `Gitlab::Elastic::SearchResults` class:
- `objects`
- `formatted_count`
- `highlight_map`
- `failed?`
- `error`
### Updating an existing scope
Updates may include adding and removing document fields or changes to authorization. To update an existing
scope, find the code used to generate queries and JSON for indexing.
- Queries are generated in `QueryBuilder` classes
- Indexed documents are built in `Reference` classes
We also support a legacy `Proxy` framework:
- Queries are generated in `ClassProxy` classes
- Indexed documents are built in `InstanceProxy` classes
Always aim to create new search filters in the `QueryBuilder` framework, even if they are used in the legacy framework.
#### Adding a field
##### Add the field to the index
1. Add the field to the index mapping so it is included in newly created indices, and create a migration in the same MR to add the field to existing indices to avoid mapping schema drift. Use the [`MigrationUpdateMappingsHelper`](search/advanced_search_migration_styleguide.md#searchelasticmigrationupdatemappingshelper)
1. Populate the new field in the document JSON. The code must check that the migration is complete using
`::Elastic::DataMigrationService.migration_has_finished?`
1. Bump the `SCHEMA_VERSION` for the document JSON. The format is year and week number: `YYYYWW`
1. Create a migration to backfill the field in the index. If it's a not-nullable field, use [`MigrationBackfillHelper`](search/advanced_search_migration_styleguide.md#searchelasticmigrationbackfillhelper), or [`MigrationReindexBasedOnSchemaVersion`](search/advanced_search_migration_styleguide.md#searchelasticmigrationreindexbasedonschemaversion) if it's a nullable field.
##### If the new field is an associated record
1. Update specs for [`Elastic::ProcessBookkeepingService`](https://gitlab.com/gitlab-org/gitlab/blob/8ce9add3bc412a32e655322bfcd9dcc996670f82/ee/spec/services/elastic/process_bookkeeping_service_spec.rb)
   to create associated records
1. Update N+1 specs for `preload_search_data` to create associated data records
1. Review [Updating dependent associations in the index](advanced_search/tips.md#dependent-association-index-updates)
##### Expose the field to the search service
1. Add the filter to the [`Search::Filter` concern](https://gitlab.com/gitlab-org/gitlab/-/blob/21bc3a986d27194c2387f4856ec1c5d5ef6fb4ff/app/services/concerns/search/filter.rb).
The concern is used in the `Search::GlobalService`, `Search::GroupService` and `Search::ProjectService`.
1. Pass the field for the scope by updating the `scope_options` method. The method is defined in
`Gitlab::Elastic::SearchResults` with overrides in `Gitlab::Elastic::GroupSearchResults` and
`Gitlab::Elastic::ProjectSearchResults`.
1. Use the field in the [query builder](#creating-a-query) by adding [an existing filter](#available-filters)
or [creating a new one](#creating-a-filter).
1. Track the filter usage in searches in the [`SearchController`](https://gitlab.com/gitlab-org/gitlab/-/blob/21bc3a986d27194c2387f4856ec1c5d5ef6fb4ff/app/controllers/search_controller.rb#L277)
#### Changing mapping of an existing field
1. Update the field type in the index mapping to change it for newly created indices
1. Bump the `SCHEMA_VERSION` for the document JSON. The format is year and week number: `YYYYWW`
1. Create a migration to reindex all documents
using [Zero downtime reindexing](search/advanced_search_migration_styleguide.md#zero-downtime-reindex-migration).
Use the [`Search::Elastic::MigrationReindexTaskHelper`](search/advanced_search_migration_styleguide.md#searchelasticmigrationreindextaskhelper)
#### Changing field content
1. Update the field content in the document JSON
1. Bump the `SCHEMA_VERSION` for the document JSON. The format is year and week number: `YYYYWW`
1. Create a migration to update documents. Use the [`MigrationReindexBasedOnSchemaVersion`](search/advanced_search_migration_styleguide.md#searchelasticmigrationreindexbasedonschemaversion)
#### Cleaning up documents from an index
This may be used if documents are split from one index into separate indices or to remove data left in the index due to
bugs.
1. Bump the `SCHEMA_VERSION` for the document JSON. The format is year and week number: `YYYYWW`
1. Create a migration to index all records. Use the [`MigrationDatabaseBackfillHelper`](search/advanced_search_migration_styleguide.md#searchelasticmigrationdatabasebackfillhelper)
1. Create a migration to remove all documents with the previous `SCHEMA_VERSION`. Use the [`MigrationDeleteBasedOnSchemaVersion`](search/advanced_search_migration_styleguide.md#searchelasticmigrationdeletebasedonschemaversion)
#### Removing a field
The removal must be split across multiple milestones to
support [multi-version compatibility](search/advanced_search_migration_styleguide.md#multi-version-compatibility).
To avoid dynamic mapping errors, the field must be removed from all documents before
a [Zero downtime reindexing](search/advanced_search_migration_styleguide.md#zero-downtime-reindex-migration).
Milestone `M`:
1. Remove the field from the index mapping to remove it from newly created indices
1. Stop populating the field in the document JSON
1. Bump the `SCHEMA_VERSION` for the document JSON. The format is year and week number: `YYYYWW`
1. Remove any [filters which use the field](#available-filters) from the [query builder](#creating-a-query)
1. Update the `scope_options` method to remove the field for the scope you are updating. The method is defined in
`Gitlab::Elastic::SearchResults` with overrides in `Gitlab::Elastic::GroupSearchResults` and
`Gitlab::Elastic::ProjectSearchResults`.
If the field is not used by other scopes:
1. Remove the field from the [`Search::Filter` concern](https://gitlab.com/gitlab-org/gitlab/-/blob/21bc3a986d27194c2387f4856ec1c5d5ef6fb4ff/app/services/concerns/search/filter.rb).
The concern is used in the `Search::GlobalService`, `Search::GroupService`, and `Search::ProjectService`.
1. Remove filter tracking in searches in the [`SearchController`](https://gitlab.com/gitlab-org/gitlab/-/blob/21bc3a986d27194c2387f4856ec1c5d5ef6fb4ff/app/controllers/search_controller.rb#L277)
Milestone `M+1`:
1. Create a migration to remove the field from all documents in the index. Use the [`MigrationRemoveFieldsHelper`](search/advanced_search_migration_styleguide.md#searchelasticmigrationremovefieldshelper)
1. Create a migration to reindex all documents with the field removed
using [Zero downtime reindexing](search/advanced_search_migration_styleguide.md#zero-downtime-reindex-migration).
Use the [`Search::Elastic::MigrationReindexTaskHelper`](search/advanced_search_migration_styleguide.md#searchelasticmigrationreindextaskhelper)
#### Updating authorization
In the `QueryBuilder` framework, authorization is handled at the project level with the
[`by_search_level_and_membership` filter](#by_search_level_and_membership) and at the group level
with the [`by_search_level_and_group_membership` filter](#by_search_level_and_group_membership).
In the legacy `Proxy` framework, the authorization is handled inside the class.
Both frameworks use `Search::GroupsFinder` and `Search::ProjectsFinder` to query the groups and projects a user
has direct access to search. Search relies upon group and project visibility level and feature access level settings
for each scope. See [roles and permissions documentation](../user/permissions.md) for more information.
## Query builder framework
The query builder framework is used to build Elasticsearch queries. We also support a legacy query framework implemented
in the `Elastic::Latest::ApplicationClassProxy` class and classes that inherit from it.
{{< alert type="note" >}}
New document types must use the query builder framework.
{{< /alert >}}
### Creating a query
A query is built using:
- a query from `Search::Elastic::Queries`
- one or more filters from `::Search::Elastic::Filters`
- (optional) aggregations from `::Search::Elastic::Aggregations`
- one or more formats from `::Search::Elastic::Formats`
New scopes must create a new query builder class that inherits from `Search::Elastic::QueryBuilder`.
The query builder framework provides a collection of pre-built filters to handle common search scenarios. These filters
simplify the process of constructing complex query conditions without having to write raw Elasticsearch query DSL.
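A sketch of a query builder for a hypothetical scope follows. The `build` hook and the query call signature are assumptions based on the documented query and filter names; the filter calls follow the documented `query_hash:`/`options:` convention.

```ruby
# Hedged sketch only, not the canonical implementation.
module Search
  module Elastic
    class ExampleQueryBuilder < ::Search::Elastic::QueryBuilder
      def build
        # Start from a query, then compose filters onto the resulting query hash.
        query_hash = ::Search::Elastic::Queries.by_simple_query_string(
          fields: %w[title description], query: query, options: options
        )

        query_hash = ::Search::Elastic::Filters.by_type(query_hash: query_hash, options: options)
        ::Search::Elastic::Filters.by_archived(query_hash: query_hash, options: options)
      end
    end
  end
end
```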
### Creating a filter
Filters are essential components in building effective Elasticsearch queries. They help narrow down search results
without affecting the relevance scoring.
- All filters must be documented.
- Filters are created as class level methods in `Search::Elastic::Filters`
- The method should start with `by_`.
- The method must take `query_hash` and `options` parameters only.
- `query_hash` is expected to contain a hash with this format.
```json
{ "query":
{ "bool":
{
"must": [],
"must_not": [],
"should": [],
"filters": [],
"minimum_should_match": null
}
}
}
```
- Use `add_filter` to add the filter to the query hash. Filters should add to the `filters` clause to avoid calculating a score.
  The score calculation is done by the query itself.
- Use `context.name(:filters)` around the filter to add a name to the filter. This helps identify which parts of a query
  and filter allowed a result to be returned by the search.
```ruby
def by_new_filter_type(query_hash:, options:)
filter_selected_value = options[:field_value]
context.name(:filters) do
add_filter(query_hash, :query, :bool, :filter) do
{ term: { field_name: { _name: context.name(:field_name), value: filter_selected_value } } }
end
end
end
```
### Understanding Queries vs Filters
Queries in Elasticsearch serve two key purposes: filtering documents and calculating relevance scores. When building
search functionality:
- **Queries** are essential when relevance scoring is required to rank results by how well they match search criteria.
They use the Boolean query's `must`, `should`, and `must_not` clauses, all of which influence the document's final
relevance score.
- **Filters** (within query context) determine whether documents appear in search results without affecting their score.
For search operations where results only need to be included/excluded without ranking by relevance, using filters
alone is more efficient and performs better at scale.
Choose the appropriate approach based on your search requirements - use queries with scoring clauses for ranked results,
and rely on filters for simple inclusion/exclusion logic.
### Filter Requirements and Usage
To use any filter:
1. The index mapping must include all required fields specified in each filter's documentation
1. Pass the appropriate parameters via the `options` hash when calling the filter
1. Each filter will generate the appropriate JSON structure and add it to your `query_hash`
Filters can be composed together to create sophisticated search queries while maintaining readable and maintainable
code.
### Sending queries to Elasticsearch
The queries are sent to `::Gitlab::Search::Client` from `Gitlab::Elastic::SearchResults`.
Results are parsed through a `Search::Elastic::ResponseMapper` to translate
the response from Elasticsearch.
#### Model requirements
The model must respond to the `to_ability_name` method so that the redaction logic can check
`Ability.allowed?(current_user, :"read_#{object.to_ability_name}", object)`. The method must be added if
it does not exist.
The model must define a `preload_search_data` scope to avoid N+1s.
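A sketch of these model-side requirements; the model name and preloaded associations are illustrative:

```ruby
class Example < ApplicationRecord
  # Used by the redaction step to check Ability.allowed?(current_user, :read_example, object).
  def to_ability_name
    'example'
  end

  # Preload whatever associations are needed to serialize search results without N+1 queries.
  scope :preload_search_data, -> { preload(:project, :author) }
end
```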
### Available Queries
All query builders must return a standardized `query_hash` structure that conforms to Elasticsearch's Boolean query
syntax. The `Search::Elastic::BoolExpr` class provides an interface for constructing Boolean queries.
The required query hash structure is:
```json
{
"query": {
"bool": {
"must": [],
"must_not": [],
"should": [],
"filters": [],
"minimum_should_match": null
}
}
}
```
#### `by_iid`
Query by `iid` field and document type. Requires `type` and `iid` fields.
```json
{
"query": {
"bool": {
"filter": [
{
"term": {
"iid": {
"_name": "milestone:related:iid",
"value": 1
}
}
},
{
"term": {
"type": {
"_name": "doc:is_a:milestone",
"value": "milestone"
}
}
}
]
}
}
}
```
#### `by_full_text`
Performs a full text search. This query uses `by_multi_match_query`, or `by_simple_query_string` when Advanced search syntax is used in the query string.
#### `by_multi_match_query`
Uses `multi_match` Elasticsearch API. Can be customized with the following options:
- `count_only` - uses the Boolean query clause `filter`. Scoring and highlighting are not performed.
- `query` - if no query is passed, uses `match_all` Elasticsearch API
- `keyword_match_clause` - if `:should` is passed, uses the Boolean query clause `should`. Default: `must` clause
```json
{
"query": {
"bool": {
"must": [
{
"bool": {
"must": [],
"must_not": [],
"should": [
{
"multi_match": {
"_name": "project:multi_match:and:search_terms",
"fields": [
"name^10",
"name_with_namespace^2",
"path_with_namespace",
"path^9",
"description"
],
"query": "search",
"operator": "and",
"lenient": true
}
},
{
"multi_match": {
"_name": "project:multi_match_phrase:search_terms",
"type": "phrase",
"fields": [
"name^10",
"name_with_namespace^2",
"path_with_namespace",
"path^9",
"description"
],
"query": "search",
"lenient": true
}
}
],
"filter": [],
"minimum_should_match": 1
}
}
],
"must_not": [],
"should": [],
"filter": [],
"minimum_should_match": null
}
}
}
```
#### `by_simple_query_string`
Uses `simple_query_string` Elasticsearch API. Can be customized with the following options:
- `count_only` - uses the Boolean query clause `filter`. Scoring and highlighting are not performed.
- `query` - if no query is passed, uses `match_all` Elasticsearch API
- `keyword_match_clause` - if `:should` is passed, uses the Boolean query clause `should`. Default: `must` clause
```json
{
"query": {
"bool": {
"must": [
{
"simple_query_string": {
"_name": "project:match:search_terms",
"fields": [
"name^10",
"name_with_namespace^2",
"path_with_namespace",
"path^9",
"description"
],
"query": "search",
"lenient": true,
"default_operator": "and"
}
}
],
"must_not": [],
"should": [],
"filter": [],
"minimum_should_match": null
}
}
}
```
#### `by_knn`
Requires options: `vectors_supported` (set to `:elasticsearch` or `:opensearch`) and `embedding_field`. Callers may optionally provide the `embeddings` option.
Performs a hybrid search using embeddings. Uses `full_text_search` unless embeddings are supported.
{{< alert type="warning" >}}
The `knn` query DSL differs between Elasticsearch and OpenSearch. To support both, this query must be used with the `by_knn` filter.
{{< /alert >}}
The example below is for Elasticsearch.
```json
{
"query": {
"bool": {
"must": [
{
"bool": {
"must": [],
"must_not": [],
"should": [
{
"multi_match": {
"_name": "work_item:multi_match:and:search_terms",
"fields": [
"iid^50",
"title^2",
"description"
],
"query": "test",
"operator": "and",
"lenient": true
}
},
{
"multi_match": {
"_name": "work_item:multi_match_phrase:search_terms",
"type": "phrase",
"fields": [
"iid^50",
"title^2",
"description"
],
"query": "test",
"lenient": true
}
}
],
"filter": [],
"minimum_should_match": 1
}
}
],
"must_not": [],
"should": [],
"filter": [],
"minimum_should_match": null
}
},
"knn": {
"field": "embedding_0",
"query_vector": [
0.030752448365092278,
-0.05360432341694832
],
"boost": 5,
"k": 25,
"num_candidates": 100,
"similarity": 0.6,
"filter": []
}
}
```
### Available Filters
The following sections detail each available filter, its required fields, supported options, and example output.
#### `by_type`
Requires `type` field. Query with `doc_type` in options.
```json
{
"term": {
"type": {
"_name": "filters:doc:is_a:milestone",
"value": "milestone"
}
}
}
```
#### `by_group_level_confidentiality`
Requires `current_user` and `group_ids` fields. Queries based on the user's permission to read confidential group entities.
```json
{
"bool": {
"must": [
{
"term": {
"confidential": {
"value": true,
"_name": "confidential:true"
}
}
},
{
"terms": {
"namespace_id": [
1
],
"_name": "groups:can:read_confidential_work_items"
}
}
]
},
"should": {
"term": {
"confidential": {
"value": false,
"_name": "confidential:false"
}
}
}
}
```
#### `by_project_confidentiality`
Requires `confidential`, `author_id`, `assignee_id`, `project_id` fields. Query with `confidential` in options.
```json
{
"bool": {
"should": [
{
"term": {
"confidential": {
"_name": "filters:non_confidential",
"value": false
}
}
},
{
"bool": {
"must": [
{
"term": {
"confidential": {
"_name": "filters:confidential",
"value": true
}
}
},
{
"bool": {
"should": [
{
"term": {
"author_id": {
"_name": "filters:confidential:as_author",
"value": 1
}
}
},
{
"term": {
"assignee_id": {
"_name": "filters:confidential:as_assignee",
"value": 1
}
}
},
{
"terms": {
"_name": "filters:confidential:project:membership:id",
"project_id": [
12345
]
}
}
]
}
}
]
}
}
]
}
}
```
#### `by_label_ids`
Requires `label_ids` field. Query with `label_names` in options.
```json
{
"bool": {
"must": [
{
"terms": {
"_name": "filters:label_ids",
"label_ids": [
1
]
}
}
]
}
}
```
#### `by_archived`
Requires `archived` field. Query with `search_level` and `include_archived` in options.
```json
{
"bool": {
"_name": "filters:non_archived",
"should": [
{
"bool": {
"filter": {
"term": {
"archived": {
"value": false
}
}
}
}
},
{
"bool": {
"must_not": {
"exists": {
"field": "archived"
}
}
}
}
]
}
}
```
#### `by_state`
Requires `state` field. Supports values: `all`, `opened`, `closed`, and `merged`. Query with `state` in options.
```json
{
"match": {
"state": {
"_name": "filters:state",
"query": "opened"
}
}
}
```
#### `by_not_hidden`
Requires `hidden` field. Not applied for admins.
```json
{
"term": {
"hidden": {
"_name": "filters:not_hidden",
"value": false
}
}
}
```
#### `by_work_item_type_ids`
Requires `work_item_type_id` field. Query with `work_item_type_ids` or `not_work_item_type_ids` in options.
```json
{
"bool": {
"must_not": {
"terms": {
"_name": "filters:not_work_item_type_ids",
"work_item_type_id": [
8
]
}
}
}
}
```
#### `by_author`
Requires `author_id` field. Query with `author_username` or `not_author_username` in options.
```json
{
"bool": {
"should": [
{
"term": {
"author_id": {
"_name": "filters:author",
"value": 1
}
}
}
],
"minimum_should_match": 1
}
}
```
#### `by_target_branch`
Requires `target_branch` field. Query with `target_branch` or `not_target_branch` in options.
```json
{
"bool": {
"should": [
{
"term": {
"target_branch": {
"_name": "filters:target_branch",
"value": "master"
}
}
}
],
"minimum_should_match": 1
}
}
```
#### `by_source_branch`
Requires `source_branch` field. Query with `source_branch` or `not_source_branch` in options.
```json
{
"bool": {
"should": [
{
"term": {
"source_branch": {
"_name": "filters:source_branch",
"value": "master"
}
}
}
],
"minimum_should_match": 1
}
}
```
#### `by_search_level_and_group_membership`
Requires `current_user`, `group_ids`, `traversal_id`, `search_level` fields. Query with `search_level` and
filter on `namespace_visibility_level` based on the permissions the user has for each group.
{{< alert type="note" >}}
This filter can be used in place of `by_search_level_and_membership` if the data being searched does not contain the `project_id` field.
{{< /alert >}}
{{< alert type="note" >}}
Examples are shown for an authenticated user. The JSON may differ for users with authorizations, administrators, external users, or anonymous users.
{{< /alert >}}
##### global
```json
{
"bool": {
"should": [
{
"bool": {
"filter": [
{
"term": {
"namespace_visibility_level": {
"value": 20,
"_name": "filters:namespace_visibility_level:public"
}
}
}
]
}
},
{
"bool": {
"filter": [
{
"term": {
"namespace_visibility_level": {
"value": 10,
"_name": "filters:namespace_visibility_level:internal"
}
}
}
]
}
},
{
"bool": {
"filter": [
{
"term": {
"namespace_visibility_level": {
"value": 0,
"_name": "filters:namespace_visibility_level:private"
}
}
},
{
"terms": {
"namespace_id": [
33,
22
]
}
}
]
}
}
],
"minimum_should_match": 1
}
}
```
##### group
```json
[
{
"bool": {
"_name": "filters:level:group",
"minimum_should_match": 1,
"should": [
{
"prefix": {
"traversal_ids": {
"_name": "filters:level:group:ancestry_filter:descendants",
"value": "22-"
}
}
}
]
}
},
{
"bool": {
"should": [
{
"bool": {
"filter": [
{
"term": {
"namespace_visibility_level": {
"value": 20,
"_name": "filters:namespace_visibility_level:public"
}
}
}
]
}
},
{
"bool": {
"filter": [
{
"term": {
"namespace_visibility_level": {
"value": 10,
"_name": "filters:namespace_visibility_level:internal"
}
}
}
]
}
},
{
"bool": {
"filter": [
{
"term": {
"namespace_visibility_level": {
"value": 0,
"_name": "filters:namespace_visibility_level:private"
}
}
},
{
"terms": {
"namespace_id": [
22
]
}
}
]
}
}
],
"minimum_should_match": 1
}
},
{
"bool": {
"_name": "filters:level:group",
"minimum_should_match": 1,
"should": [
{
"prefix": {
"traversal_ids": {
"_name": "filters:level:group:ancestry_filter:descendants",
"value": "22-"
}
}
}
]
}
}
]
```
#### `by_search_level_and_membership`
Requires `project_id`, `traversal_id`, and project visibility (defaulting to `visibility_level` but can be set with the `project_visibility_level_field` option) fields. Supports feature `*_access_level` fields. Query with `search_level`
and optionally `project_ids`, `group_ids`, `features`, and `current_user` in options.
Filtering is applied for:
- search level for global, group, or project
- membership for direct membership to groups and projects or shared membership through direct access to a group
- any feature access levels passed through `features`
{{< alert type="note" >}}
Examples are shown for an authenticated user. The JSON may differ for users with authorizations, administrators, external users, or anonymous users.
{{< /alert >}}
##### global
```json
{
"bool": {
"_name": "filters:permissions:global",
"should": [
{
"bool": {
"must": [
{
"terms": {
"_name": "filters:permissions:global:visibility_level:public_and_internal",
"visibility_level": [
20,
10
]
}
}
],
"should": [
{
"terms": {
"_name": "filters:permissions:global:repository_access_level:enabled",
"repository_access_level": [
20
]
}
}
],
"minimum_should_match": 1
}
},
{
"bool": {
"must": [
{
"bool": {
"should": [
{
"terms": {
"_name": "filters:permissions:global:repository_access_level:enabled_or_private",
"repository_access_level": [
20,
10
]
}
}
],
"minimum_should_match": 1
}
}
],
"should": [
{
"prefix": {
"traversal_ids": {
"_name": "filters:permissions:global:ancestry_filter:descendants",
"value": "123-"
}
}
},
{
"terms": {
"_name": "filters:permissions:global:project:member",
"project_id": [
456
]
}
}
],
"minimum_should_match": 1
}
}
],
"minimum_should_match": 1
}
}
```
##### group
```json
[
{
"bool": {
"_name": "filters:level:group",
"minimum_should_match": 1,
"should": [
{
"prefix": {
"traversal_ids": {
"_name": "filters:level:group:ancestry_filter:descendants",
"value": "123-"
}
}
}
]
}
},
{
"bool": {
"_name": "filters:permissions:group",
"should": [
{
"bool": {
"must": [
{
"terms": {
"_name": "filters:permissions:group:visibility_level:public_and_internal",
"visibility_level": [
20,
10
]
}
}
],
"should": [
{
"terms": {
"_name": "filters:permissions:group:repository_access_level:enabled",
"repository_access_level": [
20
]
}
}
],
"minimum_should_match": 1
}
},
{
"bool": {
"must": [
{
"bool": {
"should": [
{
"terms": {
"_name": "filters:permissions:group:repository_access_level:enabled_or_private",
"repository_access_level": [
20,
10
]
}
}
],
"minimum_should_match": 1
}
}
],
"should": [
{
"prefix": {
"traversal_ids": {
"_name": "filters:permissions:group:ancestry_filter:descendants",
"value": "123-"
}
}
}
],
"minimum_should_match": 1
}
}
],
"minimum_should_match": 1
}
}
]
```
##### project
```json
[
{
"bool": {
"_name": "filters:level:project",
"must": {
"terms": {
"project_id": [
456
]
}
}
}
},
{
"bool": {
"_name": "filters:permissions:project",
"should": [
{
"bool": {
"must": [
{
"terms": {
"_name": "filters:permissions:project:visibility_level:public_and_internal",
"visibility_level": [
20,
10
]
}
}
],
"should": [
{
"terms": {
"_name": "filters:permissions:project:repository_access_level:enabled",
"repository_access_level": [
20
]
}
}
],
"minimum_should_match": 1
}
},
{
"bool": {
"must": [
{
"bool": {
"should": [
{
"terms": {
"_name": "filters:permissions:project:repository_access_level:enabled_or_private",
"repository_access_level": [
20,
10
]
}
}
],
"minimum_should_match": 1
}
}
],
"should": [
{
"prefix": {
"traversal_ids": {
"_name": "filters:permissions:project:ancestry_filter:descendants",
"value": "123-"
}
}
}
],
"minimum_should_match": 1
}
}
],
"minimum_should_match": 1
}
}
]
```
#### `by_knn`
Requires options: `vectors_supported` (set to `:elasticsearch` or `:opensearch`) and `embedding_field`. Callers may optionally provide the `embeddings` option.
{{< alert type="warning" >}}
The `knn` query DSL differs between Elasticsearch and OpenSearch. To support both, this filter must be used with the
`by_knn` query.
{{< /alert >}}
#### `by_noteable_type`
Requires `noteable_type` field. Query with `noteable_type` in options. Sets `_source` to return only the `noteable_id` field.
```json
{
"term": {
"noteable_type": {
"_name": "filters:related:issue",
"value": "Issue"
}
}
}
```
## Testing scopes
Test any scope in the Rails console:
```ruby
search_service = ::SearchService.new(User.first, { search: 'foo', scope: 'SCOPE_NAME' })
search_service.search_objects
```
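Group- and project-level scopes can be exercised the same way by passing the relevant ID (the records used here are illustrative):

```ruby
::SearchService.new(User.first, { search: 'foo', scope: 'SCOPE_NAME', group_id: Group.first.id }).search_objects
::SearchService.new(User.first, { search: 'foo', scope: 'SCOPE_NAME', project_id: Project.first.id }).search_objects
```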
### Permissions tests
Search code has a final security check in `SearchService#redact_unauthorized_results`. This prevents
unauthorized results from being returned to users who don't have permission to view them. The check is
done in Ruby to handle inconsistencies in Elasticsearch permissions data due to bugs or indexing delays.
New scopes must add visibility specs to ensure proper access control.
To test that permissions are properly enforced, add tests using the [`'search respects visibility'` shared example](https://gitlab.com/gitlab-org/gitlab/-/blob/a489ad0fe4b4d1e392272736b020cf9bd43646da/ee/spec/support/shared_examples/services/search_service_shared_examples.rb)
in the EE specs:
- `ee/spec/services/ee/search/global_service_spec.rb`
- `ee/spec/services/ee/search/group_service_spec.rb`
- `ee/spec/services/ee/search/project_service_spec.rb`
## Zero-downtime reindexing with multiple indices
{{< alert type="note" >}}
This is not applicable yet as multiple indices functionality is not fully implemented.
{{< /alert >}}
Currently, GitLab can only handle a single version of settings. Any setting or schema changes would require reindexing everything from scratch. Since reindexing can take a long time, this can cause search functionality downtime.
To avoid downtime, GitLab is working to support multiple indices that
can function at the same time. Whenever the schema changes, the administrator
will be able to create a new index and reindex to it, while searches
continue to go to the older, stable index. Any data updates will be
forwarded to both indices. Once the new index is ready, an administrator can
mark it active, which will direct all searches to it, and remove the old
index.
This is also helpful for migrating to new servers, for example, moving to/from AWS.
Currently, we are in the process of migrating to this new design. Everything is hardwired to work with a single version for now.
## Ensuring compatibility with mailer Sidekiq jobs
A Sidekiq job is enqueued whenever `deliver_later` is called on an `ActionMailer`.
If a mailer argument needs to be added or removed, it is important to ensure
both backward and forward compatibility. Adhere to the Sidekiq steps for
[changing the arguments for a worker](sidekiq/compatibility_across_updates.md#changing-the-arguments-for-a-worker).
The same applies to a new mailer method, or a new mailer. If you introduce either,
follow the steps for [adding new workers](sidekiq/compatibility_across_updates.md#adding-new-workers).
This includes wrapping the new method with a [feature flag](feature_flags/_index.md)
so the new mailer can be disabled if a problem arises after deployment.
In the following example from [`NotificationService`](https://gitlab.com/gitlab-org/gitlab/-/blob/33ccb22e4fc271dbaac94b003a7a1a2915a13441/app/services/notification_service.rb#L74),
adding or removing an argument in this mailer's definition may cause problems
during deployment before all Rails and Sidekiq nodes have the updated code.
```ruby
mailer.unknown_sign_in_email(user, ip, time).deliver_later
```
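For example, a new mailer method could be guarded with a feature flag like this (the flag and mailer method names are illustrative):

```ruby
if Feature.enabled?(:new_sign_in_email_flag, user) # hypothetical flag
  mailer.new_sign_in_email(user, ip, time).deliver_later # hypothetical new mailer method
else
  mailer.unknown_sign_in_email(user, ip, time).deliver_later
end
```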
## Sent emails
To view rendered emails "sent" in your development instance, visit
[`/rails/letter_opener`](http://localhost:3000/rails/letter_opener).
[S/MIME signed](../administration/smime_signing_email.md) emails
[cannot be currently previewed](https://github.com/fgrehm/letter_opener_web/issues/96) with
`letter_opener`.
## Mailer previews
Rails provides a way to preview our mailer templates in HTML and plaintext using
sample data.
The previews live in [`app/mailers/previews`](https://gitlab.com/gitlab-org/gitlab-foss/tree/master/app/mailers/previews) and can be viewed at
[`/rails/mailers`](http://localhost:3000/rails/mailers).
See the [Rails guides](https://guides.rubyonrails.org/action_mailer_basics.html#previewing-emails) for more information.
## Incoming email
1. Go to the GitLab installation directory.
1. Find the `incoming_email` section in `config/gitlab.yml`, enable the
feature and fill in the details for your specific IMAP server and email
account:
Configuration for Gmail / Google Apps, assumes mailbox `gitlab-incoming@gmail.com`:
```yaml
incoming_email:
enabled: true
# The email address including the %{key} placeholder that will be replaced to reference the
# item being replied to. This %{key} should be included in its entirety within the email
# address and not replaced by another value.
# For example: emailaddress+%{key}@gmail.com.
# The placeholder must appear in the "user" part of the address (before the `@`). It can be omitted but some features,
# including Service Desk, may not work properly.
address: "gitlab-incoming+%{key}@gmail.com"
# Email account username
# With third party providers, this is usually the full email address.
# With self-hosted email servers, this is usually the user part of the email address.
user: "gitlab-incoming@gmail.com"
# Email account password
password: "[REDACTED]"
# IMAP server host
host: "imap.gmail.com"
# IMAP server port
port: 993
# Whether the IMAP server uses SSL
ssl: true
# Whether the IMAP server uses StartTLS
start_tls: false
# The mailbox where incoming mail will end up. Usually "inbox".
mailbox: "inbox"
# The IDLE command timeout.
idle_timeout: 60
# Whether to expunge (permanently remove) messages from the mailbox when they are marked as deleted after delivery
expunge_deleted: false
```
As mentioned, the part after `+` is ignored, and this message is sent to the mailbox for `gitlab-incoming@gmail.com`.
1. Read the [MailRoom Gem updates](#mailroom-gem-updates) section for more details before you proceed to make sure you have the right version of MailRoom installed. In summary, you need to update the `gitlab-mail_room` version in the `Gemfile` to the latest `gitlab-mail_room` temporarily and run `bundle install`. **Do not commit** this change as it's a temporary workaround.
1. Run this command in the GitLab root directory to launch `mail_room`:
```shell
bundle exec mail_room -q -c config/mail_room.yml
```
1. Verify that everything is configured correctly:
```shell
bundle exec rake gitlab:incoming_email:check RAILS_ENV=development
```
1. Reply by email should now be working.
## Email namespace
GitLab supports the new format for email handler addresses. This was done to
support catch-all mailboxes.
If you need to implement a feature which requires a new email handler, follow these rules
for the format of the email key:
- Actions are always at the end, separated by `-`. For example `-issue` or `-merge-request`
- If your feature is related to a project, the key begins with the project identifiers (project path slug
and project ID), separated by `-`. For example, `gitlab-org-gitlab-foss-20`
- Additional information, such as an author's token, can be added between the project identifiers and
the action, separated by `-`. For example, `gitlab-org-gitlab-foss-20-Author_Token12345678-issue`
- You register your handlers in `lib/gitlab/email/handler.rb`
Examples of valid email keys:
- `gitlab-org-gitlab-foss-20-Author_Token12345678-issue` (create a new issue)
- `gitlab-org-gitlab-foss-20-Author_Token12345678-merge-request` (create a new merge request)
- `1234567890abcdef1234567890abcdef-unsubscribe` (unsubscribe from a conversation)
- `1234567890abcdef1234567890abcdef` (reply to a conversation)
The action `-issue-` is used in GitLab as the handler for the Service Desk feature.
### Legacy format
Although we continue to support the older legacy format, no new features should use a legacy format.
These are the only valid legacy formats for an email handler:
- `path/to/project+namespace`
- `path/to/project+namespace+action`
- `namespace`
- `namespace+action`
In GitLab, the handler for the Service Desk feature is `path/to/project`.
### MailRoom Gem updates
We use [`gitlab-mail_room`](https://gitlab.com/gitlab-org/ruby/gems/gitlab-mail_room), a
fork of [`MailRoom`](https://github.com/tpitale/mail_room/), to ensure
that we can make updates quickly to the gem if necessary. We try to upstream
changes as soon as possible and keep the two projects in sync.
To update MailRoom:
1. Update `Gemfile` in GitLab Rails (see [example merge request](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/116494)).
1. Update the Helm Chart configuration (see [example merge request](https://gitlab.com/gitlab-org/build/CNG/-/merge_requests/854)).
---
[Return to Development documentation](_index.md)
---
title: Feature development
---
Consult these topics for information on contributing to specific GitLab features.
## UX and Frontend guides
- [GitLab Design System](https://design.gitlab.com/), for building GitLab with
existing CSS styles and elements
- [Frontend guidelines](fe_guide/_index.md)
- [Emoji guide](fe_guide/emojis.md)
## Backend guides
### General
- [Software design guides](software_design.md)
- [GitLab EventStore](event_store.md) to publish/subscribe to domain events
- [GitLab utilities](utilities.md)
- [Newlines style guide](backend/ruby_style_guide.md#newlines-style-guide)
- [Logging](logging.md)
- [Dealing with email/mailers](emails.md)
- [Kubernetes integration guidelines](kubernetes.md)
- [Permissions](permissions.md)
- [Code comments](code_comments.md)
- [FIPS 140-2 and 140-3](fips_gitlab.md)
- [`Gemfile` guidelines](gemfile.md)
- [Ruby upgrade guidelines](ruby_upgrade.md)
### Things to be aware of
- [Gotchas](gotchas.md) to avoid
- [Avoid modules with instance variables](module_with_instance_variables.md), if
possible
- [Guidelines for reusing abstractions](reusing_abstractions.md)
- [Ruby 3 gotchas](ruby3_gotchas.md)
### Rails Framework related
- [Routing](routing.md)
- [Rails initializers](rails_initializers.md)
- [Mass Inserting Models](mass_insert.md)
- [Issuable-like Rails models](issuable-like-models.md)
- [Issue types vs first-class types](issue_types.md)
- [DeclarativePolicy framework](policies.md)
- [Rails update guidelines](rails_update.md)
### Debugging
- [Pry debugging](pry_debugging.md)
- [Sidekiq debugging](../administration/sidekiq/sidekiq_troubleshooting.md)
- [VS Code debugging](vs_code_debugging.md)
### Git specifics
- [How Git object deduplication works in GitLab](git_object_deduplication.md)
- [Git LFS](lfs.md)
### API
- [API style guide](api_styleguide.md) for contributing to the API
- [GraphQL API style guide](api_graphql_styleguide.md) for contributing to the
[GraphQL API](../api/graphql/_index.md)
### GitLab components and features
- [Developing against interacting components or features](interacting_components.md)
- [Manage feature flags](feature_flags/_index.md)
- [Implementing Enterprise Edition features](ee_features.md)
- [Accessing session data](session.md)
- [How to dump production data to staging](database/db_dump.md)
- [Geo development](geo.md)
- [Redis guidelines](redis.md)
- [Adding a new Redis instance](redis/new_redis_instance.md)
- [Sidekiq guidelines](sidekiq/_index.md) for working with Sidekiq workers
- [Working with Gitaly](gitaly.md)
- [Advanced search integration docs](advanced_search.md)
- [Working with merge request diffs](merge_request_concepts/diffs/_index.md)
- [Approval Rules](merge_request_concepts/approval_rules.md)
- [Repository mirroring](repository_mirroring.md)
- [Uploads development guide](uploads/_index.md)
- [Auto DevOps development guide](auto_devops.md)
- [Renaming features](renaming_features.md)
- [Code Intelligence](code_intelligence/_index.md)
- [Feature categorization](feature_categorization/_index.md)
- [Wikis development guide](wikis.md)
- [Image scaling guide](image_scaling.md)
- [Cascading Settings](cascading_settings.md)
- [Shell commands](shell_commands.md) in the GitLab codebase
- [Value Stream Analytics development guide](value_stream_analytics.md)
- [Application limits](application_limits.md)
- [AI features](ai_features/_index.md)
- [Application settings](application_settings.md)
- [Remote Development](remote_development/_index.md)
- [Markdown (GLFM) development guide](gitlab_flavored_markdown/_index.md)
- [Webhooks development guide](webhooks.md)
### Import and Export
- [Add new relations to the direct transfer importer](bulk_imports/contributing.md)
- [Principles of importer design](import/principles_of_importer_design.md)
- [Working with the GitHub importer](github_importer.md)
- [Import/Export development documentation](import_export.md)
- [Test Import Project](import_project.md)
- [Group migration](bulk_import.md)
- [Export to CSV](export_csv.md)
- [User contribution mapping](user_contribution_mapping.md)
### Integrations
- [Integrations development guide](integrations/_index.md)
- [GitLab for Jira Cloud app](integrations/jira_connect.md)
- [Security Scanners](integrations/secure.md)
- [Secure Partner Integration](integrations/secure_partner_integration.md)
- [How to run Jenkins in development environment](integrations/jenkins.md)
The following integration guides are internal. Some integrations require access to administrative accounts of third-party services and are available only for GitLab team members to contribute to:
- [Jira integration development](https://gitlab.com/gitlab-org/foundations/import-and-integrate/team/-/blob/main/integrations/jira.md)
- [GitLab for Slack app development](https://gitlab.com/gitlab-org/foundations/import-and-integrate/team/-/blob/main/integrations/slack.md)
## Performance guides
- [Performance guidelines](performance.md) for writing code, benchmarks, and
certain patterns to avoid.
- [Caching guidelines](caching.md) for using caching in Rails under a GitLab environment.
- [Merge request performance guidelines](merge_request_concepts/performance.md)
for ensuring merge requests do not negatively impact GitLab performance
- [Profiling](profiling.md) a URL or tracking down N+1 queries using Bullet.
- [Cached queries guidelines](cached_queries.md), for tracking down N+1 queries masked by query caching, memory profiling, and why we should avoid cached queries.
- [JSON guidelines](json.md) for how to handle JSON in a performant manner.
- [GraphQL API optimizations](api_graphql_styleguide.md#optimizations) for how to optimize GraphQL code.
## Data stores guides
- [Database guidelines](database/_index.md).
- [Data retention policies](data_retention_policies.md)
- [Gitaly guidelines](gitaly.md)
## Testing guides
- [Testing standards and style guidelines](testing_guide/_index.md)
- [Frontend testing standards and style guidelines](testing_guide/frontend_testing.md)
## Refactoring guides
- [Refactoring guidelines](refactoring_guide/_index.md)
## Deprecation guides
- [Deprecation guidelines](deprecation_guidelines/_index.md)
## Documentation guides
- [Writing documentation](documentation/_index.md)
- [Documentation style guide](documentation/styleguide/_index.md)
- [Markdown](../user/markdown.md)
## Internationalization (i18n) guides
- [Introduction](i18n/_index.md)
- [Externalization](i18n/externalization.md)
- [Translation](i18n/translation.md)
## Analytics Instrumentation guides
- [Service Ping guide](internal_analytics/service_ping/_index.md)
- [Internal Events guide](internal_analytics/internal_event_instrumentation/quick_start.md)
## Experiment guide
- [Introduction](experiment_guide/_index.md)
## Build guides
- [Building a package for testing purposes](build_test_package.md)
## Compliance
- [Licensing](licensing.md) for ensuring license compliance
## Domain-specific guides
- [CI/CD development documentation](cicd/_index.md)
- [Sec Section development documentation](sec/_index.md)
## Technical Reference by Group
- [Create: Source Code BE](backend/create_source_code_be/_index.md)
## Other development guides
- [Defining relations between files using projections](projections.md)
- [Compatibility with multiple versions of the application running at the same time](multi_version_compatibility.md)
- [Features inside `.gitlab/`](features_inside_dot_gitlab.md)
- [Dashboards for stage groups](stage_group_observability/_index.md)
- [Preventing transient bugs](transient/prevention-patterns.md)
- [GitLab Application SLIs](application_slis/_index.md)
- [Spam protection and CAPTCHA development guide](spam_protection_and_captcha/_index.md)
- [RuboCop development guide](rubocop_development_guide.md)
## Other GitLab Development Kit (GDK) guides
- [Using GitLab Runner with the GDK](https://gitlab.com/gitlab-org/gitlab-development-kit/blob/main/doc/howto/runner.md)
- [Using the Web IDE terminal with the GDK](https://gitlab.com/gitlab-org/gitlab-development-kit/-/blob/main/doc/howto/web_ide_terminal_gdk_setup.md)
- [Gitpod configuration internals page](gitpod_internals.md)
---
title: Gotchas
---
The purpose of this guide is to document potential "gotchas" that contributors
might encounter or should avoid during development of GitLab CE and EE.
## Do not read files from `app/assets` directory
Omnibus GitLab has [dropped the `app/assets` directory](https://gitlab.com/gitlab-org/omnibus-gitlab/-/merge_requests/2456) after asset compilation. The `ee/app/assets` and `vendor/assets` directories are dropped as well.
This means that reading files from that directory fails in Omnibus-installed GitLab instances:
```ruby
file = Rails.root.join('app/assets/images/logo.svg')
# This file does not exist, read will fail with:
# Errno::ENOENT: No such file or directory @ rb_sysopen
File.read(file)
```
## Do not assert against the absolute value of a sequence-generated attribute
Consider the following factory:
```ruby
FactoryBot.define do
factory :label do
sequence(:title) { |n| "label#{n}" }
end
end
```
Consider the following API spec:
```ruby
require 'spec_helper'
RSpec.describe API::Labels do
it 'creates a first label' do
create(:label)
get api("/projects/#{project.id}/labels", user)
expect(response).to have_gitlab_http_status(:ok)
expect(json_response.first['name']).to eq('label1')
end
it 'creates a second label' do
create(:label)
get api("/projects/#{project.id}/labels", user)
expect(response).to have_gitlab_http_status(:ok)
expect(json_response.first['name']).to eq('label1')
end
end
```
When run, this spec doesn't do what we might expect:
```shell
1) API::API reproduce sequence issue creates a second label
Failure/Error: expect(json_response.first['name']).to eq('label1')
expected: "label1"
got: "label2"
(compared using ==)
```
This is because FactoryBot sequences are not reset for each example.
Remember that sequence-generated values exist only to avoid having to
explicitly set attributes that have a uniqueness constraint when using a factory.
### Solution
If you assert against a sequence-generated attribute's value, you should set it
explicitly. Also, the value you set shouldn't match the sequence pattern.
For instance, using our `:label` factory, writing `create(:label, title: 'foo')`
is ok, but `create(:label, title: 'label1')` is not.
Following is the fixed API spec:
```ruby
require 'spec_helper'
RSpec.describe API::Labels do
it 'creates a first label' do
create(:label, title: 'foo')
get api("/projects/#{project.id}/labels", user)
expect(response).to have_gitlab_http_status(:ok)
expect(json_response.first['name']).to eq('foo')
end
it 'creates a second label' do
create(:label, title: 'bar')
get api("/projects/#{project.id}/labels", user)
expect(response).to have_gitlab_http_status(:ok)
expect(json_response.first['name']).to eq('bar')
end
end
```
## Avoid using `expect_any_instance_of` or `allow_any_instance_of` in RSpec
### Why
- Because it is not isolated, and therefore it might be broken at times.
- Because it doesn't work whenever the method we want to stub was defined in a prepended module, which is very likely the case in EE. We could see an error like this:
```plaintext
1.1) Failure/Error: expect_any_instance_of(ApplicationSetting).to receive_messages(messages)
Using `any_instance` to stub a method (elasticsearch_indexing) that has been defined on a prepended module (EE::ApplicationSetting) is not supported.
```
### Alternatives
Instead, use any of these:
- `expect_next_instance_of`
- `allow_next_instance_of`
- `expect_next_found_instance_of`
- `allow_next_found_instance_of`
For example:
```ruby
# Don't do this:
expect_any_instance_of(Project).to receive(:add_import_job)
# Don't do this:
allow_any_instance_of(Project).to receive(:add_import_job)
```
We could write:
```ruby
# Do this:
expect_next_instance_of(Project) do |project|
expect(project).to receive(:add_import_job)
end
# Do this:
allow_next_instance_of(Project) do |project|
allow(project).to receive(:add_import_job)
end
# Do this:
expect_next_found_instance_of(Project) do |project|
expect(project).to receive(:add_import_job)
end
# Do this:
allow_next_found_instance_of(Project) do |project|
allow(project).to receive(:add_import_job)
end
```
Because Active Record does not call the `.new` method on model classes to instantiate objects, you should use the `expect_next_found_instance_of` or `allow_next_found_instance_of` mock helpers to set up mocks on objects returned by Active Record query and finder methods.
It is also possible to set mocks and expectations for multiple instances of the same Active Record model by using the `expect_next_found_(number)_instances_of` and `allow_next_found_(number)_instances_of` helpers, like so:
```ruby
expect_next_found_2_instances_of(Project) do |project|
expect(project).to receive(:add_import_job)
end
allow_next_found_2_instances_of(Project) do |project|
allow(project).to receive(:add_import_job)
end
```
If we also want to initialize the instance with particular arguments, we could pass them like this:
```ruby
# Do this:
expect_next_instance_of(MergeRequests::RefreshService, project, user) do |refresh_service|
expect(refresh_service).to receive(:execute).with(oldrev, newrev, ref)
end
```
This would expect the following:
```ruby
# Above expects:
refresh_service = MergeRequests::RefreshService.new(project, user)
refresh_service.execute(oldrev, newrev, ref)
```
## Do not `rescue Exception`
See ["Why is it bad style to `rescue Exception => e` in Ruby?"](https://stackoverflow.com/questions/10048173/why-is-it-bad-style-to-rescue-exception-e-in-ruby).
This rule is [enforced automatically by RuboCop](https://gitlab.com/gitlab-org/gitlab-foss/blob/8-4-stable/.rubocop.yml#L911-914).
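To illustrate the difference, here is a minimal sketch; `perform_work` and `log_error` are hypothetical placeholders:
```ruby
# Avoid: rescuing Exception also swallows SignalException, SystemExit, and NoMemoryError,
# so interrupts (Ctrl-C) and process shutdown can be silently ignored.
begin
  perform_work
rescue Exception => e # rubocop:disable Lint/RescueException
  log_error(e)
end
# Prefer: StandardError is what a bare `rescue` catches and is almost always what you want.
begin
  perform_work
rescue StandardError => e
  log_error(e)
end
```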
## Do not use inline JavaScript in views
Using the inline `:javascript` Haml filters comes with a
performance overhead. Using inline JavaScript is not a good way to structure your code and should be avoided.
We've [removed these two filters](https://gitlab.com/gitlab-org/gitlab/-/blob/master/config/initializers/hamlit.rb)
in an initializer.
### Further reading
- Stack Overflow: [Why you should not write inline JavaScript](https://softwareengineering.stackexchange.com/questions/86589/why-should-i-avoid-inline-scripting)
## Storing assets that do not require pre-compiling
Assets that need to be served to the user are stored under the `app/assets` directory, which is later pre-compiled and placed in the `public/` directory.
However, you cannot access the content of any file from within `app/assets` from the application code, as we do not include that folder in production installations as a [space saving measure](https://gitlab.com/gitlab-org/omnibus-gitlab/-/commit/ca049f990b223f5e1e412830510a7516222810be).
```ruby
support_bot = Users::Internal.support_bot
# accessing a file from the `app/assets` folder
support_bot.avatar = Rails.root.join('app', 'assets', 'images', 'bot_avatars', 'support_bot.png').open
support_bot.save!
```
While the code above works in local environments, it errors out in production installations as the `app/assets` folder is not included.
### Solution
The alternative is the `lib/assets` folder. Use it if you need to add assets (like images) to the repository that meet the following conditions:
- The assets do not need to be directly served to the user (and hence need not be pre-compiled).
- The assets do need to be accessed via application code.
In short:
Use `app/assets` for storing any asset that needs to be precompiled and served to the end user.
Use `lib/assets` for storing any asset that does not need to be served to the end user directly, but is still required to be accessed by the application code.
MR for reference: [!37671](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/37671)
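Adapting the earlier snippet, a sketch of what the fix looks like; the exact `lib/assets` path is illustrative and not necessarily the one used in the MR:
```ruby
support_bot = Users::Internal.support_bot
# Read the image from `lib/assets`, which ships with production installations.
support_bot.avatar = Rails.root.join('lib', 'assets', 'images', 'bot_avatars', 'support_bot.png').open
support_bot.save!
```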
## Do not override `has_many through:` or `has_one through:` associations
Associations with the `:through` option should not be overridden as we could accidentally
destroy the wrong object.
This is because the `destroy()` method behaves differently when acting on
`has_many through:` and `has_one through:` associations.
```ruby
group.users.destroy(id)
```
The code example above reads as if we are destroying a `User` record, but behind the scenes, it is destroying a `Member` record. This is because the `users` association is defined on `Group` as a `has_many through:` association:
```ruby
class Group < Namespace
has_many :group_members, -> { where(requested_at: nil).where.not(members: { access_level: Gitlab::Access::MINIMAL_ACCESS }) }, dependent: :destroy, as: :source
has_many :users, through: :group_members
end
```
And Rails has the following [behavior](https://api.rubyonrails.org/classes/ActiveRecord/Associations/ClassMethods.html#method-i-has_many) on using `destroy()` on such associations:
> If the :through option is used, then the join records are destroyed instead, not the objects themselves.
This is why a `Member` record, which is the join record connecting a `User` and `Group`, is being destroyed.
Now, if we override the `users` association like so:
```ruby
class Group < Namespace
has_many :group_members, -> { where(requested_at: nil).where.not(members: { access_level: Gitlab::Access::MINIMAL_ACCESS }) }, dependent: :destroy, as: :source
has_many :users, through: :group_members
def users
super.where(admin: false)
end
end
```
The overridden method now changes the above behavior of `destroy()`, such that if we execute
```ruby
group.users.destroy(id)
```
a `User` record will be deleted, which can lead to data loss.
In short, overriding a `has_many through:` or `has_one through:` association can prove dangerous.
To prevent this from happening, we are introducing an
automated check in [!131455](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/131455).
For more information, see [issue 424536](https://gitlab.com/gitlab-org/gitlab/-/issues/424536).
---
title: GitLab Licensing and Compatibility
---
[GitLab Community Edition](https://gitlab.com/gitlab-org/gitlab-foss/) (CE) is licensed [under the terms of the MIT License](https://gitlab.com/gitlab-org/gitlab-foss/blob/master/LICENSE). [GitLab Enterprise Edition](https://gitlab.com/gitlab-org/gitlab/) (EE) is licensed under "[The GitLab Enterprise Edition (EE) license](https://gitlab.com/gitlab-org/gitlab/-/blob/master/ee/LICENSE)" wherein there are more restrictions.
## Automated Testing
To comply with the terms the libraries we use are licensed under, we have to make sure to check new gems for compatible licenses whenever they're added. To automate this process, we use the [License Finder](https://github.com/pivotal/LicenseFinder) gem by Pivotal. It runs every time a new commit is pushed and verifies that all gems and node modules in the bundle use a license that doesn't conflict with the licensing of either GitLab Community Edition or GitLab Enterprise Edition.
There are some limitations with the automated testing, however. CSS, JavaScript, or Ruby libraries which are not included by way of Bundler, npm, or Yarn (for instance those manually copied into our source tree in the `vendor` directory), must be verified manually and independently. Take care whenever one such library is used, as automated tests don't catch problematic licenses from them.
Some gems may not include their license information in their `gemspec` file, and some node modules may not include their license information in their `package.json` file. These aren't detected by License Finder, and must be verified manually.
### License Finder commands
License Finder provides a few basic commands that you need to manage license detection.
To verify that the checks are passing, and/or to see what dependencies are causing the checks to fail:
```shell
bundle exec license_finder
```
To allowlist a new license:
```shell
license_finder permitted_licenses add MIT
```
To denylist a new license:
```shell
license_finder restricted_licenses add Unlicense
```
To tell License Finder about a dependency's license if it isn't auto-detected:
```shell
license_finder licenses add my_unknown_dependency MIT
```
For all of the above, include `--why "Reason"` and `--who "My Name"` so the `decisions.yml` file can keep track of when, why, and by whom a dependency was approved.
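For example, to allowlist a license while recording that audit trail (the reason text here is illustrative):
```shell
license_finder permitted_licenses add MIT --why "Required by a new dependency" --who "My Name"
```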
More detailed information on how the gem and its commands work is available in the [License Finder README](https://github.com/pivotal/LicenseFinder).
## Getting unknown or Lead-licensed software approved
We sometimes need to use third-party software whose license is not part of the Blue Oak Council
license list, or is marked as Lead-rated in the list. In this case, the use case needs to be
legal-approved before the software can be installed. More on this can be [found in the Handbook](https://handbook.gitlab.com/handbook/legal/product/#using-open-source-software).
To get legal approval, follow these steps:
1. Create a new [legal issue](https://gitlab.com/gitlab-com/legal-and-compliance/-/issues/new?issuable_template=general-legal-template). Make sure to include as many details as possible:
- What license is the software using?
- How and where will it be used?
- Is it being vendored or forked, or will we be using the upstream project?
- Any relevant links.
1. After the usage has been legal-approved, allowlist the software in the GitLab project.
See [License Finder commands](#license-finder-commands) above.
1. Make sure the software is also recognized by Omnibus. Create a new MR against the [`omnibus-gitlab`](https://gitlab.com/gitlab-org/omnibus-gitlab)
project. Refer to [this MR](https://gitlab.com/gitlab-org/omnibus-gitlab/-/merge_requests/6870)
for an example of what the changes should look like. You'll need to edit the following files:
- `lib/gitlab/license/analyzer.rb`
- `support/dependency_decisions.yml`
## Encryption keys
If your license was created in your local development or staging environment for the Customers Portal or License App, you must set the `GITLAB_LICENSE_MODE` environment variable to `test` so the correct decryption key is used.
Those projects are set to use a test license encryption key by default.
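For example, in your shell before starting the application:
```shell
export GITLAB_LICENSE_MODE=test
```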
## Additional information
See the [Open Source](https://handbook.gitlab.com/handbook/engineering/open-source/#using-open-source-software) page for more information on licensing.
---
title: Work items and work item types
---
Work items introduce a flexible model that standardizes and extends issue tracking capabilities in GitLab.
With work items, you can define different types that can be customized with various widgets to meet
specific needs - whether you're tracking bugs, incidents, test cases, or other units of work.
This architectural documentation covers the development details and implementation strategies for
work items and work item types.
## Challenges
Issues have the potential to be a centralized hub for collaboration.
We need to accept the
fact that different issue types require different fields and different context, depending
on what job they are being used to accomplish. For example:
- A bug needs to list steps to reproduce.
- An incident needs references to stack traces and other contextual information relevant only
to that incident.
Instead of each object type diverging into a separate model, we can standardize on an underlying
common model that we can customize with the widgets (one or more attributes) it contains.
Here are some problems with the current usage of issues and why we are looking into work items:
- Using labels to show issue types is cumbersome and makes reporting views more complex.
- Issue types are one of the top two use cases of labels, so it makes sense to provide first-class support for them.
- Issues are starting to become cluttered as we add more capabilities to them, and they are not
perfect:
- There is no consistent pattern for how to surface relationships to other objects.
- There is not a coherent interaction model across different types of issues because we use
labels for this.
- The various implementations of issue types lack flexibility and extensibility.
- Epics, issues, requirements, and others all have similar but just subtle enough
differences in common interactions that the user needs to hold a complicated mental
model of how they each behave.
- Issues are not extensible enough to support all of the emerging jobs they need to facilitate.
- Codebase maintainability and feature development become a bigger challenge as we grow the Issue type
beyond its core role of issue tracking into supporting the different work item types and handling
logic and structure differences.
- New functionality is typically implemented with first-class objects that import behavior from issues via shared concerns. This leads to duplicated effort and, ultimately, small differences between common interactions, which results in an inconsistent UX.
## Work item terminology
To avoid confusion and ensure [communication is efficient](https://handbook.gitlab.com/handbook/communication/#mecefu-terms), we will use the following terms exclusively when discussing work items. This list is the [single source of truth (SSoT)](https://handbook.gitlab.com/handbook/values/#single-source-of-truth) for Work Item terminology.
| Term | Description | Example of misuse | Should be |
| --- | --- | --- | --- |
| work item type | Classes of work item; for example: issue, requirement, test case, incident, or task | _Epics will eventually become issues_ | _Epics will eventually become a **work item type**_ |
| work item | An instance of a work item type | | |
| work item view | The new frontend view that renders work items of any type | _This should be rendered in the new view_ | _This should be rendered in the work item view_ |
| legacy object | An object that has been or will be converted to a Work Item Type | _Epics will be migrated from a standalone/old/former object to a work item type_ | _Epics will be converted from a legacy object to a work item type_ |
| legacy issue view | The existing view used to render issues and incidents | _Issues continue to be rendered in the old view_ | _Issues continue to be rendered in the legacy issue view_ |
| issue | The existing issue model | | |
| issuable | Any model currently using the issuable module (issues, epics and MRs) | _Incidents are an **issuable**_ | _Incidents are a **work item type**_ |
| widget | A UI element to present or allow interaction with specific work item data | | |
Some terms have been used in the past but have since become confusing and are now discouraged.
| Term | Description | Example of misuse | Should be |
| --- | --- | --- | --- |
| issue type | A former way to refer to classes of work item | _Tasks are an **issue type**_ | _Tasks are a **work item type**_ |
## Work items development
During development, work items progress through three stages, managed by using feature flags:
1. `work_items_alpha` for internal team testing ([`gitlab-org/plan-stage`](https://gitlab.com/gitlab-org/plan-stage)).
1. `work_items_beta` for broader internal GitLab testing ([`gitlab-org`](https://gitlab.com/gitlab-org) and [`gitlab-com`](https://gitlab.com/gitlab-com)).
1. `work_items`, enabled by default for SaaS and self-managed environments.
_Other groups may be included. For the latest information, query the feature flags within [chatops](feature_flags/controls.md)._
For more information about these feature flags, see
[Work Items Architecture Blueprint](https://handbook.gitlab.com/handbook/engineering/architecture/design-documents/work_items/#feature-flags).
## Migration strategy
The WI model will be built on top of the existing `Issue` model, and we'll gradually migrate `Issue`
model code to the WI model.
One way to approach it is:
```ruby
class WorkItems::WorkItem < ApplicationRecord
self.table_name = 'issues'
# ... all the current issue.rb code
end
class Issue < WorkItems::WorkItem
  # Do not add code to this class. Add new code to WorkItems::WorkItem instead.
end
```
We already use the concept of WITs within the `issues` table through the `issue_type`
column. There are `issue`, `incident`, and `test_case` issue types. To extend this
so that in future we can allow users to define custom WITs, we will
move the `issue_type` to a separate table: `work_item_types`. The migration process of `issue_type`
to `work_item_types` will involve creating the set of WITs for all root-level groups as described in
[this epic](https://gitlab.com/groups/gitlab-org/-/epics/6536).
{{< alert type="note" >}}
At first, defining a WIT will only be possible at the root-level group, which would then be inherited by subgroups.
We will investigate the possibility of defining new WITs at subgroup levels at a later iteration.
{{< /alert >}}
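For context, at the time this plan was written, the `issue_type` column was implemented as a Rails enum on the `Issue` model (the column has since been removed in favor of `work_item_type_id`, as described later on this page). A minimal sketch of that enum, using the base type values listed below:

```ruby
class Issue < ApplicationRecord
  # Sketch only: the values mirror the base types referenced throughout this page.
  enum issue_type: {
    issue: 0,
    incident: 1,
    test_case: 2
  }
end
```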
## Introducing `work_item_types` table
For example, suppose there are three root-level groups with IDs: `11`, `12`, and `13`. Also,
assume the following base types: `issue: 0`, `incident: 1`, `test_case: 2`.
The respective `work_item_types` records:
| `namespace_id` | `base_type` | `title` |
| -------------- | ----------- | --------- |
| 11 | 0 | Issue |
| 11 | 1 | Incident |
| 11 | 2 | Test Case |
| 12 | 0 | Issue |
| 12 | 1 | Incident |
| 12 | 2 | Test Case |
| 13 | 0 | Issue |
| 13 | 1 | Incident |
| 13 | 2 | Test Case |
What we will do to achieve this:
1. Add a `work_item_type_id` column to the `issues` table.
1. Ensure we write to both `issues#issue_type` and `issues#work_item_type_id` columns for
new or updated issues.
1. Backfill the `work_item_type_id` column to point to the `work_item_types#id` corresponding to the issue's project top-level group. For example:
```ruby
issue.project.root_group.work_item_types.where(base_type: issue.issue_type).first.id
```
1. After `issues#work_item_type_id` is populated, we can switch our queries from using `issue_type` to using `work_item_type_id`, as sketched below.
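A minimal sketch of that query switch (the scopes and values are illustrative; `incident` maps to base type `1` in the table above, and the association name is assumed):

```ruby
# Before: filtering directly on the enum column.
Issue.where(issue_type: :incident)

# After: filtering through the associated work item type record.
Issue.joins(:work_item_type).where(work_item_types: { base_type: 1 })
```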
To introduce a new WIT there are two options:
- Follow the first step of the above process. We will still need to run a migration
that adds a new WIT for all root-level groups to make the WIT available to
all users. Besides a long-running migration, we'll need to
insert several million records to `work_item_types`. This might be unwanted for users
that do not want or need additional WITs in their workflow.
- Create an opt-in flow, so that the record in `work_item_types` for specific root-level group
is created only when a customer opts in. However, this implies a lower discoverability
of the newly introduced work item type.
## Work item type widgets
A widget is a single component that can exist on a work item. This component can be used on one or
many work item types and can be lightly customized at the point of implementation.
A widget contains both the frontend UI (if present) and the associated logic for presenting and
managing any data used by the widget. There can be a one-to-many connection between the data model
and widgets. It means there can be multiple widgets that use or manage the same data, and they could
be present at the same time (for example, a read-only summary widget and an editable detail widget,
or two widgets showing two different filtered views of the same model).
Widgets should be differentiated by their **purpose**. When possible, this purpose should be
abstracted to the highest reasonable level to maximize reusability. For example, the widget for
managing "tasks" was built as "child items". Rather than managing one type of child, it's abstracted
up to managing any children.
All WITs will share the same pool of predefined widgets and will be customized by
which widgets are active on a specific WIT. Every attribute (column or association)
will become a widget with self-encapsulated functionality regardless of the WIT it belongs to.
Because any WIT can have any widget, we only need to define which widget is active for a
specific WIT. So, after switching the type of a specific work item, we display a different set
of widgets.
Read more about [work item widgets](work_items_widgets.md) and how to create a new one.
## Widgets metadata
In order to customize each WIT with corresponding active widgets we will need a data
structure to map each WIT to specific widgets.
The intent is for work item types to be highly configurable, both by GitLab for
implementing various work item schemes for customers (an opinionated GitLab
workflow, or SAFe 5, etc), and eventually for customers to customize their own
workflows.
In this case, a work item scheme would be defined as a set of types with
certain characteristics (some widgets enabled, others not), such as an Epic,
Story, Bug, and Task, etc.
As we're building a new work item architecture, we want to build the ability to
define these various types in a very flexible manner. Having GitLab use
this system first (without introducing customer customization) allows us to
better build out the initial system.
A work item's `base_type` is currently used to define a static mapping of which
widgets are available for each type. Eventually, this definition should be stored
in a database table. The exact structure of the WIT widgets metadata
is [still to be defined](https://gitlab.com/gitlab-org/gitlab/-/issues/370599).
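To make the idea concrete, here is a hypothetical sketch of such a static mapping. The constant name, widget names, and groupings are illustrative only; the real definitions live in the importers and database tables discussed later on this page:

```ruby
# Hypothetical illustration of a WIT-to-widgets mapping (not actual GitLab code).
WIDGETS_FOR_BASE_TYPE = {
  issue: [:assignees, :description, :labels, :milestone, :notes],
  incident: [:assignees, :description, :labels, :notes],
  test_case: [:description, :notes]
}.freeze

WIDGETS_FOR_BASE_TYPE.fetch(:incident)
# => [:assignees, :description, :labels, :notes]
```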
`base_type` was added to help convert other types of resources (requirements
and incidents) into work items. Eventually (when these resources become regular
work items), `base_type` will be removed.
Until the architecture of WIT widgets is finalized, we are holding off on the creation of new work item
types. If a new work item type is absolutely necessary, reach out to a
member of the [Project Management Engineering Team](https://gitlab.com/gitlab-org/gitlab/-/issues/370599).
## Creating a new work item type in the database
We have completed the removal of the `issue_type` column from the `issues` table, in favor of using the new
`work_item_types` table as described in [this epic](https://gitlab.com/groups/gitlab-org/-/epics/6536).
After the introduction of the `work_item_types` table, we added more `work_item_types`, and we want to make it
easier for other teams to do so. To introduce a new `work_item_type`, you must:
1. Write a database migration to create a new record in the `work_item_types` table.
1. Update `Gitlab::DatabaseImporters::WorkItems::BaseTypeImporter`.
The following MRs demonstrate how to introduce new `work_item_types`:
- [MR example 1](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/127482)
- [MR example 2](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/127917)
### Write a database migration
First, write a database migration that creates the new record in the `work_item_types` table.
Keep the following in mind when you write your migration:
- **Important**: Exclude the new type from existing APIs.
- We probably want to exclude newly created work items of this type from showing
up in existing features (like issue lists) until we fully release a feature. For this reason,
we have to add a new type to
[this exclude list](https://gitlab.com/gitlab-org/gitlab/-/blob/a0a52dd05b5d3c6ca820b672f9c0626840d2429b/app/models/work_items/type.rb#L84),
unless it is expected that users can create new issues and work items with the new type as soon as the migration
is executed.
- Use a regular migration, not a post-deploy.
- We believe it would be beneficial to use
[regular migrations](migration_style_guide.md#choose-an-appropriate-migration-type)
to add new work item types instead of a
[post deploy migration](database/post_deployment_migrations.md).
This way, follow-up MRs that depend on the type being created can assume it exists right away,
instead of having to wait for the next release.
**Important**: Because we use a regular migration, we need to make sure it does two things:
1. Don't exceed the [time guidelines](migration_style_guide.md#how-long-a-migration-should-take) of regular migrations.
1. Make sure the migration is [backwards-compatible](multi_version_compatibility.md).
This means that deployed code should continue to work even if the MR that introduced this migration is
rolled back and the migration is not.
- Migrations should avoid failures.
- We expect data related to `work_item_types` to be in a certain state when running the migration that will create a new
type. At the moment, we write migrations that check the data and don't fail in the event we find
it in an inconsistent state. There's a discussion about how much we can rely on the state of data based on seeds and
migrations in [this issue](https://gitlab.com/gitlab-org/gitlab/-/issues/423483). We can only
have a successful pipeline if we write the migration so it doesn't fail if data exists in an inconsistent
state. We probably need to update some of the database jobs in order to change this.
- Add widget definitions for the new type.
- The migration adds the new work item type as well as the widget definitions that are required for each work item.
The widgets you choose depend on the feature the new work item supports, but there are some that probably
all new work items need, like `Description`.
- Optional. Create hierarchy restrictions.
  - In one of the example MRs we also insert records in the `work_item_hierarchy_restrictions` table. This is only
    necessary if the new work item type is going to use the `Hierarchy` widget. In this table, you must specify which
    work item types can have children, and of what type. You should also specify the hierarchy depth for work items of
    the same type. By default, a cross-hierarchy (cross-group or cross-project) relationship is disabled when creating
    new restrictions, but it can be enabled by specifying a value for `cross_hierarchy_enabled`. Because the restrictions
    are cached for the work item type, it's also required to call `clear_reactive_cache!` on the associated work item types.
- Optional. Create linked item restrictions.
  - Similarly to the `Hierarchy` widget, the `Linked items` widget also supports rules defining which work item types can be
    linked to other types. A restriction can specify whether the source type can be related to, or can block, a target type
    (a rough sketch of inserting such restrictions follows this list). Current restrictions:
| Type | Can be related to | Can block | Can be blocked by |
|------------|------------------------------------------|------------------------------------------|------------------------------------------|
| Epic | Epic, issue, task, objective, key result | Epic, issue, task, objective, key result | Epic, issue, task |
| Issue | Epic, issue, task, objective, key result | Epic, issue, task, objective, key result | Epic, issue, task |
| Task | Epic, issue, task, objective, key result | Epic, issue, task, objective, key result | Epic, issue, task |
| Objective | Epic, issue, task, objective, key result | Objective, key result | Epic, issue, task, objective, key result |
| Key result | Epic, issue, task, objective, key result | Objective, key result | Epic, issue, task, objective, key result |
- Use shared examples for migration specs.
There are different shared examples you should use for the different migration types (new work item type, new widget definition, etc) in
[`add_work_item_widget_shared_examples.rb`](https://gitlab.com/gitlab-org/gitlab/-/blob/14c0a4df57a562a7c2dd4baed98f26d208a2e6ce/spec/support/shared_examples/migrations/add_work_item_widget_shared_examples.rb).
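As referenced above, a rough sketch of how linked item restrictions could be inserted in a migration. The table name, column names, and `link_type` values are assumptions for illustration only; check the `RelatedLinksRestrictionsImporter` (linked later on this page) for the actual schema:

```ruby
# Hypothetical sketch only: table, columns, and enum values are assumptions.
class MigrationRelatedLinkRestriction < MigrationRecord
  self.table_name = 'work_item_related_link_restrictions'
end

# ticket_type_id and issue_type_id are placeholders for the IDs of the
# relevant work_item_types records.
MigrationRelatedLinkRestriction.upsert_all(
  [
    { source_type_id: ticket_type_id, target_type_id: issue_type_id, link_type: 0 }, # can relate to
    { source_type_id: ticket_type_id, target_type_id: issue_type_id, link_type: 1 }  # can block
  ],
  unique_by: %i[source_type_id target_type_id link_type]
)
```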
#### Example of adding a ticket work item
The `Ticket` work item type already exists in the database, but we'll use it as an example migration.
Note that for a new type you need to use a new name and ENUM value.
```ruby
class AddTicketWorkItemType < Gitlab::Database::Migration[2.1]
disable_ddl_transaction!
restrict_gitlab_migration gitlab_schema: :gitlab_main
ISSUE_ENUM_VALUE = 0
# Enum value comes from the model where the enum is defined in
# https://gitlab.com/gitlab-org/gitlab/-/blob/1253f12abddb69cd1418c9e13e289d828b489f36/app/models/work_items/type.rb#L30.
# A new work item type should simply pick the next integer value.
TICKET_ENUM_VALUE = 8
TICKET_NAME = 'Ticket'
# Widget definitions also have an enum defined in
# https://gitlab.com/gitlab-org/gitlab/-/blob/1253f12abddb69cd1418c9e13e289d828b489f36/app/models/work_items/widget_definition.rb#L17.
# We need to provide both the enum and name as we plan to support custom widget names in the future.
TICKET_WIDGETS = {
'Assignees' => 0,
'Description' => 1,
'Hierarchy' => 2,
'Labels' => 3,
'Milestone' => 4,
'Notes' => 5,
'Start and due date' => 6,
'Health status' => 7,
'Weight' => 8,
'Iteration' => 9,
'Notifications' => 14,
'Current user todos' => 15,
'Award emoji' => 16
}.freeze
class MigrationWorkItemType < MigrationRecord
self.table_name = 'work_item_types'
end
class MigrationWidgetDefinition < MigrationRecord
self.table_name = 'work_item_widget_definitions'
end
class MigrationHierarchyRestriction < MigrationRecord
self.table_name = 'work_item_hierarchy_restrictions'
end
def up
existing_ticket_work_item_type = MigrationWorkItemType.find_by(base_type: TICKET_ENUM_VALUE, namespace_id: nil)
return say('Ticket work item type record exists, skipping creation') if existing_ticket_work_item_type
new_ticket_work_item_type = MigrationWorkItemType.create(
name: TICKET_NAME,
namespace_id: nil,
base_type: TICKET_ENUM_VALUE,
icon_name: 'issue-type-issue'
)
return say('Ticket work item type create record failed, skipping creation') if new_ticket_work_item_type.new_record?
widgets = TICKET_WIDGETS.map do |widget_name, widget_enum_value|
{
work_item_type_id: new_ticket_work_item_type.id,
name: widget_name,
widget_type: widget_enum_value
}
end
MigrationWidgetDefinition.upsert_all(
widgets,
unique_by: :index_work_item_widget_definitions_on_default_witype_and_name
)
issue_type = MigrationWorkItemType.find_by(base_type: ISSUE_ENUM_VALUE, namespace_id: nil)
return say('Issue work item type not found, skipping hierarchy restrictions creation') unless issue_type
# This part of the migration is only necessary if the new type uses the `Hierarchy` widget.
restrictions = [
{ parent_type_id: new_ticket_work_item_type.id, child_type_id: new_ticket_work_item_type.id, maximum_depth: 1 },
{ parent_type_id: new_ticket_work_item_type.id, child_type_id: issue_type.id, maximum_depth: 1 }
]
MigrationHierarchyRestriction.upsert_all(
restrictions,
unique_by: :index_work_item_hierarchy_restrictions_on_parent_and_child
)
end
def down
# There's the remote possibility that issues could already be
# using this issue type, with a tight foreign constraint.
# Therefore we will not attempt to remove any data.
end
end
```
<!-- markdownlint-disable-next-line MD044 -->
### Update Gitlab::DatabaseImporters::WorkItems::BaseTypeImporter
The [BaseTypeImporter](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/database_importers/work_items/base_type_importer.rb)
is where we can clearly visualize the structure of the types we have and what widgets are associated with each of them.
`BaseTypeImporter` is the single source of truth for fresh GitLab installs and for our test suite. It should always
reflect what we change with migrations.
Similarly, the single sources of truth for hierarchy and linked item restrictions are defined in [HierarchyRestrictionsImporter](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/database_importers/work_items/hierarchy_restrictions_importer.rb) and [RelatedLinksRestrictionsImporter](https://gitlab.com/gitlab-org/gitlab/-/tree/master/lib/gitlab/database_importers/work_items/related_links_restrictions_importer.rb), respectively.
**Important**: These importers should be updated whenever the corresponding database tables are modified.
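As a rough sketch of what such an update can look like (the constant and widget names below are illustrative; check `BaseTypeImporter` for the actual structure), adding a type typically means extending its widget mapping:

```ruby
# Illustrative only: consult BaseTypeImporter for the real constant names and widget lists.
WIDGETS_FOR_TYPE = {
  ticket: [
    :assignees,
    :description,
    :hierarchy,
    :labels,
    :milestone,
    :notes
  ]
}.freeze
```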
## Custom work item types
With the WIT widget metadata and the workflow around mapping WIT to specific
widgets, we will be able to expose custom WITs to the users. Users will be able
to create their own WITs and customize them with widgets from the predefined pool.
## Custom widgets
The end goal is to allow users to define custom widgets and use these custom
widgets on any WIT. But this is a much further iteration and requires additional
investigation to determine both data and application architecture to be used.
## Migrate requirements and epics to work item types
We'll migrate requirements and epics into work item types, with their own set
of widgets. To achieve that, we'll migrate data to the `issues` table,
and we'll keep the current `requirements` and `epics` tables as proxies for old references, to ensure
backward compatibility with existing references.
### Migrate requirements to work item types
Currently `Requirement` attributes are a subset of `Issue` attributes, so the migration
consists mainly of:
- Data migration.
- Keeping backwards compatibility at API levels.
- Ensuring that old references continue to work.
The migration to a different underlying data structure should be seamless to the end user.
### Migrate epics to work item types
`Epic` has some extra functionality that the `Issue` WIT does not currently have.
So, migrating epics to a work item type requires providing feature parity between the current `Epic` object and WITs.
The main missing features are:
- Bringing work items to the group level. This depends on the [Consolidate Groups and Projects](https://gitlab.com/gitlab-org/architecture/tasks/-/issues/7) initiative.
- A hierarchy widget: the ability to structure work items into hierarchies.
- Inherited date widget.
To avoid disrupting workflows for users who are already using epics, we will introduce a new WIT
called `Feature` that will provide feature parity with epics at the project level. Combined with progress
on the [Consolidate Groups and Projects](https://gitlab.com/gitlab-org/architecture/tasks/-/issues/7) initiative, this will help us
provide a smooth migration path from epics to WITs with minimal disruption to user workflows.
## Work item, work item type, and widgets roadmap
We will move towards work items, work item types, and custom widgets (CW) in an iterative process.
For a rough outline of the work ahead of us, see [epic 6033](https://gitlab.com/groups/gitlab-org/-/epics/6033).
## Redis HLL Counter Schema
We need a more scalable Redis counter schema for work items that is inclusive of Plan xMAU, Project Management xMAU, Certify xMAU, and
Product Planning xMAU. We cannot aggregate and dedupe events across features within a group or at the stage level with
our current Redis slot schema.
All three Plan product groups will be using the same base object (`work item`). Each product group still needs to
track MAU.
### Proposed aggregate counter schema
```mermaid
graph TD
Event[Specific Interaction Counter] --> AC[Aggregate Counters]
AC --> Plan[Plan xMAU]
AC --> PM[Project Management xMAU]
AC --> PP[Product Planning xMAU]
AC --> Cer[Certify xMAU]
AC --> WI[Work Items Users]
```
### Implementation
The new aggregate schema is already implemented, and we are already tracking work item unique actions
on [GitLab.com](https://gitlab.com).
For implementation details, this [MR](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/93231) can be used
as a reference. The MR covers the definition of new unique actions, event tracking in the code, and
adding the new unique actions to the required aggregate counters.
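A minimal sketch of how a unique action can be recorded with the HLL counter (the event name below is hypothetical; see the MR above for the real event definitions and for how they are wired into the aggregate counters):

```ruby
# Hypothetical event name, shown only to illustrate the tracking call.
Gitlab::UsageDataCounters::HLLRedisCounter.track_event(
  'users_creating_work_items',
  values: current_user.id
)
```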
## Related topics
- [Design management](../user/project/issues/design_management.md)
---
stage: none
group: unassigned
info: Any user with at least the Maintainer role can merge updates to this content. For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Rails upgrade guidelines
---
We strive to run GitLab using the latest Rails releases to benefit from performance, security updates, and new features.
## Rails upgrade approach
1. [Prepare an MR for GitLab](#prepare-an-mr-for-gitlab).
1. [Create patch releases and backports for security patches](#create-patch-releases-and-backports-for-security-patches).
### Prepare an MR for GitLab
1. Check the [Upgrading Ruby on Rails](https://guides.rubyonrails.org/upgrading_ruby_on_rails.html) guide and prepare the application for the upcoming changes.
1. Update the `rails` gem version in `Gemfile`.
1. Run `bundle update --conservative rails`.
1. For major and minor version updates, run `bin/rails app:update` and check if any of the suggested changes should be applied.
1. Update the `activesupport` version in `qa/Gemfile`.
1. Run `bundle update --conservative activesupport` in the `qa` folder.
1. Run `find gems -name Gemfile -exec bundle update --gemfile {} activesupport --patch --conservative \;` and replace `--patch` in the command with `--minor` or `--major` as needed.
1. Resolve any Bundler conflicts.
1. Ensure that `@rails/ujs` and `@rails/actioncable` npm packages match the new rails version in [`package.json`](https://gitlab.com/gitlab-org/gitlab/blob/master/package.json).
1. Run `yarn patch-package @rails/ujs` after updating this to ensure our local patch file version matches.
1. Create an MR with the `pipeline:run-all-rspec` label and see if the pipeline breaks.
1. To resolve and debug spec failures, use `git bisect` against the Rails repository. See the [debugging section](#git-bisect-against-rails) below.
1. Include links to the Gem diffs between the two versions in the merge request description. For example, this is the gem diff for
[`activesupport` 6.1.3.2 to 6.1.4.1](https://my.diffend.io/gems/activerecord/6.1.3.2/6.1.4.1).
### Prepare an MR for Gitaly
No longer necessary as Gitaly no longer has Ruby code.
### Create patch releases and backports for security patches
If the Rails upgrade was over a patch release and it contains important security fixes,
make sure to release it in a
GitLab patch release to self-managed customers. Consult with our [release managers](https://about.gitlab.com/community/release-managers/)
for how to proceed.
### Deprecation Logger
We also log Ruby and Rails deprecation warnings into a dedicated log file, `log/deprecation_json.log`. It provides
clues when there is code that is not adequately covered by tests and hence would slip past `DeprecationToolkitEnv`.
For GitLab SaaS, GitLab team members can inspect these log events in Kibana (`https://log.gprd.gitlab.net/goto/f7cebf1ff05038d901ba2c45925c7e01`).
## Git bisect against Rails
Usually, if you know which Rails change caused the spec to fail, it adds additional context and
helps to find the fix for the failure.
To efficiently and quickly find which Rails change caused the spec failure you can use the
[`git bisect`](https://git-scm.com/docs/git-bisect) command against the Rails repository:
1. Clone the `rails` project in a folder of your choice. For example, it might be the GDK root dir:
```shell
cd <GDK_FOLDER>
git clone https://github.com/rails/rails.git
```
1. Replace the `gem 'rails'` line in GitLab `Gemfile` with:
```ruby
gem 'rails', ENV['RAILS_VERSION'], path: ENV['RAILS_FOLDER']
```
1. Set the `RAILS_FOLDER` environment variable with the folder you cloned Rails into:
```shell
export RAILS_FOLDER="<GDK_FOLDER>/rails"
```
1. Change the directory to `RAILS_FOLDER` and set the range for the `git bisect` command:
```shell
cd $RAILS_FOLDER
git bisect start <NEW_VERSION_TAG> <OLD_VERSION_TAG>
```
Where `<NEW_VERSION_TAG>` is the tag where the spec is red and `<OLD_VERSION_TAG>` is the one with the green spec.
For example, `git bisect start v6.1.4.1 v6.1.3.2` if we're upgrading from version 6.1.3.2 to 6.1.4.1.
In the output, you can see how many steps approximately it takes to find the commit.
1. Start the `git bisect` process and pass the spec filenames to `scripts/rails-update-bisect` as arguments. It can be faster to pick only one example instead of an entire spec file.
```shell
git bisect run <GDK_FOLDER>/gitlab/scripts/rails-update-bisect spec/models/ability_spec.rb
# OR
git bisect run <GDK_FOLDER>/gitlab/scripts/rails-update-bisect spec/models/ability_spec.rb:7
```
1. When the process is completed, `git bisect` prints the commit hash, which you can use to find the corresponding MR in the [`rails/rails`](https://github.com/rails/rails) repository.
1. Execute `git bisect reset` to exit the `bisect` mode.
1. Revert the changes to `Gemfile`:
```shell
git checkout -- Gemfile
```
### Follow-up reading material
- [Upgrading Ruby on Rails guide](https://guides.rubyonrails.org/upgrading_ruby_on_rails.html)
- [Rails releases page](https://github.com/rails/rails/releases)
---
stage: none
group: unassigned
info: Any user with at least the Maintainer role can merge updates to this content. For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Guidelines for reusing abstractions
---
As GitLab has grown, different patterns emerged across the codebase. Service
classes, serializers, and presenters are just a few. These patterns make it easy
to reuse code, but at the same time make it easy to accidentally reuse the wrong
abstraction in a particular place.
## Why these guidelines are necessary
Code reuse is good, but sometimes this can lead to shoehorning the wrong
abstraction into a particular use case. This in turn can have a negative impact
on maintainability, the ability to easily debug problems, or even performance.
An example would be to use `ProjectsFinder` in `IssuesFinder` to limit issues to
those belonging to a set of projects. While initially this may seem like a good
idea, both classes provide a very high level interface with very little control.
This means that `IssuesFinder` may not be able to produce a better optimized
database query, as a large portion of the query is controlled by the internals
of `ProjectsFinder`.
To work around this problem, you would use the same code used by
`ProjectsFinder`, instead of using `ProjectsFinder` itself directly. This allows
you to compose your behavior better, giving you more control over the behavior
of the code.
To illustrate, consider the following code from `IssuableFinder#projects`:
```ruby
return @projects = project if project?
projects =
if current_user && params[:authorized_only].presence && !current_user_related?
current_user.authorized_projects
elsif group
finder_options = { include_subgroups: params[:include_subgroups], exclude_shared: true }
GroupProjectsFinder.new(group: group, current_user: current_user, options: finder_options).execute
else
ProjectsFinder.new(current_user: current_user).execute
end
@projects = projects.with_feature_available_for_user(klass, current_user).reorder(nil)
```
Here we determine what projects to scope our data to, using three different
approaches. When a group is specified, we use `GroupProjectsFinder` to retrieve
all the projects of that group. On the surface this seems harmless: it is easy
to use, and we only need two lines of code.
In reality, things can get hairy very quickly. For example, the query produced
by `GroupProjectsFinder` may start out simple. Over time more and more
functionality is added to this (high level) interface. Instead of _only_
affecting the cases where this is necessary, it may also start affecting
`IssuableFinder` in a negative way. For example, the query produced by
`GroupProjectsFinder` may include unnecessary conditions. Since we're using a
finder here, we can't easily opt-out of that behavior. We could add options to
do so, but then we'd need as many options as we have features. Every option adds
two code paths, which means that for four features we have to cover 8 different
code paths.
A much more reliable (and pleasant) way of dealing with this, is to use
the underlying bits that make up `GroupProjectsFinder` directly. This means we
may need a little bit more code in `IssuableFinder`, but it also gives us much
more control and certainty. This means we might end up with something like this:
```ruby
return @projects = project if project?
projects =
if current_user && params[:authorized_only].presence && !current_user_related?
current_user.authorized_projects
elsif group
current_user
.owned_groups(subgroups: params[:include_subgroups])
.projects
.any_additional_method_calls
.that_might_be_necessary
else
current_user
.projects_visible_to_user
.any_additional_method_calls
.that_might_be_necessary
end
@projects = projects.with_feature_available_for_user(klass, current_user).reorder(nil)
```
This is just a sketch, but it shows the general idea: we would use whatever the
`GroupProjectsFinder` and `ProjectsFinder` finders use under the hood.
## End goal
The guidelines in this document are meant to foster better code reuse, by
clearly defining what can be reused where, and what to do when you cannot reuse
something. Clearly separating abstractions makes it harder to use the wrong one,
makes it easier to debug the code, and (hopefully) results in fewer performance
problems.
## Abstractions
Now let's take a look at the various abstraction levels available, and what they
can (or cannot) reuse. For this we can use the following table, which defines
the various abstractions and what they can (not) reuse:
| Abstraction | Service classes | Finders | Presenters | Serializers | Model instance method | Model class methods | Active Record | Worker
|:-----------------------|:-----------------|:---------|:------------|:--------------|:------------------------|:----------------------|:----------------|:--------
| Controller/API endpoint| Yes | Yes | Yes | Yes | Yes | No | No | No
| Service class | Yes | Yes | No | No | Yes | No | No | Yes
| Finder | No | No | No | No | Yes | Yes | No | No
| Presenter | No | Yes | No | No | Yes | Yes | No | No
| Serializer | No | Yes | No | No | Yes | Yes | No | No
| Model class method | No | No | No | No | Yes | Yes | Yes | No
| Model instance method | No | Yes | No | No | Yes | Yes | Yes | Yes
| Worker | Yes | Yes | No | No | Yes | No | No | Yes
### Controllers
Everything in `app/controllers`.
Controllers should not do much work on their own; instead, they pass input
to other classes and present the results.
### API endpoints
Everything in `lib/api` (the REST API) and `app/graphql` (the GraphQL API).
API endpoints have the same abstraction level as controllers.
### Service classes
Everything that resides in `app/services`.
Service classes represent operations that coordinate changes between models
(such as entities and value objects). Changes impact the state of the application.
1. When an object makes no changes to the state of the application, then it's not a service.
It may be a [finder](#finders) or a value object.
1. When there is no operation, there is no need to execute a service. The class would
probably be better designed as an entity, a value object, or a policy.
When implementing a service class, consider using the following patterns:
1. A service class initializer should contain in its arguments:
1. A [model](#models) instance that is being acted upon. It should be the first positional
argument of the initializer. The argument name is left to the
developer's discretion, such as: `issue`, `project`, `merge_request`.
1. When a service represents an action initiated by a user or executed in the
context of a user, the initializer must have the `current_user:` keyword argument.
Services with the `current_user:` argument run high-level business logic
and must validate user authorization to perform their operations.
1. When a service does not have a user context and is not directly initiated
by a user (such as a background service or side effects), the `current_user:`
argument is not needed. This describes low-level domain logic or instance-wide logic.
1. For all additional data required by a service, explicit keyword arguments are recommended.
When a service requires too long a list of arguments, consider splitting them into:
- `params`: A hash with model properties that will be assigned directly.
- `options`: A hash with extra parameters (which need to be processed,
and are not model properties). The `options` hash should be stored in an instance variable.
```ruby
# merge_request: A model instance that is being acted upon.
# assignee: new MR assignee that will be assigned to the MR
# after the service is executed.
def initialize(merge_request, assignee:)
@merge_request = merge_request
@assignee = assignee
end
```
```ruby
# issue: A model instance that is being acted upon.
# current_user: Current user.
# params: Model properties.
# options: Configuration for this service. Can be any of the following:
# - notify: Whether to send a notification to the current user.
# - cc: Email address to copy when sending a notification.
def initialize(issue:, current_user:, params: {}, options: {})
@issue = issue
@current_user = current_user
@params = params
@options = options
end
```
1. The service class should implement a single public instance method `#execute`, which invokes the service class behavior:
- The `#execute` method takes no arguments. All required data is passed into the initializer.
1. If a return value is needed, the `#execute` method should return its result via a [`ServiceResponse`](#serviceresponse) object (see the sketch after this list).
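Putting these conventions together, a minimal service class might look like the
following sketch. The class name, attributes, and the permission check are
hypothetical and only illustrate the shape of the pattern, not an existing
GitLab service:

```ruby
# Hypothetical sketch of a service class following the conventions above.
module MergeRequests
  class AssignService
    def initialize(merge_request, current_user:, assignee:)
      @merge_request = merge_request
      @current_user = current_user
      @assignee = assignee
    end

    # Single public entry point; all required data was passed to the initializer.
    def execute
      unless allowed?
        return ServiceResponse.error(message: 'Not allowed', reason: :forbidden)
      end

      @merge_request.update(assignee: @assignee)

      ServiceResponse.success(payload: { merge_request: @merge_request })
    end

    private

    # Services that receive current_user: must validate authorization.
    def allowed?
      Ability.allowed?(@current_user, :update_merge_request, @merge_request)
    end
  end
end
```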
Several base classes implement the service classes convention. You may consider inheriting from:
- `BaseContainerService` for services scoped by container (project or group).
- `BaseProjectService` for services scoped to projects.
- `BaseGroupService` for services scoped to groups.
For some domains or [bounded contexts](software_design.md#bounded-contexts), it may make sense for
service classes to use different patterns. For example, the Remote Development domain uses a
[layered architecture](https://gitlab.com/gitlab-org/gitlab/-/blob/master/ee/lib/remote_development/README.md#layered-architecture)
with domain logic isolated to a separate domain layer following a standard pattern, which allows for a very
[minimal service layer](https://gitlab.com/gitlab-org/gitlab/-/blob/master/ee/lib/remote_development/README.md#minimal-service-layer)
which consists of only a single reusable `CommonService` class. It also uses
[functional patterns with stateless singleton class methods](https://gitlab.com/gitlab-org/gitlab/-/blob/master/ee/lib/remote_development/README.md#functional-patterns).
See the Remote Development [service layer code example](https://gitlab.com/gitlab-org/gitlab/-/blob/master/ee/lib/remote_development/README.md#service-layer-code-example) for more details.
However, even though the invocation signature of services via this pattern is different,
it still respects the standard Service layer contracts of always returning all results via a
[`ServiceResponse`](#serviceresponse) object, and performing
[defense-in-depth authorization](permissions/authorizations.md#where-should-permissions-be-checked).
Classes that are not service objects should be
[created elsewhere](software_design.md#use-namespaces-to-define-bounded-contexts),
such as in `lib`.
#### ServiceResponse
Service classes usually have an `execute` method, which can return a
`ServiceResponse`. You can use `ServiceResponse.success` and
`ServiceResponse.error` to return a response from the `execute` method.
In a successful case:
```ruby
response = ServiceResponse.success(message: 'Branch was deleted')
response.success? # => true
response.error? # => false
response.status # => :success
response.message # => 'Branch was deleted'
```
In a failed case:
```ruby
response = ServiceResponse.error(message: 'Unsupported operation')
response.success? # => false
response.error? # => true
response.status # => :error
response.message # => 'Unsupported operation'
```
An additional payload can also be attached:
```ruby
response = ServiceResponse.success(payload: { issue: issue })
response.payload[:issue] # => issue
```
Error responses can also specify the failure `reason` which can be used by the caller
to understand the nature of the failure.
The caller, if an HTTP endpoint, could translate the reason symbol into an HTTP status code:
```ruby
response = ServiceResponse.error(
message: 'Job is in a state that cannot be retried',
reason: :job_not_retriable)
if response.success?
head :ok
elsif response.reason == :job_not_retriable
head :unprocessable_entity
else
head :bad_request
end
```
For common failures such as resource `:not_found` or operation `:forbidden`, we could
leverage the Rails [HTTP status symbols](http://www.railsstatuscodes.com/) as long as
they are sufficiently specific for the domain logic involved.
For other failures use domain-specific reasons whenever possible.
For example: `:job_not_retriable`, `:duplicate_package`, `:merge_request_not_mergeable`.
### Finders
Everything in `app/finders`, typically used for retrieving data from a database.
Finders cannot reuse other finders; this restriction exists to better control the SQL
queries they produce.
Finders' `execute` method should return `ActiveRecord::Relation`. Exceptions
can be added to `spec/support/finder_collection_allowlist.yml`.
See [`#298771`](https://gitlab.com/gitlab-org/gitlab/-/issues/298771) for more details.
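As a hypothetical illustration (the class name and the `opened` scope are
assumptions), a finder composes association and model scopes directly and
returns a relation from `execute`:

```ruby
# Hypothetical finder sketch: `execute` returns an ActiveRecord::Relation and
# composes model scopes directly instead of calling another finder.
class OpenIssuesFinder
  def initialize(current_user, project)
    @current_user = current_user
    @project = project
  end

  def execute
    # Compose the association and model scopes rather than another finder.
    @project.issues.opened
  end
end
```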
### Presenters
Everything in `app/presenters`, used for exposing complex data to a Rails view,
without having to create many instance variables.
See [the documentation](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/presenters/README.md) for more information.
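As a rough sketch, and assuming the `Gitlab::View::Presenter::Delegated` base
class described in that README, a presenter wraps a model and exposes
view-specific helpers:

```ruby
# Hypothetical presenter sketch: wraps a Project and adds a view-specific
# helper, so the view does not need extra instance variables.
class ProjectPresenter < Gitlab::View::Presenter::Delegated
  presents ::Project, as: :project

  def star_count_text
    "#{project.star_count} stars"
  end
end
```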
### Serializers
Everything in `app/serializers`, used for presenting the response to a request,
typically in JSON.
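For illustration, and assuming the `BaseSerializer` and `Grape::Entity` classes
that GitLab serializers are typically built on, a serializer pairs an entity
(which declares the exposed attributes) with a serializer class:

```ruby
# Hypothetical serializer sketch: the entity declares which attributes are
# exposed, and the serializer renders a model through it (typically as JSON).
class IssueEntity < Grape::Entity
  expose :iid
  expose :title
end

class IssueSerializer < BaseSerializer
  entity IssueEntity
end
```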
### Models
Classes and modules in `app/models` represent domain concepts that encapsulate both
[data and behavior](https://en.wikipedia.org/wiki/Domain_model).
These classes can interact directly with a data store (like ActiveRecord models) or
can be a thin wrapper (Plain Old Ruby Objects) on top of ActiveRecord models to express a
richer domain concept.
[Entities and Value Objects](https://martinfowler.com/bliki/EvansClassification.html)
that represent domain concepts are considered domain models.
Some examples:
- [`DesignManagement::DesignAtVersion`](https://gitlab.com/gitlab-org/gitlab/-/blob/b62ce98cff8e0530210670f9cb0314221181b77f/app/models/design_management/design_at_version.rb)
is a model that leverages validations to combine designs and versions.
- [`Ci::Minutes::Usage`](https://gitlab.com/gitlab-org/gitlab/-/blob/ec52f19f7325410177c00fef06379f55ab7cab67/ee/app/models/ci/minutes/usage.rb)
is a Value Object that provides [compute usage](../ci/pipelines/compute_minutes.md)
for a given namespace.
#### Model class methods
These are class methods defined by _GitLab itself_, including the following
methods provided by Active Record:
- `find`
- `find_by_id`
- `delete_all`
- `destroy`
- `destroy_all`
Any other methods such as `find_by(some_column: X)` are not included, and
instead fall under the "Active Record" abstraction.
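For example, assuming a hypothetical `without_deleted` scope defined on the
`Project` model:

```ruby
# A scope defined by GitLab itself falls under the "Model class methods" abstraction.
Project.without_deleted

# `find` is explicitly listed above, so it is also a "Model class method".
Project.find(1)

# A generic Active Record query method falls under the "Active Record" abstraction instead.
Project.find_by(path: 'gitlab')
```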
#### Model instance methods
Instance methods defined on Active Record models by _GitLab itself_. Methods
provided by Active Record are not included, except for the following methods:
- `save`
- `update`
- `destroy`
- `delete`
#### Active Record
The API provided by Active Record itself, such as the `where` method, `save`,
`delete_all`, and so on.
### Worker
Everything in `app/workers`.
Use `SomeWorker.perform_async` or `SomeWorker.perform_in` to schedule Sidekiq
jobs. Never directly invoke a worker using `SomeWorker.new.perform`.
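For example, assuming a `SomeWorker` Sidekiq worker exists, scheduling looks
like this:

```ruby
project_id = 42 # pass simple identifiers as arguments, not whole objects

# Enqueue a Sidekiq job immediately.
SomeWorker.perform_async(project_id)

# Enqueue a Sidekiq job to run after a delay.
SomeWorker.perform_in(5.minutes, project_id)
```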
---
title: Auto DevOps development guidelines
---
This document provides a development guide for contributors to
[Auto DevOps](../topics/autodevops/_index.md).
<i class="fa fa-youtube-play youtube" aria-hidden="true"></i>
An [Auto DevOps technical walk-through](https://youtu.be/G7RTLeToz9E)
is also available on YouTube.
## Development
Auto DevOps builds on top of GitLab CI/CD to create an automatic pipeline
based on your project contents. When Auto DevOps is enabled for a
project, the user does not need to explicitly include any pipeline configuration
through a `.gitlab-ci.yml` file.
In the absence of a `.gitlab-ci.yml` file, the
[Auto DevOps CI/CD template](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/Auto-DevOps.gitlab-ci.yml)
is used implicitly to configure the pipeline for the project. This
template is a top-level template that includes other sub-templates,
which then define jobs.
Some jobs use images that are built from external projects:
- [Auto Build](../topics/autodevops/stages.md#auto-build) uses
[configuration](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/Jobs/Build.gitlab-ci.yml)
in which the `build` job uses an image that is built using the
[`auto-build-image`](https://gitlab.com/gitlab-org/cluster-integration/auto-build-image)
project.
- [Auto Deploy](../topics/autodevops/stages.md#auto-deploy) uses
[configuration](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/Jobs/Deploy.gitlab-ci.yml)
in which the jobs defined in this template use an image that is built using the
[`auto-deploy-image`](https://gitlab.com/gitlab-org/cluster-integration/auto-deploy-image)
project. By default, the Helm chart defined in
[`auto-deploy-app`](https://gitlab.com/gitlab-org/cluster-integration/auto-deploy-image/-/tree/master/assets/auto-deploy-app) is used to deploy.
When Auto DevOps is enabled, extra variables that are not present in a
typical CI job get passed to the CI jobs. These can be
found in
[`ProjectAutoDevops`](https://gitlab.com/gitlab-org/gitlab/-/blob/bf69484afa94e091c3e1383945f60dbe4e8681af/app/models/project_auto_devops.rb).
## Development environment
See the [Simple way to develop/test Kubernetes workflows with a local cluster](https://gitlab.com/gitlab-org/gitlab-development-kit/-/issues/1064)
issue for discussion around setting up Auto DevOps development environments.
## Monitoring on GitLab.com
The metric
[`auto_devops_completed_pipelines_total`](https://dashboards.gitlab.net/explore?schemaVersion=1&panes=%7B%22m95%22:%7B%22datasource%22:%22e58c2f51-20f8-4f4b-ad48-2968782ca7d6%22,%22queries%22:%5B%7B%22refId%22:%22A%22,%22expr%22:%22sum%28increase%28auto_devops_pipelines_completed_total%7Benvironment%3D%5C%22gprd%5C%22%7D%5B60m%5D%29%29%20by%20%28status%29%22,%22range%22:true,%22instant%22:true,%22datasource%22:%7B%22type%22:%22prometheus%22,%22uid%22:%22e58c2f51-20f8-4f4b-ad48-2968782ca7d6%22%7D,%22editorMode%22:%22code%22,%22legendFormat%22:%22__auto%22%7D%5D,%22range%22:%7B%22from%22:%22now-1h%22,%22to%22:%22now%22%7D%7D%7D&orgId=1)
(only available to GitLab team members) counts completed Auto DevOps
pipelines, labeled by status.
---
title: File Storage in GitLab
---
We use the [CarrierWave](https://github.com/carrierwaveuploader/carrierwave) gem to handle file upload, store and retrieval.
File uploads should be accelerated by workhorse, for details refer to [uploads development documentation](uploads/_index.md).
There are many places where file uploading is used, according to contexts:
- System
- Instance Logo (logo visible in sign in/sign up pages)
- Header Logo (one displayed in the navigation bar)
- Group
- Group avatars
- User
- User avatars
- User snippet attachments
- Project
- Project avatars
- Issues/MR/Notes Markdown attachments
- Issues/MR/Notes Legacy Markdown attachments
- CI Artifacts (archive, metadata, trace)
- LFS Objects
- Merge request diffs
- Design Management design thumbnails
- Topic
- Topic avatars
## Disk storage
GitLab started by saving everything on local disk. While directory locations have changed from previous versions,
they are still not 100% standardized. You can see them below:
| Description | In DB? | Relative path (from CarrierWave.root) | Uploader class | Model type |
| ------------------------------------- | ------ | ----------------------------------------------------------- | ---------------------- | ---------- |
| Instance logo | yes | `uploads/-/system/appearance/logo/:id/:filename` | `AttachmentUploader` | Appearance |
| Header logo | yes | `uploads/-/system/appearance/header_logo/:id/:filename` | `AttachmentUploader` | Appearance |
| Group avatars | yes | `uploads/-/system/group/avatar/:id/:filename` | `AvatarUploader` | Group |
| User avatars | yes | `uploads/-/system/user/avatar/:id/:filename` | `AvatarUploader` | User |
| User snippet attachments | yes | `uploads/-/system/personal_snippet/:id/:random_hex/:filename` | `PersonalFileUploader` | Snippet |
| Project avatars | yes | `uploads/-/system/project/avatar/:id/:filename` | `AvatarUploader` | Project |
| Topic avatars | yes | `uploads/-/system/projects/topic/avatar/:id/:filename` | `AvatarUploader` | Topic |
| Issues/MR/Notes Markdown attachments | yes | `uploads/:hash_project_id/:random_hex/:filename` | `FileUploader` | Project |
| Design Management design thumbnails | yes | `uploads/-/system/design_management/action/image_v432x230/:id/:filename` | `DesignManagement::DesignV432x230Uploader` | DesignManagement::Action |
| CI Artifacts (CE) | yes | `shared/artifacts/:disk_hash[0..1]/:disk_hash[2..3]/:disk_hash/:year_:month_:date/:job_id/:job_artifact_id` (`:disk_hash` is SHA256 digest of `project_id`) | `JobArtifactUploader` | Ci::JobArtifact |
| LFS Objects (CE) | yes | `shared/lfs-objects/:hex/:hex/:object_hash` | `LfsObjectUploader` | LfsObject |
| External merge request diffs | yes | `shared/external-diffs/merge_request_diffs/mr-:parent_id/diff-:id` | `ExternalDiffUploader` | MergeRequestDiff |
| Issuable metric images | yes | `uploads/-/system/issuable_metric_image/file/:id/:filename` | `IssuableMetricImageUploader` | IssuableMetricImage |
CI Artifacts and LFS Objects behave differently in CE and EE. In CE they inherit the `GitlabUploader`,
while in EE they inherit the `ObjectStorage` and store files in an S3 API-compatible object store.
Attachments for issues, merge requests (MR), and notes in Markdown use
[hashed storage](../administration/repository_storage_paths.md) with the hash of the project ID.
We provide an [all-in-one Rake task](../administration/raketasks/uploads/migrate.md)
to migrate all uploads to object storage in one go. If a new Uploader class or model
type is introduced, make sure you add a Rake task invocation corresponding to it to the
[category list](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/tasks/gitlab/uploads/migrate.rake).
### Path segments
Files are stored at multiple locations and use different path schemes.
All the `GitlabUploader` derived classes should comply with this path segment schema:
```plaintext
| GitlabUploader
| ----------------------- + ------------------------- + --------------------------------- + -------------------------------- |
| `<gitlab_root>/public/` | `uploads/-/system/` | `user/avatar/:id/` | `:filename` |
| ----------------------- + ------------------------- + --------------------------------- + -------------------------------- |
| `CarrierWave.root` | `GitlabUploader.base_dir` | `GitlabUploader#dynamic_segment` | `CarrierWave::Uploader#filename` |
| | `CarrierWave::Uploader#store_dir` | |
| FileUploader
| ----------------------- + ------------------------- + --------------------------------- + -------------------------------- |
| `<gitlab_root>/shared/` | `artifacts/` | `:year_:month/:id` | `:filename` |
| `<gitlab_root>/shared/` | `snippets/` | `:secret/` | `:filename` |
| ----------------------- + ------------------------- + --------------------------------- + -------------------------------- |
| `CarrierWave.root` | `GitlabUploader.base_dir` | `GitlabUploader#dynamic_segment` | `CarrierWave::Uploader#filename` |
| | `CarrierWave::Uploader#store_dir` | |
| | | `FileUploader#upload_path` |
| ObjectStore::Concern (store = remote)
| ----------------------- + ------------------------- + ----------------------------------- + -------------------------------- |
| `<bucket_name>` | <ignored> | `user/avatar/:id/` | `:filename` |
| ----------------------- + ------------------------- + ----------------------------------- + -------------------------------- |
| `#fog_dir` | `GitlabUploader.base_dir` | `GitlabUploader#dynamic_segment` | `CarrierWave::Uploader#filename` |
| | | `ObjectStorage::Concern#store_dir` | |
| | | `ObjectStorage::Concern#upload_path` |
```
The `RecordsUploads::Concern` concern creates an `Upload` entry for every file stored by a `GitlabUploader` persisting the dynamic parts of the path using
`GitlabUploader#dynamic_path`. You may then use the `Upload#build_uploader` method to manipulate the file.
## Object Storage
By including the `ObjectStorage::Concern` in the `GitlabUploader` derived class, you may enable object storage for this uploader. To enable object storage
in your uploader, you need to either 1) include `RecordsUploads::Concern` and prepend `ObjectStorage::Extension::RecordsUploads`, or 2) mount the uploader and create a new field named `<mount>_store`.
The `CarrierWave::Uploader#store_dir` is overridden to
- `GitlabUploader.base_dir` + `GitlabUploader.dynamic_segment` when the store is LOCAL
- `GitlabUploader.dynamic_segment` when the store is REMOTE (the bucket name is used to namespace)
### Using `ObjectStorage::Extension::RecordsUploads`
This concern includes `RecordsUploads::Concern` if not already included.
The `ObjectStorage::Concern` uploader searches for the matching `Upload` to select the correct object store. The `Upload` is mapped using `#store_dirs + identifier` for each store (LOCAL/REMOTE).
```ruby
class SongUploader < GitlabUploader
include RecordsUploads::Concern
include ObjectStorage::Concern
prepend ObjectStorage::Extension::RecordsUploads
...
end
class Thing < ActiveRecord::Base
mount :theme, SongUploader # we have a great theme song!
...
end
```
### Using a mounted uploader
The `ObjectStorage::Concern` queries the `model.<mount>_store` attribute to select the correct object store.
This column must be present in the model schema.
```ruby
class SongUploader < GitlabUploader
include ObjectStorage::Concern
...
end
class Thing < ActiveRecord::Base
attr_reader :theme_store # this is an ActiveRecord attribute
mount :theme, SongUploader # we have a great theme song!
def theme_store
super || ObjectStorage::Store::LOCAL
end
...
end
```
---
title: Gitpod internal configuration
---
## Settings
The settings for `gitlab-org/gitlab` are defined under the [project's settings in the Gitpod dashboard](https://gitpod.io/t/gitlab-org/gitlab/settings). To view the settings, you must first join the `gitlab-org` team on [Gitpod.io](https://gitpod.io/). You can join the team using the bookmark link at the top of the `#gitpod-gdk` internal Slack channel.
The current settings are:
- `Enable Incremental Prebuilds`: Uses an earlier successful prebuild to create new prebuilds faster.
- `Use Last Successful Prebuild`: Skips waiting for prebuilds currently in progress and uses the last successful prebuild from previous commits on the same branch.
## Webhooks
A webhook that starts with `https://gitpod.io/` is created to enable prebuilds (see [Enabling Prebuilds](https://www.gitpod.io/docs/configure/authentication/gitlab#enabling-prebuilds) for more details). The webhook is maintained by the [Engineering Productivity team](https://handbook.gitlab.com/handbook/engineering/infrastructure/engineering-productivity/).
You can find this webhook in [Webhook Settings in `gitlab-org/gitlab`](https://gitlab.com/gitlab-org/gitlab/-/hooks). If you cannot access this setting, chat to the [Engineering Productivity team](https://handbook.gitlab.com/handbook/engineering/infrastructure/engineering-productivity/).
### Troubleshooting a failed webhook
If a webhook has failed to connect for a long time, it may have been disabled in the project.
To re-enable a failing or failed webhook, send a test request in [Webhook Settings](https://gitlab.com/gitlab-org/gitlab/-/hooks). See [Re-enable disabled webhooks page](../user/project/integrations/webhooks.md#re-enable-disabled-webhooks) for more details.
After re-enabling, check the prebuilds' health in a [project's prebuilds](https://gitpod.io/t/gitlab-org/gitlab/prebuilds) and confirm that prebuilds start without any errors.
---
title: Python development guidelines
---
This document describes conventions and practices we adopt at GitLab when developing Python code. While GitLab is built
primarily on Ruby on Rails, we use Python when needed to leverage the ecosystem.
Some examples of Python in our codebase:
- [AI gateway](https://gitlab.com/gitlab-org/modelops/applied-ml/code-suggestions/ai-assist/-/tree/main/ai_gateway)
- [Duo Workflow Service](https://gitlab.com/gitlab-org/duo-workflow/duo-workflow-service)
- [Evaluation Framework](https://gitlab.com/gitlab-org/modelops/ai-model-validation-and-research/ai-evaluation/prompt-library)
- [CloudConnector Python library](https://gitlab.com/gitlab-org/cloud-connector/gitlab-cloud-connector/-/tree/main/src/python)
This documentation does not cover guidelines for Python usage on Data Science projects. For those, refer to the [Data Team Platform Guide](https://handbook.gitlab.com/handbook/enterprise-data/platform/python-guide/).
## Design principles
- Tooling should help contributors achieve their goals, both on short and long term.
- A developer familiar with a Python codebase in GitLab should feel familiar with any other Python codebase at GitLab.
- This documentation should support all contributors, regardless of their goals and incentives: from Python experts to one-off contributors.
- We strive to follow external guidelines, but if needed we will choose conventions that better support GitLab contributors.
## When should I consider Python for development
Ruby should always be the first choice for development at GitLab, as we have a larger community, better support, and easier deployment. However, there are occasions where using Python is worth breaking the pattern. For example,
when working with AI and ML, most of the open source uses Python, and using Ruby would require building and maintaining
large codebases.
## Learning Python
[Resources to get started, examples and tips.](getting_started.md)
## Creating a new Python application
[Scaffolding libraries and pipelines for a new codebase](create_project.md)
## Conventions Style Guidelines
[Writing consistent codebases](styleguide.md)
## Code review and maintainership guidelines
[Guidelines on creating MRs and reviewing them](maintainership.md)
## Deploying a Python codebase
[Deploying libraries, utilities and services.](deployment.md)
## Python as part of the Monorepo
[Guide on libraries in the monorepo that use Python](monorepo.md)
|
---
stage: none
group: unassigned
info: Any user with at least the Maintainer role can merge updates to this content. For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Python development guidelines
---
This document describes conventions and practices we adopt at GitLab when developing Python code. While GitLab is built
primarily on Ruby on Rails, we use Python when needed to leverage the ecosystem.
Some examples of Python in our codebase:
- [AI gateway](https://gitlab.com/gitlab-org/modelops/applied-ml/code-suggestions/ai-assist/-/tree/main/ai_gateway)
- [Duo Workflow Service](https://gitlab.com/gitlab-org/duo-workflow/duo-workflow-service)
- [Evaluation Framework](https://gitlab.com/gitlab-org/modelops/ai-model-validation-and-research/ai-evaluation/prompt-library)
- [CloudConnector Python library](https://gitlab.com/gitlab-org/cloud-connector/gitlab-cloud-connector/-/tree/main/src/python)
This documentation does not cover guidelines for Python usage on Data Science projects. For those, refer to the [Data Team Platform Guide](https://handbook.gitlab.com/handbook/enterprise-data/platform/python-guide/).
## Design principles
- Tooling should help contributors achieve their goals, both in the short and the long term.
- A developer familiar with a Python codebase in GitLab should feel familiar with any other Python codebase at GitLab.
- This documentation should support all contributors, regardless of their goals and incentives: from Python experts to one-off contributors.
- We strive to follow external guidelines, but if needed we will choose conventions that better support GitLab contributors.
## When should I consider Python for development
Ruby should always be the first choice for development at GitLab, as we have a larger community, better support, and easier deployment. However, there are occasions where using Python is worth breaking the pattern. For example,
when working with AI and ML, most of the open source ecosystem uses Python, and using Ruby would require building and maintaining
large codebases.
## Learning Python
[Resources to get started, examples and tips.](getting_started.md)
## Creating a new Python application
[Scaffolding libraries and pipelines for a new codebase](create_project.md)
## Conventions and style guidelines
[Writing consistent codebases](styleguide.md)
## Code review and maintainership guidelines
[Guidelines on creating MRs and reviewing them](maintainership.md)
## Deploying a Python codebase
[Deploying libraries, utilities and services.](deployment.md)
## Python as part of the Monorepo
[Guide on libraries in the monorepo that use Python](monorepo.md)
---
stage: none
group: unassigned
info: Any user with at least the Maintainer role can merge updates to this content. For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Python style guide
---
## Testing
### Overview
Testing at GitLab, including in Python codebases, is a core priority rather than an afterthought. It is therefore important to consider test design quality alongside feature design from the start.
Use [Pytest](https://docs.pytest.org/en/stable/) for Python testing.
### Recommended reading
- [Five Factor Testing](https://madeintandem.com/blog/five-factor-testing/): Why do we need tests?
- [Principles of Automated Testing](https://www.lihaoyi.com/post/PrinciplesofAutomatedTesting.html): Levels of testing. Prioritize tests. Cost of tests.
### Testing levels
Before writing tests, understand the different testing levels and determine the appropriate level for your changes.
[Learn more about the different testing levels](../testing_guide/testing_levels.md), and how to decide at what level your changes should be tested.
### Recommendations
#### Name test files the same as the files they are testing
For unit tests, naming the test file with `test_{file_being_tested}.py` and placing it in the same directory structure
helps with later discoverability of tests. This also avoids confusion between files that have the same name, but
different modules.
```plaintext
File: /foo/bar/cool_feature.py
# Bad
Test file: /tests/my_cool_feature.py
# Good
Test file: /tests/foo/bar/test_cool_feature.py
```
#### Using NamedTuples to define parametrized test cases
[Pytest parametrized tests](https://docs.pytest.org/en/stable/how-to/parametrize.html) effectively reduce code
repetition, but they rely on tuples for test case definition, unlike Ruby's more readable syntax. As your parameters or
test cases increase, these tuple-based tests become harder to understand and maintain.
By using Python [NamedTuples](https://docs.python.org/3/library/typing.html#typing.NamedTuple) instead, you can:
- Enforce clearer organization with named fields.
- Make tests more self-documenting.
- Easily define default values for parameters.
- Improve readability in complex test scenarios.
```python
import pytest

# Good: short examples with a small number of arguments. Easy to see which value maps to which argument.
@pytest.mark.parametrize(
    (
        "argument1",
        "argument2",
        "expected_result",
    ),
    [
        # description of case 1,
        ("value1", "value2", 200),
        # description of case 2,
        ...,
    ],
)
def test_get_product_price(argument1, argument2, expected_result):
    assert get_product_price(argument1, argument2) == expected_result


# Bad: difficult to map a value to an argument, and to add or remove arguments when updating test cases
@pytest.mark.parametrize(
    (
        "argument1",
        "argument2",
        "argument3",
        "expected_response",
    ),
    [
        # Test case 1:
        (
            "value1",
            {
                ...
            },
            ...
        ),
        # Test case 2:
        ...
    ]
)
def test_my_function(argument1, argument2, argument3, expected_response):
    ...


# Good: NamedTuples improve readability for larger test scenarios.
from typing import NamedTuple

class TestMyFunction:
    class Case(NamedTuple):
        argument1: str
        argument3: dict
        expected_response: int
        argument2: int = 3  # fields with defaults must follow fields without defaults

    TEST_CASE_1 = Case(
        argument1="my argument",
        argument3={
            "key": "value"
        },
        expected_response=2
    )
    TEST_CASE_2 = Case(
        ...
    )

    @pytest.mark.parametrize(
        "test_case", [TEST_CASE_1, TEST_CASE_2]
    )
    def test_my_function(self, test_case):
        assert my_function(test_case.argument1, test_case.argument2, test_case.argument3) == test_case.expected_response
```
#### Mocking
- Use `unittest.mock` library.
- Mock at the right level, for example, at method call boundaries.
- Mock external services and APIs, as in the sketch below.
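A minimal sketch of these guidelines using only the standard library and pytest. The function under test and all names are illustrative, not existing GitLab code; the key point is that the mock replaces the external call boundary (`requests.get`) rather than internal details:

```python
from unittest.mock import MagicMock, patch

import requests  # external dependency that unit tests should never actually call


def fetch_username(user_id: int) -> str:
    """Illustrative function under test: calls an external API."""
    response = requests.get(f"https://example.com/api/users/{user_id}")
    response.raise_for_status()
    return response.json()["name"]


def test_fetch_username_returns_name():
    fake_response = MagicMock()
    fake_response.json.return_value = {"name": "Ada"}

    # Patch the method call boundary, not the internals of `fetch_username`.
    with patch("requests.get", return_value=fake_response) as mock_get:
        assert fetch_username(1) == "Ada"

    mock_get.assert_called_once_with("https://example.com/api/users/1")
```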
## Code style
It's recommended to use automated tools to ensure code quality and security.
Consider running the following tools in your CI pipeline as well as locally:
### Formatting tools
- **Black**: Code formatter that enforces a consistent style
- **isort**: Sorts and organizes import statements
### Linting tools
- **flake8**: Checks for PEP-8 compliance and common errors
- **pylint**: More comprehensive linter for code quality
- **mypy**: Static type checker for Python
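To make the expectations concrete, here is a small, hypothetical function written the way the formatters and checkers above expect it (illustrative only, not existing GitLab code):

```python
from collections.abc import Sequence


def average(values: Sequence[float]) -> float:
    """Return the arithmetic mean of `values`.

    Raises:
        ValueError: If `values` is empty.
    """
    if not values:
        raise ValueError("values must not be empty")
    return sum(values) / len(values)
```

The import block is sorted the way isort expects, the layout matches Black's output, and the type annotations and docstring give mypy and reviewers enough information to check the function without reading its call sites.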
### Testing tools
- **pytest**: Testing framework with coverage reporting
### Security tools
- **Dependency scanning**: Checks for vulnerabilities in dependencies
- **Secret detection**: Ensures no secrets are committed to the repository
- **SAST (semgrep)**: Static Application Security Testing
---
stage: none
group: unassigned
info: Any user with at least the Maintainer role can merge updates to this content. For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Python as part of the Monorepo
---
GitLab requires Python as a dependency for [reStructuredText](https://docutils.sourceforge.io/rst.html) markup rendering. It requires Python 3.
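For context, the rendering relies on [Docutils](https://docutils.sourceforge.io/). The following is only an illustration of what that dependency provides, not the exact code path GitLab uses:

```python
# Illustrative only: shows what the Docutils dependency is used for.
from docutils.core import publish_parts

rst_source = """
Title
=====

*Hello* from reStructuredText.
"""

html_fragment = publish_parts(source=rst_source, writer_name="html")["fragment"]
print(html_fragment)
```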
## Installation
There are several ways of installing Python on your system. To be able to use the same version we use in production,
we suggest you use [`pyenv`](https://github.com/pyenv/pyenv). It works and behaves similarly to its counterpart in the
Ruby world: [`rbenv`](https://github.com/rbenv/rbenv).
### macOS
To install `pyenv` on macOS, you can use [Homebrew](https://brew.sh/) with:
```shell
brew install pyenv
```
### Windows
`pyenv` does not officially support Windows and does not work in Windows outside the Windows Subsystem for Linux. If you are a Windows user, you can use `pyenv-win`.
To install `pyenv-win` on Windows, run the following PowerShell command:
```shell
Invoke-WebRequest -UseBasicParsing -Uri "https://raw.githubusercontent.com/pyenv-win/pyenv-win/master/pyenv-win/install-pyenv-win.ps1" -OutFile "./install-pyenv-win.ps1"; &"./install-pyenv-win.ps1"
```
[Learn more about `pyenv-win`](https://github.com/pyenv-win/pyenv-win).
### Linux
To install `pyenv` on Linux, you can run the command below:
```shell
curl "https://pyenv.run" | bash
```
Alternatively, you may find `pyenv` available as a system package via your distribution's package manager.
You can read more about it in [the `pyenv` prerequisites](https://github.com/pyenv/pyenv-installer#prerequisites).
### Shell integration
The `pyenv` installation adds the required changes to Bash. If you use a different shell,
check for any additional steps required for it.
For Fish, you can install a plugin for [Fisher](https://github.com/jorgebucaran/fisher):
```shell
fisher add fisherman/pyenv
```
Or for [Oh My Fish](https://github.com/oh-my-fish/oh-my-fish):
```shell
omf install pyenv
```
### Dependency management
While GitLab doesn't directly contain any Python scripts, we depend on Python to render
[reStructuredText](https://docutils.sourceforge.io/rst.html) markup, so we need to keep track of those dependencies
at the main project level and be able to install them on our development machines.
Python has an equivalent to the `Gemfile` and the [Bundler](https://bundler.io/) project:
`Pipfile` and [Pipenv](https://pipenv.readthedocs.io/en/latest/).
A `Pipfile` with the dependencies now exists in the root folder. To install them, run:
```shell
pipenv install
```
Running this command installs both the required Python version and the required pip dependencies.
### Use instructions
To run any Python code under the Pipenv environment, you must first start a `virtualenv` based on the dependencies
of the application. With Pipenv, this is as simple as running:
```shell
pipenv shell
```
After running that command, you can run GitLab in the same shell, and it uses the Python version and dependencies
installed by the `pipenv install` command.
---
stage: none
group: unassigned
info: Any user with at least the Maintainer role can merge updates to this content. For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Deploying Python repositories
---
## Python Libraries and Utilities
We deploy libraries and utilities to PyPI with the [`gitlab` user](https://pypi.org/user/gitlab/) using `poetry`. Configure the deployment in the `pyproject.toml` file:
```toml
[tool.poetry]
name = "gitlab-<your package name>"
version = "0.1.0"
description = "<Description of your library/utility>"
authors = ["gitlab"]
readme = "README.md"
packages = [{ include = "<your module>" }]
homepage = ""https://gitlab.com/gitlab/<path/to/repository>"
repository = "https://gitlab.com/gitlab/<path/to/repository>"
```
Refer to [poetry's documentation](https://python-poetry.org/docs/pyproject/) for additional configuration options.
To configure deployment of the PyPI package:
1. [Authenticate to PyPI](https://pypi.org/account/login/) using the "PyPI GitLab" credentials found in 1Password (PyPI does not support organizations as of now).
1. Create a token under `Account Settings > Add API Tokens`.
1. For the initial publish, select `Entire account (all projects)` scope. If the project already exists, scope the token to the specific project.
1. Configure credentials:
Locally:
```shell
poetry config pypi-token.pypi <your-api-token>
```
To configure deployment in CI, set the `POETRY_PYPI_TOKEN_PYPI` environment variable to the token you created. Alternatively, define a [trusted publisher](https://docs.pypi.org/trusted-publishers/) for the project, in which case no token is needed.
1. Use [Poetry to publish](https://python-poetry.org/docs/cli/#publish) your package:
```shell
poetry publish
```
## Python Services
### Runway deployment for .com
Services for GitLab.com, GitLab Dedicated, and self-hosted customers using Cloud Connector are deployed using [Runway](https://docs.runway.gitlab.com/welcome/onboarding/).
Refer to the project documentation on how to add or manage Runway services.
### Deploying in self-hosted environments
Deploying services to self-hosted environments poses challenges as services are not part of the monolith. Currently, Runway does not support self-hosted instances, and Omnibus does not support Python services, so deployment is only possible by users pulling the service image.
#### Image guidelines
1. Use a non-root user.
1. Configure Poetry variables correctly to avoid runtime issues.
1. Use [multi-stage Docker builds](https://docs.docker.com/build/building/multi-stage/) to create lighter images.
[Example of image](https://gitlab.com/gitlab-org/modelops/applied-ml/code-suggestions/ai-assist/blob/main/Dockerfile#L41-L47)
#### Versioning
Self-hosted customers need to know which version of the service is compatible with their GitLab installation. Python services do not make use of [managed versioning](https://gitlab.com/gitlab-org/release/docs/-/tree/master/components/managed-versioning), so each service needs to handle its versioning and release cuts.
If a service is accessible through Cloud Connector, it must adhere to the [GitLab Statement of Support](https://about.gitlab.com/support/statement-of-support/#version-support), providing stable deployments for the current and previous two major releases of GitLab.
##### Tips
###### Create versions that match GitLab release
When supporting self-hosted deployments, it's important to have a version tag that matches GitLab versions, making it easier
for users to configure the different components of their environment. Add a pipeline to the GitLab release process
that tags the service repository with the same tag, which then triggers a pipeline to create an image with that tag.
Example: [a pipeline on GitLab](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.gitlab/ci/aigw-tagging.gitlab-ci.yml) creates a tag on AI Gateway
that [releases a new image](https://gitlab.com/gitlab-org/modelops/applied-ml/code-suggestions/ai-assist/-/blob/main/.gitlab/ci/build.gitlab-ci.yml?ref_type=heads#L24).
###### Multiple release deployments
Supporting three major versions can lead to a confusing codebase due to too many code paths. An alternative that keeps support while
allowing code cleanup is to provide deployments for multiple versions of the service. For example, if GitLab is on
version `19.5`, this would require three deployments of the service:
- One for service version `17.11`, which provides support for all GitLab `17.x` versions
- One for service version `18.11`, which provides support for all GitLab `18.x` versions
- One for service version `19.5`, which provides support for GitLab versions `19.0`-`19.5`.
After version 18.0 is released, unused code from versions 17.x can be safely removed, because a legacy deployment is still present.
Then, after version 20.0 is released and GitLab 17.x is no longer supported, the legacy deployment can also be removed.
#### Publishing images
Images must be published in the container registry of the project.
It's also recommended to publish the images on Docker Hub. To create an image repository on Docker Hub, create an account with your GitLab handle and create an Access Request to be added to the [GitLab organization](https://hub.docker.com/u/gitlab). After the image repository is created, make sure the user `gitlabcibuild` has read and write access to the repository.
#### Linux package deployment
To be added.
### Deployment on GitLab Dedicated
Deployment of Python services on GitLab Dedicated is not currently supported.
---
stage: none
group: unassigned
info: Any user with at least the Maintainer role can merge updates to this content. For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Getting Started with Python in GitLab
---
## Onboarding Guide
This guide helps non-Python developers get started with Python quickly and efficiently.
1. **Set up Python**:
- Install Python from the official [Python website](https://www.python.org/downloads/).
- Python can also be installed with [Mise](https://mise.jdx.dev/lang/python.html):
```shell
mise use python@3.14
```
- While macOS comes with Python pre-installed, it's strongly advised to install and use a separate version of Python
1. **Install Poetry** for package management:
- Poetry is a modern, Python-specific dependency manager that simplifies packaging and dependency handling. To install it, run:
```shell
curl --silent --show-error --location "https://install.python-poetry.org" | python3 -
```
- Poetry can also be installed with Mise:
```shell
mise install poetry
```
- Make sure to read the Poetry installation [guide](https://python-poetry.org/docs/) for full installation details
- Once installed, create a new Python project with Poetry:
```shell
poetry new my_project
cd my_project
poetry install
```
1. **Run and Debug Existing Code**
- Familiarize yourself with the project's structure by following the `README.md`.
- Use tools like `pdb` or IDE debugging features to debug code (see also the `breakpoint()` sketch after this list). Example:
```shell
poetry shell
python -m pdb <file_name>.py
```
- Both [PyCharm](https://www.jetbrains.com/help/pycharm/debugging-your-first-python-application.html)
and [VSCode](https://code.visualstudio.com/docs/python/debugging) provide great tools to debug your code
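If you prefer to set the breakpoint in code rather than launching the whole script under `pdb`, the built-in `breakpoint()` call (standard Python, nothing GitLab-specific) pauses execution and opens the debugger at that line. The function below is a hypothetical example:

```python
def total_price(prices: list[float], discount: float) -> float:
    subtotal = sum(prices)
    breakpoint()  # execution pauses here and drops into pdb
    return subtotal * (1 - discount)


print(total_price([10.0, 5.5], discount=0.1))
```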
---
## Learning resources
If you are new to Python or looking to refresh your knowledge, this section provides various materials for
learning the language.
1. **[Zen of Python](https://peps.python.org/pep-0020/)**
The Zen of Python (PEP 20) is an essential read; it shapes how you think about Python and how you write "Pythonic" code.
1. **[Python Cheatsheet](https://www.pythoncheatsheet.org)**
A comprehensive reference covering essential Python syntax, built-in functions, and useful libraries.
This is ideal for both beginners and experienced users who want a quick, organized summary of Python's key features.
1. **[100-page Python Intro](https://learnbyexample.github.io/100_page_python_intro)**
This brief guide provides a straightforward introduction to Python, covering all the essentials needed to start programming effectively. It's a beginner-friendly option that covers everything from syntax to debugging and testing.
1. **[Learn X in Y Minutes: Python](https://learnxinyminutes.com/docs/python)**
A very brief, high-level introduction that cuts directly to the core syntax and features of Python, making it a valuable quick start for developers transitioning to the language.
1. **[Exercism Python Track](https://exercism.io/tracks/python)**
Use Exercism's Python track as a foundation for learning Python concepts and best practices. Exercism provides hands-on practice with mentoring support, making it an excellent resource for mastering Python through coding exercises and feedback.
When building Python APIs, we use FastAPI and Pydantic. To get started with building and reviewing these technologies, refer to the following resources:
1. **[FastAPI Documentation](https://fastapi.tiangolo.com/)**
FastAPI is a modern web framework for building APIs with Python. This resource will help you learn how to create fast and efficient web applications and APIs. FastAPI is especially useful for building Python applications with high performance and scalability.
1. **[Pydantic Documentation](https://pydantic-docs.helpmanual.io/)**
Pydantic is a Python library for data validation and settings management using Python type annotations. Learn how to integrate Pydantic into your Python projects for easier data validation and management, particularly when working with FastAPI.
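To see how the two fit together, here is a minimal, self-contained sketch (the endpoint and model names are hypothetical, not an existing GitLab service):

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()


class EchoRequest(BaseModel):
    message: str
    repeat: int = 1  # Pydantic validates types and applies defaults


@app.post("/echo")
def echo(payload: EchoRequest) -> dict:
    # FastAPI parses and validates the JSON body into an EchoRequest instance.
    return {"echoed": [payload.message] * payload.repeat}
```

If this is saved as `main.py`, it can be served with an ASGI server such as uvicorn: `uvicorn main:app --reload`.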
We use pytest for testing Python code. To learn more about writing and running tests with pytest, refer to the following resources:
1. **[pytest Documentation](https://docs.pytest.org/en/stable/)**
pytest is a popular testing framework for Python that makes it easy to write simple and scalable tests. This resource provides comprehensive documentation on how to write and run tests using pytest, including fixtures, plugins, and test discovery.
1. **[Python Testing with pytest (Book)](https://pragprog.com/titles/bopytest2/python-testing-with-pytest-second-edition/)**
This book is a comprehensive guide to testing Python code with pytest. It covers everything from the basics of writing tests to advanced topics like fixtures, plugins, and test organization.
1. **[Python Function to flowchart](https://gitlab.com/srayner/funcgraph/)**
This project takes any Python function and automatically creates a visual flowchart showing how the code works.
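As a minimal, hypothetical example of what a pytest test module looks like (save it as a `test_*.py` file and run `pytest` in that directory):

```python
import pytest


def add(a: int, b: int) -> int:
    return a + b


@pytest.mark.parametrize(("a", "b", "expected"), [(1, 2, 3), (0, 0, 0), (-1, 1, 0)])
def test_add(a, b, expected):
    assert add(a, b) == expected
```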
---
### Learning Group
A collaborative space for developers to study Python, FastAPI, and Pydantic, focusing on building real-world apps.
Refer to [Track and Propose Sessions for Python Learning Group](https://gitlab.com/gitlab-org/gitlab/-/issues/512600) issue for ongoing updates and discussions.
**Core Topics for Group Learning**:
1. **Basic Python Syntax**:
- Learn Python concepts such as variables, functions, loops, and conditionals.
- Practice at [Exercism Python Track](https://exercism.io/tracks/python).
1. **FastAPI and Pydantic**:
- Learn how to build APIs using FastAPI and validate data with Pydantic.
- Key resources:
- [FastAPI Documentation](https://fastapi.tiangolo.com/)
- [Pydantic Documentation](https://pydantic-docs.helpmanual.io/)
### Communication
- Stay updated by following the [learning group issue](https://gitlab.com/gitlab-org/gitlab/-/issues/517449)
- Join the discussion on Slack: **#python_getting_started**
---
### Python Review Office Hours
- **Bi-weekly sessions** for code review and discussion, led by experienced Python developers.
- These sessions are designed to help you improve your Python skills through practical feedback.
- Feel free to add the office hours to your calendar.
---
### Encourage Recorded Group Meetings
All review and study group meetings will be recorded and shared, covering key concepts in Python, FastAPI, and Pydantic. These recordings are great for revisiting topics or catching up if you miss a session.
Add any uploaded videos to the [Python Resources](https://www.youtube.com/playlist?list=PL05JrBw4t0Kq4i9FD276WtOL1dSSm9a1G) playlist.
---
### Mentorship Process
1:1 mentorship for Python is possible and encouraged. For more information on how to get started with a mentor, see the [GitLab Mentoring Handbook](https://handbook.gitlab.com/handbook/engineering/careers/mentoring/#mentoring).
---
## More learning resources
In addition to the resources already mentioned, this section provides various materials, in no particular order, for learning the language and
its ecosystem.
1. **[A Whirlwind Tour of Python (Jupyter Notebook)](https://github.com/jakevdp/WhirlwindTourOfPython)**
A fast-paced introduction to Python fundamentals, tailored especially for data science practitioners but useful for anyone who wants just a basic understanding of the language.
This is a Jupyter Notebook, which makes the guide an interactive resource as well as a good introduction to Jupyter notebooks themselves.
1. **[Python imports](https://realpython.com/absolute-vs-relative-python-imports/)**
Even for Pythonistas with a couple of projects under their belt, imports can be confusing! You're probably reading this because you'd like to gain a deeper understanding of imports in Python, particularly absolute and relative imports.
1. **[Python -m flag](https://www.geeksforgeeks.org/what-is-the-use-of-python-m-flag/)**
Learning the -m flag helps you run Python tools correctly by ensuring they use the right Python environment, avoiding common setup headaches.
1. **[Poetry vs pip](https://www.datacamp.com/tutorial/python-poetry)**
`virtualenv` and `pip` are built-in tools to handle project dependencies and environments. Why and when should you use
Poetry?
1. **[Python roadmap](https://roadmap.sh/python)**
Step by step guide to becoming a Python developer in 2025. Use this for inspiration and finding additional resources.
1. **[Programiz Python basics](https://programiz.pro/course/learn-python-basics)**
Step into the world of programming with this beginner-friendly Python course and build a strong programming foundation.
|
https://docs.gitlab.com/development/maintainership
|
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/development/maintainership.md
|
2025-08-13
|
doc/development/python_guide
|
[
"doc",
"development",
"python_guide"
] |
maintainership.md
|
none
|
unassigned
|
Any user with at least the Maintainer role can merge updates to this content. For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
|
Python Merge Requests Guidelines
| null |
GitLab standard [code review guidelines](../code_review.md#approval-guidelines) apply to Python projects as well.
## How to set up a Python code review process
There are two main approaches to set up a Python code review process at GitLab:
1. **Established Projects**: Larger Python projects typically have their own dedicated pool of reviewers through reviewer-roulette. To set this up, see [Setting Up Reviewer Roulette](#setting-up-reviewer-roulette).
1. **Smaller Projects**: For projects with fewer contributors, we maintain a shared pool of Python reviewers across GitLab.
### Setting Up Reviewer Roulette
This section explains how to integrate your project with [reviewer roulette](../code_review.md#reviewer-roulette) and other resources to connect project contributors with Python experts for code reviews.
For both large and small projects, Reviewer Roulette can automate the reviewer assignment process. To set up:
1. Add the Python project to the list of [GitLab projects](https://gitlab.com/gitlab-com/www-gitlab-com/-/blob/master/data/projects.yml?ref_type=heads).
1. Project maintainer(s) should add a group for the project in the [GitLab.org maintainers repository](https://gitlab.com/gitlab-org/maintainers)
1. Install and configure [Dangerfiles](https://gitlab.com/gitlab-org/ruby/gems/gitlab-dangerfiles) in your project, ensuring [CI is properly set up](https://gitlab.com/gitlab-org/ruby/gems/gitlab-dangerfiles#ci-configuration) to enable the Reviewer Roulette plugin.
Then, depending on your project size:
- **For large projects with sufficient contributors**:
- Eligible team members should add the Python project to the `projects` field in their individual entry in [team_members](https://gitlab.com/gitlab-com/www-gitlab-com/-/tree/master/data/team_members/person) or [team_database](https://gitlab.com/gitlab-com/www-gitlab-com/-/blob/master/doc/team_database.md), specifying appropriate roles such as reviewer or maintainer.
- Add the [individual roulette configuration](https://gitlab.com/gitlab-org/python/code-review-templates/-/tree/main/individual_roulette?ref_type=heads) to your project.
- **For smaller projects (for example, fewer than 10 contributors)**:
- Leverage the company wide pool of Python experts by adding the [shared pool configuration](https://gitlab.com/gitlab-org/python/code-review-templates/-/tree/main/shared_pull/danger?ref_type=heads) to your project.
- You can also encourage contributors or other non-domain reviewers to reach out in your team's Slack channel for domain expertise where needed.
When a merge request is created, Review Roulette will randomly select qualified reviewers based on your configuration.
### Additional recommendations
For more information, see [reviewer roulette](../code_review.md#reviewer-roulette)
### Ask for help
If contributors have questions or need additional help with Python-specific reviews, direct them to the GitLab `#python` or `#python_maintainers` Slack channels for assistance.
## How to become Python maintainer
Established projects have their own pools of reviewers and maintainers. Smaller or new projects can benefit from the help of established Python experts at GitLab.
### GitLab Python experts
GitLab Python experts are professionals with Python expertise who contribute to improving code quality across different projects.
To become one:
1. Create a merge request to add `python: maintainer` competency under `projects` to your [team](https://gitlab.com/gitlab-com/www-gitlab-com/-/tree/master/data/team_members/person?ref_type=heads) file.
1. Use [this](https://gitlab.com/gitlab-com/www-gitlab-com/-/blob/master/.gitlab/merge_request_templates/Python%20expert.md) template and follow the described process.
Once your merge request is merged, you'll be added to the Python maintainers group.
### Reviewers and maintainers of a specific project
Each project can establish their own review process. Review the maintainership guidelines and/or contact current maintainers for more information.
## Maintainer responsibilities
In addition to code reviews, maintainers are responsible for guiding architectural decisions and monitoring and adopting relevant engineering practices introduced in GitLab.com into their Python projects. This helps to ensure Python projects are consistent and aligned with company standards. Maintaining consistency simplifies transitions between GitLab.com and Python projects while reducing context switching overhead.
**Technical prerequisites for Maintainers**:
- Strong experience with the Python frameworks used in the specific project. Commonly used frameworks include: [FastAPI](https://fastapi.tiangolo.com/) and [Pydantic](https://docs.pydantic.dev/latest/), etc.
- Proficiency with Python testing frameworks such as `pytest`, including advanced testing strategies (for example, mocking, integration tests, and test-driven development).
- Understanding of backwards compatibility considerations ([Work item](https://gitlab.com/gitlab-org/gitlab/-/issues/514689)).
**Code review objectives**:
- Verify and confirm changes adheres to style guide ([Work item](https://gitlab.com/gitlab-org/gitlab/-/issues/506689)) and existing patterns in the project.
- Where applicable, ensure test coverage is added for the changes introduced in the MR.
- Review for performance implications.
- Check for security vulnerabilities.
- Assess code change impact on existing systems.
- Verify that the MR has the correct [MR type label](../labels/_index.md#type-labels) and is assigned to the current milestone.
**Additional responsibilities**:
- Maintain accurate and complete documentation.
- Monitor and update package dependencies as necessary.
- Mentor other engineers on Python best practices.
- Evaluate and propose new tools and libraries.
- Monitor performance and propose optimizations.
- Ensure security standards are maintained.
- Ensure the project is consistent and aligned with GitLab standards by regularly monitoring and adopting relevant engineering practices introduced in GitLab.com.
- Establish and enforce clear code review processes.
## Code review best practices
When writing and reviewing code, follow our Style Guides. Code authors and reviewers are encouraged to pay attention
to these areas:
## Review focus areas
When reviewing Python code at GitLab, consider the following areas:
### 1. Code style
- Code follows our agreed [Python formatting standards](styleguide.md) (enforced in pipeline).
- Naming conventions are clear and descriptive.
- Docstrings are used for all public functions and classes.
### 2. Code quality
- Functions are focused, not overly complex and testable.
- Code is readable without excessive comments.
- No unused code or commented-out code.
### 3. Testing
- Test coverage is adequate for new code.
- Tests follow the naming convention `test_{file_being_tested}.py`.
- Mocks are used appropriately for external dependencies.
### 4. Documentation
- Functions and classes have clear docstrings.
- Complex logic has explanatory comments.
- Documentation is updated when adding features.
### 5. Security
- Code follows GitLab security guidelines.
- Inputs are properly validated.
- Error handling is appropriate.
### Backward compatibility requirements
When maintaining customer-facing services, maintainers must ensure backward compatibility across supported GitLab versions.
See the GitLab [Statement of Support](https://about.gitlab.com/support/statement-of-support/#version-support)
and Python [deployment guidelines](deployment.md#versioning).
Before merging changes, verify that they maintain compatibility with all supported versions to prevent disruption for users on different GitLab releases.

---
title: Create a new Python project
---
When creating a new Python repository, some guidelines help keep our code standardized.
## Recommended libraries
### Development & testing
- [`pytest`](https://docs.pytest.org/): Primary testing framework for writing and running tests.
- [`pytest-cov`](https://pytest-cov.readthedocs.io/): Test coverage reporting plugin for `pytest`.
- [`black`](https://black.readthedocs.io/): Opinionated code formatter that ensures consistent code
style.
- [`flake8`](https://flake8.pycqa.org/): Linter for style enforcement.
- [`pylint`](https://pylint.pycqa.org/): Comprehensive linter for error detection and quality
enforcement.
- [`mypy`](https://mypy.readthedocs.io/): Static type checker.
- [`isort`](https://pycqa.github.io/isort/): Utility to sort imports.
### Package manager & build system
- [`poetry`](https://python-poetry.org/): Modern packaging and dependency management.
### Common utilities
- [`typer`](https://typer.tiangolo.com/): Library for building CLI applications.
- [`python-dotenv`](https://saurabh-kumar.com/python-dotenv/): Environment variable management.
- [`pydantic`](https://docs.pydantic.dev/latest/): Data validation and settings management using
Python type annotations.
- [`fastapi`](https://fastapi.tiangolo.com): Modern, high-performance web framework for building
APIs.
- [`structlog`](https://www.structlog.org/): Structured logging library.
- [`httpx`](https://www.python-httpx.org/): Asynchronous and performant HTTP client.
- [`rich`](https://rich.readthedocs.io/en/latest/): Terminal formatting library for rich text.
- [`sqlmodel`](https://sqlmodel.tiangolo.com/): Intuitive and robust ORM.
- [`tqdm`](https://github.com/tqdm/tqdm): Fast, extensible progress bar for CLI.
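As a rough sketch of how a few of these libraries fit together, a small command-line entry point could use `typer` for the interface and `pydantic` for input validation. All names and fields below are illustrative assumptions, not part of an existing project:

```python
import typer
from pydantic import BaseModel, HttpUrl

app = typer.Typer()


class ScanTarget(BaseModel):
    """Illustrative input model validated by pydantic."""

    url: HttpUrl
    retries: int = 3


@app.command()
def scan(url: str, retries: int = 3) -> None:
    """Validate the arguments, then report what would be scanned."""
    target = ScanTarget(url=url, retries=retries)
    typer.echo(f"Scanning {target.url} with up to {target.retries} retries")


if __name__ == "__main__":
    app()
```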
## Recommended folder structure
Depending on the type of project (for example, API service, CLI application, or library), the folder
structure can vary. The following structure is for a standard CLI application.
```plaintext
project_name/
├── .gitlab/ # GitLab-specific configuration
│ ├── issue_templates/ # Issue templates
│ └── merge_request_templates/ # MR templates
├── .gitlab-ci.yml # CI/CD configuration
├── project_name/ # Main package directory
│ ├── __init__.py # Package initialization
│ ├── cli.py # Command-line interface entry points
│ ├── config.py # Configuration handling
│ └── core/ # Core functionality
│ └── __init__.py
├── tests/ # Test directory
│ ├── __init__.py
│ ├── conftest.py # pytest fixtures and configuration
│ └── test_*.py # Test modules
├── docs/ # Documentation
├── scripts/ # Utility scripts
├── README.md # Project overview
├── CONTRIBUTING.md # Contribution guidelines
├── LICENSE # License information
├── pyproject.toml # Project metadata and dependencies (Poetry)
```
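The `tests/conftest.py` file in this layout typically holds fixtures shared across test modules. A minimal, purely illustrative example:

```python
# tests/conftest.py — fixtures shared across the test modules shown above
import pytest


@pytest.fixture
def sample_config() -> dict:
    """Provide a small, illustrative configuration object for tests."""
    return {"log_level": "INFO", "timeout_seconds": 5}
```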
## Linter configuration
We should consolidate configurations into `pyproject.toml` as much as possible.
### `pyproject.toml`
```toml
[tool.black]
line-length = 120
[tool.isort]
profile = "black"
[tool.mypy]
python_version = "3.12"
ignore_missing_imports = true
[tool.pylint.main]
jobs = 0
load-plugins = [
# custom plugins
]
[tool.pylint.messages_control]
enable = [
# custom plugins
]
[tool.pylint.reports]
score = "no"
```
### `setup.cfg`
```ini
[flake8]
extend-ignore = E203,E501
extend-exclude = **/__init__.py,.venv,tests
indent-size = 4
max-line-length = 120
```
## Example Makefile
```makefile
# Excerpt from project Makefile showing common targets
# lint
.PHONY: install-lint-deps
install-lint-deps:
@echo "Installing lint dependencies..."
@poetry install --only lint
.PHONY: format
format: black isort
.PHONY: black
black: install-lint-deps
@echo "Running black format..."
@poetry run black ${CI_PROJECT_DIR}
.PHONY: isort
isort: install-lint-deps
@echo "Running isort format..."
@poetry run isort ${CI_PROJECT_DIR}
.PHONY: lint
lint: flake8 check-black check-isort check-pylint check-mypy
.PHONY: flake8
flake8: install-lint-deps
@echo "Running flake8..."
@poetry run flake8 ${CI_PROJECT_DIR}
.PHONY: check-black
check-black: install-lint-deps
@echo "Running black check..."
@poetry run black --check ${CI_PROJECT_DIR}
.PHONY: check-isort
check-isort: install-lint-deps
@echo "Running isort check..."
@poetry run isort --check-only ${CI_PROJECT_DIR}
.PHONY: check-pylint
check-pylint: install-lint-deps install-test-deps
@echo "Running pylint check..."
@poetry run pylint ${CI_PROJECT_DIR}
.PHONY: check-mypy
check-mypy: install-lint-deps
@echo "Running mypy check..."
@poetry run mypy ${CI_PROJECT_DIR}
# test
.PHONY: test
test: install-test-deps
@echo "Running tests..."
@poetry run pytest
.PHONY: test-coverage
test-coverage: install-test-deps
@echo "Running tests with coverage..."
@poetry run pytest --cov=duo_workflow_service --cov=lints --cov-report term --cov-report html
```
## Example GitLab CI Configuration
```yaml
# Excerpt from .gitlab-ci.yml showing linting and testing jobs
image: python:3.13

stages:
  - lint
  - test

variables:
  PIP_CACHE_DIR: "$CI_PROJECT_DIR/.cache/pip"
  POETRY_CACHE_DIR: "$CI_PROJECT_DIR/.cache/poetry"
  POETRY_VERSION: "2.1.2"

cache:
  key: ${CI_COMMIT_REF_SLUG}
  paths:
    - $PIP_CACHE_DIR
    - $POETRY_CACHE_DIR
    - .venv/

# Base template for Python jobs
.poetry:
  before_script:
    - pip install poetry==${POETRY_VERSION}
    - poetry config virtualenvs.in-project true
    - poetry add --dev black isort flake8 pylint mypy pytest pytest-cov

# Linting jobs
black:
  extends: .poetry
  stage: lint
  script:
    - poetry run black --check ${CI_PROJECT_DIR}

isort:
  extends: .poetry
  stage: lint
  script:
    - poetry run isort --check-only ${CI_PROJECT_DIR}

flake8:
  extends: .poetry
  stage: lint
  script:
    - poetry run flake8 ${CI_PROJECT_DIR}

pylint:
  extends: .poetry
  stage: lint
  script:
    - poetry run pylint ${CI_PROJECT_DIR}

mypy:
  extends: .poetry
  stage: lint
  script:
    - poetry run mypy ${CI_PROJECT_DIR}

# Testing jobs
test:
  extends: .poetry
  stage: test
  script:
    - poetry run pytest --cov=duo_workflow_service --cov-report=term --cov-report=xml:coverage.xml --junitxml=junit.xml
  coverage: '/TOTAL.+?(\d+\%)/'
  artifacts:
    when: always
    reports:
      junit: junit.xml
      coverage_report:
        coverage_format: cobertura
        path: coverage.xml
```
## Adding reviewer roulette
We recommend reviewer roulette to distribute review workload across reviewers and maintainers. A pool of Python Reviewers is available
for small Python projects and can be configured following [these steps](maintainership.md#how-to-set-up-a-python-code-review-process).
To create a pool of reviewers specific to a project:
1. Follow the
[GitLab Dangerfiles instructions](https://gitlab.com/gitlab-org/ruby/gems/gitlab-dangerfiles/-/blob/master/README.md#simple_roulette)
to add the configuration to your project.
1. Implement the
[Danger Reviewer component](https://gitlab.com/gitlab-org/components/danger-review#example) in
your GitLab CI pipeline to automatically trigger the roulette.

---
title: Remote Development developer guidelines
---
[Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/105783) in GitLab 16.0.
## Workspaces feature developer documentation
Currently, the majority of the developer documentation for the [Remote Development Workspaces feature](../../user/workspace/_index.md)
is located in the separate [`gitlab-remote-development-docs` project](https://gitlab.com/gitlab-org/remote-development/gitlab-remote-development-docs/-/blob/main/README.md).
Parts of that documentation will eventually be migrated here.

---
title: Spam protection and CAPTCHA
---
This guide provides an overview of how to add spam protection and CAPTCHA support to new areas of the
GitLab application.
## Add spam protection and CAPTCHA support to a new area
To add this support, you must implement the following areas as applicable:
1. [Model and Services](model_and_services.md): The basic prerequisite
changes to the backend code which are required to add spam or CAPTCHA API and UI support
for a feature which does not yet have support.
1. [REST API](rest_api.md): The changes needed to add
spam or CAPTCHA support to Grape REST API endpoints. Refer to the related
[REST API documentation](../../api/rest/troubleshooting.md#requests-detected-as-spam).
1. [GraphQL API](graphql_api.md): The changes needed to add spam or CAPTCHA support to GraphQL
mutations. Refer to the related
[GraphQL API documentation](../../api/graphql/_index.md#resolve-mutations-detected-as-spam).
1. [Web UI](web_ui.md): The various possible scenarios encountered when adding
spam/CAPTCHA support to the web UI, depending on whether the UI is JavaScript API-based (Vue or
plain JavaScript) or HTML-form (HAML) based.
You should also perform manual exploratory testing of the new feature. Refer to
[Exploratory testing](exploratory_testing.md) for more information.
## Spam-related model and API fields
Multiple levels of spam flagging determine how spam is handled. These levels are referenced in
[`Spam::SpamConstants`](https://gitlab.com/gitlab-org/gitlab/blob/master/app/services/spam/spam_constants.rb#L4-4),
and are used in various places in the application, such as
[`Spam::SpamActionService#perform_spam_service_check`](https://gitlab.com/gitlab-org/gitlab/blob/d7585b56c9e7dc69414af306d82906e28befe7da/app/services/spam/spam_action_service.rb#L61-61).
The possible values include:
- `BLOCK_USER`
- `DISALLOW`
- `CONDITIONAL_ALLOW`
- `OVERRIDE_VIA_ALLOW_POSSIBLE_SPAM`
- `ALLOW`
- `NOOP`
## Related topics
- [Spam and CAPTCHA support in the GraphQL API](../../api/graphql/_index.md#resolve-mutations-detected-as-spam)
- [Spam and CAPTCHA support in the REST API](../../api/rest/troubleshooting.md#requests-detected-as-spam)
- [reCAPTCHA Spam and Anti-bot Protection](../../integration/recaptcha.md)
- [Akismet and spam logs](../../integration/akismet.md)

---
title: REST API spam protection and CAPTCHA support
---
If the model can be modified via the REST API, you must also add support to all of the
relevant API endpoints which may modify spammable or spam-related attributes. This
definitely includes the `POST` and `PUT` endpoints, but may also include others, such as those
related to changing a model's confidential/public flag.
## Add support to the REST endpoints
The main steps are:
1. Add `helpers SpammableActions::CaptchaCheck::RestApiActionsSupport` in your `resource`.
1. Pass `perform_spam_check: true` to the Update Service class constructor.
It is set to `true` by default in the Create Service.
1. After you create or update the `Spammable` model instance, call `#check_spam_action_response!`
   and save the created or updated instance in a variable.
1. Identify the error handling logic for the `failure` case of the request, when the create or
   update was not successful. A failure can indicate possible spam detection, which adds an error
   to the `Spammable` instance. The error handling is usually a call such as `render_api_error!`
   or `render_validation_error!`.
1. Wrap the existing error handling logic in a
   `with_captcha_check_rest_api(spammable: my_spammable_instance)` call, passing the `Spammable`
   model instance you saved in a variable as the `spammable:` named argument. This call will:
   1. Perform the necessary spam checks on the model.
   1. If spam is detected:
      - Raise a Grape `#error!` exception with a descriptive spam-specific error message.
      - Include the relevant information added as error fields to the response.
        For more details on these fields, refer to the section in the REST API documentation on
        [Resolve requests detected as spam](../../api/rest/troubleshooting.md#requests-detected-as-spam).
{{< alert type="note" >}}
If you use the standard ApolloLink or Axios interceptor CAPTCHA support described
above, you can ignore the field details, because they are handled
automatically. They become relevant if you attempt to use the REST API directly to
process a failed check for potential spam, and resubmit the request with a solved
CAPTCHA response.
{{< /alert >}}
Here is an example for the `post` and `put` actions on the `snippets` resource:
```ruby
module API
class Snippets < ::API::Base
#...
resource :snippets do
# This helper provides `#with_captcha_check_rest_api`
helpers SpammableActions::CaptchaCheck::RestApiActionsSupport
post do
#...
service_response = ::Snippets::CreateService.new(project: nil, current_user: current_user, params: attrs).execute
snippet = service_response.payload[:snippet]
if service_response.success?
present snippet, with: Entities::PersonalSnippet, current_user: current_user
else
# Wrap the normal error response in a `with_captcha_check_rest_api(spammable: snippet)` block
with_captcha_check_rest_api(spammable: snippet) do
# If possible spam was detected, an exception would have been thrown by
# `#with_captcha_check_rest_api` for Grape to handle via `error!`
render_api_error!({ error: service_response.message }, service_response.http_status)
end
end
end
put ':id' do
#...
service_response = ::Snippets::UpdateService.new(project: nil, current_user: current_user, params: attrs, perform_spam_check: true).execute(snippet)
snippet = service_response.payload[:snippet]
if service_response.success?
present snippet, with: Entities::PersonalSnippet, current_user: current_user
else
# Wrap the normal error response in a `with_captcha_check_rest_api(spammable: snippet)` block
with_captcha_check_rest_api(spammable: snippet) do
# If possible spam was detected, an exception would have been thrown by
# `#with_captcha_check_rest_api` for Grape to handle via `error!`
render_api_error!({ error: service_response.message }, service_response.http_status)
end
end
end
end
end
end
```

---
title: Exploratory testing of CAPTCHAs
---
You can reliably test CAPTCHA on review apps, and in your local development environment (GDK).
You can always:
- Force a reCAPTCHA to appear where it is supported.
- Force a checkbox to display, instead of street sign images to find and select.
To set up testing, follow the configuration on this page.
## Use appropriate test data
Make sure you are testing a scenario which has spam/CAPTCHA enabled. For example, edit a
public snippet, because only public snippets are checked for spam.
## Enable feature flags
Enable any relevant feature flag, if the spam/CAPTCHA support is behind a feature flag.
## Set up Akismet and reCAPTCHA
1. To set up reCAPTCHA:
   1. Review the [GitLab reCAPTCHA documentation](../../integration/recaptcha.md).
   1. Follow the instructions provided by Google to get the official [test reCAPTCHA credentials](https://developers.google.com/recaptcha/docs/faq#id-like-to-run-automated-tests-with-recaptcha.-what-should-i-do).
      1. For **Site key**, use: `6LeIxAcTAAAAAJcZVRqyHh71UMIEGNQ_MXjiZKhI`
      1. For **Secret key**, use: `6LeIxAcTAAAAAGG-vFI1TnRWxMZNFuojJ4WifJWe`
   1. Go to **Admin -> Settings -> Reporting** settings: `http://gdk.test:3000/admin/application_settings/reporting#js-spam-settings`
   1. Expand the **Spam and Anti-bot Protection** section.
   1. Select **Enable reCAPTCHA**. Enabling for login is not required unless you are testing that feature.
   1. Enter the **Site key** and **Secret key**.
1. To set up Akismet:
   1. Review the [GitLab documentation on Akismet](../../integration/akismet.md).
   1. Get an Akismet API key. You can sign up for [a testing key from Akismet](https://akismet.com).
      You must enter your local host (such as `gdk.test`) and email when signing up.
   1. Go to the GitLab Akismet settings page, for example:
      `http://gdk.test:3000/admin/application_settings/reporting#js-spam-settings`
   1. Enable Akismet and enter your Akismet **API key**.
1. To force an Akismet false-positive spam check, refer to the
   [Akismet API documentation](https://akismet.com/developers/detailed-docs/comment-check/) and
   [Akismet Getting Started documentation](https://akismet.com/support/getting-started/confirm/) for more details:
   1. You can use `akismet-guaranteed-spam@example.com` as the author email to force spam using the following steps:
      1. Go to user email settings: `http://gdk.test:3000/-/profile/emails`
      1. Add `akismet-guaranteed-spam@example.com` as a secondary email for the administrator user.
      1. Confirm it in the Rails console: `bin/rails c` -> `User.find_by_username('root').emails.last.confirm`
      1. Switch this verified email to be your primary email:
         1. Go to **Avatar dropdown list -> Edit Profile -> Main Settings**.
         1. For **Email**, enter `akismet-guaranteed-spam@example.com` to replace `admin@example.com`.
         1. Select **Update Profile Settings** to save your changes.
## Test in the web UI
After you have all the above configuration in place, you can test CAPTCHAs. Test
in an area of the application which already has CAPTCHA support, such as:
- Creating or editing an issue.
- Creating or editing a public snippet. Only **public** snippets are checked for spam.
## Test in a development environment
After you force Spam Flagging + CAPTCHA using the steps above, you can test the
behavior with any spam-protected model/controller action.
### Test with CAPTCHA enabled (`CONDITIONAL_ALLOW` verdict)
If CAPTCHA is enabled in **Admin -> Settings -> Reporting -> Spam** and **Anti-bot Protection -> Enable reCAPTCHA**, you must solve the CAPTCHA popup modal before you can resubmit the form.
### Testing with CAPTCHA disabled (`DISALLOW` verdict)
If CAPTCHA is disabled in **Admin -> Settings -> Reporting -> Spam** and **Anti-bot Protection -> Enable reCAPTCHA**,
no CAPTCHA popup displays. You are prevented from submitting the form at all.
### HTML page to render reCAPTCHA
{{< alert type="note" >}}
If you use **the Google official test reCAPTCHA credentials** listed in
[Set up Akismet and reCAPTCHA](#set-up-akismet-and-recaptcha), the
CAPTCHA response string does not matter. It can be any string. If you use a
real, valid key pair, you must solve the CAPTCHA to obtain a
valid CAPTCHA response to use. You can do this once only, and only before it expires.
{{< /alert >}}
To directly test the GraphQL API via GraphQL Explorer (`http://gdk.test:3000/-/graphql-explorer`),
get a reCAPTCHA response string via this form: `public/recaptcha.html` (`http://gdk.test:3000/recaptcha.html`):
```html
<html>
<head>
<title>reCAPTCHA demo: Explicit render after an onload callback</title>
<script type="text/javascript">
var onloadCallback = function() {
grecaptcha.render('html_element', {
'sitekey' : '6Ld05AsaAAAAAMsm1yTUp4qsdFARN15rQJPPqv6i'
});
};
function onSubmit() {
window.document.getElementById('recaptchaResponse').innerHTML = grecaptcha.getResponse();
return false;
}
</script>
</head>
<body>
<form onsubmit="return onSubmit()">
<div id="html_element"></div>
<br>
<input type="submit" value="Submit">
</form>
<div>
<h1>recaptchaResponse:</h1>
<div id="recaptchaResponse"></div>
</div>
<script src="https://www.google.com/recaptcha/api.js?onload=onloadCallback&render=explicit"
async defer>
</script>
</body>
</html>
```
## Spam/CAPTCHA API exploratory testing examples
These sections describe the steps needed to perform manual exploratory testing of
various scenarios of the Spam and CAPTCHA behavior for the REST and GraphQL APIs.
For the prerequisites, you must:
1. Perform all the steps listed above to enable Spam and CAPTCHA in the development environment,
and force form submissions to require a CAPTCHA.
1. Ensure you have created an HTML page to render CAPTCHA under the `/public` directory,
   containing a form to manually generate a valid CAPTCHA response string.
If you use **Google's official test reCAPTCHA credentials** listed in
[Set up Akismet and reCAPTCHA](#set-up-akismet-and-recaptcha), the contents of the
CAPTCHA response string don't matter.
1. Go to **Admin -> Settings -> Reporting -> Spam and Anti-bot protection**.
1. Select or clear **Enable reCAPTCHA** and **Enable Akismet** according to your
scenario's needs.
The following examples use snippet creation. You could also use snippet updates, issue creation,
or issue updates. Issues and snippets are the only models with full spam and CAPTCHA support.
### Initial setup
1. Create an API token.
1. Export it in your terminal for the REST commands: `export PRIVATE_TOKEN=<your_api_token>`
1. Ensure you are signed into the GitLab development environment at `localhost:3000` before using GraphiQL explorer,
because it uses your authenticated user as authorization for running GraphQL queries.
1. For the GraphQL examples, use the GraphiQL explorer at `http://localhost:3000/-/graphql-explorer`.
1. Use the `--include` (`-i`) option to `curl` to print the HTTP response headers, including the status code.
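If you prefer scripting the REST calls instead of typing `curl` commands, a small helper using the third-party `requests` package can reuse the same token. This is an optional convenience, not part of the documented setup; it assumes `requests` is installed and `PRIVATE_TOKEN` is exported as above:

```python
# Optional convenience: reuse the exported token from Python instead of curl.
import os

import requests

GITLAB_URL = "http://localhost:3000"

session = requests.Session()
session.headers["PRIVATE-TOKEN"] = os.environ["PRIVATE_TOKEN"]

# Quick sanity check that the token works before running the scenarios below.
print(session.get(f"{GITLAB_URL}/api/v4/user").json()["username"])
```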
### Scenario: Akismet and CAPTCHA enabled
In this example, Akismet and CAPTCHA are enabled:
1. [Initial request](#initial-request).
#### Initial request
This initial request fails because no CAPTCHA response is provided.
REST request:
```shell
curl --request POST --header "PRIVATE-TOKEN: $PRIVATE_TOKEN" "http://localhost:3000/api/v4/snippets?title=Title&file_name=FileName&content=Content&visibility=public"
```
REST response:
```shell
{"needs_captcha_response":true,"spam_log_id":42,"captcha_site_key":"XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX","message":{"error":"Your snippet has been recognized as spam. Please, change the content or solve the reCAPTCHA to proceed."}}
```
GraphQL request:
```graphql
mutation {
createSnippet(input: {
title: "Title"
visibilityLevel: public
blobActions: [
{
action: create
filePath: "BlobPath"
content: "BlobContent"
}
]
}) {
snippet {
id
title
}
errors
}
}
```
GraphQL response:
```json
{
"data": {
"createSnippet": null
},
"errors": [
{
"message": "Request denied. Solve CAPTCHA challenge and retry",
"locations": [
{
"line": 22,
"column": 5
}
],
"path": [
"createSnippet"
],
"extensions": {
"needs_captcha_response": true,
"spam_log_id": 140,
"captcha_site_key": "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
}
}
]
}
```
#### Second request
This request succeeds because a CAPTCHA response is provided.
REST request:
```shell
export CAPTCHA_RESPONSE="<CAPTCHA response obtained from HTML page to render CAPTCHA>"
export SPAM_LOG_ID="<spam_log_id obtained from initial REST response>"
curl --request POST --header "PRIVATE-TOKEN: $PRIVATE_TOKEN" --header "X-GitLab-Captcha-Response: $CAPTCHA_RESPONSE" --header "X-GitLab-Spam-Log-Id: $SPAM_LOG_ID" "http://localhost:3000/api/v4/snippets?title=Title&file_name=FileName&content=Content&visibility=public"
```
REST response:
```shell
{"id":42,"title":"Title","description":null,"visibility":"public", "other_fields": "..."}
```
GraphQL request:
{{< alert type="note" >}}
The GitLab GraphiQL implementation doesn't allow passing of headers, so we must write
this as a `curl` query. Here, `--data-binary` is used to properly handle escaped double quotes
in the JSON-embedded query.
{{< /alert >}}
```shell
export CAPTCHA_RESPONSE="<CAPTCHA response obtained from HTML page to render CAPTCHA>"
export SPAM_LOG_ID="<spam_log_id obtained from initial REST response>"
curl --include "http://localhost:3000/api/graphql" --header "Authorization: Bearer $PRIVATE_TOKEN" --header "Content-Type: application/json" --header "X-GitLab-Captcha-Response: $CAPTCHA_RESPONSE" --header "X-GitLab-Spam-Log-Id: $SPAM_LOG_ID" --request POST --data-binary '{"query": "mutation {createSnippet(input: {title: \"Title\" visibilityLevel: public blobActions: [ { action: create filePath: \"BlobPath\" content: \"BlobContent\" } ] }) { snippet { id title } errors }}"}'
```
GraphQL response:
```json
{"data":{"createSnippet":{"snippet":{"id":"gid://gitlab/PersonalSnippet/42","title":"Title"},"errors":[]}}}
```
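To automate the two requests above end to end, a rough Python sketch of the same flow might look like this. It assumes the `requests` package, the exported `PRIVATE_TOKEN`, and a CAPTCHA response string obtained from the HTML page described earlier; it illustrates the documented flow and is not an official client:

```python
# Sketch of the documented two-step flow: initial request, then retry with CAPTCHA headers.
import os

import requests

GITLAB_URL = "http://localhost:3000"
HEADERS = {"PRIVATE-TOKEN": os.environ["PRIVATE_TOKEN"]}
PARAMS = {"title": "Title", "file_name": "FileName", "content": "Content", "visibility": "public"}

initial = requests.post(f"{GITLAB_URL}/api/v4/snippets", params=PARAMS, headers=HEADERS)
details = initial.json()

if details.get("needs_captcha_response"):
    captcha_response = input("CAPTCHA response from the HTML page: ")
    retry = requests.post(
        f"{GITLAB_URL}/api/v4/snippets",
        params=PARAMS,
        headers={
            **HEADERS,
            "X-GitLab-Captcha-Response": captcha_response,
            "X-GitLab-Spam-Log-Id": str(details["spam_log_id"]),
        },
    )
    print(retry.status_code, retry.json())
```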
### Scenario: Akismet enabled, CAPTCHA disabled
For this scenario, ensure you clear **Enable reCAPTCHA** in the **Admin** area settings as described above.
If CAPTCHA is not enabled, any request flagged as potential spam fails with no chance to resubmit,
even if it could otherwise be resubmitted if CAPTCHA were enabled and successfully solved.
The REST request is the same as if CAPTCHA was enabled:
```shell
curl --request POST --header "PRIVATE-TOKEN: $PRIVATE_TOKEN" "http://localhost:3000/api/v4/snippets?title=Title&file_name=FileName&content=Content&visibility=public"
```
REST response:
```shell
{"message":{"error":"Your snippet has been recognized as spam and has been discarded."}}
```
GraphQL request:
```graphql
mutation {
createSnippet(input: {
title: "Title"
visibilityLevel: public
blobActions: [
{
action: create
filePath: "BlobPath"
content: "BlobContent"
}
]
}) {
snippet {
id
title
}
errors
}
}
```
GraphQL response:
```json
{
"data": {
"createSnippet": null
},
"errors": [
{
"message": "Request denied. Spam detected",
"locations": [
{
"line": 22,
"column": 5
}
],
"path": [
"createSnippet"
],
"extensions": {
"spam": true
}
}
]
}
```
### Scenario: `allow_possible_spam` application setting enabled
With the `allow_possible_spam` application setting enabled, the API returns a 200 response. Any
valid request is successful and no CAPTCHA is presented, even if the request is considered
spam.
|
---
stage: Software Supply Chain Security
group: Authorization
info: Any user with at least the Maintainer role can merge updates to this content.
For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Exploratory testing of CAPTCHAs
breadcrumbs:
- doc
- development
- spam_protection_and_captcha
---
You can reliably test CAPTCHA on review apps, and in your local development environment (GDK).
You can always:
- Force a reCAPTCHA to appear where it is supported.
- Force a checkbox to display, instead of street sign images to find and select.
To set up testing, follow the configuration on this page.
## Use appropriate test data
Make sure you are testing a scenario which has spam/CAPTCHA enabled. For example:
make sure you are editing a public snippet, as only public snippets are checked for spam.
## Enable feature flags
Enable any relevant feature flag, if the spam/CAPTCHA support is behind a feature flag.
## Set up Akismet and reCAPTCHA
1. To set up reCAPTCHA:
1. Review the [GitLab reCAPTCHA documentation](../../integration/recaptcha.md).
1. Follow the instructions provided by Google to get the official [test reCAPTCHA credentials](https://developers.google.com/recaptcha/docs/faq#id-like-to-run-automated-tests-with-recaptcha.-what-should-i-do).
1. For **Site key**, use: `6LeIxAcTAAAAAJcZVRqyHh71UMIEGNQ_MXjiZKhI`
1. For **Secret key**, use: `6LeIxAcTAAAAAGG-vFI1TnRWxMZNFuojJ4WifJWe`
1. Go to **Admin -> Settings -> Reporting** settings: `http://gdk.test:3000/admin/application_settings/reporting#js-spam-settings`
1. Expand the **Spam and Anti-bot Protection** section.
1. Select **Enable reCAPTCHA**. Enabling for login is not required unless you are testing that feature.
1. Enter the **Site key** and **Secret key**.
1. To set up Akismet:
1. Review the [GitLab documentation on Akismet](../../integration/akismet.md).
1. Get an Akismet API key. You can sign up for [a testing key from Akismet](https://akismet.com).
You must enter your local host (such as`gdk.test`) and email when signing up.
1. Go to GitLab Akismet settings page, for example:
`http://gdk.test:3000/admin/application_settings/reporting#js-spam-settings`
1. Enable Akismet and enter your Akismet **API key**.
1. To force an Akismet false-positive spam check, refer to the
[Akismet API documentation](https://akismet.com/developers/detailed-docs/comment-check/) and
[Akismet Getting Started documentation](https://akismet.com/support/getting-started/confirm/) for more details:
1. You can use `akismet-guaranteed-spam@example.com` as the author email to force spam using the following steps:
1. Go to user email settings: `http://gdk.test:3000/-/profile/emails`
1. Add `akismet-guaranteed-spam@example.com` as a secondary email for the administrator user.
1. Confirm it in the Rails console: `bin/rails c` -> `User.find_by_username('root').emails.last.confirm`
1. Switch this verified email to be your primary email:
1. Go to **Avatar dropdown list -> Edit Profile -> Main Settings**.
1. For **Email**, enter `akismet-guaranteed-spam@example.com` to replace `admin@example.com`.
1. Select **Update Profile Settings** to save your changes.
## Test in the web UI
After you have all the above configuration in place, you can test CAPTCHAs. Test
in an area of the application which already has CAPTCHA support, such as:
- Creating or editing an issue.
- Creating or editing a public snippet. Only **public** snippets are checked for spam.
## Test in a development environment
After you force Spam Flagging + CAPTCHA using the steps above, you can test the
behavior with any spam-protected model/controller action.
### Test with CAPTCHA enabled (`CONDITIONAL_ALLOW` verdict)
If CAPTCHA is enabled in these areas, you must solve the CAPTCHA popup modal before you can resubmit the form:
- **Admin -> Settings -> Reporting -> Spam**
- **Anti-bot Protection -> Enable reCAPTCHA**
### Testing with CAPTCHA disabled (`DISALLOW` verdict)
If CAPTCHA is disabled in **Admin -> Settings -> Reporting -> Spam** and **Anti-bot Protection -> Enable reCAPTCHA**,
no CAPTCHA popup displays. You are prevented from submitting the form at all.
### HTML page to render reCAPTCHA
{{< alert type="note" >}}
If you use **the Google official test reCAPTCHA credentials** listed in
[Set up Akismet and reCAPTCHA](#set-up-akismet-and-recaptcha), the
CAPTCHA response string does not matter. It can be any string. If you use a
real, valid key pair, you must solve the CAPTCHA to obtain a
valid CAPTCHA response to use. You can do this once only, and only before it expires.
{{< /alert >}}
To directly test the GraphQL API via GraphQL Explorer (`http://gdk.test:3000/-/graphql-explorer`),
get a reCAPTCHA response string via this form: `public/recaptcha.html` (`http://gdk.test:3000/recaptcha.html`):
```html
<html>
<head>
<title>reCAPTCHA demo: Explicit render after an onload callback</title>
<script type="text/javascript">
var onloadCallback = function() {
grecaptcha.render('html_element', {
'sitekey' : '6Ld05AsaAAAAAMsm1yTUp4qsdFARN15rQJPPqv6i'
});
};
function onSubmit() {
window.document.getElementById('recaptchaResponse').innerHTML = grecaptcha.getResponse();
return false;
}
</script>
</head>
<body>
<form onsubmit="return onSubmit()">
<div id="html_element"></div>
<br>
<input type="submit" value="Submit">
</form>
<div>
<h1>recaptchaResponse:</h1>
<div id="recaptchaResponse"></div>
</div>
<script src="https://www.google.com/recaptcha/api.js?onload=onloadCallback&render=explicit"
async defer>
</script>
</body>
</html>
```
## Spam/CAPTCHA API exploratory testing examples
These sections describe the steps needed to perform manual exploratory testing of
various scenarios of the Spam and CAPTCHA behavior for the REST and GraphQL APIs.
For the prerequisites, you must:
1. Perform all the steps listed above to enable Spam and CAPTCHA in the development environment,
and force form submissions to require a CAPTCHA.
1. Ensure you have created an HTML page to render CAPTCHA under the `/public` directory,
with a page that contains a form to manually generate a valid CAPTCHA response string.
If you use **Google's official test reCAPTCHA credentials** listed in
[Set up Akismet and reCAPTCHA](#set-up-akismet-and-recaptcha), the contents of the
CAPTCHA response string don't matter.
1. Go to **Admin -> Settings -> Reporting -> Spam and Anti-bot protection**.
1. Select or clear **Enable reCAPTCHA** and **Enable Akismet** according to your
scenario's needs.
The following examples use snippet creation as an example. You could also use
snippet updates, issue creation, or issue updates. Issues and snippets are the
only models with full Spam and CAPTCHA support.
### Initial setup
1. Create an API token.
1. Export it in your terminal for the REST commands: `export PRIVATE_TOKEN=<your_api_token>`
1. Ensure you are signed into the GitLab development environment at `localhost:3000` before using GraphiQL explorer,
because it uses your authenticated user as authorization for running GraphQL queries.
1. For the GraphQL examples, use the GraphiQL explorer at `http://localhost:3000/-/graphql-explorer`.
1. Use the `--include` (`-i`) option to `curl` to print the HTTP response headers, including the status code.
### Scenario: Akismet and CAPTCHA enabled
In this example, Akismet and CAPTCHA are enabled:
1. [Initial request](#initial-request).
1. [Second request](#second-request).
#### Initial request
This initial request fails because no CAPTCHA response is provided.
REST request:
```shell
curl --request POST --header "PRIVATE-TOKEN: $PRIVATE_TOKEN" "http://localhost:3000/api/v4/snippets?title=Title&file_name=FileName&content=Content&visibility=public"
```
REST response:
```json
{"needs_captcha_response":true,"spam_log_id":42,"captcha_site_key":"XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX","message":{"error":"Your snippet has been recognized as spam. Please, change the content or solve the reCAPTCHA to proceed."}}
```
GraphQL request:
```graphql
mutation {
createSnippet(input: {
title: "Title"
visibilityLevel: public
blobActions: [
{
action: create
filePath: "BlobPath"
content: "BlobContent"
}
]
}) {
snippet {
id
title
}
errors
}
}
```
GraphQL response:
```json
{
"data": {
"createSnippet": null
},
"errors": [
{
"message": "Request denied. Solve CAPTCHA challenge and retry",
"locations": [
{
"line": 22,
"column": 5
}
],
"path": [
"createSnippet"
],
"extensions": {
"needs_captcha_response": true,
"spam_log_id": 140,
"captcha_site_key": "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
}
}
]
}
```
#### Second request
This request succeeds because a CAPTCHA response is provided.
REST request:
```shell
export CAPTCHA_RESPONSE="<CAPTCHA response obtained from HTML page to render CAPTCHA>"
export SPAM_LOG_ID="<spam_log_id obtained from initial REST response>"
curl --request POST --header "PRIVATE-TOKEN: $PRIVATE_TOKEN" --header "X-GitLab-Captcha-Response: $CAPTCHA_RESPONSE" --header "X-GitLab-Spam-Log-Id: $SPAM_LOG_ID" "http://localhost:3000/api/v4/snippets?title=Title&file_name=FileName&content=Content&visibility=public"
```
REST response:
```json
{"id":42,"title":"Title","description":null,"visibility":"public", "other_fields": "..."}
```
GraphQL request:
{{< alert type="note" >}}
The GitLab GraphiQL implementation doesn't allow passing of headers, so we must write
this as a `curl` query. Here, `--data-binary` is used to properly handle escaped double quotes
in the JSON-embedded query.
{{< /alert >}}
```shell
export CAPTCHA_RESPONSE="<CAPTCHA response obtained from HTML page to render CAPTCHA>"
export SPAM_LOG_ID="<spam_log_id obtained from initial REST response>"
curl --include "http://localhost:3000/api/graphql" --header "Authorization: Bearer $PRIVATE_TOKEN" --header "Content-Type: application/json" --header "X-GitLab-Captcha-Response: $CAPTCHA_RESPONSE" --header "X-GitLab-Spam-Log-Id: $SPAM_LOG_ID" --request POST --data-binary '{"query": "mutation {createSnippet(input: {title: \"Title\" visibilityLevel: public blobActions: [ { action: create filePath: \"BlobPath\" content: \"BlobContent\" } ] }) { snippet { id title } errors }}"}'
```
GraphQL response:
```json
{"data":{"createSnippet":{"snippet":{"id":"gid://gitlab/PersonalSnippet/42","title":"Title"},"errors":[]}}}
```
### Scenario: Akismet enabled, CAPTCHA disabled
For this scenario, ensure you clear **Enable reCAPTCHA** in the **Admin** area settings as described above.
If CAPTCHA is not enabled, any request flagged as potential spam fails with no chance to resubmit,
even if it could otherwise be resubmitted if CAPTCHA were enabled and successfully solved.
The REST request is the same as if CAPTCHA were enabled:
```shell
curl --request POST --header "PRIVATE-TOKEN: $PRIVATE_TOKEN" "http://localhost:3000/api/v4/snippets?title=Title&file_name=FileName&content=Content&visibility=public"
```
REST response:
```json
{"message":{"error":"Your snippet has been recognized as spam and has been discarded."}}
```
GraphQL request:
```graphql
mutation {
createSnippet(input: {
title: "Title"
visibilityLevel: public
blobActions: [
{
action: create
filePath: "BlobPath"
content: "BlobContent"
}
]
}) {
snippet {
id
title
}
errors
}
}
```
GraphQL response:
```json
{
"data": {
"createSnippet": null
},
"errors": [
{
"message": "Request denied. Spam detected",
"locations": [
{
"line": 22,
"column": 5
}
],
"path": [
"createSnippet"
],
"extensions": {
"spam": true
}
}
]
}
```
### Scenario: `allow_possible_spam` application setting enabled
With the `allow_possible_spam` application setting enabled, the API returns a 200 response. Any
valid request is successful and no CAPTCHA is presented, even if the request is considered
spam.
# Model and services spam protection and CAPTCHA support
Before adding any spam or CAPTCHA support to the REST API, GraphQL API, or Web UI, you must
first add the necessary support to:
1. The backend ActiveRecord models.
1. The services layer.
All or most of the following changes are required, regardless of the type of spam or CAPTCHA request
implementation you are supporting. Some newer features which are completely based on the GraphQL API
may not have any controllers, and don't require you to add the `mark_as_spam` action to the controller.
To do this:
1. [Add `Spammable` support to the ActiveRecord model](#add-spammable-support-to-the-activerecord-model).
1. [Add support for the `mark_as_spam` action to the controller](#add-support-for-the-mark_as_spam-action-to-the-controller).
1. [Add a call to `check_for_spam` to the execute method of services](#add-a-call-to-check_for_spam-to-the-execute-method-of-services).
## Add `Spammable` support to the ActiveRecord model
1. Include the `Spammable` module in the model class:
```ruby
include Spammable
```
1. Add `attr_spammable` to indicate which fields can be checked for spam. Up to
two fields per model are supported: a "`title`" and a "`description`". You can
designate which fields to consider the "`title`" or "`description`". For example,
this line designates the `content` field as the `description`:
```ruby
attr_spammable :content, spam_description: true
```
1. Add a `#check_for_spam?` method implementation:
```ruby
def check_for_spam?(user:)
# Return a boolean result based on various applicable checks, which may include
# which attributes have changed, the type of user, whether the data is publicly
# visible, and other criteria. This may vary based on the type of model, and
# may change over time as spam checking requirements evolve.
end
```
Refer to other existing `Spammable` models'
implementations of this method for examples of the required logic checks.
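Putting these steps together, a minimal sketch of a hypothetical `Widget` model might look like the following.
The model name, attribute names, and the `public?` visibility check are illustrative assumptions, not an existing GitLab model:
```ruby
class Widget < ApplicationRecord
  include Spammable

  # Designate which attributes are spam-checked: `title` plays the "title"
  # role and `content` plays the "description" role.
  attr_spammable :title, spam_title: true
  attr_spammable :content, spam_description: true

  def check_for_spam?(user:)
    # Example criteria only: check when a spammable attribute has changed and
    # the record is publicly visible. Real models use their own checks.
    (title_changed? || content_changed?) && public?
  end
end
```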
## Add support for the `mark_as_spam` action to the controller
The `SpammableActions::AkismetMarkAsSpamAction` module adds support for a `#mark_as_spam` action
to a controller. This controller allows administrators to manage spam for the associated
`Spammable` model in the [**Spam log** section](../../integration/akismet.md) of the **Admin** area.
1. Include the `SpammableActions::AkismetMarkAsSpamAction` module in the controller.
```ruby
include SpammableActions::AkismetMarkAsSpamAction
```
1. Add a `#spammable_path` method implementation. The spam administration page redirects
to this page after edits. Refer to other existing controllers' implementations
of this method for examples of the type of path logic required. In general, it should
be the `#show` action for the `Spammable` model's controller.
```ruby
def spammable_path
widget_path(widget)
end
```
{{< alert type="note" >}}
There may be other changes needed to controllers, depending on how the feature is
implemented. See [Web UI](web_ui.md) for more details.
{{< /alert >}}
## Add a call to `check_for_spam` to the execute method of services
This approach applies to any service which can persist spammable attributes:
1. In the relevant Create or Update service under `app/services`, call the `check_for_spam` method on the model.
1. If the spam check fails:
- An error is added to the model, which causes it to be invalid and prevents it from being saved.
- The `needs_recaptcha` property is set to `true`.
These changes to the model enable it for handling by the subsequent backend and frontend CAPTCHA logic.
Make these changes to each relevant service:
1. In the `execute` method, call the `check_for_spam` method on the model.
(You can also use `before_create` or `before_update`, if the service
uses that pattern.) This method uses named arguments, so its usage is clear if
you refer to existing examples. However, two important considerations exist:
1. The `check_for_spam` must be executed _after_ all necessary changes are made to
the unsaved (and dirty) `Spammable` model instance. This ordering ensures
spammable attributes exist to be spam-checked.
1. The `check_for_spam` must be executed _before_ the model is checked for errors and
attempting a `save`. If potential spam is detected in the model's changed attributes, we must prevent a save.
```ruby
module Widget
  class CreateService < ::Widget::BaseService
    # NOTE: We add a default value of `true` for `perform_spam_check`, because spam checking is likely to be necessary.
    def initialize(project:, current_user: nil, params: {}, perform_spam_check: true)
      super(project: project, current_user: current_user, params: params)
      @perform_spam_check = perform_spam_check
    end

    def execute
      widget = Widget::BuildService.new(project, current_user, params).execute

      # More code that may manipulate dirty model before it is spam checked.

      # NOTE: do this AFTER the spammable model is instantiated, but BEFORE
      # it is validated or saved.
      widget.check_for_spam(user: current_user, action: :create) if perform_spam_check

      # Possibly more code related to saving model, but should not change any attributes.
      widget.save
    end

    private

    attr_reader :perform_spam_check
  end
end
```
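An Update service follows the same pattern. The following is a sketch under the same hypothetical `Widget`
namespace; the class itself, the `action: :update` argument, and the `execute(widget)` signature are assumptions
for illustration:
```ruby
module Widget
  class UpdateService < ::Widget::BaseService
    def initialize(project:, current_user: nil, params: {}, perform_spam_check:)
      super(project: project, current_user: current_user, params: params)
      @perform_spam_check = perform_spam_check
    end

    def execute(widget)
      widget.assign_attributes(params)

      # Spam-check AFTER the dirty attributes have been assigned, but BEFORE
      # the model is validated or saved.
      widget.check_for_spam(user: current_user, action: :update) if perform_spam_check

      widget.save
    end

    private

    attr_reader :perform_spam_check
  end
end
```
Callers pass `perform_spam_check: true` explicitly, as shown in the GraphQL API and Web UI guides.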
# GraphQL API spam protection and CAPTCHA support
If the model can be modified via the GraphQL API, you must also add support to all of the
relevant GraphQL mutations which may modify spammable or spam-related attributes. This
definitely includes the `Create` and `Update` mutations, but may also include others, such as those
related to changing a model's confidential/public flag.
## Add support to the GraphQL mutations
The main steps are:
1. Use `include Mutations::SpamProtection` in your mutation.
1. Pass `perform_spam_check: true` to the Update Service class constructor.
It is set to `true` by default in the Create Service.
1. After you create or update the `Spammable` model instance, call `#check_spam_action_response!`
   and pass it the model instance. This call:
   1. Performs the necessary spam checks on the model.
   1. If spam is detected:
      - Raises a `GraphQL::ExecutionError` exception.
      - Includes the relevant information added as error fields to the response via the `extensions:` parameter.
For more details on these fields, refer to the section in the GraphQL API documentation on
[Resolve mutations detected as spam](../../api/graphql/_index.md#resolve-mutations-detected-as-spam).
{{< alert type="note" >}}
If you use the standard ApolloLink or Axios interceptor CAPTCHA support described
above, you can ignore the field details, because they are handled
automatically. They become relevant if you attempt to use the GraphQL API directly to
process a failed check for potential spam, and resubmit the request with a solved
CAPTCHA response.
{{< /alert >}}
For example:
```ruby
module Mutations
module Widgets
class Create < BaseMutation
include Mutations::SpamProtection
def resolve(args)
service_response = ::Widgets::CreateService.new(
project: project,
current_user: current_user,
params: args
).execute
widget = service_response.payload[:widget]
check_spam_action_response!(widget)
# If possible spam was detected, an exception would have been thrown by
# `#check_spam_action_response!`, so the normal resolve return logic can follow below.
end
end
end
end
```
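An Update mutation follows the same shape, except the flag is passed explicitly to the Update service. The
following is a sketch only; the `::Widgets::UpdateService` call, the `authorized_find!` lookup, and the returned
fields are illustrative assumptions:
```ruby
module Mutations
  module Widgets
    class Update < BaseMutation
      include Mutations::SpamProtection

      def resolve(id:, **args)
        widget = authorized_find!(id: id)

        ::Widgets::UpdateService.new(
          project: widget.project,
          current_user: current_user,
          params: args,
          perform_spam_check: true
        ).execute(widget)

        check_spam_action_response!(widget)

        # As in the Create example, execution only reaches this point if no
        # possible spam was detected.
        { widget: widget, errors: widget.errors.full_messages }
      end
    end
  end
end
```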
Refer to the [Exploratory Testing](exploratory_testing.md) section for instructions on how to test
CAPTCHA behavior in the GraphQL API.
# Web UI spam protection and CAPTCHA support
The approach for adding spam protection and CAPTCHA support to a new UI area of the GitLab application
depends upon how the existing code is implemented.
## Supported scenarios of request submissions
Three different scenarios are supported. Two are used with JavaScript XHR/Fetch requests
for either Apollo or Axios, and one is used only with standard HTML form requests:
1. A JavaScript-based submission (possibly via Vue):
   1. Using Apollo (GraphQL API via Fetch/XHR request).
   1. Using Axios (REST API via Fetch/XHR request).
1. A standard HTML form submission (HTML request).
Some parts of the implementation depend upon which of these scenarios you must support.
## Implementation tasks specific to JavaScript XHR/Fetch requests
Two approaches are fully supported:
1. Apollo, using the GraphQL API.
1. Axios, using either the GraphQL or REST API.
The spam and CAPTCHA-related data communication between the frontend and backend requires no
additional fields to be added to the models. Instead, communication is handled:
- Through custom header values in the request.
- Through top-level JSON fields in the response.
The spam and CAPTCHA-related logic is also cleanly abstracted into reusable modules and helper methods
which can wrap existing logic, and only alter the existing flow if potential spam
is detected or a CAPTCHA display is needed. This approach allows the spam and CAPTCHA
support to be added to new areas of the application with minimal changes to
existing logic. In the case of the frontend, potentially **zero** changes are needed!
On the frontend, this is handled abstractly and transparently using `ApolloLink` for Apollo, and an
Axios interceptor for Axios. The CAPTCHA display is handled by a standard GitLab UI / Pajamas modal
component. You can find all the relevant frontend code under `app/assets/javascripts/captcha`.
However, even though the actual handling of the request interception and
modal is transparent, without any mandatory changes to the involved JavaScript or Vue components
for the form or page, changes in request or error handling may be required. Changes are needed
because the existing behavior may not work correctly: for example, if a failed or canceled
CAPTCHA display interrupts the standard request flow or UI updates.
Careful exploratory testing of all scenarios is important to uncover any potential
problems.
This sequence diagram illustrates the standard CAPTCHA flow for JavaScript XHR/Fetch requests
on the frontend:
```mermaid
sequenceDiagram
participant U as User
participant V as Vue/JS Application
participant A as ApolloLink or Axios Interceptor
participant G as GitLab API
U->>V: Save model
V->>A: Request
A->>G: Request
G--xA: Response with error and spam/CAPTCHA related fields
A->>U: CAPTCHA presented in modal
U->>A: CAPTCHA solved to obtain valid CAPTCHA response
A->>G: Request with valid CAPTCHA response and SpamLog ID in headers
G-->>A: Response with success
A-->>V: Response with success
```
The backend is also cleanly abstracted via mixin modules and helper methods. The three main
changes required to the relevant backend controller actions (typically just `create`/`update`) are:
1. Pass `perform_spam_check: true` to the Update Service class constructor.
It is set to `true` by default in the Create Service.
1. If the spam check indicates the changes to the model are possibly spam, then:
- An error is added to the model.
- The `needs_recaptcha` property on the model is set to true.
1. Wrap the existing controller action return value (rendering or redirecting) in a block passed to
a `#with_captcha_check_json_format` helper method, which transparently handles:
   1. Checking whether CAPTCHA is enabled, and if so, proceeding with the next step.
   1. Checking whether the model contains an error and the `needs_recaptcha` flag is `true`:
      - If yes: Add the appropriate spam or CAPTCHA fields to the JSON response, and return
        a `409 - Conflict` HTTP status code.
      - If no (if CAPTCHA is disabled or if no spam was detected): The standard request return
        logic passed in the block is run.
Thanks to the abstractions, it's more straightforward to implement than it is to explain it.
You don't have to worry much about the hidden details!
Make these changes:
## Add support to the controller actions
If the feature's frontend submits directly to controller actions, and does not only use the GraphQL
API, then you must add support to the appropriate controllers.
The action methods may be directly in the controller class, or they may be abstracted
to a module included in the controller class. Our example uses a module. The
only difference when directly modifying the controller:
`extend ActiveSupport::Concern` is not required.
```ruby
module WidgetsActions
# NOTE: This `extend` probably already exists, but it MUST be moved to occur BEFORE all
# `include` statements. Otherwise, confusing bugs may occur in which the methods
# in the included modules cannot be found.
extend ActiveSupport::Concern
include SpammableActions::CaptchaCheck::JsonFormatActionsSupport
def create
widget = ::Widgets::CreateService.new(
project: project,
current_user: current_user,
params: params
).execute
respond_to do |format|
format.json do
with_captcha_check_json_format do
# The action's existing `render json: ...` (or wrapper method) and related logic. Possibly
# including different rendering cases if the model is valid or not. It's all wrapped here
# within the `with_captcha_check_json_format` block. For example:
if widget.valid?
render json: serializer.represent(widget)
else
render json: { errors: widget.errors.full_messages }, status: :unprocessable_entity
end
end
end
end
end
end
```
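An `update` action is handled the same way. The following sketch assumes the `widget` has already been loaded
and reuses the same hypothetical service and serializer from the example above:
```ruby
def update
  # Existing logic to find the `widget` model instance...

  ::Widgets::UpdateService.new(
    project: project,
    current_user: current_user,
    params: params,
    perform_spam_check: true
  ).execute(widget)

  respond_to do |format|
    format.json do
      with_captcha_check_json_format do
        if widget.valid?
          render json: serializer.represent(widget)
        else
          render json: { errors: widget.errors.full_messages }, status: :unprocessable_entity
        end
      end
    end
  end
end
```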
## Implementation tasks specific to HTML form requests
Some areas of the application have not been converted to use the GraphQL API via
a JavaScript client, but instead rely on standard Rails HAML form submissions via an
`HTML` MIME type request. In these areas, the action returns a pre-rendered HTML (HAML) page
as the response body. Unfortunately, in this case
[it is not possible](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/66427#note_636989204)
to use any of the JavaScript-based frontend support as described above. Instead we must use an
alternate approach which handles the rendering of the CAPTCHA form via a HAML template.
Everything is still cleanly abstracted, and the implementation in the backend
controllers is virtually identical to the JavaScript/JSON based approach. Replace the
word `JSON` with `HTML` (using the appropriate case) in the module names and helper methods.
The action methods might be directly in the controller, or they
might be in a module. In this example, they are directly in the
controller, and we also do an `update` method instead of `create`:
```ruby
class WidgetsController < ApplicationController
include SpammableActions::CaptchaCheck::HtmlFormatActionsSupport
def update
# Existing logic to find the `widget` model instance...
::Widgets::UpdateService.new(
project: project,
current_user: current_user,
params: params,
perform_spam_check: true
).execute(widget)
respond_to do |format|
format.html do
if widget.valid?
# NOTE: `spammable_path` is required by the `SpammableActions::AkismetMarkAsSpamAction`
# module, and it should have already been implemented on this controller according to
# the instructions above. It is reused here to avoid duplicating the route helper call.
redirect_to spammable_path
else
# If we got here, there were errors on the model instance - from a failed spam check
# and/or other validation errors on the model. Either way, we'll re-render the form,
# and if a CAPTCHA render is necessary, it will be automatically handled by
# `with_captcha_check_html_format`
with_captcha_check_html_format { render :edit }
end
end
end
end
end
```
# Contribute to UX design
## UX Design
These instructions are specifically for those wanting to make UX design contributions to GitLab.
The UX department at GitLab uses [Figma](https://www.figma.com/) for all of its designs, and you can see our [Design Repository documentation](https://gitlab.com/gitlab-org/gitlab-design/blob/master/README.md#getting-started) for details on working with our files.
You may leverage the [Pajamas UI Kit](https://www.figma.com/community/file/781156790581391771/component-library/component-library) in Figma to create mockups for your proposals. However, we will also gladly accept handmade drawings and sketches, wireframes, manipulated DOM screenshots, or prototypes. You can find design resources documentation in our [Design System](https://design.gitlab.com/). Use it to understand where and when to use common design solutions.
## Contributing to Pajamas
To contribute to [Pajamas design system](https://design.gitlab.com/) and the [UI kit](https://www.figma.com/community/file/781156790581391771/component-library), follow the [contribution guidelines](https://design.gitlab.com/get-started/contributing/) documented in the handbook. While the instructions are code-focused, they will help you understand the overall process of contributing.
## Contributing to other issues
1. Review the list of available UX issues that are currently [seeking community contribution](https://gitlab.com/groups/gitlab-org/-/issues/?sort=weight&state=opened&label_name%5B%5D=UX&label_name%5B%5D=Seeking%20community%20contributions&first_page_size=100).
1. Find an issue that does not have an Assignee to ensure someone else is not working on a solution. Add the `~"workflow::design"` and `~"Community contribution"` labels and mention `@gitlab-com/gitlab-ux/reviewers` to request they assign the issue to you.
1. Add your design proposal to the issue description/[design management](../../user/project/issues/design_management.md) section. Remember to keep the scope of the proposal/change small following our [MVCs guidelines](https://handbook.gitlab.com/handbook/values/#minimal-viable-change-mvc).
1. If you have any questions or are ready for a review of your proposal, mention `@gitlab-com/gitlab-ux/reviewers` in a comment to make your request.
# Audit event development guidelines
This guide provides an overview of how audit events work, and how to instrument
new audit events.
## What are audit events?
Audit events are a tool for GitLab owners and administrators to view records of important
actions performed across the application.
## What should not be audit events?
While any events could trigger an audit event, not all events should. In general, events that are not good candidates for audit events are:
- Not attributable to one specific user.
- Not of specific interest to an administrator or owner persona.
- Tracking information for product feature adoption.
- Covered in the direction page's discussion on [what is not planned](https://about.gitlab.com/direction/govern/compliance/audit-events/#what-is-not-planned-right-now).
If you have any questions, reach out to `@gitlab-org/govern/compliance` to see if an audit event, or some other approach, may be best for your event.
## Audit event schemas
To instrument an audit event, the following attributes should be provided:
| Attribute | Type | Required? | Description |
|:-------------|:------------------------------------|:----------|:------------------------------------------------------------------|
| `name` | String | false | Action name to be audited. Represents the [type of the event](#event-type-definitions). Used for error tracking |
| `author` | User | true | User who authors the change. Can be an [internal user](../../administration/internal_users.md). For example, [dormant project deletion](../../administration/dormant_project_deletion.md) audit events are authored by `GitLab-Admin-Bot`. |
| `scope` | User, Project, Group, or Instance | true | Scope which the audit event belongs to |
| `target` | Object | true | Target object being audited |
| `message` | String | true | Message describing the action ([not translated](#i18n-and-the-audit-event-message-attribute)) |
| `created_at` | DateTime | false | The time when the action occurred. Defaults to `DateTime.current` |
## How to instrument new audit events
1. Create a [YAML type definition](#add-a-new-audit-event-type) for the new audit event.
1. Call `Gitlab::Audit::Auditor.audit`, passing an action block.
The following ways of instrumenting audit events are deprecated:
- Create a new class in `ee/lib/ee/audit/` and extend `AuditEventService`
- Call `AuditEventService` after a successful action
With the `Gitlab::Audit::Auditor` service, we can instrument audit events in two ways:
- Using a block for multiple events.
- Using a standard method call for single events.
### Using block to record multiple events
You can use this method when events are emitted deep in the call stack.
For example, we can record multiple audit events when the user updates a merge
request approval rule. As part of this user flow, we would like to audit changes
to both approvers and approval groups. In the initiating service
(for example, `MergeRequestRuleUpdateService`), we can wrap the `execute` call as follows:
```ruby
# in the initiating service
audit_context = {
name: 'update_merge_approval_rule',
author: current_user,
scope: project_alpha,
target: merge_approval_rule,
message: 'Attempted to update an approval rule'
}
::Gitlab::Audit::Auditor.audit(audit_context) do
service.execute
end
```
In the model (for example, `ApprovalProjectRule`), we can push audit events on model
callbacks (for example, `after_save` or `after_add`).
```ruby
# in the model
include Auditable
def audit_add(model)
push_audit_event('Added an approver on Security rule')
end
def audit_remove(model)
push_audit_event('Removed an approver on Security rule')
end
```
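For illustration, these hooks could be wired up with association callbacks. This is a sketch only; the
association name and the use of `has_and_belongs_to_many` are assumptions:
```ruby
# in the model
has_and_belongs_to_many :users,
  after_add: :audit_add,      # receives the added record
  after_remove: :audit_remove # receives the removed record
```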
This method does not support actions that are asynchronous, or
span across multiple processes (for example, background jobs).
### Using standard method call to record single event
This method allows recording a single audit event and involves fewer moving parts.
```ruby
if merge_approval_rule.save
audit_context = {
name: 'create_merge_approval_rule',
author: current_user,
scope: project_alpha,
target: merge_approval_rule,
message: 'Created a new approval rule',
created_at: DateTime.current # Useful for pre-dating an audit event when created asynchronously.
}
::Gitlab::Audit::Auditor.audit(audit_context)
end
```
### Data volume considerations
Because every audit event is persisted to the database, consider the amount of data we expect to generate, and the rate of generation, for new
audit events. For new audit events that produce a lot of data in the database, consider adding a
[streaming-only audit event](#event-streaming) instead. If you have questions about this, feel free to ping
`@gitlab-org/govern/compliance/backend` in an issue or merge request.
## Audit event instrumentation flows
The two ways we can instrument audit events have different flows.
### Using block to record multiple events
We wrap the operation block in a `Gitlab::Audit::Auditor`, which captures the
initial audit context (that is, the `author`, `scope`, and `target` objects) available
at the time the operation is initiated.
Extra instrumentation is required in the interacted classes in the chain, using the
`Auditable` mixin, to add audit events to the audit event queue via `Gitlab::Audit::EventQueue`.
The `EventQueue` is stored in a local thread via `SafeRequestStore` and then later
extracted when we record an audit event in `Gitlab::Audit::Auditor`.
```plantuml
skinparam shadowing false
skinparam BoxPadding 10
skinparam ParticipantPadding 20
participant "Instrumented Class" as A
participant "Audit::Auditor" as A1 #LightBlue
participant "Audit::EventQueue" as B #LightBlue
participant "Interacted Class" as C
participant "AuditEvent" as D
A->A1: audit <b>{ block }</b>
activate A1
A1->B: begin!
A1->C: <b>block.call</b>
activate A1 #FFBBBB
activate C
C-->B: push [ message ]
C-->A1: true
deactivate A1
deactivate C
A1->B: read
activate A1 #FFBBBB
activate B
B-->A1: [ messages ]
deactivate B
A1->D: bulk_insert!
deactivate A1
A1->B: end!
A1-->A:
deactivate A1
```
### Using standard method call to record single event
This method has a more straightforward flow, and does not rely on the `EventQueue`
or a local thread.
```plantuml
skinparam shadowing false
skinparam BoxPadding 10
skinparam ParticipantPadding 20
participant "Instrumented Class" as A
participant "Audit::Auditor" as B #LightBlue
participant "AuditEvent" as C
A->B: audit
activate B
B->C: bulk_insert!
B-->A:
deactivate B
```
In addition to recording to the database, we also write these events to
[a log file](../../administration/logs/_index.md#audit_jsonlog).
## Event type definitions
{{< history >}}
- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/367847) in GitLab 15.4.
{{< /history >}}
All new audit events must have a type definition stored in `config/audit_events/types/` or `ee/config/audit_events/types/` that contains a single source of truth for every auditable event in GitLab.
### Add a new audit event type
To add a new audit event type:
1. Create the YAML definition. You can either:
- Use the `bin/audit-event-type` CLI to create the YAML definition automatically.
- Perform manual steps to create a new file in `config/audit_events/types/` with the filename matching the name of the event type. For example,
a definition for the event type triggered when a user is added to a project might be stored in `config/audit_events/types/project_add_user.yml`.
1. Add contents to the file that conform to the [schema](#schema) defined in `config/audit_events/types/type_schema.json`.
1. Ensure that all calls to `Gitlab::Audit::Auditor` use the `name` defined in your file.
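For example, if you created the hypothetical `config/audit_events/types/project_add_user.yml` definition
mentioned above, the corresponding instrumentation call might look like this (the context values are
illustrative):
```ruby
audit_context = {
  name: 'project_add_user', # must match the YAML definition's `name` and filename
  author: current_user,
  scope: project,
  target: user,
  message: "Added user #{user.username} to the project"
}

::Gitlab::Audit::Auditor.audit(audit_context)
```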
### Schema
| Field | Required | Description |
| ----- | -------- |--------------|
| `name` | yes | Unique, lowercase and underscored name describing the type of event. Must match the filename. |
| `description` | yes | Human-readable description of how this event is triggered |
| `group` | yes | Name of the group that introduced this audit event. For example, `manage::compliance` |
| `introduced_by_issue` | yes | Issue URL that proposed the addition of this type |
| `introduced_by_mr` | yes | MR URL that added this new type |
| `milestone` | yes | Milestone in which this type was added |
| `saved_to_database` | yes | Indicate whether to persist events to database and JSON logs |
| `streamed` | yes | Indicate that events should be streamed to external services (if configured) |
| `scope` | yes | List of scopes that this audit event type is available for. Should be an Array containing one or more of `Project`, `User`, `Group` or `Instance` |
### Generate documentation
Audit event types documentation is automatically generated and [published](../../user/compliance/audit_event_types.md)
to the GitLab documentation site.
If you add a new audit event type, run the
[`gitlab:audit_event_types:compile_docs` Rake task](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/tasks/gitlab/audit_event_types/audit_event_types.rake)
to update the documentation:
```shell
bundle exec rake gitlab:audit_event_types:compile_docs
```
Run the [`gitlab:audit_event_types:check_docs` Rake task](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/tasks/gitlab/audit_event_types/audit_event_types.rake)
to check if the documentation is up-to-date:
```shell
bundle exec rake gitlab:audit_event_types:check_docs
```
## Event streaming
All events where the entity is a `Group` or `Project` are recorded in the audit log, and also streamed to one or more
[event streaming destinations](../../administration/compliance/audit_event_streaming.md). When the entity is a:
- `Group`, events are streamed to the group's root ancestor's event streaming destinations.
- `Project`, events are streamed to the project's root ancestor's event streaming destinations.
You can add streaming-only events that are not stored in the GitLab database. Streaming-only events are primarily intended to be used for actions that generate
a large amount of data. See [this merge request](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/76719/diffs#d56e47632f0384722d411ed3ab5b15e947bd2265_26_36)
for an example.
This feature is under heavy development. Follow the [parent epic](https://gitlab.com/groups/gitlab-org/-/epics/5925) for updates on feature
development.
### I18N and the audit event `:message` attribute
We intentionally do not translate audit event messages because translated messages would be saved in the database and served to users, regardless of their locale settings.
For example, this could mean that we use the locale for the authenticated user to record an audit event message and stream the message to an external streaming
destination in the wrong language for that destination. Users could find that confusing.
|
---
stage: Software Supply Chain Security
group: Compliance
info: Any user with at least the Maintainer role can merge updates to this content.
For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Audit event development guidelines
breadcrumbs:
- doc
- development
- audit_event_guide
---
---
stage: Create
group: Source Code
info: To determine the technical writer assigned to the Stage/Group associated with
this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: GitLab Shell development guidelines
---
GitLab Shell handles Git SSH sessions for GitLab and modifies the list of authorized keys.
GitLab Shell is not a Unix shell nor a replacement for Bash or Zsh.
GitLab supports Git LFS authentication through SSH.
## Requirements
GitLab Shell is written in Go, and needs a Go compiler to build. It still requires
Ruby to build and test, but not to run.
GitLab Shell runs on `port 22` on a Linux package installation. To use a regular SSH
service, configure it on an alternative port.
Download and install the [current version of Go](https://go.dev/dl/).
We follow the [Go Release Policy](https://go.dev/doc/devel/release#policy)
and support:
- The current stable version.
- The previous two major versions.
### Versions
The two version files relevant to GitLab Shell:
- [Stable version](https://gitlab.com/gitlab-org/gitlab-shell/-/blob/main/VERSION)
- [Version deployed in GitLab SaaS](https://gitlab.com/gitlab-org/gitlab/-/blob/master/GITLAB_SHELL_VERSION)
GitLab team members can also monitor the `#announcements` internal Slack channel.
## How GitLab Shell works
When you access the GitLab server over SSH, GitLab Shell:
1. Limits you to predefined Git commands (`git push`, `git pull`, `git fetch`).
1. Calls the GitLab Rails API to check if you are authorized, and what Gitaly server your repository is on.
1. Copies data back and forth between the SSH client and the Gitaly server.
If you access a GitLab server over HTTP(S) you end up in [`gitlab-workhorse`](../workhorse/_index.md).
### `git pull` over SSH
```mermaid
%%{init: { "fontFamily": "GitLab Sans" }}%%
graph LR
A[Git pull] --> |via SSH| B[gitlab-shell]
B -->|API call| C[gitlab-rails<br>authorization]
C -->|accept or decline| D[Gitaly session]
```
### `git push` over SSH
The `git push` command is not performed until after `gitlab-rails` accepts the push:
```mermaid
%%{init: { "fontFamily": "GitLab Sans" }}%%
graph LR
subgraph User initiates
A[Git push] -->|via SSH| B[gitlab-shell]
end
subgraph Gitaly
B -->|establish Gitaly session| C[gitlab-shell pre-receive hook]
C -->|API auth call| D[Gitlab-rails]
D --> E[accept or decline push]
end
```
[Full feature list](features.md)
### Modifies `authorized_keys`
GitLab Shell modifies the `authorized_keys` file on the GitLab server.
## Contribute to GitLab Shell
To contribute to GitLab Shell:
1. Check that GitLab API access and Redis can be reached through the internal API: `make check`
1. Compile the `gitlab-shell` binaries, placing them into `bin/`: `make compile`
1. Run `make install` to build the `gitlab-shell` binaries and install them onto the file system.
The default location is `/usr/local`. To change it, set the `PREFIX` and `DESTDIR` environment variables.
1. To install GitLab from source on a single machine, run `make setup`.
It compiles the GitLab Shell binaries, and ensures that various paths on the file system
exist with the correct permissions. Do not run this command unless your installation method
documentation instructs you to.
For more information, see
[CONTRIBUTING.md](https://gitlab.com/gitlab-org/gitlab-shell/-/blob/main/CONTRIBUTING.md).
### Run tests
When contributing, run tests:
1. Run tests with `bundle install` and `make test`.
1. Run Gofmt: `make verify`
1. Run both test and verify (the default Makefile target):
```shell
bundle install
make validate
```
1. If needed, configure Gitaly.
### Configure Gitaly for local testing
Some tests need a Gitaly server. The
[`docker-compose.yml`](https://gitlab.com/gitlab-org/gitlab-shell/-/blob/main/docker-compose.yml) file runs Gitaly on port 8075.
To tell the tests where Gitaly is, set `GITALY_CONNECTION_INFO`:
```shell
export GITALY_CONNECTION_INFO='{"address": "tcp://localhost:8075", "storage": "default"}'
make test
```
If no `GITALY_CONNECTION_INFO` is set, the test suite still runs, but any
tests requiring Gitaly are skipped. The tests always run in the CI environment.
## Rate limiting
GitLab Shell performs rate-limiting by user account and project for Git operations.
GitLab Shell accepts Git operation requests and then makes a call to the Rails
rate-limiter, backed by Redis. If the `user + project` exceeds the rate limit,
GitLab Shell drops further connection requests for that `user + project`.
The rate-limiter is applied at the Git command (plumbing) level. Each command has
a rate limit of 600 per minute. For example, `git push` has 600 per minute, and
`git pull` has another 600 per minute.
Because they are using the same plumbing command, `git-upload-pack`, `git pull`,
and `git clone` are in effect the same command for the purposes of rate-limiting.
Gitaly also has a rate-limiter in place, but calls are never made to Gitaly if
the rate limit is exceeded in GitLab Shell (Rails).
## Logs in GitLab Shell
In general, you can determine the structure, but not content, of a GitLab Shell
or `gitlab-sshd` session by inspecting the logs. Some guidelines:
- We use [`gitlab.com/gitlab-org/labkit/log`](https://pkg.go.dev/gitlab.com/gitlab-org/labkit/log)
for logging.
- Always include a correlation ID.
- Log messages should be invariant and unique. Include accessory information in
fields, using `log.WithField`, `log.WithFields`, or `log.WithError`.
- Log both success cases and error cases.
- Logging too much is better than not logging enough. If a message seems too
verbose, consider reducing the log level before removing the message.
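For example, here is a minimal Go sketch of these guidelines using `labkit`. The handler, field names, and messages are illustrative only and are not taken from the `gitlab-shell` source:

```go
package main

import (
	"context"
	"errors"

	"gitlab.com/gitlab-org/labkit/correlation"
	"gitlab.com/gitlab-org/labkit/log"
)

// handleUploadPack is a hypothetical handler used only to illustrate the
// logging guidelines above.
func handleUploadPack(ctx context.Context, project string, run func() error) error {
	// Accessory information goes into fields; the messages stay invariant.
	fields := log.Fields{
		"correlation_id": correlation.ExtractFromContext(ctx),
		"project":        project,
	}

	log.WithFields(fields).Info("upload-pack: processing request")

	if err := run(); err != nil {
		// Log the error case, attaching the error through WithError.
		log.WithFields(fields).WithError(err).Error("upload-pack: request failed")
		return err
	}

	// Log the success case as well.
	log.WithFields(fields).Info("upload-pack: request finished")
	return nil
}

func main() {
	ctx := context.Background()
	_ = handleUploadPack(ctx, "group/project", func() error { return nil })
	_ = handleUploadPack(ctx, "group/project", func() error { return errors.New("example failure") })
}
```

Keeping the message invariant and moving the variable data into fields makes log entries easy to search and aggregate.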
## GitLab SaaS
A diagram of the flow of `gitlab-shell` on GitLab.com:
```mermaid
%%{init: { "fontFamily": "GitLab Sans" }}%%
graph LR
a2 --> b2
a2 --> b3
a2 --> b4
b2 --> c1
b3 --> c1
b4 --> c1
c2 --> d1
c2 --> d2
c2 --> d3
d1 --> e1
d2 --> e1
d3 --> e1
a1[Cloudflare] --> a2[TCP<br/> load balancer]
e1[Git]
subgraph HAProxy Fleet
b2[HAProxy]
b3[HAProxy]
b4[HAProxy]
end
subgraph GKE
c1[Internal TCP<br/> load balancer<br/>port 2222] --> c2[GitLab-shell<br/> pods]
end
subgraph Gitaly
d1[Gitaly]
d2[Gitaly]
d3[Gitaly]
end
```
## GitLab Shell architecture
```mermaid
%%{init: { "fontFamily": "GitLab Sans" }}%%
sequenceDiagram
participant Git on client
participant SSH server
participant AuthorizedKeysCommand
participant GitLab Shell
participant Rails
participant Gitaly
participant Git on server
Note left of Git on client: git fetch
Git on client->>+SSH server: ssh git fetch-pack request
SSH server->>+AuthorizedKeysCommand: gitlab-shell-authorized-keys-check git AAAA...
AuthorizedKeysCommand->>+Rails: GET /internal/api/authorized_keys?key=AAAA...
Note right of Rails: Lookup key ID
Rails-->>-AuthorizedKeysCommand: 200 OK, command="gitlab-shell upload-pack key_id=1"
AuthorizedKeysCommand-->>-SSH server: command="gitlab-shell upload-pack key_id=1"
SSH server->>+GitLab Shell: gitlab-shell upload-pack key_id=1
GitLab Shell->>+Rails: GET /internal/api/allowed?action=upload_pack&key_id=1
Note right of Rails: Auth check
Rails-->>-GitLab Shell: 200 OK, { gitaly: ... }
GitLab Shell->>+Gitaly: SSHService.SSHUploadPack request
Gitaly->>+Git on server: git upload-pack request
Note over Git on client,Git on server: Bidirectional communication between Git client and server
Git on server-->>-Gitaly: git upload-pack response
Gitaly -->>-GitLab Shell: SSHService.SSHUploadPack response
GitLab Shell-->>-SSH server: gitlab-shell upload-pack response
SSH server-->>-Git on client: ssh git fetch-pack response
```
## Related topics
- [LICENSE](https://gitlab.com/gitlab-org/gitlab-shell/-/blob/main/LICENSE)
- [Processes](process.md)
- [Using the GitLab Shell chart](https://docs.gitlab.com/charts/charts/gitlab/gitlab-shell/)
---
stage: Create
group: Source Code
info: Any user with at least the Maintainer role can merge updates to this content.
For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: GitLab Shell feature list
---
## Discover
Allows users to identify themselves on an instance with SSH. The command helps to
confirm quickly whether a user has SSH access to the instance:
```shell
ssh git@<hostname>
PTY allocation request failed on channel 0
Welcome to GitLab, @username!
Connection to staging.gitlab.com closed.
```
When permission is denied, it returns:
```shell
ssh git@<hostname>
git@<hostname>: Permission denied (publickey).
```
## Git operations
GitLab Shell provides support for Git operations over SSH by processing
`git-upload-pack`, `git-receive-pack` and `git-upload-archive` SSH commands.
It limits the set of commands to predefined Git commands:
- `git archive`
- `git clone`
- `git pull`
- `git push`
## Generate new 2FA recovery codes
Enables users to
[generate new 2FA recovery codes](../../user/profile/account/two_factor_authentication_troubleshooting.md#generate-new-recovery-codes-using-ssh):
```shell
$ ssh git@<hostname> 2fa_recovery_codes
Are you sure you want to generate new two-factor recovery codes?
Any existing recovery codes you saved will be invalidated. (yes/no)
yes
Your two-factor authentication recovery codes are:
...
```
## Verify 2FA OTP
Allows users to verify their
[2FA one-time password (OTP)](../../security/two_factor_authentication.md#2fa-for-git-over-ssh-operations):
```shell
$ ssh git@<hostname> 2fa_verify
OTP: 347419
OTP validation failed.
```
## LFS authentication
Enables users to generate credentials for LFS authentication:
```shell
$ ssh git@<hostname> git-lfs-authenticate <project-path> <upload/download>
{"header":{"Authorization":"Basic ..."},"href":"https://gitlab.com/user/project.git/info/lfs","expires_in":7200}
```
## Personal access token
Enables users to use personal access tokens with SSH:
```shell
$ ssh git@<hostname> personal_access_token <name> <scope1[,scope2,...]> [ttl_days]
Token: glpat-...
Scopes: api
Expires: 2022-02-05
```
### Configuration options
Administrators can control PAT generation with SSH.
To configure PAT settings in GitLab Shell:
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
1. Edit the `/etc/gitlab/gitlab.rb` file.
1. Add or modify the following configuration:
```ruby
gitlab_shell['pat'] = { enabled: true, allowed_scopes: [] }
```
- `enabled`: Set to `true` to enable PAT generation using SSH, or `false` to disable it.
- `allowed_scopes`: An array of scopes allowed for PATs generated with SSH.
Leave empty (`[]`) to allow all scopes.
1. Save the file and [Restart GitLab](../../administration/restart_gitlab.md).
{{< /tab >}}
{{< tab title="Helm chart (Kubernetes)" >}}
1. Edit the `values.yaml` file:
```yaml
gitlab:
  gitlab-shell:
    config:
      pat:
        enabled: true
        allowedScopes: []
```
- `enabled`: Set to `true` to enable PAT generation using SSH, or `false` to disable it.
- `allowedScopes`: An array of scopes allowed for PATs generated with SSH.
Leave empty (`[]`) to allow all scopes.
1. Save the file and apply the new values:
```shell
helm upgrade -f gitlab_values.yaml gitlab gitlab/gitlab
```
{{< /tab >}}
{{< tab title="Docker" >}}
1. Edit the `docker-compose.yaml` file:
```yaml
services:
  gitlab:
    environment:
      GITLAB_OMNIBUS_CONFIG: |
        gitlab_shell['pat'] = { enabled: true, allowed_scopes: [] }
```
- `enabled`: Set to `true` to enable PAT generation using SSH, or `false` to disable it.
- `allowed_scopes`: An array of scopes allowed for PATs generated with SSH.
Leave empty (`[]`) to allow all scopes.
1. Save the file and restart GitLab and its services:
```shell
docker compose up -d
```
{{< /tab >}}
{{< tab title="Self-compiled (source)" >}}
1. Edit the `/home/git/gitlab-shell/config.yml` file:
```yaml
pat:
  enabled: true
  allowed_scopes: []
```
- `enabled`: Set to `true` to enable PAT generation using SSH, or `false` to disable it.
- `allowed_scopes`: An array of scopes allowed for PATs generated with SSH.
Leave empty (`[]`) to allow all scopes.
1. Save the file and restart GitLab Shell:
```shell
# For systems running systemd
sudo systemctl restart gitlab-shell.target
# For systems running SysV init
sudo service gitlab-shell restart
```
{{< /tab >}}
{{< /tabs >}}
{{< alert type="note" >}}
These settings only affect PAT generation with SSH and do not
impact PATs created through the web interface.
{{< /alert >}}
---
stage: Create
group: Source Code
info: To determine the technical writer assigned to the Stage/Group associated with
this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: '`gitlab-sshd` in GitLab Shell'
---
`gitlab-sshd` is a binary in [`gitlab-shell`](https://gitlab.com/gitlab-org/gitlab-shell)
which runs as a persistent SSH daemon. It is intended to replace `OpenSSH` on GitLab SaaS,
and eventually in other cloud-native environments. Instead of running an `sshd` process,
we run a `gitlab-sshd` process that does the same job, in a more focused manner:
```mermaid
%%{init: { "fontFamily": "GitLab Sans" }}%%
sequenceDiagram
participant Git on client
participant GitLab SSHD
participant Rails
participant Gitaly
participant Git on server
Note left of Git on client: git fetch
Git on client->>+GitLab SSHD: ssh git fetch-pack request
GitLab SSHD->>+Rails: GET /internal/api/authorized_keys?key=AAAA...
Note right of Rails: Lookup key ID
Rails-->>-GitLab SSHD: 200 OK, command="gitlab-shell upload-pack key_id=1"
GitLab SSHD->>+Rails: GET /internal/api/allowed?action=upload_pack&key_id=1
Note right of Rails: Auth check
Rails-->>-GitLab SSHD: 200 OK, { gitaly: ... }
GitLab SSHD->>+Gitaly: SSHService.SSHUploadPack request
Gitaly->>+Git on server: git upload-pack request
Note over Git on client,Git on server: Bidirectional communication between Git client and server
Git on server-->>-Gitaly: git upload-pack response
Gitaly -->>-GitLab SSHD: SSHService.SSHUploadPack response
GitLab SSHD-->>-Git on client: ssh git fetch-pack response
```
---
stage: Create
group: Source Code
info: To determine the technical writer assigned to the Stage/Group associated with
this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Processes for GitLab Shell
---
## Releasing a new version
GitLab Shell is versioned by Git tags, and the version used by the Rails
application is stored in
[`GITLAB_SHELL_VERSION`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/GITLAB_SHELL_VERSION).
For each version, there is a raw version and a tag version:
- The raw version is the version number. For instance, `15.2.8`.
- The tag version is the raw version prefixed with `v`. For instance, `v15.2.8`.
To release a new version of GitLab Shell and have that version available to the
Rails application:
1. Create a merge request to update the [`CHANGELOG`](https://gitlab.com/gitlab-org/gitlab-shell/-/blob/main/CHANGELOG.md) with the
tag version and the [`VERSION`](https://gitlab.com/gitlab-org/gitlab-shell/-/blob/main/VERSION) file with the raw version.
1. Ask a maintainer to review and merge the merge request. If you're already a
maintainer, second maintainer review is not required.
1. Add a new Git tag with the tag version.
1. Update `GITLAB_SHELL_VERSION` in the Rails application to the raw version.
{{< alert type="note" >}}
This can be done as a separate merge request, or in a merge request
that uses the latest GitLab Shell changes.
{{< /alert >}}
## Security releases
GitLab Shell is included in the packages we create for GitLab. Each version of
GitLab specifies the version of GitLab Shell it uses in the `GITLAB_SHELL_VERSION`
file. Because of this specification, security fixes in GitLab Shell are tightly coupled to the
[GitLab patch release](https://handbook.gitlab.com/handbook/engineering/workflow/#security-issues) workflow.
For a security fix in GitLab Shell, two sets of merge requests are required:
1. The fix itself, in the `gitlab-org/security/gitlab-shell` repository and its
backports to the previous versions of GitLab Shell.
1. Merge requests to change the versions of GitLab Shell included in the GitLab
patch release, in the `gitlab-org/security/gitlab` repository.
The first step is to create a merge request with a fix targeting `main`
in `gitlab-org/security/gitlab-shell`. When the merge request is approved by maintainers,
create backports targeting the previous three versions of GitLab Shell. The stable
branches for those versions may not exist, so ask a maintainer to create
them. The stable branches must be created from the GitLab Shell tags or versions
used by the three previous GitLab releases.
To find out the GitLab Shell version used on a particular GitLab stable release,
run these commands, replacing `13-9-stable-ee` with the branch for the version you're interested in.
For example, these commands show the version used by GitLab `13.9`:
```shell
git fetch security 13-9-stable-ee
git show refs/remotes/security/13-9-stable-ee:GITLAB_SHELL_VERSION
```
Close to the GitLab patch release, a maintainer should merge the fix and backports,
and cut all the necessary GitLab Shell versions. This allows bumping the
`GITLAB_SHELL_VERSION` for `gitlab-org/security/gitlab`. The GitLab merge request
is handled by the general GitLab patch release process.
After the patch release is done, a GitLab Shell maintainer is responsible for
syncing tags and `main` to the `gitlab-org/gitlab-shell` repository.
---
stage: none
group: unassigned
info: Any user with at least the Maintainer role can merge updates to this content.
For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Managing Go versions
breadcrumbs:
- doc
- development
- go_guide
---
## Overview
All Go binaries, with the exception of
[GitLab Runner](https://gitlab.com/gitlab-org/gitlab-runner) and [Security Projects](https://gitlab.com/gitlab-org/security-products), are built in
projects managed by the [Distribution team](https://handbook.gitlab.com/handbook/product/categories/#distribution-group).
The [Omnibus GitLab](https://gitlab.com/gitlab-org/omnibus-gitlab) project creates a
single, monolithic operating system package containing all the binaries, while
the [Cloud-Native GitLab (CNG)](https://gitlab.com/gitlab-org/build/CNG) project
publishes a set of Docker images deployed and configured by Helm Charts or
the GitLab Operator.
## Testing against shipped Go versions
Testing matrices for all projects using Go must include the version shipped by Distribution. Check the Go version set by `GO_VERSION` for:
- [Linux package builds](https://gitlab.com/gitlab-org/gitlab-omnibus-builder/-/blob/master/docker/VERSIONS).
- [Cloud-Native GitLab (CNG)](https://gitlab.com/gitlab-org/build/cng/blob/master/ci_files/variables.yml).
## Supporting multiple Go versions
Individual Go projects might need to support multiple Go versions because:
- When a new version of Go is released, we should start integrating it into the CI pipelines to verify forward compatibility.
- To enable backports, we must support the versions of Go [shipped by Distribution](#testing-against-shipped-go-versions) in the latest 3 minor GitLab releases, excluding the active milestone.
## Updating Go version
We should always:
- Use the same Go version for Omnibus GitLab and Cloud Native GitLab.
- Use a [supported version](https://go.dev/doc/devel/release#policy).
- Use the most recent patch-level for that version to keep up with security fixes.
Changing the version affects every project being compiled, so it's important to
ensure that all projects have been updated to test against the new Go version
before changing the package builders to use it. Despite [Go's compatibility promise](https://go.dev/doc/go1compat),
changes between minor versions can expose bugs or cause problems in our projects.
### Version in `go.mod`
**Key Requirements:**
- Always use `0` as the patch version (for example, `go 1.23.0`, not `go 1.23.4`).
- Do not set a version newer than what is used in CNG and Omnibus, otherwise this will cause build failures.
The Go version in your `go.mod` affects all downstream projects.
When you specify a minimum Go version, any project that imports your package must use that version or newer.
This can create impossible situations for projects with different Go version constraints.
For example, if CNG uses Go 1.23.4 but your project declares `go 1.23.5` as the minimum required version, CNG will
fail to build your package.
Similarly, other projects importing your package will be forced to upgrade their Go version, which may not be feasible.
[See above](#testing-against-shipped-go-versions) to find out what versions are used in CNG and Omnibus.
From the [Go Modules Reference](https://go.dev/ref/mod#go-mod-file-go):
> The go directive sets the minimum version of Go required to use this module.
You don't need to set `go 1.24.0` to be compatible with Go 1.24.0.
Having it at `go 1.23.0` works fine.
Go 1.23.0 and any newer version will almost certainly build your package without issues thanks to the
[Go 1 compatibility promise](https://go.dev/doc/go1compat).
### Upgrade cadence
GitLab adopts major Go versions within eight months of their release
to ensure supported GitLab versions do not ship with an end-of-life
version of Go.
Minor upgrades are required if they patch security issues, fix bugs, or add
features requested by development teams and are approved by Product Management.
For more information, see:
- [The Go release cycle](https://go.dev/wiki/Go-Release-Cycle).
- [The Go release policy](https://go.dev/doc/devel/release#policy).
### Upgrade process
The upgrade process involves several key steps:
- [Track component updates and validation](#tracking-work).
- [Track component integration for release](#tracking-work).
- [Communication with stakeholders](#communication-plan).
#### Tracking work
1. Navigate to the [Build Architecture Configuration pipelines page](https://gitlab.com/gitlab-org/distribution/build-architecture/framework/configuration/-/pipelines).
1. Create a new pipeline for a dry run with these variables:
- Set `COMPONENT_UPGRADE` to `true`.
- Set `COMPONENT_NAME` to `golang.`
- Set `COMPONENT_VERSION` to the target upgrade version.
1. Run the pipeline.
1. Check for errors in the dry run pipeline. If any subscriber files throw errors because labels changed or directly responsible individuals are no
longer valid, contact the subscriber project and request they update their configuration.
1. After a successful dry-run pipeline, create another pipeline with these variables to create the upgrade epic and all associated issues:
- Set `COMPONENT_UPGRADE` to `true`.
- Set `COMPONENT_NAME` to `golang.`
- Set `COMPONENT_VERSION` to the target upgrade version.
- Set `EPIC_DRY_RUN` to `false`.
1. Run the pipeline.
#### Known dependencies using Go
The directly responsible individual for a Go upgrade must ensure all
necessary components get upgraded.
##### Prerequisites
These projects must be upgraded first and in the order they appear to allow
projects listed in the next section to build with the newer Go version.
| Component Name | Where to track work |
|----------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------|
| GitLab Runner | [Issue Tracker](https://gitlab.com/gitlab-org/gitlab-runner) |
| GitLab CI Images | [Issue Tracker](https://gitlab.com/gitlab-org/gitlab-build-images/-/issues) |
| GitLab Development Kit (GDK) | [Issue Tracker](https://gitlab.com/gitlab-org/gitlab-development-kit) |
##### Required for release approval
Major Go release versions require updates to each project listed below
to allow the version to flow into their build jobs. Each project must build
successfully before the actual build environments get updates.
| Component Name | Where to track work |
|----------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------|
| [Alertmanager](https://github.com/prometheus/alertmanager) | [Issue Tracker](https://gitlab.com/gitlab-org/gitlab/-/issues) |
| Docker Distribution Pruner | [Issue Tracker](https://gitlab.com/gitlab-org/docker-distribution-pruner) |
| Gitaly | [Issue Tracker](https://gitlab.com/gitlab-org/gitaly/-/issues) |
| GitLab Compose Kit | [Issuer Tracker](https://gitlab.com/gitlab-org/gitlab-compose-kit/-/issues) |
| GitLab container registry | [Issue Tracker](https://gitlab.com/gitlab-org/container-registry) |
| GitLab Elasticsearch Indexer | [Issue Tracker](https://gitlab.com/gitlab-org/gitlab-elasticsearch-indexer/-/issues) |
| GitLab Zoekt Indexer | [Issue Tracker](https://gitlab.com/gitlab-org/gitlab-zoekt-indexer/-/issues) |
| GitLab agent server for Kubernetes (KAS) | [Issue Tracker](https://gitlab.com/gitlab-org/cluster-integration/gitlab-agent/-/issues) |
| GitLab Pages | [Issue Tracker](https://gitlab.com/gitlab-org/gitlab-pages/-/issues) |
| GitLab Shell | [Issue Tracker](https://gitlab.com/gitlab-org/gitlab-shell/-/issues) |
| GitLab Workhorse | [Issue Tracker](https://gitlab.com/gitlab-org/gitlab/-/issues) |
| LabKit | [Issue Tracker](https://gitlab.com/gitlab-org/labkit/-/issues) |
| Spamcheck | [Issue Tracker](https://gitlab.com/gitlab-org/gl-security/security-engineering/security-automation/spam/spamcheck) |
| GitLab Workspaces Proxy | [Issue Tracker](https://gitlab.com/gitlab-org/remote-development/gitlab-workspaces-proxy) |
| Devfile Gem | [Issue Tracker](https://gitlab.com/gitlab-org/ruby/gems/devfile-gem/-/tree/main/ext?ref_type=heads) |
| GitLab Operator | [Issue Tracker](https://gitlab.com/gitlab-org/cloud-native/gitlab-operator) |
| [Node Exporter](https://github.com/prometheus/node_exporter) | [Issue Tracker](https://gitlab.com/gitlab-org/gitlab/-/issues) |
| [PgBouncer Exporter](https://github.com/prometheus-community/pgbouncer_exporter) | [Issue Tracker](https://gitlab.com/gitlab-org/gitlab/-/issues) |
| [Postgres Exporter](https://github.com/prometheus-community/postgres_exporter) | [Issue Tracker](https://gitlab.com/gitlab-org/gitlab/-/issues) |
| [Prometheus](https://github.com/prometheus/prometheus) | [Issue Tracker](https://gitlab.com/gitlab-org/gitlab/-/issues) |
| [Redis Exporter](https://github.com/oliver006/redis_exporter) | [Issue Tracker](https://gitlab.com/gitlab-org/gitlab/-/issues) |
##### Final updates for release
After all components listed in the tables above build successfully, the directly
responsible individual may then authorize updates to the build images used
to ship GitLab packages and Cloud Native images to customers.
| Component Name | Where to track work |
|----------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------|
| GitLab Omnibus Builder | [Issue Tracker](https://gitlab.com/gitlab-org/gitlab-omnibus-builder) |
| Cloud Native GitLab | [Issue Tracker](https://gitlab.com/gitlab-org/build/CNG) |
##### Released independently
Although these components must be updated, they do not block the Go/No-Go
decision for a GitLab release. If they lag behind, the directly responsible
individual should escalate them to Product and Engineering management.
| Component Name | Where to track work |
|----------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------|
| GitLab Browser-based DAST | [Issue Tracker](https://gitlab.com/gitlab-org/gitlab/-/issues) |
| GitLab Coverage Fuzzer | [Issue Tracker](https://gitlab.com/gitlab-org/gitlab/-/issues) |
| GitLab CLI (`glab`). | [Issue Tracker](https://gitlab.com/gitlab-org/cli/-/issues) |
#### Communication plan
Communication is required at several key points throughout the process and should
be included in the relevant issues as part of the definition of done:
1. Immediately after creating the epic, it should be posted to Slack. Community members must ask the pinged engineering managers for assistance with this step. The responsible GitLab team member should share a link to the epic in the following Slack channels:
- `#backend`
- `#development`
1. Immediately after merging the GitLab Development Kit Update, the same maintainer should add an entry to the engineering week-in-review sync and
announce the change in the following Slack channels:
- `#backend`
- `#development`
1. Immediately upon merge of the updated Go versions in
[Cloud-Native GitLab](https://gitlab.com/gitlab-org/build/CNG) and
[Omnibus GitLab](https://gitlab.com/gitlab-org/omnibus-gitlab) add the
change to the engineering-week-in-review sync and announce in the following
Slack channels:
- `#backend`
- `#development`
- `#releases`
#### Upgrade validation
Upstream component maintainers must validate their Go-based projects using:
- Established unit tests in the codebase.
- Procedures established in [Merge Request Performance Guidelines](../merge_request_concepts/performance.md).
- Procedures established in [Performance, Reliability, and Availability guidelines](../code_review.md#performance-reliability-and-availability).
Upstream component maintainers should consider validating their Go-based
projects with:
- Isolated component operation performance tests.
Integration tests are costly and should be testing inter-component
operational issues. Isolated component testing reduces mean time to
feedback on updates and decreases resource burn across the organization.
- Components should have end-to-end test coverage in the GitLab Performance Test tool.
- Integration validation through installation of fresh packages **_and_** upgrade from previous versions for:
- Single GitLab Node
- Reference Architecture Deployment
- Geo Deployment
|
https://docs.gitlab.com/development/go_guide
|
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/development/_index.md
|
2025-08-13
|
doc/development/go_guide
|
[
"doc",
"development",
"go_guide"
] |
_index.md
|
none
|
unassigned
|
Any user with at least the Maintainer role can merge updates to this content. For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
|
Go standards and style guidelines
| null |
---
stage: none
group: unassigned
info: Any user with at least the Maintainer role can merge updates to this content.
For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Go standards and style guidelines
---
This document describes various guidelines and best practices for GitLab
projects using the [Go language](https://go.dev/).
GitLab is built on top of [Ruby on Rails](https://rubyonrails.org/), but we're
also using Go for projects where it makes sense. Go is a very powerful
language, with many advantages, and is best suited for projects with a lot of
IO (disk/network access), HTTP requests, parallel processing, and so on. Since we
have both Ruby on Rails and Go at GitLab, we should evaluate carefully which of
the two is best for the job.
This page aims to define and organize our Go guidelines, based on our various
experiences. Several projects were started with different standards and they
can still have their own specifics, which are described in their respective
`README.md` or `PROCESS.md` files.
## Project structure
According to the [basic layout for Go application projects](https://github.com/golang-standards/project-layout?tab=readme-ov-file#overview), there is no official Go project layout. However, there are some good suggestions
in Ben Johnson's [Standard Package Layout](https://www.gobeyond.dev/standard-package-layout/).
The following is a list of GitLab Go-based projects for inspiration:
- [Gitaly](https://gitlab.com/gitlab-org/gitaly)
- [GitLab Agent for Kubernetes](https://gitlab.com/gitlab-org/cluster-integration/gitlab-agent)
- [GitLab CLI](https://gitlab.com/gitlab-org/cli)
- [GitLab Container Registry](https://gitlab.com/gitlab-org/container-registry)
- [GitLab Operator](https://gitlab.com/gitlab-org/cloud-native/gitlab-operator)
- [GitLab Pages](https://gitlab.com/gitlab-org/gitlab-pages)
- [GitLab Runner](https://gitlab.com/gitlab-org/gitlab-runner)
- [GitLab Shell](https://gitlab.com/gitlab-org/gitlab-shell)
- [Workhorse](https://gitlab.com/gitlab-org/gitlab/-/tree/master/workhorse)
## Go language versions
The Go upgrade documentation [provides an overview](go_upgrade.md#overview)
of how GitLab manages and ships Go binary support.
If a GitLab component requires a newer version of Go,
follow the [upgrade process](go_upgrade.md#updating-go-version) to ensure no customer, team, or component is adversely impacted.
Sometimes, individual projects must also [manage builds with multiple versions of Go](go_upgrade.md#supporting-multiple-go-versions).
## Dependency Management
Go uses a source-based strategy for dependency management. Dependencies are
downloaded as source from their source repository. This differs from the more
common artifact-based strategy where dependencies are downloaded as artifacts
from a package repository that is separate from the dependency's source
repository.
Go did not have first-class support for version management prior to 1.11. That
version introduced Go modules and the use of semantic versioning. Go 1.12
introduced module proxies, which can serve as an intermediary between clients
and source version control systems, and checksum databases, which can be used to
verify the integrity of dependency downloads.
See [Dependency Management in Go](dependencies.md) for more details.
## Code Review
We follow the common principles of
[Go Code Review Comments](https://github.com/golang/go/wiki/CodeReviewComments).
Reviewers and maintainers should pay attention to:
- `defer` functions: ensure they are present where needed, and placed after the `err` check.
- Inject dependencies as parameters.
- Void (`nil`) structures when marshaling to JSON: a `nil` slice generates `null` instead of `[]` (see the example after this list).
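For example, here is a minimal sketch of the `defer` placement and of the `null` versus `[]`
marshaling behavior (the `writeReport` helper and the `tags` slice are made up for illustration):

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// writeReport is a made-up helper: the deferred Close is registered only
// after the error check, so f can never be nil inside the deferred function.
func writeReport(path string, data []byte) (err error) {
	f, err := os.Create(path)
	if err != nil {
		return fmt.Errorf("create report %s: %w", path, err)
	}
	defer func() {
		if cerr := f.Close(); cerr != nil && err == nil {
			err = cerr
		}
	}()

	_, err = f.Write(data)
	return err
}

func main() {
	var tags []string            // nil slice
	out, _ := json.Marshal(tags) // marshals to `null`, not `[]`
	fmt.Println(string(out))     // null

	out, _ = json.Marshal([]string{}) // an initialized, empty slice marshals to `[]`
	fmt.Println(string(out))          // []

	_ = writeReport("report.json", out)
}
```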
### Security
Security is our top priority at GitLab. During code reviews, we must take care
of possible security breaches in our code:
- XSS when using `text/template` (see the example after this list)
- CSRF protection using Gorilla
- Use a Go version without known vulnerabilities
- Don't leak secret tokens
- SQL injections
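For the first point, a small sketch contrasting `text/template` with `html/template`
(the template strings and payload are arbitrary):

```go
package main

import (
	htmltemplate "html/template"
	"os"
	texttemplate "text/template"
)

func main() {
	payload := `<script>alert("xss")</script>`

	// text/template writes the input verbatim, so the script tag reaches the page.
	plain := texttemplate.Must(texttemplate.New("plain").Parse("Hello {{.}}\n"))
	_ = plain.Execute(os.Stdout, payload)

	// html/template escapes the same input, neutralizing the injection.
	escaped := htmltemplate.Must(htmltemplate.New("escaped").Parse("Hello {{.}}\n"))
	_ = escaped.Execute(os.Stdout, payload)
}
```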
Remember to run
[SAST](../../user/application_security/sast/_index.md) and [Dependency Scanning](../../user/application_security/dependency_scanning/_index.md) on your project (or at least the
[`gosec` analyzer](https://gitlab.com/gitlab-org/security-products/analyzers/gosec)),
and to follow our [Security requirements](../code_review.md#security).
Web servers can take advantage of middleware like [Secure](https://github.com/unrolled/secure).
### Finding a reviewer
Many of our projects are too small to have full-time maintainers. That's why we
have a shared pool of Go reviewers at GitLab. To find a reviewer, use the
["Go" section](https://handbook.gitlab.com/handbook/engineering/projects/#gitlab_reviewers_go)
of the "GitLab" project on the Engineering Projects
page in the handbook.
To add yourself to this list, add the following to your profile in the
[`team.yml`](https://gitlab.com/gitlab-com/www-gitlab-com/blob/master/data/team.yml)
file and ask your manager to review and merge.
```yaml
projects:
gitlab: reviewer go
```
## Code style and format
- Avoid global variables, even in packages. Using them introduces side
  effects if the package is included multiple times.
- Use `goimports` before committing.
[`goimports`](https://pkg.go.dev/golang.org/x/tools/cmd/goimports)
is a tool that automatically formats Go source code using
[`Gofmt`](https://pkg.go.dev/cmd/gofmt), in addition to formatting import lines,
adding missing ones and removing unreferenced ones.
  Most editors/IDEs allow you to run commands before/after saving a file; you can set this
  up to run `goimports` so that it's applied to every file when saving.
- Place private methods below the first caller method in the source file.
### Automatic linting
{{< alert type="warning" >}}
The use of `registry.gitlab.com/gitlab-org/gitlab-build-images:golangci-lint-alpine` has been
[deprecated as of 16.10](https://gitlab.com/gitlab-org/gitlab-build-images/-/issues/131).
{{< /alert >}}
Use the upstream version of [golangci-lint](https://golangci-lint.run/).
See the list of linters [enabled/disabled by default](https://golangci-lint.run/usage/linters/#enabled-by-default).
Go projects should include this GitLab CI/CD job:
```yaml
variables:
GOLANGCI_LINT_VERSION: 'v1.56.2'
lint:
image: golangci/golangci-lint:$GOLANGCI_LINT_VERSION
stage: test
script:
    # Write the code quality report to gl-code-quality-report.json
# and print linting issues to stdout in the format: path/to/file:line description
# remove `--issues-exit-code 0` or set to non-zero to fail the job if linting issues are detected
- golangci-lint run --issues-exit-code 0 --print-issued-lines=false --out-format code-climate:gl-code-quality-report.json,line-number
artifacts:
reports:
codequality: gl-code-quality-report.json
paths:
- gl-code-quality-report.json
```
Including a `.golangci.yml` in the root directory of the project allows for
configuration of `golangci-lint`. All options for `golangci-lint` are listed in
this [example](https://github.com/golangci/golangci-lint/blob/master/.golangci.yml).
Once [recursive includes](https://gitlab.com/gitlab-org/gitlab-foss/-/issues/56836)
become available, you can share job templates like this
[analyzer](https://gitlab.com/gitlab-org/security-products/ci-templates/raw/master/includes-dev/analyzer.yml).
Go GitLab linter plugins are maintained in the
[`gitlab-org/language-tools/go/linters`](https://gitlab.com/gitlab-org/language-tools/go/linters/) namespace.
### Help text style guide
If your Go project produces help text for users, consider following the advice given in the
[Help text style guide](https://gitlab.com/gitlab-org/gitaly/-/blob/master/doc/help_text_style_guide.md) in the
`gitaly` project.
## Dependencies
Dependencies should be kept to a minimum. The introduction of a new
dependency should be argued in the merge request, as per our [Approval Guidelines](../code_review.md#approval-guidelines).
[Dependency Scanning](../../user/application_security/dependency_scanning/_index.md)
should be activated on all projects to check the security status and
license compatibility of new dependencies.
### Modules
In Go 1.11 and later, a standard dependency system is available behind the name
[Go Modules](https://github.com/golang/go/wiki/Modules). It provides a way to
define and lock dependencies for reproducible builds. It should be used
whenever possible.
When Go Modules are in use, there should not be a `vendor/` directory. Instead,
Go automatically downloads dependencies when they are needed to build the
project. This is in line with how dependencies are handled with Bundler in Ruby
projects, and makes merge requests easier to review.
In some cases, such as building a Go project for it to act as a dependency of a
CI run for another project, removing the `vendor/` directory means the code must
be downloaded repeatedly, which can lead to intermittent problems due to rate
limiting or network failures. In these circumstances, you should
[cache the downloaded code between runs](../../ci/caching/_index.md#cache-go-dependencies).
There was a
[bug on modules checksums](https://github.com/golang/go/issues/29278) in Go versions earlier than v1.11.4, so make
sure to use at least this version to avoid `checksum mismatch` errors.
### ORM
We don't use object-relational mapping libraries (ORMs) at GitLab (except
[ActiveRecord](https://guides.rubyonrails.org/active_record_basics.html) in
Ruby on Rails). Projects can be structured with services to avoid them.
[`pgx`](https://github.com/jackc/pgx) should be enough to interact with PostgreSQL
databases.
### Migrations
In the rare event of managing a hosted database, it's necessary to use a
migration system like the one ActiveRecord provides. A simple library like
[Journey](https://github.com/db-journey/journey), designed to be used in
`postgres` containers, can be deployed as long-running pods. New versions
deploy a new pod, migrating the data automatically.
## Testing
### Testing frameworks
We should not use any specific library or framework for testing, as the
[standard library](https://pkg.go.dev/std) already provides everything to get
started. If there is a need for more sophisticated testing tools, the following
external dependencies might be worth considering:
- [Testify](https://github.com/stretchr/testify)
- [`httpexpect`](https://github.com/gavv/httpexpect)
### Subtests
Use [subtests](https://go.dev/blog/subtests) whenever possible to improve
code readability and test output.
### Better output in tests
When comparing expected and actual values in tests, use
[`testify/require.Equal`](https://pkg.go.dev/github.com/stretchr/testify/require#Equal),
[`testify/require.EqualError`](https://pkg.go.dev/github.com/stretchr/testify/require#EqualError),
[`testify/require.EqualValues`](https://pkg.go.dev/github.com/stretchr/testify/require#EqualValues),
and others to improve readability when comparing structs, errors,
large portions of text, or JSON documents:
```go
type TestData struct {
// ...
}
func FuncUnderTest() TestData {
// ...
}
func Test(t *testing.T) {
t.Run("FuncUnderTest", func(t *testing.T) {
want := TestData{}
got := FuncUnderTest()
require.Equal(t, want, got) // expected value comes first, then comes the actual one ("diff" semantics)
})
}
```
### Table-Driven Tests
Using [Table-Driven Tests](https://github.com/golang/go/wiki/TableDrivenTests)
is generally good practice when you have multiple entries of
inputs/outputs for the same function. Below are some guidelines one can
follow when writing table-driven tests. These guidelines are mostly
extracted from Go standard library source code. Keep in mind it's OK not
to follow these guidelines when it makes sense.
#### Defining test cases
Each table entry is a complete test case with inputs and expected
results, and sometimes with additional information such as a test name
to make the test output easily readable.
- [Define a slice of anonymous struct](https://github.com/golang/go/blob/50bd1c4d4eb4fac8ddeb5f063c099daccfb71b26/src/encoding/csv/reader_test.go#L16)
inside of the test.
- [Define a slice of anonymous struct](https://github.com/golang/go/blob/55d31e16c12c38d36811bdee65ac1f7772148250/src/cmd/go/internal/module/module_test.go#L9-L66)
outside of the test.
- [Named structs](https://github.com/golang/go/blob/2e0cd2aef5924e48e1ceb74e3d52e76c56dd34cc/src/cmd/go/internal/modfetch/coderepo_test.go#L54-L69)
for code reuse.
- [Using `map[string]struct{}`](https://github.com/golang/go/blob/6d5caf38e37bf9aeba3291f1f0b0081f934b1187/src/cmd/trace/annotations_test.go#L180-L235).
#### Contents of the test case
- Ideally, each test case should have a field with a unique identifier
to use for naming subtests. In the Go standard library, this is commonly the
`name string` field.
- Use `want`/`expect`/`actual` when you are specifying something in the
test case that is used for assertion.
#### Variable names
- Each table-driven test map/slice of struct can be named `tests`.
- When looping through `tests` the anonymous struct can be referred
to as `tt` or `tc`.
- The description of the test can be referred to as
`name`/`testName`/`tn`.
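Putting these guidelines together, a minimal sketch might look like the following
(`Reverse` is a hypothetical function under test):

```go
package strutil

import (
	"strings"
	"testing"

	"github.com/stretchr/testify/require"
)

// Reverse is a stand-in for the function under test.
func Reverse(s string) string {
	var b strings.Builder
	for i := len(s) - 1; i >= 0; i-- {
		b.WriteByte(s[i])
	}
	return b.String()
}

func TestReverse(t *testing.T) {
	tests := []struct {
		name  string
		input string
		want  string
	}{
		{name: "empty string", input: "", want: ""},
		{name: "ascii word", input: "gitlab", want: "baltig"},
	}

	for _, tc := range tests {
		t.Run(tc.name, func(t *testing.T) {
			require.Equal(t, tc.want, Reverse(tc.input))
		})
	}
}
```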
### Benchmarks
Programs handling a lot of IO or complex operations should always include
[benchmarks](https://pkg.go.dev/testing#hdr-Benchmarks), to ensure
performance consistency over time.
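As a sketch, a benchmark lives next to the regular tests and follows the `Benchmark`
naming convention (`buildIndex` is a stand-in for the operation being measured):

```go
package index

import (
	"strconv"
	"testing"
)

// buildIndex is a stand-in for the IO- or CPU-heavy operation being measured.
func buildIndex(n int) map[string]int {
	m := make(map[string]int, n)
	for i := 0; i < n; i++ {
		m[strconv.Itoa(i)] = i
	}
	return m
}

func BenchmarkBuildIndex(b *testing.B) {
	for i := 0; i < b.N; i++ {
		buildIndex(1000)
	}
}
```

Run benchmarks with `go test -bench=. -benchmem` to also track allocations.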
## Error handling
### Adding context
Adding context before you return the error, instead of just returning
the error as-is, can be helpful. This allows developers to understand what the
program was trying to do when it entered the error state, making it much
easier to debug.
For example:
```go
// Wrap the error
return nil, fmt.Errorf("get cache %s: %w", f.Name, err)
// Just add context
return nil, fmt.Errorf("saving cache %s: %v", f.Name, err)
```
A few things to keep in mind when adding context:
- Decide if you want to expose the underlying error
to the caller. If so, use `%w`, if not, you can use `%v`.
- Don't use words like `failed`, `error`, `didn't`. As it's an error,
the user already knows that something failed and this might lead to
having strings like `failed xx failed xx failed xx`. Explain _what_
failed instead.
- Error strings should not be capitalized or end with punctuation or a
newline. You can use `golint` to check for this.
### Naming
- When using sentinel errors they should always be named like `ErrXxx`.
- When creating a new error type they should always be named like
`XxxError`.
### Checking Error types
- To check error equality don't use `==`. Use
[`errors.Is`](https://pkg.go.dev/errors?tab=doc#Is) instead (for Go
versions >= 1.13).
- To check if the error is of a certain type don't use type assertion;
  use [`errors.As`](https://pkg.go.dev/errors?tab=doc#As) instead (for
  Go versions >= 1.13). Both checks are shown in the sketch after this list.
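A small sketch of both checks, using a missing file as the error source:

```go
package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
)

func main() {
	_, err := os.Open("does-not-exist")

	// Sentinel comparison: errors.Is unwraps the error chain instead of using ==.
	if errors.Is(err, fs.ErrNotExist) {
		fmt.Println("file is missing")
	}

	// Type check: errors.As unwraps and assigns instead of a type assertion.
	var pathErr *fs.PathError
	if errors.As(err, &pathErr) {
		fmt.Println("failed path:", pathErr.Path)
	}
}
```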
### References for working with errors
- [Go 1.13 errors](https://go.dev/blog/go1.13-errors).
- [Programming with errors](https://peter.bourgon.org/blog/2019/09/11/programming-with-errors.html).
- [Don't just check errors, handle them gracefully](https://dave.cheney.net/2016/04/27/dont-just-check-errors-handle-them-gracefully).
## CLIs
Every Go program is launched from the command line.
[`cli`](https://github.com/urfave/cli) is a convenient package to create command
line apps. It should be used whether the project is a daemon or a simple CLI
tool. Flags can be mapped to [environment variables](https://github.com/urfave/cli#values-from-the-environment) directly,
which simultaneously documents and centralizes all the possible command-line
interactions with the program. Don't use `os.Getenv`; it hides variables deep
in the code.
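A minimal sketch using the `v2` API of the package (the flag and environment variable names are made up):

```go
package main

import (
	"fmt"
	"log"
	"os"

	"github.com/urfave/cli/v2"
)

func main() {
	app := &cli.App{
		Name: "example-daemon",
		Flags: []cli.Flag{
			&cli.StringFlag{
				Name:    "listen",
				Value:   ":8080",
				Usage:   "address to listen on",
				EnvVars: []string{"EXAMPLE_LISTEN_ADDR"}, // the flag doubles as a documented environment variable
			},
		},
		Action: func(c *cli.Context) error {
			fmt.Println("listening on", c.String("listen"))
			return nil
		},
	}

	if err := app.Run(os.Args); err != nil {
		log.Fatal(err)
	}
}
```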
## Libraries
### LabKit
[LabKit](https://gitlab.com/gitlab-org/labkit) is a place to keep common
libraries for Go services. For examples of using LabKit, see [`workhorse`](https://gitlab.com/gitlab-org/gitlab/tree/master/workhorse)
and [`gitaly`](https://gitlab.com/gitlab-org/gitaly). LabKit exports three related pieces of functionality:
- [`gitlab.com/gitlab-org/labkit/correlation`](https://gitlab.com/gitlab-org/labkit/tree/master/correlation):
for propagating and extracting correlation ids between services.
- [`gitlab.com/gitlab-org/labkit/tracing`](https://gitlab.com/gitlab-org/labkit/tree/master/tracing):
for instrumenting Go libraries for distributed tracing.
- [`gitlab.com/gitlab-org/labkit/log`](https://gitlab.com/gitlab-org/labkit/tree/master/log):
for structured logging using Logrus.
This gives us a thin abstraction over underlying implementations that is
consistent across Workhorse, Gitaly, and possibly other Go servers. For
example, in the case of `gitlab.com/gitlab-org/labkit/tracing` we can switch
from using `Opentracing` directly to using `Zipkin` or the Go kit's own tracing wrapper
without changes to the application code, while still keeping the same
consistent configuration mechanism (that is, the `GITLAB_TRACING` environment
variable).
#### Structured (JSON) logging
Every binary should ideally have structured (JSON) logging in place, as it helps
with searching and filtering the logs. LabKit provides an abstraction over [Logrus](https://github.com/sirupsen/logrus).
We use structured logging in JSON format, because all our infrastructure assumes that. When using
[Logrus](https://github.com/sirupsen/logrus) you can turn on structured
logging by using the built-in [JSON formatter](https://github.com/sirupsen/logrus#formatters). This follows the
same logging type we use in our [Ruby applications](../logging.md#use-structured-json-logging).
#### How to use Logrus
There are a few guidelines one should follow when using the
[Logrus](https://github.com/sirupsen/logrus) package:
- When printing an error use
[WithError](https://pkg.go.dev/github.com/sirupsen/logrus#WithError). For
example, `logrus.WithError(err).Error("Failed to do something")`.
- Since we use [structured logging](#structured-json-logging) we can log
fields in the context of that code path, such as the URI of the request using
[`WithField`](https://pkg.go.dev/github.com/sirupsen/logrus#WithField) or
[`WithFields`](https://pkg.go.dev/github.com/sirupsen/logrus#WithFields). For
example, `logrus.WithField("file", "/app/go").Info("Opening dir")`. If you
  have to log multiple keys, always use `WithFields` instead of calling
  `WithField` more than once, as shown in the sketch after this list.
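A short sketch combining both guidelines (the field values are arbitrary):

```go
package main

import (
	"errors"

	"github.com/sirupsen/logrus"
)

func main() {
	logrus.SetFormatter(&logrus.JSONFormatter{}) // structured JSON output

	err := errors.New("permission denied")

	// Attach the error and all request-scoped fields in a single WithFields call.
	logrus.WithError(err).WithFields(logrus.Fields{
		"uri":    "/api/v4/projects",
		"method": "GET",
	}).Error("Failed to handle request")
}
```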
### Context
Since daemons are long-running applications, they should have mechanisms to
manage cancellations, and avoid unnecessary resource consumption (which could
lead to DDoS vulnerabilities). [Go Context](https://github.com/golang/go/wiki/CodeReviewComments#contexts)
should be used in functions that can block and passed as the first parameter.
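For example, a sketch of a blocking helper that takes the context as its first parameter
(`fetchArtifact` and the URL are made up):

```go
package main

import (
	"context"
	"fmt"
	"net/http"
	"time"
)

// fetchArtifact blocks on the network, so it accepts a context as its first
// parameter and honors any cancellation or deadline set by the caller.
func fetchArtifact(ctx context.Context, url string) (*http.Response, error) {
	req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
	if err != nil {
		return nil, fmt.Errorf("build request for %s: %w", url, err)
	}
	return http.DefaultClient.Do(req)
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()

	resp, err := fetchArtifact(ctx, "https://gitlab.com")
	if err != nil {
		fmt.Println("request aborted:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```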
## Dockerfiles
Every project should have a `Dockerfile` at the root of its repository, to
build and run the project. Since Go programs are static binaries, they should
not require any external dependency, and shells in the final image are useless.
We encourage [Multistage builds](https://docs.docker.com/build/building/multi-stage/):
- They let the user build the project with the right Go version and
dependencies.
- They generate a small, self-contained image, derived from `Scratch`.
Generated Docker images should have the program at their `Entrypoint` to create
portable commands. That way, anyone can run the image, and without parameters
it displays its help message (if `cli` has been used).
## Secure Team standards and style guidelines
The following are some style guidelines that are specific to the Secure Team.
### Code style and format
Use `goimports -local gitlab.com/gitlab-org` before committing.
[`goimports`](https://pkg.go.dev/golang.org/x/tools/cmd/goimports)
is a tool that automatically formats Go source code using
[`Gofmt`](https://pkg.go.dev/cmd/gofmt), in addition to formatting import lines,
adding missing ones and removing unreferenced ones.
By using the `-local gitlab.com/gitlab-org` option, `goimports` groups locally referenced
packages separately from external ones. See
[the imports section](https://github.com/golang/go/wiki/CodeReviewComments#imports)
of the Code Review Comments page on the Go wiki for more details.
Most editors and IDEs let you run commands before or after saving a file. You can
configure them to run `goimports -local gitlab.com/gitlab-org` so that it's applied to every file when saving.
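With that option, an import block ends up grouped roughly as follows. This is only a fragment to show the grouping; the local package path is hypothetical, and the comments are added here for explanation (`goimports` does not insert them):

```go
import (
	// Standard library.
	"fmt"
	"os"

	// External packages.
	"github.com/sirupsen/logrus"

	// Packages under gitlab.com/gitlab-org, grouped separately by -local.
	"gitlab.com/gitlab-org/myproject/parser" // hypothetical local package
)
```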
### Naming branches
In addition to the GitLab [branch name rules](../../user/project/repository/branches/_index.md#name-your-branch), use only the characters `a-z`, `0-9` or `-` in branch names. This restriction is because `go get` doesn't work as expected when a branch name contains certain characters, such as a slash `/`:
```shell
$ go get -u gitlab.com/gitlab-org/security-products/analyzers/report/v3@some-user/some-feature
go get: gitlab.com/gitlab-org/security-products/analyzers/report/v3@some-user/some-feature: invalid version: version "some-user/some-feature" invalid: disallowed version string
```
If a branch name contains a slash, it forces us to refer to the commit SHA instead, which is less flexible. For example:
```shell
$ go get -u gitlab.com/gitlab-org/security-products/analyzers/report/v3@5c9a4279fa1263755718cf069d54ba8051287954
go: downloading gitlab.com/gitlab-org/security-products/analyzers/report/v3 v3.15.3-0.20221012172609-5c9a4279fa12
...
```
### Initializing slices
If initializing a slice, provide a capacity where possible to avoid extra
allocations.
**Don't**:
```go
var s2 []string
for _, val := range s1 {
s2 = append(s2, val)
}
```
**Do**:
```go
s2 := make([]string, 0, len(s1))
for _, val := range s1 {
s2 = append(s2, val)
}
```
If no capacity is passed to `make` when creating a new slice, `append`
will continuously resize the slice's backing array if it cannot hold
the values. Providing the capacity ensures that allocations are kept
to a minimum. It's recommended to enable the [`prealloc`](https://github.com/alexkohler/prealloc)
golangci-lint rule, which automatically checks for this.
### Analyzer Tests
The conventional Secure [analyzer](https://gitlab.com/gitlab-org/security-products/analyzers/) has a
[`convert` function](https://gitlab.com/gitlab-org/security-products/analyzers/command/-/blob/main/convert.go#L15-17)
that converts SAST/DAST scanner reports into
[GitLab Security Reports](https://gitlab.com/gitlab-org/security-products/security-report-schemas).
When writing tests for the `convert` function, we should make use of
[test fixtures](https://dave.cheney.net/2016/05/10/test-fixtures-in-go) using a `testdata`
directory at the root of the analyzer's repository. The `testdata` directory should
contain two subdirectories: `expect` and `reports`. The `reports` directory should
contain sample SAST/DAST scanner reports which are passed into the `convert` function
during the test setup. The `expect` directory should contain the expected GitLab Security Report
that the `convert` function returns. See Secret Detection for an
[example](https://gitlab.com/gitlab-org/security-products/analyzers/secrets/-/blob/160424589ef1eed7b91b59484e019095bc7233bd/convert_test.go#L13-66).
If the scanner report is small, less than 35 lines, then feel free to
[inline the report](https://gitlab.com/gitlab-org/security-products/analyzers/sobelow/-/blob/8bd2428a/convert/convert_test.go#L13-77)
rather than use a `testdata` directory.
#### Test Diffs
The [go-cmp](https://github.com/google/go-cmp) package should be used when
comparing large structs in tests. It makes it possible to output a specific diff
where the two structs differ, rather than seeing the whole of both structs
printed out in the test logs. Here is a small example:
```go
package main
import (
"reflect"
"testing"
"github.com/google/go-cmp/cmp"
)
type Foo struct {
Desc Bar
Point Baz
}
type Bar struct {
A string
B string
}
type Baz struct {
X int
Y int
}
func TestHelloWorld(t *testing.T) {
want := Foo{
Desc: Bar{A: "a", B: "b"},
Point: Baz{X: 1, Y: 2},
}
got := Foo{
Desc: Bar{A: "a", B: "b"},
Point: Baz{X: 2, Y: 2},
}
t.Log("reflect comparison:")
if !reflect.DeepEqual(got, want) {
t.Errorf("Wrong result. want:\n%v\nGot:\n%v", want, got)
}
t.Log("cmp comparison:")
if diff := cmp.Diff(want, got); diff != "" {
t.Errorf("Wrong result. (-want +got):\n%s", diff)
}
}
```
The output demonstrates why `go-cmp` is far superior when comparing large
structs. Even though you could spot the difference in this small example,
it quickly becomes unwieldy as the data grows.
```plaintext
main_test.go:36: reflect comparison:
main_test.go:38: Wrong result. want:
{{a b} {1 2}}
Got:
{{a b} {2 2}}
main_test.go:41: cmp comparison:
main_test.go:43: Wrong result. (-want +got):
main.Foo{
Desc: {A: "a", B: "b"},
Point: main.Baz{
- X: 1,
+ X: 2,
Y: 2,
},
}
```
---
[Return to Development documentation](../_index.md).
---
stage: none
group: unassigned
info: Any user with at least the Maintainer role can merge updates to this content.
For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Dependency Management in Go
breadcrumbs:
- doc
- development
- go_guide
---
Go takes an unusual approach to dependency management, in that it is
source-based instead of artifact-based. In an artifact-based dependency
management system, packages consist of artifacts generated from source code and
are stored in a separate repository system from source code. For example, many
NodeJS packages use `npmjs.org` as a package repository and `github.com` as a
source repository. On the other hand, packages in Go are source code and
releasing a package does not involve artifact generation or a separate
repository. Go packages must be stored in a version control repository on a VCS
server. Dependencies are fetched directly from their VCS server or via an
intermediary proxy which itself fetches them from their VCS server.
## Versioning
Go 1.11 introduced modules and first-class package versioning to the Go ecosystem.
Prior to this, Go did not have any well-defined mechanism for version management.
While 3rd party version management tools existed, the default Go experience had
no support for versioning.
Go modules use [semantic versioning](https://semver.org). The versions of a
module are defined as VCS (version control system) tags that are valid semantic
versions prefixed with `v`. For example, to release version `1.0.0` of
`gitlab.com/my/project`, the developer must create the Git tag `v1.0.0`.
For major versions other than 0 and 1, the module name must be suffixed with
`/vX` where X is the major version. For example, version `v2.0.0` of
`gitlab.com/my/project` must be named and imported as
`gitlab.com/my/project/v2`.
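For instance, a sketch of the `go.mod` `module` directive for version 2 of the example module above:

```plaintext
module gitlab.com/my/project/v2
```

Consumers then import its packages using the `/v2` suffix in the import path.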
Go uses 'pseudo-versions', which are special semantic versions that reference a
specific VCS commit. The prerelease component of the semantic version must be or
end with a timestamp and the first 12 characters of the commit identifier:
- `vX.0.0-yyyymmddhhmmss-abcdefabcdef`, when no earlier tagged commit exists for X.
- `vX.Y.Z-pre.0.yyyymmddhhmmss-abcdefabcdef`, when most recent prior tag is vX.Y.Z-pre.
- `vX.Y.(Z+1)-0.yyyymmddhhmmss-abcdefabcdef`, when most recent prior tag is vX.Y.Z.
If a VCS tag matches one of these patterns, it is ignored.
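As an illustration (the module, commit, and resulting version are made up), fetching an untagged commit resolves to a pseudo-version:

```shell
$ go get gitlab.com/my/project@19abb11fbb34
go: downloading gitlab.com/my/project v0.0.0-20240102150405-19abb11fbb34
```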
For a complete understanding of Go modules and versioning, see
[this series of blog posts](https://go.dev/blog/using-go-modules)
on the official Go website.
## 'Module' vs 'Package'
- A package is a folder containing `*.go` files.
- A module is a folder containing a `go.mod` file.
- A module is usually also a package, that is a folder containing a `go.mod`
file and `*.go` files.
- A module may have subdirectories, which may be packages.
- Modules usually come in the form of a VCS repository (Git, SVN, Hg, and so on).
- Any subdirectories of a module that themselves are modules are distinct,
separate modules and are excluded from the containing module.
- Given a module `repo`, if `repo/sub` contains a `go.mod` file then
`repo/sub` and any files contained therein are a separate module and not a
part of `repo`.
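A hypothetical layout illustrating these rules:

```plaintext
repo/            # module (go.mod) and package (*.go files)
  go.mod
  main.go
  util/          # package that is part of the "repo" module
    util.go
  sub/           # separate module, excluded from "repo"
    go.mod
    sub.go
```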
## Naming
The name of a module or package, excluding the standard library, must be of the
form `(sub.)*domain.tld(/path)*`. This is similar to a URL, but is not a URL.
The package name does not have a scheme (such as `https://`) and cannot have a
port number. `example.com:8443/my/package` is not a valid name.
## Fetching Packages
Prior to Go 1.12, the process for fetching a package was as follows:
1. Query `https://{package name}?go-get=1`.
1. Scan the response for the `go-import` meta tag.
1. Fetch the repository indicated by the meta tag using the indicated VCS.
The meta tag should have the form `<meta name="go-import" content="{prefix} {vcs} {url}">`.
For example, `gitlab.com/my/project git https://gitlab.com/my/project.git` indicates
that packages beginning with `gitlab.com/my/project` should be fetched from
`https://gitlab.com/my/project.git` using Git.
## Fetching Modules
Go 1.12 introduced checksum databases and module proxies.
### Checksums
In addition to `go.mod`, a module has a `go.sum` file. This file records a
SHA-256 checksum of the code and the `go.mod` file of every version of every
dependency that is referenced by the module or one of the module's dependencies.
Go continually updates `go.sum` as new dependencies are referenced.
When Go fetches the dependencies of a module, if those dependencies already have
an entry in `go.sum`, Go verifies the checksum of these dependencies. If the
checksum does not match what is in `go.sum`, the build fails. This ensures
that a given version of a module cannot be changed by its developers or by a
malicious party without causing build failures.
Go 1.12+ can be configured to use a checksum database. If configured to do so,
when Go fetches a dependency and there is no corresponding entry in `go.sum`, Go
queries the configured checksum databases for the checksum of the
dependency instead of calculating it from the downloaded dependency. If the
dependency cannot be found in the checksum database, the build fails. If the
downloaded dependency's checksum does not match the result from the checksum
database, the build fails. The following environment variables control this:
- `GOSUMDB` identifies the name, and optionally the public key and server URL,
of the checksum database to query.
- A value of `off` entirely disables checksum database queries.
- Go 1.13+ uses `sum.golang.org` if `GOSUMDB` is not defined.
- `GONOSUMDB` is a comma-separated list of module suffixes that checksum
database queries should be disabled for. Wildcards are supported.
- `GOPRIVATE` is a comma-separated list of module names that has the same
function as `GONOSUMDB` in addition to disabling other features.
### Proxies
Go 1.12+ can be configured to fetch modules from a Go proxy instead of directly
from the module's VCS. If configured to do so, when Go fetches a dependency, it
attempts to fetch the dependency from the configured proxies, in order. The
following environment variables control this:
- `GOPROXY` is a comma-separated list of module proxies to query.
- A value of `direct` entirely disables module proxy queries.
- If the last entry in the list is `direct`, Go falls back to the process
described [above](#fetching-packages) if none of the proxies can provide the
dependency.
- Go 1.13+ uses `proxy.golang.org,direct` if `GOPROXY` is not defined.
- `GONOPROXY` is a comma-separated list of module suffixes that should be
fetched directly and not from a proxy. Wildcards are supported.
- `GOPRIVATE` is a comma-separated list of module names that has the same
function as `GONOPROXY` in addition to disabling other features.
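For example, a sketch of configuring these variables so that modules under a private GitLab group (the group path is a placeholder) bypass the public proxy and checksum database:

```shell
# Single switch covering both proxy and checksum database behavior:
export GOPRIVATE="gitlab.com/my-group/*"

# Or the more granular equivalents:
export GOPROXY="https://proxy.golang.org,direct"
export GONOPROXY="gitlab.com/my-group/*"
export GONOSUMDB="gitlab.com/my-group/*"
```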
### Fetching
From Go 1.12 onward, the process for fetching a module or package is as follows:
1. If `GOPROXY` is a list of proxies and the module is not excluded by
`GONOPROXY` or `GOPRIVATE`, query them in order, and stop at the first valid
response.
1. If `GOPROXY` is `direct`, or the module is excluded, or `GOPROXY` ends with
`,direct` and no proxy provided the module, fall back.
1. Query `https://{module or package name}?go-get=1`.
1. Scan the response for the `go-import` meta tag.
1. Fetch the repository indicated by the meta tag using the indicated VCS.
1. If the `{vcs}` field is `mod`, the URL should be treated as a module proxy instead of a VCS.
1. If the module is being fetched directly and not as a dependency, stop.
1. If `go.sum` contains an entry corresponding to the module, validate the checksum and stop.
1. If `GOSUMDB` identifies a checksum database and the module is not excluded by
`GONOSUMDB` or `GOPRIVATE`, retrieve the module's checksum, add it to
`go.sum`, and validate the downloaded source against it.
1. If `GOSUMDB` is `off` or the module is excluded, calculate a checksum from
the downloaded source and add it to `go.sum`.
The downloaded source must contain a `go.mod` file. The `go.mod` file must
contain a `module` directive that specifies the name of the module. If the
module name as specified by `go.mod` does not match the name that was used to
fetch the module, the module fails to compile.
If the module is being fetched directly and no version was specified, or if the
module is being added as a dependency and no version was specified, Go uses the
most recent version of the module. If the module is fetched from a proxy, Go
queries the proxy for a list of versions and chooses the latest. If the module is
fetched directly, Go queries the repository for a list of tags and chooses the
latest that is also a valid semantic version.
## Authenticating
In versions prior to Go 1.13, support for authenticating requests made by Go was
somewhat inconsistent. Go 1.13 improved support for `.netrc` authentication. If
a request is made over HTTPS and a matching `.netrc` entry can be found, Go
adds HTTP Basic authentication credentials to the request. Go does not
authenticate requests made over HTTP. Go rejects HTTP-only entries in
`GOPROXY` that have embedded credentials.
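For example, a minimal `~/.netrc` entry that lets Go authenticate HTTPS requests to GitLab.com (the credentials are placeholders):

```plaintext
machine gitlab.com
login my-username
password my-personal-access-token
```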
In a future version, Go may add support for arbitrary authentication headers.
Follow [`golang/go#26232`](https://github.com/golang/go/issues/26232) for details.
---
stage: none
group: Engineering Productivity
info: Any user with at least the Maintainer role can merge updates to this content.
  For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Pipelines for the GitLab project
breadcrumbs:
- doc
- development
- pipelines
---
Pipelines for [`gitlab-org/gitlab`](https://gitlab.com/gitlab-org/gitlab) (as well as the `dev` instance's) are configured in the usual
[`.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.gitlab-ci.yml)
which itself includes files under
[`.gitlab/ci/`](https://gitlab.com/gitlab-org/gitlab/-/tree/master/.gitlab/ci)
for easier maintenance.
We're striving to [dogfood](https://handbook.gitlab.com/handbook/engineering/development/principles/#dogfooding)
GitLab [CI/CD features and best-practices](../../ci/_index.md) as much as possible.
Do not use [CI/CD components](../../ci/components/_index.md) in `gitlab-org/gitlab` pipelines
unless they are mirrored on the `dev.gitlab.com` instance. CI/CD components do not work across different instances,
and [cause failing pipelines](https://gitlab.com/gitlab-com/gl-infra/production/-/issues/17683#note_1795756077)
on the `dev.gitlab.com` mirror if they do not exist on that instance.
## Pipeline tiers
**Under active development**: For more information, see [epic 58](https://gitlab.com/groups/gitlab-org/quality/engineering-productivity/-/epics/58).
A merge request will typically run several CI/CD pipelines. Depending on where the merge request is in the approval process, we trigger different kinds of pipelines. We call those kinds of pipelines **pipeline tiers**.
We currently have three tiers:
1. `pipeline::tier-1`: The merge request has no approvals
1. `pipeline::tier-2`: The merge request has at least one approval, but still requires more approvals
1. `pipeline::tier-3`: The merge request has all the approvals it needs
Typically, the lower the pipeline tier, the faster the pipeline should be.
The higher the pipeline tier, the more confidence the pipeline should give us by running more tests.
See the [Introduce "tiers" in MR pipelines](https://gitlab.com/groups/gitlab-org/quality/engineering-productivity/-/epics/58) epic for more information on the implementation.
## Predictive test jobs before a merge request is approved
**To reduce the pipeline cost and shorten the job duration, before a merge request is approved, the pipeline will run a predictive set of RSpec & Jest tests that are likely to fail for the merge request changes.**
After a merge request has been approved, the pipeline would contain the full RSpec & Jest tests. This will ensure that all tests
have been run before a merge request is merged.
### Overview of the GitLab project test dependency
To understand how the predictive test jobs are executed, we need to understand the dependency between
GitLab code (frontend and backend) and the respective tests (Jest and RSpec).
This dependency can be visualized in the following diagram:
```mermaid
flowchart LR
subgraph frontend
fe["Frontend code"]--tested with-->jest
end
subgraph backend
be["Backend code"]--tested with-->rspec
end
be--generates-->fixtures["frontend fixtures"]
fixtures--used in-->jest
```
In summary:
- RSpec tests are dependent on the backend code.
- Jest tests are dependent on both frontend and backend code, the latter through the frontend fixtures.
### Predictive Tests Dashboards
- <https://10az.online.tableau.com/#/site/gitlab/views/DRAFTTestIntelligenceAccuracy/TestIntelligenceAccuracy>
### The `detect-tests` CI job
Most CI/CD pipelines for `gitlab-org/gitlab` will run a [`detect-tests` CI job](https://gitlab.com/gitlab-org/gitlab/-/blob/2348d57cf4710f89b96b25de0cf33a455d38325e/.gitlab/ci/setup.gitlab-ci.yml#L115-154) in the `prepare` stage to detect which backend/frontend tests should be run based on the files that changed in the given MR.
The `detect-tests` job will create many files that will contain the backend/frontend tests that should be run. Those files will be read in subsequent jobs in the pipeline, and only those tests will be executed.
### RSpec predictive jobs
#### Determining predictive RSpec test files in a merge request
To identify the RSpec tests that are likely to fail in a merge request, we use *dynamic mappings* and *static mappings*.
##### Dynamic mappings
First, we use the [`test_file_finder` gem](https://gitlab.com/gitlab-org/ruby/gems/test_file_finder), with dynamic mapping strategies coming from the [`Crystalball` gem](https://gitlab.com/gitlab-org/ruby/gems/crystalball)
([see where it's used](https://gitlab.com/gitlab-org/gitlab/-/blob/2348d57cf4710f89b96b25de0cf33a455d38325e/tooling/lib/tooling/find_tests.rb#L20), and [the mapping strategies we use in Crystalball](https://gitlab.com/gitlab-org/gitlab/-/blob/master/spec/crystalball_env.rb)).
In addition to `test_file_finder`, we have added several advanced mappings to detect even more tests to run:
- [`FindChanges`](https://gitlab.com/gitlab-org/gitlab/-/blob/28943cbd8b6d7e9a350d00e5ea5bb52123ee14a4/tooling/lib/tooling/find_changes.rb) ([!74003](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/74003))
- Automatically detect Jest tests to run upon backend changes (via frontend fixtures)
- [`PartialToViewsMappings`](https://gitlab.com/gitlab-org/gitlab/-/blob/28943cbd8b6d7e9a350d00e5ea5bb52123ee14a4/tooling/lib/tooling/mappings/partial_to_views_mappings.rb) ([#395016](https://gitlab.com/gitlab-org/gitlab/-/issues/395016))
- Run view specs when Rails partials included in those views are changed in an MR
- [`JsToSystemSpecsMappings`](https://gitlab.com/gitlab-org/gitlab/-/blob/28943cbd8b6d7e9a350d00e5ea5bb52123ee14a4/tooling/lib/tooling/mappings/js_to_system_specs_mappings.rb) ([#386754](https://gitlab.com/gitlab-org/gitlab/-/issues/386754))
- Run certain system specs if a JavaScript file was changed in an MR
- [`GraphqlBaseTypeMappings`](https://gitlab.com/gitlab-org/gitlab/-/blob/28943cbd8b6d7e9a350d00e5ea5bb52123ee14a4/tooling/lib/tooling/mappings/graphql_base_type_mappings.rb) ([#386756](https://gitlab.com/gitlab-org/gitlab/-/issues/386756))
- If a GraphQL type class changed, we should try to identify the other GraphQL types that potentially include this type, and run their specs.
- [`ViewToSystemSpecsMappings`](https://gitlab.com/gitlab-org/gitlab/-/blob/28943cbd8b6d7e9a350d00e5ea5bb52123ee14a4/tooling/lib/tooling/mappings/view_to_system_specs_mappings.rb) ([#395017](https://gitlab.com/gitlab-org/gitlab/-/issues/395017))
- When a view gets changed, we try to find feature specs that would test that area of the code.
- [`ViewToJsMappings`](https://gitlab.com/gitlab-org/gitlab/-/blob/8d7dfb7c043adf931128088b9ffab3b4a39af6f5/tooling/lib/tooling/mappings/view_to_js_mappings.rb) ([#386719](https://gitlab.com/gitlab-org/gitlab/-/issues/386719))
- If a JS file is changed, we should try to identify the system specs that are covering this JS component.
- [`FindFilesUsingFeatureFlags`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/tooling/lib/tooling/find_files_using_feature_flags.rb) ([#407366](https://gitlab.com/gitlab-org/gitlab/-/issues/407366))
- If a feature flag was changed, we check which Ruby file is including that feature flag, and we add it to the list of changed files in the detect-tests CI job. The remainder of the job will then detect which frontend/backend tests should be run based on those changed files.
##### Static mappings
We use the [`test_file_finder` gem](https://gitlab.com/gitlab-org/ruby/gems/test_file_finder), with a static mapping maintained in the [`tests.yml` file](https://gitlab.com/gitlab-org/gitlab/-/blob/master/tests.yml) for special cases that cannot
be mapped via dynamic mappings ([see where it's used](https://gitlab.com/gitlab-org/gitlab/-/blob/2348d57cf4710f89b96b25de0cf33a455d38325e/tooling/lib/tooling/find_tests.rb#L17)).
The [test mappings](https://gitlab.com/gitlab-org/gitlab/-/blob/master/tests.yml) contain a map of each source file to a list of test files which are dependent on the source file.
#### Exceptional cases
In addition, there are a few circumstances where we would always run the full RSpec tests:
- when the `pipeline:run-all-rspec` label is set on the merge request. This label will trigger all RSpec tests including those run in the `as-if-foss` jobs.
- when the `pipeline:mr-approved` label is set on the merge request, and if the code changes satisfy the `backend-patterns` rule. Note that this label is assigned by triage automation when the merge request is approved by any reviewer. It is not recommended to apply this label manually.
- when the merge request is created by an automation (for example, Gitaly update or MR targeting a stable branch)
- when the merge request is created in a security mirror
- when any CI configuration file is changed (for example, `.gitlab-ci.yml` or `.gitlab/ci/**/*`)
#### Have you encountered a problem with backend predictive tests?
If so, have a look at [the Development Analytics RUNBOOK on predictive tests](https://gitlab.com/gitlab-org/quality/analytics/team/-/blob/main/runbooks/predictive-test-selection.md) for instructions on how to act upon predictive tests issues. Additionally, if you identified any test selection gaps, let `@gl-dx/development-analytics` know so that we can take the necessary steps to optimize test selections.
### Jest predictive jobs
#### Determining predictive Jest test files in a merge request
To identify the Jest tests that are likely to fail in a merge request, we pass a list of all the changed files into `jest` using the [`--findRelatedTests`](https://jestjs.io/docs/cli#--findrelatedtests-spaceseparatedlistofsourcefiles) option.
In this mode, `jest` resolves all the dependencies related to the changed files, including test files that have these files in the dependency chain.
#### Exceptional cases
In addition, there are a few circumstances where we would always run the full Jest tests:
- when the `pipeline:run-all-jest` label is set on the merge request
- when the merge request is created by an automation (for example, Gitaly update or MR targeting a stable branch)
- when the merge request is created in a security mirror
- when a relevant CI configuration file is changed (`.gitlab/ci/rules.gitlab-ci.yml`, `.gitlab/ci/frontend.gitlab-ci.yml`)
- when any frontend dependency file is changed (for example, `package.json`, `yarn.lock`, `config/webpack.config.js`, `config/helpers/**/*.js`)
- when any vendored JavaScript file is changed (for example, `vendor/assets/javascripts/**/*`)
The `rules` definitions for full Jest tests are defined at `.frontend:rules:jest` in
[`rules.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/42321b18b946c64d2f6f788c38844499a5ae9141/.gitlab/ci/rules.gitlab-ci.yml#L938-955).
#### Have you encountered a problem with frontend predictive tests?
If so, have a look at [the Development analytics RUNBOOK on predictive tests](https://gitlab.com/gitlab-org/quality/analytics/team/-/blob/main/runbooks/predictive-test-selection.md) for instructions on how to act upon predictive tests issues.
### Fork pipelines
We run only the predictive RSpec & Jest jobs for fork pipelines, unless the `pipeline:run-all-rspec`
label is set on the MR. The goal is to reduce the compute quota consumed by fork pipelines.
See the [experiment issue](https://gitlab.com/gitlab-org/quality/quality-engineering/team-tasks/-/issues/1170).
## Fail-fast job in merge request pipelines
To provide faster feedback when a merge request breaks existing tests, we implemented a fail-fast mechanism.
An `rspec fail-fast` job is added in parallel to all other `rspec` jobs in a merge
request pipeline. This job runs the tests that are directly related to the changes
in the merge request.
If any of these tests fail, the `rspec fail-fast` job fails, triggering a
`fail-pipeline-early` job to run. The `fail-pipeline-early` job:
- Cancels the currently running pipeline and all in-progress jobs.
- Sets pipeline to have status `failed`.
For example:
```mermaid
graph LR
subgraph "prepare stage";
A["detect-tests"]
end
subgraph "test stage";
B["jest"];
C["rspec migration"];
D["rspec unit"];
E["rspec integration"];
F["rspec system"];
G["rspec fail-fast"];
end
subgraph "post-test stage";
Z["fail-pipeline-early"];
end
A --"artifact: list of test files"--> G
G --"on failure"--> Z
```
The `rspec fail-fast` job is a no-op if there are more than 10 test files related to the
merge request. This prevents `rspec fail-fast` duration from exceeding the average
`rspec` job duration and defeating its purpose.
This number can be overridden by setting a CI/CD variable named `RSPEC_FAIL_FAST_TEST_FILE_COUNT_THRESHOLD`.
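For example, a sketch of overriding it in the project's CI/CD configuration (the value is illustrative):

```yaml
variables:
  RSPEC_FAIL_FAST_TEST_FILE_COUNT_THRESHOLD: "20"
```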
## Re-run previously failed tests in merge request pipelines
In order to reduce the feedback time after resolving failed tests for a merge request, the `rspec rspec-pg16-rerun-previous-failed-tests`
and `rspec rspec-ee-pg16-rerun-previous-failed-tests` jobs run the failed tests from the previous MR pipeline.
This was introduced on August 25th 2021, with <https://gitlab.com/gitlab-org/gitlab/-/merge_requests/69053>.
### How the failed test is re-run
1. The `detect-previous-failed-tests` job (`prepare` stage) detects the test files associated with failed RSpec
jobs from the previous MR pipeline.
1. The `rspec rspec-pg16-rerun-previous-failed-tests` and `rspec rspec-ee-pg16-rerun-previous-failed-tests` jobs
will run the test files gathered by the `detect-previous-failed-tests` job.
```mermaid
graph LR
subgraph "prepare stage";
A["detect-previous-failed-tests"]
end
subgraph "test stage";
B["rspec rspec-pg16-rerun-previous-failed-tests"];
C["rspec rspec-ee-pg16-rerun-previous-failed-tests"];
end
A --"artifact: list of test files"--> B & C
```
## Merge trains
### Current usage
[We started using merge trains in June 2024](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/154540).
At the moment, **Merge train pipelines don't run any tests**: they only enforce the
["Merging a merge request" guidelines](../code_review.md#merging-a-merge-request)
that already existed before the enablement of merge trains, but that we couldn't easily enforce.
Merge train pipelines run a single `pre-merge-checks` job which ensures the latest pipeline before merge is:
1. A [Merged Results pipeline](../../ci/pipelines/merged_results_pipelines.md)
1. A [`tier-3` pipeline](#pipeline-tiers) (a full pipeline, not a predictive one)
1. Created at most 8 hours ago (72 hours for stable branches)
We opened [a feedback issue](https://gitlab.com/gitlab-org/quality/engineering-productivity/team/-/issues/513)
to iterate on this solution.
### Next iterations
We opened [a dedicated issue to discuss the next iteration for merge trains](https://gitlab.com/gitlab-org/quality/engineering-productivity/team/-/issues/516)
to actually start running tests in merge train pipelines.
### Challenges for enabling merge trains running "full" test pipelines
#### Why do we need to have a "stable" default branch?
If the default branch is unstable (for example, the CI/CD pipelines for the default branch are failing frequently), all of the merge requests pipelines that were added AFTER a faulty merge request pipeline would have to be **canceled** and **added back to the train**, which would create a lot of delays if the merge train is long.
#### How stable does the default branch have to be?
We don't have a specific number, but we need to have better numbers for flaky tests failures and infrastructure failures (see the [Master Broken Incidents RCA Dashboard](https://10az.online.tableau.com/#/site/gitlab/workbooks/2296993/views)).
## Faster feedback for some merge requests
### Broken `master` Fixes
When you need to [fix a broken `master`](https://handbook.gitlab.com/handbook/engineering/workflow/#resolution-of-broken-master), you can add the `pipeline::expedited` label to expedite the pipelines that run on the merge request.
Note that the merge request also needs to have the `master:broken` or `master:foss-broken` label set.
### Revert MRs
To make your Revert MRs faster, use the [revert MR template](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.gitlab/merge_request_templates/Revert%20To%20Resolve%20Incident.md) **before** you create your merge request. It will apply the `pipeline::expedited` label and others that will expedite the pipelines that run on the merge request.
### The `pipeline::expedited` label
When this label is assigned, the following steps of the CI/CD pipeline are skipped:
- The `e2e:test-on-omnibus-ee` job.
- The `rspec:undercoverage` job.
- The entire [review apps process](../testing_guide/review_apps.md).
Apply the label to the merge request, and run a new pipeline for the MR.
## Test jobs
We have dedicated jobs for each [testing level](../testing_guide/testing_levels.md) and each job runs depending on the
changes made in your merge request.
If you want to force all the RSpec jobs to run regardless of your changes, you can add the `pipeline:run-all-rspec` label to the merge request.
{{< alert type="warning" >}}
Forcing all jobs to run on docs-only MRs means the prerequisite jobs are missing, which leads to errors.
{{< /alert >}}
### End-to-end jobs
For more information, see [End-to-end test pipelines](../testing_guide/end_to_end/test_pipelines.md).
### Observability end-to-end jobs
The [GitLab Observability Backend](https://gitlab.com/gitlab-org/opstrace/opstrace) has dedicated [end-to-end tests](https://gitlab.com/gitlab-org/opstrace/opstrace/-/tree/main/test/e2e/frontend) that run against a GitLab instance. These tests are designed to ensure the integration between GitLab and the Observability Backend is functioning correctly.
The GitLab pipeline has dedicated jobs (see [`observability-backend.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.gitlab/ci/observability-backend.gitlab-ci.yml)) that can be executed from GitLab MRs. These jobs will trigger the E2E tests on the GitLab Observability Backend pipeline against a GitLab instance built from the GitLab MR branch. These jobs are useful to make sure that the GitLab changes under review will not break E2E tests on the GitLab Observability Backend pipeline.
There are two Observability end-to-end jobs:
- `e2e:observability-backend-main-branch`: executes the tests against the main branch of the GitLab Observability Backend.
- `e2e:observability-backend`: executes the tests against a branch of the GitLab Observability Backend with the same name as the MR branch.
The Observability E2E jobs are triggered automatically **only** for merge requests that touch relevant files, such as those in the `lib/gitlab/observability/` directory or specific configuration files related to observability features.
To run these jobs manually, you can add the `pipeline:run-observability-e2e-tests-main-branch` or `pipeline:run-observability-e2e-tests-current-branch` label to your merge request.
In the following example workflow, a developer creates an MR that touches Observability code and uses Observability end-to-end jobs:
1. A developer creates a GitLab MR that touches observability code. The MR automatically executes the `e2e:observability-backend-main-branch` job.
1. If `e2e:observability-backend-main-branch` fails, it means that either the MR broke something (and needs fixing), or the MR made changes that require the e2e tests to be updated.
1. To update the e2e tests, the developer should:
1. Create a branch in the GitLab Observability Backend [repository](https://gitlab.com/gitlab-org/opstrace/opstrace), with the same name as the GitLab branch containing the breaking changes.
1. Fix the [e2e tests](https://gitlab.com/gitlab-org/opstrace/opstrace/-/tree/main/test/e2e/frontend).
1. Create a merge request with the changes.
1. The developer should add the `pipeline:run-observability-e2e-tests-current-branch` label on the GitLab MR and wait for the `e2e:observability-backend` job to succeed.
1. If `e2e:observability-backend` succeeds, the developer can merge both MRs.
In addition, the developer can manually add `pipeline:run-observability-e2e-tests-main-branch` to force the MR to run the `e2e:observability-backend-main-branch` job. This could be useful in case of changes to files that are not being tracked as related to observability.
There might be situations where the developer would need to skip those tests. To skip tests:
- For an MR, apply the `pipeline:skip-observability-e2e-tests` label.
- For a whole project, set the CI variable `SKIP_GITLAB_OBSERVABILITY_BACKEND_TRIGGER`.
### Review app jobs
The [`start-review-app-pipeline`](../testing_guide/review_apps.md) child pipeline deploys a Review App and runs
end-to-end tests against it automatically depending on changes, and is manual in other cases.
See `.review:rules:start-review-app-pipeline` in
[`rules.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.gitlab/ci/rules.gitlab-ci.yml) for
the specific list of rules.
If you want to force a Review App to be deployed regardless of your changes, you can add the
`pipeline:run-review-app` label to the merge request.
Consult the [review apps](../testing_guide/review_apps.md) dedicated page for more information.
### As-if-FOSS jobs and cross project downstream pipeline
To ensure the relevant changes are working properly in the FOSS project,
under some conditions we also run:
- `* as-if-foss` jobs in the same pipeline
- Cross project downstream FOSS pipeline
The `* as-if-foss` jobs run the GitLab test suite "as if FOSS", meaning as if
the jobs would run in the context of `gitlab-org/gitlab-foss`. On the other
hand, cross project downstream FOSS pipeline actually runs inside the FOSS
project, which should be even closer to an actual FOSS environment.
We run them in the following cases:
- when the `pipeline:run-as-if-foss` label is set on the merge request
- when the merge request is created in the `gitlab-org/security/gitlab` project
- when a CI configuration file is changed (for example, `.gitlab-ci.yml` or `.gitlab/ci/**/*`)
The `* as-if-foss` jobs are run in addition to the regular EE-context jobs.
They have the `FOSS_ONLY='1'` variable set and get the `ee/` folder removed
before the tests start running.
Cross project downstream FOSS pipeline simulates merging the merge request
into the default branch in the FOSS project instead, which removes a list of
files. The list can be found in
[`.gitlab/ci/as-if-foss.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/215d1e27d74cbebaa787d35bf7dcabc5c34ebf86/.gitlab/ci/as-if-foss.gitlab-ci.yml#L22-30)
and in
[`merge-train/bin/merge-train`](https://gitlab.com/gitlab-org/merge-train/-/blob/041d942ae1b5615703b7a786982340b61620e7c5/bin/merge-train#L228-239).
The intent is to ensure that a change doesn't introduce a failure after
`gitlab-org/gitlab` is synced to `gitlab-org/gitlab-foss`.
#### Tokens set in the project variables
- `AS_IF_FOSS_TOKEN`: This is a [GitLab FOSS](https://gitlab.com/gitlab-org/gitlab-foss)
project token with `developer` role and `write_repository` permission,
to push generated `as-if-foss/*` branch.
- Note that the variable of the same name in the security project should use a different
token, from the security FOSS project, so that we never push security changes to
a public project.
### As-if-JH cross project downstream pipeline
#### What it is
This pipeline is also called [JiHu validation pipeline](https://handbook.gitlab.com/handbook/ceo/chief-of-staff-team/jihu-support/jihu-validation-pipelines/),
and it's currently allowed to fail. When that happens, follow
[What to do when the validation pipeline fails](https://handbook.gitlab.com/handbook/ceo/chief-of-staff-team/jihu-support/jihu-validation-pipelines/#what-to-do-when-the-validation-pipeline-failed).
#### How we run it
The `start-as-if-jh` job triggers a cross project downstream pipeline which
runs the GitLab test suite "as if JiHu", meaning as if the pipeline would run
in the context of [GitLab JH](../jh_features_review.md). These jobs are only
created in the following cases:
- when changes are made to feature flags
- when the `pipeline:run-as-if-jh` label is set on the merge request
This pipeline runs under the context of a generated branch in the
[GitLab JH validation](https://gitlab.com/gitlab-org-sandbox/gitlab-jh-validation)
project, which is a mirror of the
[GitLab JH mirror](https://gitlab.com/gitlab-org/gitlab-jh-mirrors/gitlab).
The generated branch name is prefixed with `as-if-jh/` along with the branch
name in the merge request. This generated branch is based on the merge request
branch, additionally adding changes downloaded from the
[corresponding JH branch](#corresponding-jh-branch) on top to turn the whole
pipeline as if JiHu.
The intent is to ensure that a change doesn't introduce a failure after
[GitLab](https://gitlab.com/gitlab-org/gitlab) is synchronized to
[GitLab JH](https://jihulab.com/gitlab-cn/gitlab).
#### When to consider applying `pipeline:run-as-if-jh` label
If a Ruby file is renamed and there's a corresponding [`prepend_mod` line](../jh_features_review.md#jh-features-based-on-ce-or-ee-features),
it's likely that GitLab JH is relying on it and requires a corresponding
change to rename the module or class it's prepending.
#### Corresponding JH branch
You can create a corresponding JH branch on [GitLab JH](https://jihulab.com/gitlab-cn/gitlab) by
appending `-jh` to the branch name. If a corresponding JH branch is found,
as-if-jh pipeline grabs files from the respective branch, rather than from the
default branch `main-jh`.
{{< alert type="note" >}}
For now, CI will try to fetch the branch on the [GitLab JH mirror](https://gitlab.com/gitlab-org/gitlab-jh-mirrors/gitlab), so it might take some time for the new JH branch to propagate to the mirror.
{{< /alert >}}
{{< alert type="note" >}}
While [GitLab JH validation](https://gitlab.com/gitlab-org-sandbox/gitlab-jh-validation) is a mirror of
[GitLab JH mirror](https://gitlab.com/gitlab-org/gitlab-jh-mirrors/gitlab),
it does not include any corresponding JH branch beside the default `main-jh`.
This is why when we want to fetch corresponding JH branch we should fetch it
from the main mirror, rather than the validation project.
{{< /alert >}}
#### How as-if-JH pipeline was configured
The whole process looks like this:
{{< alert type="note" >}}
We only run `sync-as-if-jh-branch` when there are dependencies changes.
{{< /alert >}}
```mermaid
flowchart TD
subgraph "JiHuLab.com"
JH["gitlab-cn/gitlab"]
end
subgraph "GitLab.com"
Mirror["gitlab-org/gitlab-jh-mirrors/gitlab"]
subgraph MR["gitlab-org/gitlab merge request"]
Add["add-jh-files job"]
Prepare["prepare-as-if-jh-branch job"]
Add --"download artifacts"--> Prepare
end
subgraph "gitlab-org-sandbox/gitlab-jh-validation"
Sync["(*optional) sync-as-if-jh-branch job on branch as-if-jh-code-sync"]
Start["start-as-if-jh job on as-if-jh/* branch"]
AsIfJH["as-if-jh pipeline"]
end
Mirror --"pull mirror with master and main-jh"--> gitlab-org-sandbox/gitlab-jh-validation
Mirror --"download JiHu files with ADD_JH_FILES_TOKEN"--> Add
Prepare --"push as-if-jh branches with AS_IF_JH_TOKEN"--> Sync
Sync --"push as-if-jh branches with AS_IF_JH_TOKEN"--> Start
Start --> AsIfJH
end
JH --"pull mirror with corresponding JH branches"--> Mirror
```
##### Tokens set in the project variables
- `ADD_JH_FILES_TOKEN`: This is a [GitLab JH mirror](https://gitlab.com/gitlab-org/gitlab-jh-mirrors/gitlab)
project token with `read_api` permission, to be able to download JiHu files.
- `AS_IF_JH_TOKEN`: This is a [GitLab JH validation](https://gitlab.com/gitlab-org-sandbox/gitlab-jh-validation)
project token with `developer` role and `write_repository` permission,
to push generated `as-if-jh/*` branch.
##### How we generate the as-if-JH branch
First, the `add-jh-files` job downloads the required JiHu files from the
corresponding JH branch, saving them in artifacts. Next, the `prepare-as-if-jh-branch`
job creates a new branch from the merge request branch, commits the
changes, and finally pushes the branch to the
[validation project](https://gitlab.com/gitlab-org-sandbox/gitlab-jh-validation).
Optionally, if the merge request has changes to the dependencies, we run an
additional `sync-as-if-jh-branch` job to trigger a downstream
pipeline on the [`as-if-jh-code-sync` branch](https://gitlab.com/gitlab-org-sandbox/gitlab-jh-validation/-/blob/as-if-jh-code-sync/jh/.gitlab-ci.yml)
in the validation project. This job performs the same process as
[JiHu code-sync](https://jihulab.com/gitlab-cn/code-sync/-/blob/main-jh/.gitlab-ci.yml), making sure the dependency changes can be brought to the
as-if-jh branch prior to running the validation pipeline.
If there are no dependency changes, we don't run this process.
##### How we trigger and run the as-if-JH pipeline
After having the `as-if-jh/*` branch prepared and optionally synchronized,
`start-as-if-jh` job will trigger a pipeline in the
[validation project](https://gitlab.com/gitlab-org-sandbox/gitlab-jh-validation)
to run the cross-project downstream pipeline.
##### How the GitLab JH mirror project is set up
The [GitLab JH mirror](https://gitlab.com/gitlab-org/gitlab-jh-mirrors/gitlab) project is private and CI is disabled.
It's a pull mirror pulling from [GitLab JH](https://jihulab.com/gitlab-cn/gitlab),
mirroring all branches, overriding divergent refs, triggering no pipelines
when mirror is updated.
The pulling user is [`@gitlab-jh-validation-bot`](https://gitlab.com/gitlab-jh-validation-bot), who
is a maintainer in the project. The credentials can be found in the 1password
engineering vault.
No password is used for mirroring because GitLab JH is a public project.
##### How the GitLab JH validation project is set up
This [GitLab JH validation](https://gitlab.com/gitlab-org-sandbox/gitlab-jh-validation) project is public and CI is enabled, with temporary
project variables set.
It's a pull mirror pulling from [GitLab JH mirror](https://gitlab.com/gitlab-org/gitlab-jh-mirrors/gitlab),
mirroring specific branches: `(master|main-jh)`, overriding
divergent refs, triggering no pipelines when mirror is updated.
The pulling user is [`@gitlab-jh-validation-bot`](https://gitlab.com/gitlab-jh-validation-bot), who is a maintainer in the project, and also a
maintainer in the
[GitLab JH mirror](https://gitlab.com/gitlab-org/gitlab-jh-mirrors/gitlab).
The credentials can be found in the 1password engineering vault.
A personal access token from `@gitlab-jh-validation-bot` with
`write_repository` permission is used as the password to pull changes from
the GitLab JH mirror. Username is set with `gitlab-jh-validation-bot`.
There is also a [pipeline schedule](https://gitlab.com/gitlab-org-sandbox/gitlab-jh-validation/-/pipeline_schedules)
to run maintenance pipelines with variable `SCHEDULE_TYPE` set to `maintenance`
running every day, updating cache.
The default CI/CD configuration file is also set at `jh/.gitlab-ci.yml` so it
runs exactly like [GitLab JH](https://jihulab.com/gitlab-cn/gitlab/-/blob/main-jh/jh/.gitlab-ci.yml).
Additionally, a special branch
[`as-if-jh-code-sync`](https://gitlab.com/gitlab-org-sandbox/gitlab-jh-validation/-/blob/as-if-jh-code-sync/jh/.gitlab-ci.yml)
is set and protected. Maintainers can push and developers can merge for this
branch. We need to set it so developers can merge because we need to let
developers trigger pipelines for this branch. This is a compromise
before we resolve [Developer-level users no longer able to run pipelines on protected branches](https://gitlab.com/gitlab-org/gitlab/-/issues/230939).
It's used to run `sync-as-if-jh-branch` to synchronize the dependencies
when the merge requests changed the dependencies. See
[How we generate the as-if-JH branch](#how-we-generate-the-as-if-jh-branch)
for its implementation.
###### Temporary GitLab JH validation project variables
- `BUNDLER_CHECKSUM_VERIFICATION_OPT_IN` is set to `false`
- We can remove this variable after JiHu has
[`jh/Gemfile.checksum`](https://jihulab.com/gitlab-cn/gitlab/-/blob/main-jh/jh/Gemfile.checksum)
committed. More context can be found at:
[Setting it to `false` to skip it](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/118938#note_1374688877)
##### Why do we have both the mirror project and validation project?
We have separate projects for several reasons.
- **Security**: Previously, we had the mirror project only. However, to fully
mitigate a [security issue](https://gitlab.com/gitlab-org/gitlab/-/issues/369898),
we had to make the mirror project private.
- **Isolation**: We want to run JH code in a completely isolated and standalone project.
We should not run it under the `gitlab-org` group, which is where the mirror
project is. The validation project is completely isolated.
- **Cost**: We don't want to connect to JiHuLab.com from each merge request.
It is more cost effective to mirror the code from JiHuLab.com to
somewhere at GitLab.com, and have our merge requests fetch code from there.
This means that the validation project can fetch code from the mirror, rather
than from JiHuLab.com. The mirror project will periodically fetch from
JiHuLab.com.
- **Branch separation/security/efficiency**: We want to mirror all branches,
so that we can fetch the corresponding JH branch from JiHuLab.com. However,
we don't want to overwrite the `as-if-jh-code-sync` branch in the validation project,
because we use it to control the validation pipeline and it has access to
`AS_IF_JH_TOKEN`. However, we cannot mirror all branches except a single
one. See [this issue](https://gitlab.com/gitlab-org/gitlab/-/issues/413032) for details.
Given this issue, the validation project is set to mirror only `master` and
`main-jh`. Technically, we don't even need those branches, but keeping the
repository up-to-date with the default branches means that when we push an
`as-if-jh/*` branch from a merge request, we only need to push the merge
request's changes, which is more efficient.
- Separation of concerns:
- Validation project only has the following branches:
- `master` and `main-jh` to keep changes up-to-date.
- `as-if-jh-code-sync` for dependency synchronization.
We should never mirror this.
- `as-if-jh/*` branches from the merge requests.
We should never mirror these.
- All branches in the mirror project come from JiHuLab.com.
We never push anything to the mirror project, nor does it run any
pipelines. CI/CD is disabled in the mirror project.
We can consider merging the two projects to simplify the
setup and process, but we need to make sure that all of these reasons
are no longer concerns.
### `rspec:undercoverage` job
The `rspec:undercoverage` job runs [`undercover`](https://rubygems.org/gems/undercover)
to detect whether any changes introduced in the merge request have zero test coverage, and fails if they do.
The `rspec:undercoverage` job obtains coverage data from the `rspec:coverage`
job.
If the `rspec:undercoverage` job detects missing coverage due to a CE method being overridden in EE, add the `pipeline:run-as-if-foss` label to the merge request and start a new pipeline.
In the event of an emergency, or false positive from this job, add the
`pipeline:skip-undercoverage` label to the merge request to allow this job to
fail.
#### Troubleshooting `rspec:undercoverage` failures
The `rspec:undercoverage` job has [known bugs](https://gitlab.com/groups/gitlab-org/-/epics/8254)
that can cause false positive failures. Such false positive failures may also happen if you are updating a database migration that is too old.
You can test coverage locally to determine if it's safe to apply `pipeline:skip-undercoverage`. For example, using `<spec>` as the name of the
test causing the failure:
1. Run `RUN_ALL_MIGRATION_TESTS=1 SIMPLECOV=1 bundle exec rspec <spec>`.
1. Run `scripts/undercoverage`.
If these commands return `undercover: ✅ No coverage is missing in latest changes` then you can apply `pipeline:skip-undercoverage` to bypass pipeline failures.
### `pajamas_adoption` job
{{< history >}}
- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/141368) in GitLab 16.8.
{{< /history >}}
The `pajamas_adoption` job runs the [Pajamas Adoption Scanner](https://gitlab-org.gitlab.io/frontend/pajamas-adoption-scanner/) in merge requests to prevent regressions in the adoption of the [Pajamas Design System](https://design.gitlab.com/).
The job fails if the scanner detects regressions caused by a merge request. If the regressions cannot be fixed in the merge request, add the `pipeline:skip-pajamas-adoption` label to the merge request, then retry the job.
## Test suite parallelization
Our current RSpec test parallelization setup is as follows:
1. The `retrieve-tests-metadata` job in the `prepare` stage ensures we have a
`knapsack/report-master.json` file:
- The `knapsack/report-master.json` file is fetched from the latest `main` pipeline which runs `update-tests-metadata`
(for now, that's the 2-hourly `maintenance` scheduled `master` pipeline). If the file isn't available, we initialize it with `{}`.
1. Each `[rspec|rspec-ee] [migration|unit|integration|system|geo] n m` job is run with
`knapsack rspec` and should have an evenly distributed share of tests:
- It works because the jobs have access to the `knapsack/report-master.json`
since the "artifacts from all previous stages are passed by default".
- the jobs set their own report path to
`"knapsack/${TEST_TOOL}_${TEST_LEVEL}_${DATABASE}_${CI_NODE_INDEX}_${CI_NODE_TOTAL}_report.json"`.
- if knapsack is doing its job, test files that are run should be listed under
`Report specs`, not under `Leftover specs`.
1. The `update-tests-metadata` job (which only runs on scheduled pipelines for
[the canonical project](https://gitlab.com/gitlab-org/gitlab)) updates the `knapsack/report-master.json` in two ways:
1. By default, it takes all the `knapsack/rspec*.json` files and merges them into a single
`knapsack/report-master.json` file that is saved as an artifact.
1. (Experimental) When the `AVERAGE_KNAPSACK_REPORT` environment variable is set to `true`, instead of merging the reports, the job calculates the average test duration between `knapsack/report-master.json` and `knapsack/rspec*.json` to reduce the performance impact of potentially random factors such as spec ordering, runner hardware differences, and flaky tests.
This experimental approach aims to better predict the duration of each spec file so that the load is distributed more evenly among parallel jobs and the jobs finish around the same time.
After that, the next pipeline uses the up-to-date `knapsack/report-master.json` file.
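To make the setup above more concrete, here is a minimal sketch of what one of these parallel RSpec jobs could look like. The job name, stage, parallelism, and artifact settings are hypothetical, and the `KNAPSACK_REPORT_PATH` variable name is the standard Knapsack one, assumed here; only the report path pattern and the `knapsack rspec` invocation come from the description above.

```yaml
# Illustrative sketch only — not an actual job definition from .gitlab/ci/.
rspec-unit-sketch:
  stage: test
  parallel: 28  # arbitrary node count, chosen for the sketch
  variables:
    # Each parallel node writes its timing report to its own file,
    # following the naming pattern described above.
    KNAPSACK_REPORT_PATH: "knapsack/${TEST_TOOL}_${TEST_LEVEL}_${DATABASE}_${CI_NODE_INDEX}_${CI_NODE_TOTAL}_report.json"
  script:
    # Knapsack reads knapsack/report-master.json (ensured by `retrieve-tests-metadata`)
    # and runs this node's evenly distributed share of spec files.
    - knapsack rspec
  artifacts:
    paths:
      - knapsack/
```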
## Flaky tests
### Automatic skipping of flaky tests
We used to skip tests that are [known to be flaky](../testing_guide/unhealthy_tests.md#automatic-retries-and-flaky-tests-detection),
but we stopped doing so because it could actually lead to a broken `master`.
Instead, we introduced
[a fast-quarantining process](../testing_guide/unhealthy_tests.md#fast-quarantine)
to proactively quarantine any flaky test reported in `#master-broken` incidents.
This fast-quarantining process can be disabled by setting the `$FAST_QUARANTINE`
variable to `false`.
### Automatic retry of failing tests in a separate process
Unless the `$RETRY_FAILED_TESTS_IN_NEW_PROCESS` variable is set to `false` (it defaults to `true`), RSpec tests that failed are automatically retried once in a separate
RSpec process. The goal is to get rid of most side-effects from previous tests that may lead to a subsequent test failure.
We keep track of retried tests in the `$RETRIED_TESTS_REPORT_FILE` file, saved as an artifact by the `rspec:flaky-tests-report` job.
See the [experiment issue](https://gitlab.com/gitlab-org/quality/quality-engineering/team-tasks/-/issues/1148).
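As a rough sketch of the idea (not the actual implementation, which lives in the project's RSpec CI scripts, and assuming RSpec example status persistence is configured so that `--only-failures` works):

```yaml
# Hypothetical sketch of the retry-in-a-new-process idea.
.rspec-retry-sketch:
  variables:
    RETRY_FAILED_TESTS_IN_NEW_PROCESS: "true"
  script:
    # First run; remember whether it failed instead of aborting the job immediately.
    - bundle exec rspec spec/ || first_run_failed=1
    # If the first run failed, retry only the failed examples once in a fresh
    # RSpec process, or fail the job right away when the retry is disabled.
    - |
      if [ "${first_run_failed:-0}" = "1" ]; then
        if [ "$RETRY_FAILED_TESTS_IN_NEW_PROCESS" = "false" ]; then
          exit 1
        fi
        bundle exec rspec --only-failures
      fi
```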
## Compatibility testing
By default, we run all tests with the versions that run on GitLab.com.
Other versions (usually one back-compatible version, and one forward-compatible version) should be running in nightly scheduled pipelines.
Exceptions to this general guideline should be motivated and documented.
### Ruby versions testing
We're running Ruby 3.2 on GitLab.com, as well as for the default branch.
To prepare for the next Ruby version, we run merge requests in Ruby 3.3.
See the roadmap at
[Ruby 3.3 epic](https://gitlab.com/groups/gitlab-org/-/epics/12350)
for more details.
To make sure all supported Ruby versions are working, we also run our test
suite on dedicated 2-hourly scheduled pipelines for each supported version.
For merge requests, you can add the following labels to run the respective
Ruby version only:
- `pipeline:run-in-ruby3_3`
### PostgreSQL versions testing
Our test suite runs against PostgreSQL 16 as GitLab.com runs on PostgreSQL 16 and
[Omnibus defaults to PG14 for new installs and upgrades](../../administration/package_information/postgresql_versions.md).
We run our test suite against PostgreSQL 14, 15, 16, and 17 on nightly scheduled pipelines.
{{< alert type="note" >}}
With the addition of PG17, we are close to the limit of nightly jobs, with 1946 out of 2000 jobs per pipeline. Adding new job families could cause the nightly pipeline to fail.
{{< /alert >}}
#### Current versions testing
| Where? | PostgreSQL version | Ruby version |
|-------------------------------------------------------------------------------------------------|-------------------------------------|-----------------------|
| Merge requests | 16 (default version) | 3.2 (default version) |
| `master` branch commits | 16 (default version) | 3.2 (default version) |
| `maintenance` scheduled pipelines for the `master` branch (every even-numbered hour at XX:05) | 16 (default version) | 3.2 (default version) |
| `maintenance` scheduled pipelines for the `ruby-next` branch (every odd-numbered hour at XX:10) | 16 (default version) | 3.3 |
| `nightly` scheduled pipelines for the `master` branch | 16 (default version), 14, 15 and 17 | 3.2 (default version) |
| `weekly` scheduled pipelines for the `master` branch | 16 (default version) | 3.2 (default version) |
For the next Ruby version we're testing against, we run
maintenance scheduled pipelines every 2 hours on the `ruby-next` branch.
`ruby-next` must not have any changes. The branch is only there to run
pipelines with another Ruby version in the scheduled maintenance pipelines.
Additionally, we have scheduled pipelines running on the `ruby-sync` branch, also
every 2 hours, updating all next branches to be up-to-date with
the default branch `master`. No pipelines will be triggered by this push.
The `gitlab` job in the `ruby-sync` branch uses a `gitlab-org/gitlab` project
token named `RUBY_SYNC` with `write_repository` scope and `Maintainer` role,
expiring on 2025-12-02. The token is stored in the `RUBY_SYNC_TOKEN` variable
in the pipeline schedule for the `ruby-sync` branch.
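The idea behind that schedule can be sketched roughly as follows. The job name, remote name, and rule are assumptions; only the branch names, the `RUBY_SYNC_TOKEN` variable, and the requirement that the push triggers no pipeline come from the description above.

```yaml
# Hypothetical sketch — not the actual `gitlab` job from the ruby-sync schedule.
ruby-sync-sketch:
  rules:
    - if: '$CI_PIPELINE_SOURCE == "schedule" && $CI_COMMIT_BRANCH == "ruby-sync"'
  script:
    # Assumes a push remote authenticated with $RUBY_SYNC_TOKEN is configured.
    - git fetch origin master
    # Update ruby-next to match master; the ci.skip push option ensures that
    # this push triggers no pipeline.
    - git push -o ci.skip sync-remote "origin/master:ruby-next"
```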
### Redis versions testing
Our test suite runs against Redis 6 as GitLab.com runs on Redis 6 and
[Omnibus defaults to Redis 6 for new installs and upgrades](https://gitlab.com/gitlab-org/omnibus-gitlab/-/blob/master/config/software/redis.rb).
We do run our test suite against Redis 7 on `nightly` scheduled pipelines, specifically when running forward-compatible PostgreSQL 15 jobs.
#### Current versions testing
| Where? | Redis version |
| ------ | ------------------ |
| MRs | 6 |
| `default branch` (non-scheduled pipelines) | 6 |
| `nightly` scheduled pipelines | 7 |
### Single database testing
By default, all tests run with [multiple databases](../database/multiple_databases.md).
We also run tests with a single database in nightly scheduled pipelines, and in merge requests that touch database-related files.
Single database tests run in two modes:
1. **Single database with one connection**. Where GitLab connects to all the tables using one connection pool.
This runs through all the jobs that end with `-single-db`.
1. **Single database with two connections**. Where GitLab connects to `gitlab_main`, `gitlab_ci` database tables
using different database connections. This runs through all the jobs that end with `-single-db-ci-connection`.
If you want to force tests to run with a single database, you can add the `pipeline:run-single-db` label to the merge request.
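As an illustration of how such a label can gate these jobs (a hypothetical rule, not the actual definition from `.gitlab/ci/rules.gitlab-ci.yml`):

```yaml
# Hypothetical rule sketch — the real definitions live in .gitlab/ci/rules.gitlab-ci.yml.
.single-db-rules-sketch:
  rules:
    # Run when the merge request carries the pipeline:run-single-db label.
    - if: '$CI_MERGE_REQUEST_LABELS =~ /pipeline:run-single-db/'
    # Also run on scheduled (for example, nightly) pipelines.
    - if: '$CI_PIPELINE_SOURCE == "schedule"'
```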
### Elasticsearch and OpenSearch versions testing
Our test suite runs against Elasticsearch 8 as GitLab.com runs on Elasticsearch 8 when certain conditions are met.
We run our test suite against Elasticsearch 7, 8 and OpenSearch 1, 2 on nightly scheduled pipelines. All
test suites use PostgreSQL 16 because there is no dependency between the database and search backend.
| Where? | Elasticsearch version | OpenSearch Version | PostgreSQL version |
|-------------------------------------------------------------------------------------------------|-----------------------|----------------------|----------------------|
| Merge requests with label `~group::global search` or `~pipeline:run-search-tests` | 8.X (production) | | 16 (default version) |
| `nightly` scheduled pipelines for the `master` branch | 7.X, 8.X (production) | 1.X, 2.X | 16 (default version) |
| `weekly` scheduled pipelines for the `master` branch | 9.X | latest | 16 (default version) |
## Monitoring
The GitLab test suite is [monitored](../performance.md#rspec-profiling) for the `main` branch, and any branch
that includes `rspec-profile` in its name.
## Logging
- Rails logging to `log/test.log` is disabled by default in CI
[for performance reasons](https://jtway.co/speed-up-your-rails-test-suite-by-6-in-1-line-13fedb869ec4).
To override this setting, provide the
`RAILS_ENABLE_TEST_LOG` environment variable.
## CI configuration internals
See the dedicated [CI configuration internals page](internals.md).
## Performance
See the dedicated [CI configuration performance page](performance.md).
---
[Return to Development documentation](../_index.md)
The pulling user is [`@gitlab-jh-validation-bot`](https://gitlab.com/gitlab-jh-validation-bot), who
is a maintainer in the project. The credentials can be found in the 1password
engineering vault.
No password is used from mirroring because GitLab JH is a public project.
##### How the GitLab JH validation project is set up
This [GitLab JH validation](https://gitlab.com/gitlab-org-sandbox/gitlab-jh-validation) project is public and CI is enabled, with temporary
project variables set.
It's a pull mirror pulling from [GitLab JH mirror](https://gitlab.com/gitlab-org/gitlab-jh-mirrors/gitlab),
mirroring specific branches: `(master|main-jh)`, overriding
divergent refs, triggering no pipelines when mirror is updated.
The pulling user is [`@gitlab-jh-validation-bot`](https://gitlab.com/gitlab-jh-validation-bot), who is a maintainer in the project, and also a
maintainer in the
[GitLab JH mirror](https://gitlab.com/gitlab-org/gitlab-jh-mirrors/gitlab).
The credentials can be found in the 1password engineering vault.
A personal access token from `@gitlab-jh-validation-bot` with
`write_repository` permission is used as the password to pull changes from
the GitLab JH mirror. Username is set with `gitlab-jh-validation-bot`.
There is also a [pipeline schedule](https://gitlab.com/gitlab-org-sandbox/gitlab-jh-validation/-/pipeline_schedules)
to run maintenance pipelines with variable `SCHEDULE_TYPE` set to `maintenance`
running every day, updating cache.
The default CI/CD configuration file is also set at `jh/.gitlab-ci.yml` so it
runs exactly like [GitLab JH](https://jihulab.com/gitlab-cn/gitlab/-/blob/main-jh/jh/.gitlab-ci.yml).
Additionally, a special branch
[`as-if-jh-code-sync`](https://gitlab.com/gitlab-org-sandbox/gitlab-jh-validation/-/blob/as-if-jh-code-sync/jh/.gitlab-ci.yml)
is set and protected. Maintainers can push and developers can merge for this
branch. We need to set it so developers can merge because we need to let
developers to trigger pipelines for this branch. This is a compromise
before we resolve [Developer-level users no longer able to run pipelines on protected branches](https://gitlab.com/gitlab-org/gitlab/-/issues/230939).
It's used to run `sync-as-if-jh-branch` to synchronize the dependencies
when the merge requests changed the dependencies. See
[How we generate the as-if-JH branch](#how-we-generate-the-as-if-jh-branch)
for its implementation.
###### Temporary GitLab JH validation project variables
- `BUNDLER_CHECKSUM_VERIFICATION_OPT_IN` is set to `false`
- We can remove this variable after JiHu has
[`jh/Gemfile.checksum`](https://jihulab.com/gitlab-cn/gitlab/-/blob/main-jh/jh/Gemfile.checksum)
committed. More context can be found at:
[Setting it to `false` to skip it](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/118938#note_1374688877)
##### Why do we have both the mirror project and validation project?
We have separate projects for a several reasons.
- **Security**: Previously, we had the mirror project only. However, to fully
mitigate a [security issue](https://gitlab.com/gitlab-org/gitlab/-/issues/369898),
we had to make the mirror project private.
- **Isolation**: We want to run JH code in a completely isolated and standalone project.
We should not run it under the `gitlab-org` group, which is where the mirror
project is. The validation project is completely isolated.
- **Cost**: We don't want to connect to JiHuLab.com from each merge request.
It is more cost effective to mirror the code from JiHuLab.com to
somewhere at GitLab.com, and have our merge requests fetch code from there.
This means that the validation project can fetch code from the mirror, rather
than from JiHuLab.com. The mirror project will periodically fetch from
JiHuLab.com.
- **Branch separation/security/efficiency**: We want to mirror all branches,
so that we can fetch the corresponding JH branch from JiHuLab.com. However,
we don't want to overwrite the `as-if-jh-code-sync` branch in the validation project,
because we use it to control the validation pipeline and it has access to
`AS_IF_JH_TOKEN`. However, we cannot mirror all branches except a single
one. See [this issue](https://gitlab.com/gitlab-org/gitlab/-/issues/413032) for details.
Given this issue, the validation project is set to only mirror `master` and
`main-jh`. Technically, we don't even need those branches, but we do want to
keep the repository up-to-date with all the default branches so that when
we push changes from the merge request, we only need to push changes from
the merge request, which can be more efficient.
- Separation of concerns:
- Validation project only has the following branches:
- `master` and `main-jh` to keep changes up-to-date.
- `as-if-jh-code-sync` for dependency synchronization.
We should never mirror this.
- `as-if-jh/*` branches from the merge requests.
We should never mirror these.
- All branches from the mirror project are all coming from JiHuLab.com.
We never push anything to the mirror project, nor does it run any
pipelines. CI/CD is disabled in the mirror project.
We can consider merging the two projects to simplify the
setup and process, but we need to make sure that all of these reasons
are no longer concerns.
### `rspec:undercoverage` job
The `rspec:undercoverage` job runs [`undercover`](https://rubygems.org/gems/undercover)
to detect, and fail if any changes introduced in the merge request has zero coverage.
The `rspec:undercoverage` job obtains coverage data from the `rspec:coverage`
job.
If the `rspec:undercoverage` job detects missing coverage due to a CE method being overridden in EE, add the `pipeline:run-as-if-foss` label to the merge request and start a new pipeline.
In the event of an emergency, or false positive from this job, add the
`pipeline:skip-undercoverage` label to the merge request to allow this job to
fail.
#### Troubleshooting `rspec:undercoverage` failures
The `rspec:undercoverage` job has [known bugs](https://gitlab.com/groups/gitlab-org/-/epics/8254)
that can cause false positive failures. Such false positive failures may also happen if you are updating database migration that is too old.
You can test coverage locally to determine if it's safe to apply `pipeline:skip-undercoverage`. For example, using `<spec>` as the name of the
test causing the failure:
1. Run `RUN_ALL_MIGRATION_TESTS=1 SIMPLECOV=1 bundle exec rspec <spec>`.
1. Run `scripts/undercoverage`.
If these commands return `undercover: ✅ No coverage is missing in latest changes` then you can apply `pipeline:skip-undercoverage` to bypass pipeline failures.
### `pajamas_adoption` job
{{< history >}}
- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/141368) in GitLab 16.8.
{{< /history >}}
The `pajamas_adoption` job runs the [Pajamas Adoption Scanner](https://gitlab-org.gitlab.io/frontend/pajamas-adoption-scanner/) in merge requests to prevent regressions in the adoption of the [Pajamas Design System](https://design.gitlab.com/).
The job fails if the scanner detects regressions caused by a merge request. If the regressions cannot be fixed in the merge request, add the `pipeline:skip-pajamas-adoption` label to the merge request, then retry the job.
## Test suite parallelization
Our current RSpec tests parallelization setup is as follows:
1. The `retrieve-tests-metadata` job in the `prepare` stage ensures we have a
`knapsack/report-master.json` file:
- The `knapsack/report-master.json` file is fetched from the latest `main` pipeline which runs `update-tests-metadata`
(for now it's the 2-hourly `maintenance` scheduled master pipeline), if it's not here we initialize the file with `{}`.
1. Each `[rspec|rspec-ee] [migration|unit|integration|system|geo] n m` job are run with
`knapsack rspec` and should have an evenly distributed share of tests:
- It works because the jobs have access to the `knapsack/report-master.json`
since the "artifacts from all previous stages are passed by default".
- the jobs set their own report path to
`"knapsack/${TEST_TOOL}_${TEST_LEVEL}_${DATABASE}_${CI_NODE_INDEX}_${CI_NODE_TOTAL}_report.json"`.
- if knapsack is doing its job, test files that are run should be listed under
`Report specs`, not under `Leftover specs`.
1. The `update-tests-metadata` job (which only runs on scheduled pipelines for
[the canonical project](https://gitlab.com/gitlab-org/gitlab) and updates the `knapsack/report-master.json` in 2 ways:
1. By default, it takes all the `knapsack/rspec*.json` files and merge them all together into a single
`knapsack/report-master.json` file that is saved as artifact.
1. (Experimental) When the `AVERAGE_KNAPSACK_REPORT` environment variable is set to `true`, instead of merging the reports, the job will calculate the average of the test duration between `knapsack/report-master.json` and `knapsack/rspec*.json` to reduce the performance impact from potentially random factors such as spec ordering, runner hardware differences, flaky tests, etc.
This experimental approach is aimed to better predict the duration for each spec files to distribute load among parallel jobs more evenly so the jobs can finish around the same time.
After that, the next pipeline uses the up-to-date `knapsack/report-master.json` file.
## Flaky tests
### Automatic skipping of flaky tests
We used to skip tests that are [known to be flaky](../testing_guide/unhealthy_tests.md#automatic-retries-and-flaky-tests-detection),
but we stopped doing so since that could actually lead to actual broken `master`.
Instead, we introduced
[a fast-quarantining process](../testing_guide/unhealthy_tests.md#fast-quarantine)
to proactively quarantine any flaky test reported in `#master-broken` incidents.
This fast-quarantining process can be disabled by setting the `$FAST_QUARANTINE`
variable to `false`.
### Automatic retry of failing tests in a separate process
Unless `$RETRY_FAILED_TESTS_IN_NEW_PROCESS` variable is set to `false` (`true` by default), RSpec tests that failed are automatically retried once in a separate
RSpec process. The goal is to get rid of most side-effects from previous tests that may lead to a subsequent test failure.
We keep track of retried tests in the `$RETRIED_TESTS_REPORT_FILE` file saved as artifact by the `rspec:flaky-tests-report` job.
See the [experiment issue](https://gitlab.com/gitlab-org/quality/quality-engineering/team-tasks/-/issues/1148).
## Compatibility testing
By default, we run all tests with the versions that runs on GitLab.com.
Other versions (usually one back-compatible version, and one forward-compatible version) should be running in nightly scheduled pipelines.
Exceptions to this general guideline should be motivated and documented.
### Ruby versions testing
We're running Ruby 3.2 on GitLab.com, as well as for the default branch.
To prepare for the next Ruby version, we run merge requests in Ruby 3.3.
See the roadmap at
[Ruby 3.3 epic](https://gitlab.com/groups/gitlab-org/-/epics/12350)
for more details.
To make sure all supported Ruby versions are working, we also run our test
suite on dedicated 2-hourly scheduled pipelines for each supported versions.
For merge requests, you can add the following labels to run the respective
Ruby version only:
- `pipeline:run-in-ruby3_3`
### PostgreSQL versions testing
Our test suite runs against PostgreSQL 16 as GitLab.com runs on PostgreSQL 16 and
[Omnibus defaults to PG14 for new installs and upgrades](../../administration/package_information/postgresql_versions.md).
We run our test suite against PostgreSQL 14, 15, 16, and 17 on nightly scheduled pipelines.
NOTE: With the addition of PG17, we are close to the limit of nightly jobs, with 1946 out of 2000 jobs per pipeline. Adding new job families could cause the nightly pipeline to fail.
#### Current versions testing
| Where? | PostgreSQL version | Ruby version |
|-------------------------------------------------------------------------------------------------|-------------------------------------|-----------------------|
| Merge requests | 16 (default version) | 3.2 (default version) |
| `master` branch commits | 16 (default version) | 3.2 (default version) |
| `maintenance` scheduled pipelines for the `master` branch (every even-numbered hour at XX:05) | 16 (default version) | 3.2 (default version) |
| `maintenance` scheduled pipelines for the `ruby-next` branch (every odd-numbered hour at XX:10) | 16 (default version) | 3.3 |
| `nightly` scheduled pipelines for the `master` branch | 16 (default version), 14, 15 and 17 | 3.2 (default version) |
| `weekly` scheduled pipelines for the `master` branch | 16 (default version) | 3.2 (default version) |
For the next Ruby versions we're testing against with, we run
maintenance scheduled pipelines every 2 hours on the `ruby-next` branch.
`ruby-next` must not have any changes. The branch is only there to run
pipelines with another Ruby version in the scheduled maintenance pipelines.
Additionally, we have scheduled pipelines running on `ruby-sync` branch also
every 2 hours, updating all next branches to be up-to-date with
the default branch `master`. No pipelines will be triggered by this push.
The `gitlab` job in the `ruby-sync` branch uses a `gitlab-org/gitlab` project
token named `RUBY_SYNC` with `write_repository` scope and `Maintainer` role,
expiring on 2025-12-02. The token is stored in the `RUBY_SYNC_TOKEN` variable
in the pipeline schedule for `ruby-sync` branch.
### Redis versions testing
Our test suite runs against Redis 6 as GitLab.com runs on Redis 6 and
[Omnibus defaults to Redis 6 for new installs and upgrades](https://gitlab.com/gitlab-org/omnibus-gitlab/-/blob/master/config/software/redis.rb).
We do run our test suite against Redis 7 on `nightly` scheduled pipelines, specifically when running forward-compatible PostgreSQL 15 jobs.
#### Current versions testing
| Where? | Redis version |
| ------ | ------------------ |
| MRs | 6 |
| `default branch` (non-scheduled pipelines) | 6 |
| `nightly` scheduled pipelines | 7 |
### Single database testing
By default, all tests run with [multiple databases](../database/multiple_databases.md).
We also run tests with a single database in nightly scheduled pipelines, and in merge requests that touch database-related files.
Single database tests run in two modes:
1. **Single database with one connection**. Where GitLab connects to all the tables using one connection pool.
This runs through all the jobs that end with `-single-db`
1. **Single database with two connections**. Where GitLab connects to `gitlab_main`, `gitlab_ci` database tables
using different database connections. This runs through all the jobs that end with `-single-db-ci-connection`.
If you want to force tests to run with a single database, you can add the `pipeline:run-single-db` label to the merge request.
### Elasticsearch and OpenSearch versions testing
Our test suite runs against Elasticsearch 8 as GitLab.com runs on Elasticsearch 8 when certain conditions are met.
We run our test suite against Elasticsearch 7, 8 and OpenSearch 1, 2 on nightly scheduled pipelines. All
test suites use PostgreSQL 16 because there is no dependency between the database and search backend.
| Where? | Elasticsearch version | OpenSearch Version | PostgreSQL version |
|-------------------------------------------------------------------------------------------------|-----------------------|----------------------|----------------------|
| Merge requests with label `~group::global search` or `~pipeline:run-search-tests` | 8.X (production) | | 16 (default version) |
| `nightly` scheduled pipelines for the `master` branch | 7.X, 8.X (production) | 1.X, 2.X | 16 (default version) |
| `weekly` scheduled pipelines for the `master` branch | 9.X | latest | 16 (default version) |
## Monitoring
The GitLab test suite is [monitored](../performance.md#rspec-profiling) for the `main` branch, and any branch
that includes `rspec-profile` in its name.
## Logging
- Rails logging to `log/test.log` is disabled by default in CI
[for performance reasons](https://jtway.co/speed-up-your-rails-test-suite-by-6-in-1-line-13fedb869ec4).
To override this setting, set the
`RAILS_ENABLE_TEST_LOG` environment variable, as in the sketch below.
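A minimal sketch of what this could look like in a job definition. The job name is illustrative, and the exact variable value is an assumption (the documentation only says the variable must be provided):

```yaml
# Illustrative job only: re-enable Rails logging for a single CI job by
# providing the variable documented above.
rspec-with-rails-log:
  variables:
    RAILS_ENABLE_TEST_LOG: "1" # assumption: any provided value enables the log
  script:
    - bundle exec rspec spec/models
```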
## CI configuration internals
See the dedicated [CI configuration internals page](internals.md).
## Performance
See the dedicated [CI configuration performance page](performance.md).
---
[Return to Development documentation](../_index.md)
# CI configuration performance
## Interruptible pipelines
By default, all jobs are [interruptible](../../ci/yaml/_index.md#interruptible), except the
`dont-interrupt-me` job which runs automatically on `main`, and is `manual`
otherwise.
If you want a running pipeline to finish even if you push new commits to a merge
request, be sure to start the `dont-interrupt-me` job before pushing.
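A reduced sketch of how this kind of setup can be expressed (not the actual configuration, which lives in `.gitlab-ci.yml`):

```yaml
# Sketch only: jobs are interruptible by default, while a "don't interrupt" job
# runs automatically on the default branch and is manual otherwise.
default:
  interruptible: true

dont-interrupt-me:
  interruptible: false
  rules:
    - if: '$CI_COMMIT_REF_NAME == $CI_DEFAULT_BRANCH'
    - when: manual
      allow_failure: true
  script:
    - echo "Keep this pipeline running even if new commits are pushed."
```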
## Git fetch caching
Because GitLab.com uses the [pack-objects cache](../../administration/gitaly/configure_gitaly.md#pack-objects-cache),
concurrent Git fetches of the same pipeline ref are deduplicated on
the Gitaly server (always) and served from cache (when available).
This works well for the following reasons:
- The pack-objects cache is enabled on all Gitaly servers on GitLab.com.
- The CI/CD [Git strategy setting](../../ci/pipelines/settings.md#choose-the-default-git-strategy) for `gitlab-org/gitlab` is **Git clone**,
causing all jobs to fetch the same data, which maximizes the cache hit ratio.
- We use [shallow clone](../../ci/pipelines/settings.md#limit-the-number-of-changes-fetched-during-clone) to avoid downloading the full Git
history for every job.
### Fetch repository via artifacts instead of cloning/fetching from Gitaly
Lately we see errors from Gitaly that look like this (see [the issue](https://gitlab.com/gitlab-org/gitlab/-/issues/435456)):
```plaintext
fatal: remote error: GitLab is currently unable to handle this request due to load.
```
While GitLab.com uses the [pack-objects cache](../../administration/gitaly/configure_gitaly.md#pack-objects-cache),
sometimes the load is still too heavy for Gitaly to handle, and
[thundering herds](https://gitlab.com/gitlab-org/gitlab/-/issues/423830) can
also be a concern when a lot of jobs clone the repository around
the same time.
To mitigate and reduce loads for Gitaly, we changed some jobs to fetch the
repository from artifacts in a job instead of all cloning from Gitaly at once.
For now this applies to most of the RSpec jobs, which make up the most concurrent
jobs in most pipelines. This also slightly improved the speed because fetching
from the artifacts is also slightly faster than cloning, at the cost of saving
more artifacts for each pipeline.
Based on the numbers on 2023-12-20 at [Fetch repo from artifacts for RSpec jobs](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/140330),
the extra storage cost was about 280 MB for each pipeline, and we save about 15 seconds
for each RSpec job.
We do not apply this to jobs that have no other job dependencies, because we don't
want to delay any jobs from starting.
This behavior can be controlled with the `CI_FETCH_REPO_GIT_STRATEGY` variable:
- Setting it to `none` means jobs using `.repo-from-artifacts` fetch the repository from
artifacts produced by the `clone-gitlab-repo` job rather than cloning.
- Setting it to `clone` means jobs using `.repo-from-artifacts` clone the repository
as usual. The `clone-gitlab-repo` job does not run in this case.
To disable it, set `CI_FETCH_REPO_GIT_STRATEGY` to `clone`. To enable it,
set `CI_FETCH_REPO_GIT_STRATEGY` to `none`.
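The following is a simplified sketch of the pattern, not the real template definitions; in particular, wiring `CI_FETCH_REPO_GIT_STRATEGY` into `GIT_STRATEGY` and the archive handling are assumptions made for illustration:

```yaml
# Sketch: one job packages the checked-out repository as an artifact, and jobs
# extending `.repo-from-artifacts` reuse it instead of fetching from Gitaly.
clone-gitlab-repo:
  stage: prepare
  script:
    # The runner has already fetched the repository for this job; archive it
    # (including .git) so dependent jobs can download it instead of cloning.
    - tar -czf /tmp/gitlab-repo.tar.gz .
    - mv /tmp/gitlab-repo.tar.gz gitlab-repo.tar.gz
  artifacts:
    paths:
      - gitlab-repo.tar.gz
    expire_in: 12 hours

.repo-from-artifacts:
  variables:
    # `none` (the default) skips the runner's own clone; `clone` restores it.
    GIT_STRATEGY: "${CI_FETCH_REPO_GIT_STRATEGY}"
  needs:
    - clone-gitlab-repo
  before_script:
    - tar -xzf gitlab-repo.tar.gz
```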
## Caching strategy
1. All jobs must only pull caches by default (a condensed sketch of this setup follows the list).
1. All jobs must be able to pass with an empty cache. In other words, caches are only there to speed up jobs.
1. We currently have several different cache definitions defined in
[`.gitlab/ci/global.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.gitlab/ci/global.gitlab-ci.yml),
with fixed keys:
- `.setup-test-env-cache`
- `.ruby-cache`
- `.static-analysis-cache`
- `.rubocop-cache`
- `.ruby-node-cache`
- `.qa-cache`
- `.yarn-cache`
- `.assets-compile-cache` (the key includes `${NODE_ENV}` so it's actually two different caches).
1. These cache definitions are composed of [multiple atomic caches](../../ci/caching/_index.md#use-multiple-caches).
1. Only the following jobs, running in 2-hourly `maintenance` scheduled pipelines, are pushing (that is, updating) to the caches:
- `update-setup-test-env-cache`, defined in [`.gitlab/ci/rails.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.gitlab/ci/rails.gitlab-ci.yml).
- `update-gitaly-binaries-cache`, defined in [`.gitlab/ci/rails.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.gitlab/ci/rails.gitlab-ci.yml).
- `update-rubocop-cache`, defined in [`.gitlab/ci/rails.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.gitlab/ci/rails.gitlab-ci.yml).
- `update-qa-cache`, defined in [`.gitlab/ci/qa.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.gitlab/ci/qa.gitlab-ci.yml).
- `update-assets-compile-production-cache`, defined in [`.gitlab/ci/frontend.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.gitlab/ci/frontend.gitlab-ci.yml).
- `update-assets-compile-test-cache`, defined in [`.gitlab/ci/frontend.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.gitlab/ci/frontend.gitlab-ci.yml).
- `update-storybook-yarn-cache`, defined in [`.gitlab/ci/frontend.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.gitlab/ci/frontend.gitlab-ci.yml).
1. These jobs can also be forced to run in merge requests with the `pipeline:update-cache` label (this can be useful to warm the caches in an MR that updates the cache keys).
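A condensed sketch of this pull-only-by-default setup, using illustrative names rather than the real cache definitions:

```yaml
# Illustrative sketch: regular jobs only pull the cache, while a dedicated job
# (run from the 2-hourly `maintenance` scheduled pipelines) pushes updates.
.example-cache:
  cache:
    key: example-cache-v1 # fixed key, as described above
    paths:
      - vendor/ruby/
    policy: pull          # regular jobs never push

rspec-example-job:
  extends: .example-cache
  script:
    - bundle install      # must also pass with an empty cache
    - bundle exec rspec

update-example-cache:
  extends: .example-cache
  cache:
    policy: pull-push     # only the scheduled update job pushes
  script:
    - bundle install
```

Because `extends` deep-merges hashes, the update job keeps the same cache key and paths and only switches the policy.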
## Artifacts strategy
We limit the artifacts that are saved and retrieved by jobs to the minimum to reduce the upload/download time and costs, as well as the artifacts storage.
## Components caching
Some external components (GitLab Workhorse and frontend assets) of GitLab need to be built from source as a preliminary step for running tests.
## `cache-workhorse`
In [this MR](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/79766), and then
[this MR](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/96297),
we introduced a new `cache-workhorse` job that:
- runs automatically for all GitLab.com `gitlab-org/gitlab` scheduled pipelines
- runs automatically for any `master` commit that touches the `workhorse/` folder
- is manual for GitLab.com's `gitlab-org` MRs that touch caching-related files
This job tries to download a generic package that contains GitLab Workhorse binaries needed in the GitLab test suite (under `tmp/tests/gitlab-workhorse`).
- If the package URL returns a 404:
1. It runs `scripts/setup-test-env`, so that the GitLab Workhorse binaries are built.
1. It then creates an archive which contains the binaries and uploads it [as a generic package](https://gitlab.com/gitlab-org/gitlab/-/packages/).
- Otherwise, if the package already exists, it exits the job successfully.
We also changed the `setup-test-env` job to:
1. First download the GitLab Workhorse generic package built and uploaded by `cache-workhorse`.
1. If the package is retrieved successfully, its content is placed in the right folder (for example, `tmp/tests/gitlab-workhorse`), preventing the building of the binaries when `scripts/setup-test-env` is run later on.
1. If the package URL returns a 404, the behavior doesn't change compared to the current one: the GitLab Workhorse binaries are built as part of `scripts/setup-test-env`.
{{< alert type="note" >}}
The version of the package is the workhorse tree SHA (for example, `git rev-parse HEAD:workhorse`).
{{< /alert >}}
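The overall flow can be sketched as follows. This is not the actual job: the package name, archive layout, and `curl` calls against the generic packages API are assumptions made for illustration, while the tree-SHA versioning and `scripts/setup-test-env` come from the description above:

```yaml
# Sketch of the "check, then build and upload" flow for the Workhorse package.
cache-workhorse-sketch:
  stage: prepare
  script:
    - WORKHORSE_TREE=$(git rev-parse HEAD:workhorse)
    - PACKAGE_URL="${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/packages/generic/gitlab-workhorse/${WORKHORSE_TREE}/gitlab-workhorse.tar.gz"
    - |
      if curl --fail --silent --location --output /dev/null --header "JOB-TOKEN: ${CI_JOB_TOKEN}" "${PACKAGE_URL}"; then
        echo "Package already exists, nothing to do."
      else
        scripts/setup-test-env                                  # builds tmp/tests/gitlab-workhorse
        tar -czf gitlab-workhorse.tar.gz -C tmp/tests gitlab-workhorse
        curl --fail --header "JOB-TOKEN: ${CI_JOB_TOKEN}" --upload-file gitlab-workhorse.tar.gz "${PACKAGE_URL}"
      fi
```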
## `cache-assets`
In [this MR](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/96297),
we introduced three new `cache-assets:test`, `cache-assets:test as-if-foss`,
and `cache-assets:production` jobs that:
- never run unless `$CACHE_ASSETS_AS_PACKAGE == "true"`
- run automatically for all GitLab.com `gitlab-org/gitlab` scheduled pipelines
- run automatically for any `master` commit that touches the assets-related folders
- are manual for GitLab.com's `gitlab-org` MRs that touch caching-related files
This job tries to download a generic package that contains GitLab compiled assets
needed in the GitLab test suite (under `app/assets/javascripts/locale/**/app.js`,
and `public/assets`).
- If the package URL returns a 404:
1. It runs `bin/rake gitlab:assets:compile`, so that the GitLab assets are compiled.
1. It then creates an archive which contains the assets and uploads it [as a generic package](https://gitlab.com/gitlab-org/gitlab/-/packages/).
The package version is set to the assets folders' hash sum.
- Otherwise, if the package already exists, it exits the job successfully.
## `compile-*-assets`
We also changed the `compile-test-assets`,
and `compile-production-assets` jobs to:
1. First download the "native" cache assets, which contain:
- The [compiled assets](https://gitlab.com/gitlab-org/gitlab/-/blob/a6910c9086bb28e553f5e747ec2dd50af6da3c6b/.gitlab/ci/global.gitlab-ci.yml#L86-87).
- A [`cached-assets-hash.txt` file](https://gitlab.com/gitlab-org/gitlab/-/blob/a6910c9086bb28e553f5e747ec2dd50af6da3c6b/.gitlab/ci/global.gitlab-ci.yml#L85)
containing the `SHA256` hexdigest of all the source files on which the assets depend.
This list of files is a pessimistic list and the assets might not depend on
some of these files. At worst we compile the assets more often, which is better than
using outdated assets.
The file is [created after assets are compiled](https://gitlab.com/gitlab-org/gitlab/-/blob/a6910c9086bb28e553f5e747ec2dd50af6da3c6b/.gitlab/ci/frontend.gitlab-ci.yml#L83).
1. We then compute the `SHA256` hexdigest of all the source files the assets depend on, **for the current checked out branch**. We [store the hexdigest in the `GITLAB_ASSETS_HASH` variable](https://gitlab.com/gitlab-org/gitlab/-/blob/a6910c9086bb28e553f5e747ec2dd50af6da3c6b/.gitlab/ci/frontend.gitlab-ci.yml#L27).
1. If `$CACHE_ASSETS_AS_PACKAGE == "true"`, we download the generic package built and uploaded by [`cache-assets:*`](#cache-assets).
- If the cache is up-to-date for the checked out branch, we download the native cache
**and** the cache package. We could optimize that by not downloading
the generic package, but the native cache is actually very often outdated because it's
rebuilt only every 2 hours.
1. We [run the `assets_compile_script` function](https://gitlab.com/gitlab-org/gitlab/-/blob/a6910c9086bb28e553f5e747ec2dd50af6da3c6b/.gitlab/ci/frontend.gitlab-ci.yml#L35),
which [itself runs](https://gitlab.com/gitlab-org/gitlab/-/blob/c023191ef412e868ae957f3341208a41ca678403/scripts/utils.sh#L76)
the [`assets:compile` Rake task](https://gitlab.com/gitlab-org/gitlab/-/blob/c023191ef412e868ae957f3341208a41ca678403/lib/tasks/gitlab/assets.rake#L80-109).
This task is responsible for deciding if assets need to be compiled or not.
It [compares the `HEAD` `SHA256` hexdigest from `$GITLAB_ASSETS_HASH` with the `master` hexdigest from `cached-assets-hash.txt`](https://gitlab.com/gitlab-org/gitlab/-/blob/c023191ef412e868ae957f3341208a41ca678403/lib/tasks/gitlab/assets.rake#L86).
1. If the hashes are the same, we don't compile anything. If they're different, we compile the assets (see the sketch after this list).
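In condensed form, the decision boils down to something like this sketch (not the actual Rake task):

```yaml
# Sketch of the decision only: compare the hash of the current sources with the
# hash recorded next to the cached assets, and compile only when they differ.
compile-test-assets-sketch:
  script:
    # GITLAB_ASSETS_HASH is assumed to have been computed earlier from the
    # asset source files of the current branch (see the steps above).
    - |
      if [ -f cached-assets-hash.txt ] && [ "$(cat cached-assets-hash.txt)" = "${GITLAB_ASSETS_HASH}" ]; then
        echo "Cached assets are up to date, skipping compilation."
      else
        bin/rake gitlab:assets:compile
      fi
```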
## Stripped binaries
By default, `setup-test-env` creates an artifact which contains stripped
binaries to [save storage and speed-up artifact downloads](https://gitlab.com/gitlab-org/gitlab/-/issues/442029#note_1775193538) of subsequent CI jobs.
To make debugging a crash from stripped binaries easier, comment out the line with
`strip_executable_binaries` in the `setup-test-env` job and start a new pipeline.
# CI configuration internals
## Workflow rules
Pipelines for the GitLab project are created using the [`workflow:rules` keyword](../../ci/yaml/_index.md#workflow)
feature of the GitLab CI/CD.
Pipelines are always created for the following scenarios:
- `main` branch, including on schedules, pushes, merges, and so on.
- Merge requests.
- Tags.
- Stable, `auto-deploy`, and security branches.
Pipeline creation is also affected by the following CI/CD variables:
- If `$FORCE_GITLAB_CI` is set, pipelines are created. Using this variable is not recommended;
see [Avoid `$FORCE_GITLAB_CI`](#avoid-force_gitlab_ci).
- If `$GITLAB_INTERNAL` is not set, pipelines are not created.
No pipeline is created in any other case (for example, when pushing a branch with no
MR for it).
The source of truth for these workflow rules is defined in [`.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.gitlab-ci.yml).
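A reduced sketch of what such `workflow:rules` can look like; the real rules in `.gitlab-ci.yml` cover more cases:

```yaml
# Sketch only: allow pipelines for the scenarios listed above and nothing else.
workflow:
  rules:
    - if: '$GITLAB_INTERNAL == null'
      when: never
    - if: '$FORCE_GITLAB_CI'   # discouraged, see "Avoid $FORCE_GITLAB_CI" below
    - if: '$CI_MERGE_REQUEST_IID'
    - if: '$CI_COMMIT_TAG'
    - if: '$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH'
    - if: '$CI_COMMIT_BRANCH =~ /^[\d-]+-stable(-ee)?$/'
    - if: '$CI_COMMIT_BRANCH =~ /^\d+-\d+-auto-deploy-\d+$/'
    - if: '$CI_COMMIT_BRANCH =~ /^security\//'
```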
### Avoid `$FORCE_GITLAB_CI`
The pipeline is very complex and we need to clearly understand the kind of
pipeline we want to trigger. We need to know which jobs we should run and
which ones we shouldn't.
If we use `$FORCE_GITLAB_CI` to force trigger a pipeline,
we don't really know what kind of pipeline it is. The result can be that we don't
run the jobs we want, or we run too many jobs we don't care about.
Some more context and background can be found at:
[Avoid blanket changes to avoid unexpected run](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/102881)
Here's a list of where we're using this right now, and where we should try to move away
from using `$FORCE_GITLAB_CI`:
- [JiHu validation pipeline](https://handbook.gitlab.com/handbook/ceo/chief-of-staff-team/jihu-support/jihu-validation-pipelines/)
See the next section for how we can enable pipelines without using
`$FORCE_GITLAB_CI`.
#### Alternative to `$FORCE_GITLAB_CI`
Essentially, we use different variables to enable different pipelines.
An example of this is `$START_AS_IF_FOSS`. When we want to trigger a
cross-project FOSS pipeline, we set `$START_AS_IF_FOSS`, along with a set of
other variables like `$ENABLE_RSPEC_UNIT`, `$ENABLE_RSPEC_SYSTEM`, and so on,
to enable each job we want to run in the as-if-foss cross-project
downstream pipeline.
The advantage of this over `$FORCE_GITLAB_CI` is that we have full control
over how we want to run the pipeline, because `$START_AS_IF_FOSS` is only used
for this purpose, and changing how the pipeline behaves under this variable
will not affect other types of pipelines. With `$FORCE_GITLAB_CI`, we
do not know exactly what kind of pipeline it is, because the variable is used for multiple
purposes.
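For illustration, a job in the downstream pipeline could be gated on such dedicated variables roughly like this (the job itself is hypothetical; the variable names come from the description above):

```yaml
# Hypothetical job in the as-if-foss downstream pipeline: it only runs when the
# dedicated variables are set by the triggering pipeline, instead of relying on
# a catch-all $FORCE_GITLAB_CI.
rspec-unit-as-if-foss:
  rules:
    - if: '$START_AS_IF_FOSS && $ENABLE_RSPEC_UNIT'
  variables:
    FOSS_ONLY: '1'
  script:
    - bundle exec rspec
```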
## Default image
The default image is defined in [`.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.gitlab-ci.yml).
<!-- vale gitlab_base.Spelling = NO -->
It includes Ruby, Go, Git, Git LFS, Chrome, Node, Yarn, PostgreSQL, and Graphics Magick.
<!-- vale gitlab_base.Spelling = YES -->
The images used in our pipelines are configured in the
[`gitlab-org/gitlab-build-images`](https://gitlab.com/gitlab-org/gitlab-build-images)
project, which is push-mirrored to [`gitlab/gitlab-build-images`](https://dev.gitlab.org/gitlab/gitlab-build-images)
for redundancy.
The current version of the build images can be found in the
["Used by GitLab section"](https://gitlab.com/gitlab-org/gitlab-build-images/blob/master/.gitlab-ci.yml).
## Default variables
In addition to the [predefined CI/CD variables](../../ci/variables/predefined_variables.md),
each pipeline includes default variables defined in
[`.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.gitlab-ci.yml).
### Variable naming
Starting in March 2025, we have begun prefixing new environment variables
that are exclusively used for the monolith CI pipelines with `GLCI_`.
This allows us to track if an environment variable is intended for CI
(`GLCI_`), the product (`GITLAB_`), or tools and systems not owned by us.
That helps us better evaluate the impact of environment variable changes
in our pipeline configuration.
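For example, with hypothetical variable names:

```yaml
variables:
  GLCI_SOME_PIPELINE_ONLY_FLAG: "true" # hypothetical: used only by the monolith CI pipelines
  GITLAB_SOME_PRODUCT_SETTING: "1"     # hypothetical: consumed by the product itself
```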
## Stages
The current stages are as follows (a condensed `stages:` sketch appears after the list):
- `sync`: This stage is used to synchronize changes from [`gitlab-org/gitlab`](https://gitlab.com/gitlab-org/gitlab) to
[`gitlab-org/gitlab-foss`](https://gitlab.com/gitlab-org/gitlab-foss).
- `prepare`: This stage includes jobs that prepare artifacts that are needed by
jobs in subsequent stages.
- `build-images`: This stage includes jobs that prepare Docker images
that are needed by jobs in subsequent stages or downstream pipelines.
- `fixtures`: This stage includes jobs that prepare fixtures needed by frontend tests.
- `lint`: This stage includes linting and static analysis jobs.
- `test`: This stage includes most of the tests, and DB/migration jobs.
- `post-test`: This stage includes jobs that build reports or gather data from
the `test` stage's jobs (for example, coverage, Knapsack metadata, and so on).
- `review`: This stage includes jobs that build the CNG images, deploy them, and
run end-to-end tests against review apps (see [review apps](../testing_guide/review_apps.md) for details).
It also includes Docs Review App jobs.
- `qa`: This stage includes jobs that perform QA tasks against the Review App
that is deployed in stage `review`.
- `post-qa`: This stage includes jobs that build reports or gather data from
the `qa` stage's jobs (for example, Review App performance report).
- `pages`: This stage includes a job that deploys the various reports as
GitLab Pages (for example, [`coverage-ruby`](https://gitlab-org.gitlab.io/gitlab/coverage-ruby/),
and `webpack-report` (found at `https://gitlab-org.gitlab.io/gitlab/webpack-report/`, but there is
[an issue with the deployment](https://gitlab.com/gitlab-org/gitlab/-/issues/233458)).
- `notify`: This stage includes jobs that notify various failures to Slack.
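A condensed view of this ordering as a `stages:` definition (the authoritative definition lives in `.gitlab-ci.yml`):

```yaml
# Stage ordering as described above.
stages:
  - sync
  - prepare
  - build-images
  - fixtures
  - lint
  - test
  - post-test
  - review
  - qa
  - post-qa
  - pages
  - notify
```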
## Dependency Proxy
Some of the jobs are using images from Docker Hub, where we also use
`${GITLAB_DEPENDENCY_PROXY_ADDRESS}` as a prefix to the image path, so that we pull
images from our [Dependency Proxy](../../user/packages/dependency_proxy/_index.md).
By default, this variable is set from the value of `${GITLAB_DEPENDENCY_PROXY}`.
- `CI_DEPENDENCY_PROXY_GROUP_IMAGE_PREFIX` is [a GitLab predefined CI/CD variable](../../ci/variables/predefined_variables.md) that gives the top-level group image prefix to pull images through the Dependency Proxy.
- `GITLAB_DEPENDENCY_PROXY` is a CI/CD variable in the [`gitlab-org`](https://gitlab.com/gitlab-org) and the [`gitlab-com`](https://gitlab.com/gitlab-com) groups. It is defined as `${CI_DEPENDENCY_PROXY_GROUP_IMAGE_PREFIX}/`.
- `GITLAB_DEPENDENCY_PROXY_ADDRESS` is defined in the `gitlab-org/gitlab` project. It defaults to `"${GITLAB_DEPENDENCY_PROXY}"`, but is overridden in some cases (see the workaround section below).
In `gitlab-org/gitlab`, we'll use `GITLAB_DEPENDENCY_PROXY_ADDRESS` [due to a workaround](#work-around-for-when-a-pipeline-is-started-by-a-project-access-token-user). Everywhere else in the `gitlab-org` and `gitlab-com` groups, we should use `GITLAB_DEPENDENCY_PROXY` to use the Dependency Proxy. For any other project, you can rely on the `CI_DEPENDENCY_PROXY_GROUP_IMAGE_PREFIX` predefined CI/CD variable to enable the dependency proxy:
```yaml
# In the gitlab-org/gitlab project
image: ${GITLAB_DEPENDENCY_PROXY_ADDRESS}alpine:edge
# In any other project in gitlab-org and gitlab-com groups
image: ${GITLAB_DEPENDENCY_PROXY}alpine:edge
# In projects outside of gitlab-org and gitlab-com groups
image: ${CI_DEPENDENCY_PROXY_GROUP_IMAGE_PREFIX}/alpine:edge
```
Forks that reside in any other personal namespace or group fall back to
Docker Hub unless `GITLAB_DEPENDENCY_PROXY` is also defined there.
### Work around for when a pipeline is started by a Project access token user
When a pipeline is started by a Project access token user (for example, the `release-tools approver bot` user which
automatically updates the Gitaly version used in the main project),
[the Dependency proxy isn't accessible](https://gitlab.com/gitlab-org/gitlab/-/issues/332411#note_1130388163)
and the job fails at the `Preparing the "docker+machine" executor` step.
To work around that, we have a special workflow rule that overrides the
`${GITLAB_DEPENDENCY_PROXY_ADDRESS}` variable so that the Dependency Proxy isn't used in that case:
```yaml
- if: '$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH && $GITLAB_USER_LOGIN =~ /project_\d+_bot\d*/'
variables:
GITLAB_DEPENDENCY_PROXY_ADDRESS: ""
```
{{< alert type="note" >}}
We don't directly override the `${GITLAB_DEPENDENCY_PROXY}` variable because group-level
variables take precedence over `.gitlab-ci.yml` variables.
{{< /alert >}}
## External CI/CD secrets
As part of <https://gitlab.com/groups/gitlab-org/quality/engineering-productivity/-/epics/46>, in February 2024, we
started to dogfood [the usage of GCP Secret Manager](../../ci/secrets/gcp_secret_manager.md) to
[store the `ADD_JH_FILES_TOKEN` CI variable](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/144228).
As part of this, [the `qual-ci-secret-mgmt-e78c9b95` GCP project was created](https://gitlab.com/gitlab-org/quality/engineering-productivity-infrastructure/-/issues/99#note_1605141484).
## Common job definitions
Most of the jobs [extend from a few CI definitions](../../ci/yaml/_index.md#extends)
defined in [`.gitlab/ci/global.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.gitlab/ci/global.gitlab-ci.yml)
that are scoped to a single [configuration keyword](../../ci/yaml/_index.md#job-keywords).
| Job definitions | Description |
|------------------|-------------|
| `.default-retry` | Allows a job to [retry](../../ci/yaml/_index.md#retry) upon `unknown_failure`, `api_failure`, `runner_system_failure`, `job_execution_timeout`, or `stuck_or_timeout_failure`. |
| `.default-before_script` | Allows a job to use a default `before_script` definition suitable for Ruby/Rails tasks that may need a database running (for example, tests). |
| `.repo-from-artifacts` | Allows a job to fetch the repository from artifacts in `clone-gitlab-repo` instead of cloning. This reduces GitLab.com Gitaly load and also slightly improves speed, because downloading from artifacts is faster than cloning. Avoid using this with jobs that have `needs: []`, because it makes them start later and we usually want all jobs to start as soon as possible; use it only on jobs that have other dependencies, so we don't wait longer than just cloning. This behavior can be controlled via `CI_FETCH_REPO_GIT_STRATEGY`. See [Fetch repository via artifacts instead of cloning/fetching from Gitaly](performance.md#fetch-repository-via-artifacts-instead-of-cloningfetching-from-gitaly) for more details. |
| `.setup-test-env-cache` | Allows a job to use a default `cache` definition suitable for setting up test environment for subsequent Ruby/Rails tasks. |
| `.ruby-cache` | Allows a job to use a default `cache` definition suitable for Ruby tasks. |
| `.static-analysis-cache` | Allows a job to use a default `cache` definition suitable for static analysis tasks. |
| `.qa-cache` | Allows a job to use a default `cache` definition suitable for QA tasks. |
| `.yarn-cache` | Allows a job to use a default `cache` definition suitable for frontend jobs that do a `yarn install`. |
| `.assets-compile-cache` | Allows a job to use a default `cache` definition suitable for frontend jobs that compile assets. |
| `.use-pg14` | Allows a job to use the `postgres` 14, `redis`, and `rediscluster` services (see [`.gitlab/ci/global.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.gitlab/ci/global.gitlab-ci.yml) for the specific versions of the services). |
| `.use-pg14-ee` | Same as `.use-pg14` but also use an `elasticsearch` service (see [`.gitlab/ci/global.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.gitlab/ci/global.gitlab-ci.yml) for the specific version of the service). |
| `.use-pg15` | Allows a job to use the `postgres` 15, `redis`, and `rediscluster` services (see [`.gitlab/ci/global.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.gitlab/ci/global.gitlab-ci.yml) for the specific versions of the services). |
| `.use-pg15-ee` | Same as `.use-pg15` but also use an `elasticsearch` service (see [`.gitlab/ci/global.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.gitlab/ci/global.gitlab-ci.yml) for the specific version of the service). |
| `.use-pg16` | Allows a job to use the `postgres` 16, `redis`, and `rediscluster` services (see [`.gitlab/ci/global.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.gitlab/ci/global.gitlab-ci.yml) for the specific versions of the services). |
| `.use-pg16-ee` | Same as `.use-pg16` but also use an `elasticsearch` service (see [`.gitlab/ci/global.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.gitlab/ci/global.gitlab-ci.yml) for the specific version of the service). |
| `.use-pg17` | Allows a job to use the `postgres` 17, `redis`, and `rediscluster` services (see [`.gitlab/ci/global.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.gitlab/ci/global.gitlab-ci.yml) for the specific versions of the services). |
| `.use-pg17-ee` | Same as `.use-pg17` but also use an `elasticsearch` service (see [`.gitlab/ci/global.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.gitlab/ci/global.gitlab-ci.yml) for the specific version of the service). |
| `.use-buildx` | Allows a job to use the `docker buildx` tool to build Docker images. |
| `.as-if-foss` | Simulate the FOSS project by setting the `FOSS_ONLY='1'` CI/CD variable. |
| `.use-docker-in-docker` | Allows a job to use Docker in Docker. For more details, see the [handbook about CI/CD configuration](https://handbook.gitlab.com/handbook/engineering/gitlab-repositories/#cicd-configuration). |
## `rules`, `if:` conditions and `changes:` patterns
We're using the [`rules` keyword](../../ci/yaml/_index.md#rules) extensively.
All `rules` definitions are defined in
[`rules.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.gitlab/ci/rules.gitlab-ci.yml),
then included in individual jobs via [`extends`](../../ci/yaml/_index.md#extends).
The `rules` definitions are composed of `if:` conditions and `changes:` patterns,
which are also defined in
[`rules.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.gitlab/ci/rules.gitlab-ci.yml)
and included in `rules` definitions via [YAML anchors](../../ci/yaml/yaml_optimization.md#anchors).
### `if:` conditions
| `if:` conditions | Description | Notes |
|---------------------------------------------|-------------|-------|
| `if-not-canonical-namespace` | Matches if the project isn't in the canonical (`gitlab-org/` and `gitlab-cn/`) or security (`gitlab-org/security`) namespace. | Use to create a job for forks (by using `when: on_success` or `when: manual`), or **not** create a job for forks (by using `when: never`). |
| `if-not-ee` | Matches if the project isn't EE (that is, project name isn't `gitlab` or `gitlab-ee`). | Use to create a job only in the FOSS project (by using `when: on_success` or `when: manual`), or **not** create a job if the project is EE (by using `when: never`). |
| `if-not-foss` | Matches if the project isn't FOSS (that is, project name isn't `gitlab-foss`, `gitlab-ce`, or `gitlabhq`). | Use to create a job only in the EE project (by using `when: on_success` or `when: manual`), or **not** create a job if the project is FOSS (by using `when: never`). |
| `if-default-refs` | Matches if the pipeline is for `master`, `main`, `/^[\d-]+-stable(-ee)?$/` (stable branches), `/^\d+-\d+-auto-deploy-\d+$/` (auto-deploy branches), `/^security\//` (security branches), merge requests, and tags. | Note that jobs aren't created for branches with this default configuration. |
| `if-master-refs` | Matches if the current branch is `master` or `main`. | |
| `if-master-push` | Matches if the current branch is `master` or `main` and pipeline source is `push`. | |
| `if-master-schedule-maintenance` | Matches if the current branch is `master` or `main` and pipeline runs on a 2-hourly schedule. | |
| `if-master-schedule-nightly` | Matches if the current branch is `master` or `main` and pipeline runs on a nightly schedule. | |
| `if-auto-deploy-branches` | Matches if the current branch is an auto-deploy one. | |
| `if-master-or-tag` | Matches if the pipeline is for the `master` or `main` branch or for a tag. | |
| `if-merge-request` | Matches if the pipeline is for a merge request. | |
| `if-merge-request-title-as-if-foss` | Matches if the pipeline is for a merge request and the MR has label `~"pipeline:run-as-if-foss"`. | |
| `if-merge-request-title-update-caches` | Matches if the pipeline is for a merge request and the MR has label `~"pipeline:update-cache"`. | |
| `if-merge-request-labels-run-all-rspec` | Matches if the pipeline is for a merge request and the MR has label `~"pipeline:run-all-rspec"`. | |
| `if-merge-request-labels-run-cs-evaluation` | Matches if the pipeline is for a merge request and the MR has label `~"pipeline:run-CS-evaluation"`. | |
| `if-security-merge-request` | Matches if the pipeline is for a security merge request. | |
| `if-security-schedule` | Matches if the pipeline is for a security scheduled pipeline. | |
| `if-nightly-master-schedule` | Matches if the pipeline is for a `master` scheduled pipeline with `$NIGHTLY` set. | |
| `if-dot-com-gitlab-org-schedule` | Limits jobs creation to scheduled pipelines for the `gitlab-org` group on GitLab.com. | |
| `if-dot-com-gitlab-org-master` | Limits jobs creation to the `master` or `main` branch for the `gitlab-org` group on GitLab.com. | |
| `if-dot-com-gitlab-org-merge-request` | Limits jobs creation to merge requests for the `gitlab-org` group on GitLab.com. | |
| `if-dot-com-ee-schedule` | Limits jobs to scheduled pipelines for the `gitlab-org/gitlab` project on GitLab.com. | |
### `changes:` patterns
| `changes:` patterns | Description |
|------------------------------|--------------------------------------------------------------------------|
| `ci-patterns` | Only create job for CI configuration-related changes. |
| `ci-build-images-patterns` | Only create job for CI configuration-related changes related to the `build-images` stage. |
| `ci-review-patterns` | Only create job for CI configuration-related changes related to the `review` stage. |
| `ci-qa-patterns` | Only create job for CI configuration-related changes related to the `qa` stage. |
| `yaml-lint-patterns` | Only create job for YAML-related changes. |
| `docs-patterns` | Only create job for docs-related changes. |
| `frontend-dependency-patterns` | Only create job when frontend dependencies are updated (for example, `package.json`, and `yarn.lock`) changes. |
| `frontend-patterns-for-as-if-foss` | Only create job for frontend-related changes that have impact on FOSS. |
| `backend-patterns` | Only create job for backend-related changes. |
| `db-patterns` | Only create job for DB-related changes. |
| `backstage-patterns` | Only create job for backstage-related changes (that is, Danger, fixtures, RuboCop, specs). |
| `code-patterns` | Only create job for code-related changes. |
| `qa-patterns` | Only create job for QA-related changes. |
| `code-backstage-patterns` | Combination of `code-patterns` and `backstage-patterns`. |
| `code-qa-patterns` | Combination of `code-patterns` and `qa-patterns`. |
| `code-backstage-qa-patterns` | Combination of `code-patterns`, `backstage-patterns`, and `qa-patterns`. |
| `static-analysis-patterns` | Only create jobs for Static Analytics configuration-related changes. |
## Custom exit codes
GitLab CI uses custom exit codes to categorize different types of job failures. This helps with automated failure tracking and retry logic. To see which exit codes trigger automatic retries, check the retry rules in [GitLab global CI configuration](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.gitlab/ci/global.gitlab-ci.yml).
The table below lists current exit codes and their meanings:
| Exit code | Description |
|-----------|---------------------------------------|
|110 | network connection error |
|111 | low disk space |
|112 | known flaky test failure |
|160 | failed to upload/download job artifact|
|161 | 5XX server error |
|162 | Gitaly spawn failure |
|163 | RSpec job timeout |
|164 | Redis cluster error |
|165 | segmentation fault |
|166 | EEXIST: file already exists |
|167 | `gitlab.com` overloaded |
|168 | gRPC resource exhausted |
|169 | SQL query limit exceeded |
|170 | SQL table is write protected |
This list can be expanded as new failure patterns emerge. To avoid conflicts with standard Bash exit codes, new custom codes must be 160 or higher.
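As an illustration, a script can map a known failure pattern to one of these codes so the retry rules can act on it. This is a sketch only: the command and grep pattern are made up, and the actual retry rules live in the global CI configuration:

```yaml
# Illustrative job: surface a known failure category with a custom exit code
# and retry only on that code.
flaky-network-job:
  script:
    - |
      # Map a known failure pattern to custom exit code 110 (network connection error).
      if ! ./run-the-thing.sh > output.log 2>&1; then
        grep -q "Connection reset by peer" output.log && exit 110
        exit 1
      fi
  retry:
    max: 2
    exit_codes:
      - 110
```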
## Best Practices
### When to use `extends:`, `<<: *xyz` (YAML anchors), or `!reference`
[Reference](../../ci/yaml/yaml_optimization.md)
#### Key takeaways
- If you need to **extend a hash**, you should use `extends`
- If you need to **extend an array**, you'll need to use `!reference`, or `YAML anchors` as a last resort
- For more complex cases (for example, extend hash inside array, extend array inside hash, ...), you'll have to use `!reference` or `YAML anchors`
#### What can `extends` and `YAML anchors` do?
##### `extends`
- Deep merge for hashes
- NO merge for arrays. It overwrites ([source](../../ci/yaml/yaml_optimization.md#merge-details))
##### YAML anchors
- NO deep merge for hashes, BUT it can be used to extend a hash (see the example below)
- NO merge for arrays, BUT it can be used to extend an array (see the example below)
#### A great example
This example shows how to extend complex YAML data structures with `!reference` and `YAML anchors`:
```yaml
.strict-ee-only-rules:
# `rules` is an array of hashes
rules:
- if: '$CI_PROJECT_NAME !~ /^gitlab(-ee)?$/ '
when: never
# `if-security-merge-request` is a hash
.if-security-merge-request: &if-security-merge-request
if: '$CI_PROJECT_NAMESPACE == "gitlab-org/security"'
# `code-qa-patterns` is an array
.code-qa-patterns: &code-qa-patterns
- "{package.json,yarn.lock}"
- ".browserslistrc"
- "babel.config.js"
- "jest.config.{base,integration,unit}.js"
.qa:rules:as-if-foss:
rules:
# We extend the `rules` array with an array of hashes directly
- !reference [".strict-ee-only-rules", rules]
# We extend a single array entry with a hash
- <<: *if-security-merge-request
# `changes` is an array, so we pass it an entire array
changes: *code-qa-patterns
qa:selectors-as-if-foss:
# We include the rules from .qa:rules:as-if-foss in this job
extends:
- .qa:rules:as-if-foss
```
### Extend the `.fast-no-clone-job` job
Downloading the branch for the canonical project takes between 20 and 30 seconds.
Some jobs only need a limited number of files, which we can download via the GitLab API.
You can skip a job's `git clone`/`git fetch` by adding the following pattern to the job.
#### Scenario 1: no `before_script` is defined in the job
This applies to the parent sections the job extends from as well.
You can just extend the `.fast-no-clone-job`:
**Before**:
```yaml
# Note: No `extends:` is present in the job
a-job:
script:
- source scripts/rspec_helpers.sh scripts/slack
- echo "No need for a git clone!"
```
**After**:
```yaml
# Note: No `extends:` is present in the job
a-job:
extends:
- .fast-no-clone-job
variables:
FILES_TO_DOWNLOAD: >
scripts/rspec_helpers.sh
scripts/slack
script:
- source scripts/rspec_helpers.sh scripts/slack
- echo "No need for a git clone!"
```
#### Scenario 2: a `before_script` block is already defined in the job (or in jobs it extends)
For this scenario, you have to:
1. Extend the `.fast-no-clone-job` as in the first scenario (this will merge the `FILES_TO_DOWNLOAD` variable with the other variables)
1. Make sure the `before_script` section from `.fast-no-clone-job` is referenced in the `before_script` we use for this job.
**Before**:
```yaml
.base-job:
before_script:
echo "Hello from .base-job"
a-job:
extends:
- .base-job
script:
- source scripts/rspec_helpers.sh scripts/slack
- echo "No need for a git clone!"
```
**After**:
```yaml
.base-job:
before_script:
echo "Hello from .base-job"
a-job:
extends:
- .base-job
- .fast-no-clone-job
variables:
FILES_TO_DOWNLOAD: >
scripts/rspec_helpers.sh
scripts/slack
before_script:
- !reference [".fast-no-clone-job", before_script]
- !reference [".base-job", before_script]
script:
- source scripts/rspec_helpers.sh scripts/slack
- echo "No need for a git clone!"
```
#### Caveats
- This pattern does not work if a script relies on `git` to access the repository, because we don't have the repository without cloning or fetching.
- The job using this pattern needs to have `curl` available.
- If you need to run `bundle install` in the job (even using `BUNDLE_ONLY`), you need to:
- Download the gems that are stored in the `gitlab-org/gitlab` project.
- You can use the `download_local_gems` shell command for that purpose.
- Include the `Gemfile`, `Gemfile.lock` and `Gemfile.checksum` (if applicable)
#### Where is this pattern used?
- For now, we use this pattern for the following jobs, and those do not block private repositories:
- `review-build-cng-env` for:
- `GITALY_SERVER_VERSION`
- `GITLAB_ELASTICSEARCH_INDEXER_VERSION`
- `GITLAB_KAS_VERSION`
- `GITLAB_PAGES_VERSION`
- `GITLAB_SHELL_VERSION`
- `scripts/trigger-build.rb`
- `VERSION`
- `review-deploy` for:
- `GITALY_SERVER_VERSION`
- `GITLAB_SHELL_VERSION`
- `scripts/review_apps/review-apps.sh`
- `scripts/review_apps/seed-dast-test-data.sh`
- `VERSION`
- `rspec:coverage` for:
- `config/bundler_setup.rb`
- `Gemfile`
- `Gemfile.checksum`
- `Gemfile.lock`
- `scripts/merge-simplecov`
- `spec/simplecov_env_core.rb`
- `spec/simplecov_env.rb`
- `prepare-as-if-foss-env` for:
- `scripts/setup/generate-as-if-foss-env.rb`
Additionally, `scripts/utils.sh` is always downloaded from the API when this pattern is used (this file contains the code for `.fast-no-clone-job`).
### Runner tags
On GitLab.com, both unprivileged and privileged runners are
available. For projects in the `gitlab-org` group and forks of those
projects, only one of the following tags should be added to a job:
- `gitlab-org`: Jobs randomly use privileged and unprivileged runners.
- `gitlab-org-docker`: Jobs must use a privileged runner. If you need [Docker-in-Docker support](../../ci/docker/using_docker_build.md#use-docker-in-docker),
use `gitlab-org-docker` instead of `gitlab-org`.
The `gitlab-org-docker` tag is added by the `.use-docker-in-docker` job
definition above.
To ensure compatibility with forks, avoid using both `gitlab-org` and
`gitlab-org-docker` simultaneously. No instance runners
have both `gitlab-org` and `gitlab-org-docker` tags. For forks of
`gitlab-org` projects, jobs will get stuck if both tags are supplied because
no matching runners are available.
See [the GitLab Repositories handbook page](https://handbook.gitlab.com/handbook/engineering/gitlab-repositories/#cicd-configuration)
for more information.
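For example (illustrative jobs only):

```yaml
# Pick exactly one of the two tags, depending on whether the job needs a
# privileged (Docker-in-Docker capable) runner.
a-regular-job:
  tags:
    - gitlab-org
  script:
    - echo "Can run on either privileged or unprivileged runners."

a-docker-in-docker-job:
  extends: .use-docker-in-docker # this definition adds the gitlab-org-docker tag
  script:
    - docker info
```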
### Using the `gitlab` Ruby gem in the canonical project
When calling `require 'gitlab'` in the canonical project, it will require the `lib/gitlab.rb` file when `$LOAD_PATH` has `lib`, which happens when we're loading the application (`config/application.rb`) or tests (`spec/spec_helper.rb`).
This means we're not able to load the `gitlab` gem under the above conditions and even if we can, the constant name will conflict, breaking internal assumptions and causing random errors.
If you are working on a script that is using [the `gitlab` Ruby gem](https://github.com/NARKOZ/gitlab), you will need to take a few precautions:
#### 1 - Conditional require of the gem
To avoid potential conflicts, only require the `gitlab` gem if the `Gitlab` constant isn't defined:
```ruby
# Bad
require 'gitlab'
# Good
if Object.const_defined?(:RSpec)
# Ok, we're testing, we know we're going to stub `Gitlab`, so we just ignore
else
require 'gitlab'
if Gitlab.singleton_class.method_defined?(:com?)
abort 'lib/gitlab.rb is loaded, and this means we can no longer load the client and we cannot proceed'
end
end
```
#### 2 - Mock the `gitlab` gem entirely in your specs
In your specs, `require 'gitlab'` will reference the `lib/gitlab.rb` file:
```ruby
# Bad
allow(GitLab).to receive(:a_method).and_return(...)
# Good
client = double('GitLab')
# In order to easily stub the client, consider using a method to return the client.
# We can then stub the method to return our fake client, which we can further stub its methods.
#
# This is the pattern followed below
let(:instance) { described_class.new }
allow(instance).to receive(:gitlab).and_return(client)
allow(client).to receive(:a_method).and_return(...)
```
If you need to query jobs, for instance, the following snippet will be useful:
```ruby
# Bad
allow(GitLab).to receive(:pipeline_jobs).and_return(...)
# Good
#
# rubocop:disable RSpec/VerifiedDoubles -- We do not load the Gitlab client directly
client = double('GitLab')
allow(instance).to receive(:gitlab).and_return(client)
jobs = ['job1', 'job2']
allow(client).to yield_jobs(:pipeline_jobs, jobs)
def yield_jobs(api_method, jobs)
messages = receive_message_chain(api_method, :auto_paginate)
jobs.inject(messages) do |stub, job_name|
stub.and_yield(double(name: job_name))
end
end
# rubocop:enable RSpec/VerifiedDoubles
```
#### 3 - Do not call your script with `bundle exec`
Executing with `bundle exec` will change the `$LOAD_PATH` for Ruby, and it will load `lib/gitlab.rb` when calling `require 'gitlab'`:
```shell
# Bad
bundle exec scripts/my-script.rb
# Good
scripts/my-script.rb
```
## CI Configuration Testing
We now have RSpec tests to verify changes to the CI configuration by simulating pipeline creation with the updated YAML files. You can find these tests and a documentation of the current test coverage in [`spec/dot_gitlab_ci/job_dependency_spec.rb`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/spec/dot_gitlab_ci/job_dependency_spec.rb).
### How Do the Tests Work
With the help of `Ci::CreatePipelineService`, we are able to simulate pipeline creation with different attributes such as branch name, MR labels, pipeline source (scheduled versus push), and pipeline type (merge train versus merged results). This is the same service used by the GitLab CI Lint API for validating CI/CD configurations.
These tests will automatically run for merge requests that update CI configurations. However, team members can opt to skip these tests by adding the label ~"pipeline:skip-ci-validation" to their merge requests.
Running these tests locally is encouraged, as it provides the fastest feedback.
|
---
stage: none
group: Engineering Productivity
info: Any user with at least the Maintainer role can merge updates to this content.
For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: CI configuration internals
breadcrumbs:
- doc
- development
- pipelines
---
## Workflow rules
Pipelines for the GitLab project are created using the [`workflow:rules` keyword](../../ci/yaml/_index.md#workflow)
feature of the GitLab CI/CD.
Pipelines are always created for the following scenarios:
- `main` branch, including on schedules, pushes, merges, and so on.
- Merge requests.
- Tags.
- Stable, `auto-deploy`, and security branches.
Pipeline creation is also affected by the following CI/CD variables:
- If `$FORCE_GITLAB_CI` is set, pipelines are created. Not recommended to use.
See [Avoid `$FORCE_GITLAB_CI`](#avoid-force_gitlab_ci).
- If `$GITLAB_INTERNAL` is not set, pipelines are not created.
No pipeline is created in any other cases (for example, when pushing a branch with no
MR for it).
The source of truth for these workflow rules is defined in [`.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.gitlab-ci.yml).
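For illustration, a heavily simplified sketch of how such `workflow:rules` could be expressed is shown below. This is not the actual configuration; the real rules in `.gitlab-ci.yml` are more involved and remain the source of truth:

```yaml
# Simplified illustration only; not the actual rules.
workflow:
  rules:
    - if: '$GITLAB_INTERNAL == null'
      when: never
    - if: '$FORCE_GITLAB_CI'
    - if: '$CI_MERGE_REQUEST_IID'
    - if: '$CI_COMMIT_TAG'
    - if: '$CI_COMMIT_BRANCH == "master"'
```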
### Avoid `$FORCE_GITLAB_CI`
The pipeline is very complex and we need to clearly understand the kind of
pipeline we want to trigger. We need to know which jobs we should run and
which ones we shouldn't.
If we use `$FORCE_GITLAB_CI` to force-trigger a pipeline,
we don't really know what kind of pipeline it is. The result can be that we don't
run the jobs we want, or that we run too many jobs we don't care about.
Some more context and background can be found at:
[Avoid blanket changes to avoid unexpected run](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/102881)
Here's a list of where we're still using `$FORCE_GITLAB_CI`, and where we should
try to move away from it:
- [JiHu validation pipeline](https://handbook.gitlab.com/handbook/ceo/chief-of-staff-team/jihu-support/jihu-validation-pipelines/)
See the next section for how we can enable pipelines without using
`$FORCE_GITLAB_CI`.
#### Alternative to `$FORCE_GITLAB_CI`
Essentially, we use different variables to enable different pipelines.
An example of this is `$START_AS_IF_FOSS`. When we want to trigger a
cross-project FOSS pipeline, we set `$START_AS_IF_FOSS`, along with a set of
other variables like `$ENABLE_RSPEC_UNIT`, `$ENABLE_RSPEC_SYSTEM`, and so on,
to enable each job we want to run in the as-if-foss cross-project
downstream pipeline.
The advantage of this over `$FORCE_GITLAB_CI` is that we have full control
over how the pipeline runs, because `$START_AS_IF_FOSS` is only used
for this purpose, and changing how the pipeline behaves under this variable
does not affect other types of pipelines. With `$FORCE_GITLAB_CI`, we
do not know exactly what kind of pipeline it is, because the variable is used
for multiple purposes.
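A minimal sketch of that approach might look like the following. The job name, target project, and branch are purely illustrative; the actual trigger job and variable set live in the CI configuration:

```yaml
# Hypothetical trigger job: dedicated variables enable exactly the jobs we want
# in the as-if-foss downstream pipeline, instead of the catch-all $FORCE_GITLAB_CI.
start-as-if-foss:
  stage: prepare
  variables:
    START_AS_IF_FOSS: "true"
    ENABLE_RSPEC_UNIT: "true"
    ENABLE_RSPEC_SYSTEM: "true"
  trigger:
    project: gitlab-org/gitlab-foss   # illustrative target project
    branch: as-if-foss/my-branch      # illustrative branch name
```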
## Default image
The default image is defined in [`.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.gitlab-ci.yml).
<!-- vale gitlab_base.Spelling = NO -->
It includes Ruby, Go, Git, Git LFS, Chrome, Node, Yarn, PostgreSQL, and Graphics Magick.
<!-- vale gitlab_base.Spelling = YES -->
The images used in our pipelines are configured in the
[`gitlab-org/gitlab-build-images`](https://gitlab.com/gitlab-org/gitlab-build-images)
project, which is push-mirrored to [`gitlab/gitlab-build-images`](https://dev.gitlab.org/gitlab/gitlab-build-images)
for redundancy.
The current version of the build images can be found in the
["Used by GitLab section"](https://gitlab.com/gitlab-org/gitlab-build-images/blob/master/.gitlab-ci.yml).
## Default variables
In addition to the [predefined CI/CD variables](../../ci/variables/predefined_variables.md),
each pipeline includes default variables defined in
[`.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.gitlab-ci.yml).
### Variable naming
Since March 2025, we prefix new environment variables
that are used exclusively for the monolith CI pipelines with `GLCI_`.
This allows us to track if an environment variable is intended for CI
(`GLCI_`), the product (`GITLAB_`), or tools and systems not owned by us.
That helps us better evaluate the impact of environment variable changes
in our pipeline configuration.
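For example (both variable names below are made up for illustration):

```yaml
variables:
  GLCI_RETRY_FLAKY_TESTS: "true"   # hypothetical: only used by the monolith CI pipelines
  GITLAB_LOG_LEVEL: "info"         # hypothetical: a product-facing variable
```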
## Stages
The current stages are:
- `sync`: This stage is used to synchronize changes from [`gitlab-org/gitlab`](https://gitlab.com/gitlab-org/gitlab) to
[`gitlab-org/gitlab-foss`](https://gitlab.com/gitlab-org/gitlab-foss).
- `prepare`: This stage includes jobs that prepare artifacts that are needed by
jobs in subsequent stages.
- `build-images`: This stage includes jobs that prepare Docker images
that are needed by jobs in subsequent stages or downstream pipelines.
- `fixtures`: This stage includes jobs that prepare fixtures needed by frontend tests.
- `lint`: This stage includes linting and static analysis jobs.
- `test`: This stage includes most of the tests, and DB/migration jobs.
- `post-test`: This stage includes jobs that build reports or gather data from
the `test` stage's jobs (for example, coverage, Knapsack metadata, and so on).
- `review`: This stage includes jobs that build the CNG images, deploy them, and
run end-to-end tests against review apps (see [review apps](../testing_guide/review_apps.md) for details).
It also includes Docs Review App jobs.
- `qa`: This stage includes jobs that perform QA tasks against the Review App
that is deployed in stage `review`.
- `post-qa`: This stage includes jobs that build reports or gather data from
the `qa` stage's jobs (for example, Review App performance report).
- `pages`: This stage includes a job that deploys the various reports as
GitLab Pages (for example, [`coverage-ruby`](https://gitlab-org.gitlab.io/gitlab/coverage-ruby/),
and `webpack-report`, found at `https://gitlab-org.gitlab.io/gitlab/webpack-report/`, although there is
[an issue with the deployment](https://gitlab.com/gitlab-org/gitlab/-/issues/233458)).
- `notify`: This stage includes jobs that notify various failures to Slack.
## Dependency Proxy
Some of the jobs are using images from Docker Hub, where we also use
`${GITLAB_DEPENDENCY_PROXY_ADDRESS}` as a prefix to the image path, so that we pull
images from our [Dependency Proxy](../../user/packages/dependency_proxy/_index.md).
By default, this variable is set from the value of `${GITLAB_DEPENDENCY_PROXY}`.
- `CI_DEPENDENCY_PROXY_GROUP_IMAGE_PREFIX` is [a GitLab predefined CI/CD variable](../../ci/variables/predefined_variables.md) that gives the top-level group image prefix to pull images through the Dependency Proxy.
- `GITLAB_DEPENDENCY_PROXY` is a CI/CD variable in the [`gitlab-org`](https://gitlab.com/gitlab-org) and the [`gitlab-com`](https://gitlab.com/gitlab-com) groups. It is defined as `${CI_DEPENDENCY_PROXY_GROUP_IMAGE_PREFIX}/`.
- `GITLAB_DEPENDENCY_PROXY_ADDRESS` is defined in the `gitlab-org/gitlab` project. It defaults to `"${GITLAB_DEPENDENCY_PROXY}"`, but is overridden in some cases (see the workaround section below).
In `gitlab-org/gitlab`, we'll use `GITLAB_DEPENDENCY_PROXY_ADDRESS` [due to a workaround](#work-around-for-when-a-pipeline-is-started-by-a-project-access-token-user). Everywhere else in the `gitlab-org` and `gitlab-com` groups, we should use `GITLAB_DEPENDENCY_PROXY` to use the Dependency Proxy. For any other project, you can rely on the `CI_DEPENDENCY_PROXY_GROUP_IMAGE_PREFIX` predefined CI/CD variable to enable the dependency proxy:
```yaml
# In the gitlab-org/gitlab project
image: ${GITLAB_DEPENDENCY_PROXY_ADDRESS}alpine:edge
# In any other project in gitlab-org and gitlab-com groups
image: ${GITLAB_DEPENDENCY_PROXY}alpine:edge
# In projects outside of gitlab-org and gitlab-com groups
image: ${CI_DEPENDENCY_PROXY_GROUP_IMAGE_PREFIX}/alpine:edge
```
Forks that reside on any other personal namespaces or groups fall back to
Docker Hub unless `GITLAB_DEPENDENCY_PROXY` is also defined there.
### Work around for when a pipeline is started by a Project access token user
When a pipeline is started by a Project access token user (for example, the `release-tools approver bot` user which
automatically updates the Gitaly version used in the main project),
[the Dependency proxy isn't accessible](https://gitlab.com/gitlab-org/gitlab/-/issues/332411#note_1130388163)
and the job fails at the `Preparing the "docker+machine" executor` step.
To work around that, we have a special workflow rule that overrides the
`${GITLAB_DEPENDENCY_PROXY_ADDRESS}` variable so that the Dependency Proxy isn't used in that case:
```yaml
- if: '$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH && $GITLAB_USER_LOGIN =~ /project_\d+_bot\d*/'
variables:
GITLAB_DEPENDENCY_PROXY_ADDRESS: ""
```
{{< alert type="note" >}}
We don't directly override the `${GITLAB_DEPENDENCY_PROXY}` variable because group-level
variables take precedence over `.gitlab-ci.yml` variables.
{{< /alert >}}
## External CI/CD secrets
As part of <https://gitlab.com/groups/gitlab-org/quality/engineering-productivity/-/epics/46>, in February 2024, we
started to dogfood [the usage of GCP Secret Manager](../../ci/secrets/gcp_secret_manager.md) to
[store the `ADD_JH_FILES_TOKEN` CI variable](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/144228).
As part of this, [the `qual-ci-secret-mgmt-e78c9b95` GCP project was created](https://gitlab.com/gitlab-org/quality/engineering-productivity-infrastructure/-/issues/99#note_1605141484).
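Assuming the documented `gcp_secret_manager` CI/CD syntax, a job consuming such a secret could look roughly like this (the job name, secret name, and audience value are illustrative):

```yaml
a-job-using-a-gcp-secret:
  id_tokens:
    GCP_ID_TOKEN:
      aud: https://iam.googleapis.com/projects/<project-number>/locations/global/workloadIdentityPools/<pool-id>/providers/<provider-id>
  secrets:
    ADD_JH_FILES_TOKEN:
      gcp_secret_manager:
        name: add-jh-files-token   # illustrative secret name in GCP Secret Manager
      token: $GCP_ID_TOKEN
```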
## Common job definitions
Most of the jobs [extend from a few CI definitions](../../ci/yaml/_index.md#extends)
defined in [`.gitlab/ci/global.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.gitlab/ci/global.gitlab-ci.yml)
that are scoped to a single [configuration keyword](../../ci/yaml/_index.md#job-keywords).
| Job definitions | Description |
|------------------|-------------|
| `.default-retry` | Allows a job to [retry](../../ci/yaml/_index.md#retry) upon `unknown_failure`, `api_failure`, `runner_system_failure`, `job_execution_timeout`, or `stuck_or_timeout_failure`. |
| `.default-before_script` | Allows a job to use a default `before_script` definition suitable for Ruby/Rails tasks that may need a database running (for example, tests). |
| `.repo-from-artifacts` | Allows a job to fetch the repository from artifacts in `clone-gitlab-repo` instead of cloning. This should reduce GitLab.com Gitaly load and also slightly improve speed, because downloading from artifacts is faster than cloning. Avoid using this with jobs that have `needs: []`, because the job would otherwise start later, and we usually want all jobs to start as soon as possible. Use this only on jobs that have other dependencies, so that we don't wait longer than just cloning. This behavior can be controlled via `CI_FETCH_REPO_GIT_STRATEGY`. See [Fetch repository via artifacts instead of cloning/fetching from Gitaly](performance.md#fetch-repository-via-artifacts-instead-of-cloningfetching-from-gitaly) for more details. |
| `.setup-test-env-cache` | Allows a job to use a default `cache` definition suitable for setting up test environment for subsequent Ruby/Rails tasks. |
| `.ruby-cache` | Allows a job to use a default `cache` definition suitable for Ruby tasks. |
| `.static-analysis-cache` | Allows a job to use a default `cache` definition suitable for static analysis tasks. |
| `.qa-cache` | Allows a job to use a default `cache` definition suitable for QA tasks. |
| `.yarn-cache` | Allows a job to use a default `cache` definition suitable for frontend jobs that do a `yarn install`. |
| `.assets-compile-cache` | Allows a job to use a default `cache` definition suitable for frontend jobs that compile assets. |
| `.use-pg14` | Allows a job to use the `postgres` 14, `redis`, and `rediscluster` services (see [`.gitlab/ci/global.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.gitlab/ci/global.gitlab-ci.yml) for the specific versions of the services). |
| `.use-pg14-ee` | Same as `.use-pg14` but also use an `elasticsearch` service (see [`.gitlab/ci/global.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.gitlab/ci/global.gitlab-ci.yml) for the specific version of the service). |
| `.use-pg15` | Allows a job to use the `postgres` 15, `redis`, and `rediscluster` services (see [`.gitlab/ci/global.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.gitlab/ci/global.gitlab-ci.yml) for the specific versions of the services). |
| `.use-pg15-ee` | Same as `.use-pg15` but also use an `elasticsearch` service (see [`.gitlab/ci/global.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.gitlab/ci/global.gitlab-ci.yml) for the specific version of the service). |
| `.use-pg16` | Allows a job to use the `postgres` 16, `redis`, and `rediscluster` services (see [`.gitlab/ci/global.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.gitlab/ci/global.gitlab-ci.yml) for the specific versions of the services). |
| `.use-pg16-ee` | Same as `.use-pg16` but also use an `elasticsearch` service (see [`.gitlab/ci/global.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.gitlab/ci/global.gitlab-ci.yml) for the specific version of the service). |
| `.use-pg17` | Allows a job to use the `postgres` 17, `redis`, and `rediscluster` services (see [`.gitlab/ci/global.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.gitlab/ci/global.gitlab-ci.yml) for the specific versions of the services). |
| `.use-pg17-ee` | Same as `.use-pg17` but also use an `elasticsearch` service (see [`.gitlab/ci/global.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.gitlab/ci/global.gitlab-ci.yml) for the specific version of the service). |
| `.use-buildx` | Allows a job to use the `docker buildx` tool to build Docker images. |
| `.as-if-foss` | Simulate the FOSS project by setting the `FOSS_ONLY='1'` CI/CD variable. |
| `.use-docker-in-docker` | Allows a job to use Docker in Docker. For more details, see the [handbook about CI/CD configuration](https://handbook.gitlab.com/handbook/engineering/gitlab-repositories/#cicd-configuration). |
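For instance, a hypothetical Ruby test job could combine several of these definitions (the job name is illustrative):

```yaml
# Hypothetical job showing how the definitions above are typically combined.
rspec-example-job:
  extends:
    - .default-retry
    - .default-before_script
    - .use-pg16
  stage: test
  script:
    - bundle exec rspec
```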
## `rules`, `if:` conditions and `changes:` patterns
We're using the [`rules` keyword](../../ci/yaml/_index.md#rules) extensively.
All `rules` definitions are defined in
[`rules.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.gitlab/ci/rules.gitlab-ci.yml),
then included in individual jobs via [`extends`](../../ci/yaml/_index.md#extends).
The `rules` definitions are composed of `if:` conditions and `changes:` patterns,
which are also defined in
[`rules.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.gitlab/ci/rules.gitlab-ci.yml)
and included in `rules` definitions via [YAML anchors](../../ci/yaml/yaml_optimization.md#anchors).
### `if:` conditions
| `if:` conditions | Description | Notes |
|---------------------------------------------|-------------|-------|
| `if-not-canonical-namespace` | Matches if the project isn't in the canonical (`gitlab-org/` and `gitlab-cn/`) or security (`gitlab-org/security`) namespace. | Use to create a job for forks (by using `when: on_success` or `when: manual`), or **not** create a job for forks (by using `when: never`). |
| `if-not-ee` | Matches if the project isn't EE (that is, project name isn't `gitlab` or `gitlab-ee`). | Use to create a job only in the FOSS project (by using `when: on_success` or `when: manual`), or **not** create a job if the project is EE (by using `when: never`). |
| `if-not-foss` | Matches if the project isn't FOSS (that is, project name isn't `gitlab-foss`, `gitlab-ce`, or `gitlabhq`). | Use to create a job only in the EE project (by using `when: on_success` or `when: manual`), or **not** create a job if the project is FOSS (by using `when: never`). |
| `if-default-refs` | Matches if the pipeline is for `master`, `main`, `/^[\d-]+-stable(-ee)?$/` (stable branches), `/^\d+-\d+-auto-deploy-\d+$/` (auto-deploy branches), `/^security\//` (security branches), merge requests, and tags. | Note that jobs aren't created for branches with this default configuration. |
| `if-master-refs` | Matches if the current branch is `master` or `main`. | |
| `if-master-push` | Matches if the current branch is `master` or `main` and pipeline source is `push`. | |
| `if-master-schedule-maintenance` | Matches if the current branch is `master` or `main` and pipeline runs on a 2-hourly schedule. | |
| `if-master-schedule-nightly` | Matches if the current branch is `master` or `main` and pipeline runs on a nightly schedule. | |
| `if-auto-deploy-branches` | Matches if the current branch is an auto-deploy one. | |
| `if-master-or-tag` | Matches if the pipeline is for the `master` or `main` branch or for a tag. | |
| `if-merge-request` | Matches if the pipeline is for a merge request. | |
| `if-merge-request-title-as-if-foss` | Matches if the pipeline is for a merge request and the MR has label `~"pipeline:run-as-if-foss"`. | |
| `if-merge-request-title-update-caches` | Matches if the pipeline is for a merge request and the MR has label `~"pipeline:update-cache"`. | |
| `if-merge-request-labels-run-all-rspec` | Matches if the pipeline is for a merge request and the MR has label `~"pipeline:run-all-rspec"`. | |
| `if-merge-request-labels-run-cs-evaluation` | Matches if the pipeline is for a merge request and the MR has label `~"pipeline:run-CS-evaluation"`. | |
| `if-security-merge-request` | Matches if the pipeline is for a security merge request. | |
| `if-security-schedule` | Matches if the pipeline is for a security scheduled pipeline. | |
| `if-nightly-master-schedule` | Matches if the pipeline is for a `master` scheduled pipeline with `$NIGHTLY` set. | |
| `if-dot-com-gitlab-org-schedule` | Limits job creation to scheduled pipelines for the `gitlab-org` group on GitLab.com. | |
| `if-dot-com-gitlab-org-master` | Limits job creation to the `master` or `main` branch for the `gitlab-org` group on GitLab.com. | |
| `if-dot-com-gitlab-org-merge-request` | Limits job creation to merge requests for the `gitlab-org` group on GitLab.com. | |
| `if-dot-com-ee-schedule` | Limits jobs to scheduled pipelines for the `gitlab-org/gitlab` project on GitLab.com. | |
### `changes:` patterns
| `changes:` patterns | Description |
|------------------------------|--------------------------------------------------------------------------|
| `ci-patterns` | Only create job for CI configuration-related changes. |
| `ci-build-images-patterns` | Only create job for CI configuration-related changes related to the `build-images` stage. |
| `ci-review-patterns` | Only create job for CI configuration-related changes related to the `review` stage. |
| `ci-qa-patterns` | Only create job for CI configuration-related changes related to the `qa` stage. |
| `yaml-lint-patterns` | Only create job for YAML-related changes. |
| `docs-patterns` | Only create job for docs-related changes. |
| `frontend-dependency-patterns` | Only create job when frontend dependencies are updated (for example, `package.json` and `yarn.lock`). |
| `frontend-patterns-for-as-if-foss` | Only create job for frontend-related changes that have impact on FOSS. |
| `backend-patterns` | Only create job for backend-related changes. |
| `db-patterns` | Only create job for DB-related changes. |
| `backstage-patterns` | Only create job for backstage-related changes (that is, Danger, fixtures, RuboCop, specs). |
| `code-patterns` | Only create job for code-related changes. |
| `qa-patterns` | Only create job for QA-related changes. |
| `code-backstage-patterns` | Combination of `code-patterns` and `backstage-patterns`. |
| `code-qa-patterns` | Combination of `code-patterns` and `qa-patterns`. |
| `code-backstage-qa-patterns` | Combination of `code-patterns`, `backstage-patterns`, and `qa-patterns`. |
| `static-analysis-patterns` | Only create jobs for static analysis configuration-related changes. |
## Custom exit codes
GitLab CI uses custom exit codes to categorize different types of job failures. This helps with automated failure tracking and retry logic. To see which exit codes trigger automatic retries, check the retry rules in [GitLab global CI configuration](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.gitlab/ci/global.gitlab-ci.yml).
The table below lists current exit codes and their meanings:
| Exit code | Description |
|-----------|-------------|
| 110 | Network connection error |
| 111 | Low disk space |
| 112 | Known flaky test failure |
| 160 | Failed to upload or download job artifact |
| 161 | 5XX server error |
| 162 | Gitaly spawn failure |
| 163 | RSpec job timeout |
| 164 | Redis cluster error |
| 165 | Segmentation fault |
| 166 | `EEXIST: file already exists` |
| 167 | `gitlab.com` overloaded |
| 168 | gRPC resource exhausted |
| 169 | SQL query limit exceeded |
| 170 | SQL table is write protected |
This list can be expanded as new failure patterns emerge. To avoid conflicts with standard Bash exit codes, new custom codes must be 160 or higher.
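As a rough sketch, a job could pair a custom exit code with `retry:exit_codes` (available in recent GitLab versions) so that only that failure category is retried. The job name and script are hypothetical; the authoritative retry rules are in the global CI configuration linked above:

```yaml
# Hypothetical job: exit with 110 on a network error and retry only that case.
network-sensitive-job:
  script:
    - ./run-tests.sh || exit 110   # 110: network connection error
  retry:
    max: 2
    exit_codes:
      - 110
```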
## Best Practices
### When to use `extends:`, `<<: *xyz` (YAML anchors), or `!reference`
[Reference](../../ci/yaml/yaml_optimization.md)
#### Key takeaways
- If you need to **extend a hash**, use `extends`.
- If you need to **extend an array**, use `!reference`, or `YAML anchors` as a last resort.
- For more complex cases (for example, extending a hash inside an array, or an array inside a hash), use `!reference` or `YAML anchors`.
#### What can `extends` and `YAML anchors` do?
##### `extends`
- Deep merge for hashes
- NO merge for arrays. It overwrites ([source](../../ci/yaml/yaml_optimization.md#merge-details))
##### YAML anchors
- NO deep merge for hashes, BUT it can be used to extend a hash (see the example below)
- NO merge for arrays, BUT it can be used to extend an array (see the example below)
#### A great example
This example shows how to extend complex YAML data structures with `!reference` and `YAML anchors`:
```yaml
.strict-ee-only-rules:
# `rules` is an array of hashes
rules:
- if: '$CI_PROJECT_NAME !~ /^gitlab(-ee)?$/ '
when: never
# `if-security-merge-request` is a hash
.if-security-merge-request: &if-security-merge-request
if: '$CI_PROJECT_NAMESPACE == "gitlab-org/security"'
# `code-qa-patterns` is an array
.code-qa-patterns: &code-qa-patterns
- "{package.json,yarn.lock}"
- ".browserslistrc"
- "babel.config.js"
- "jest.config.{base,integration,unit}.js"
.qa:rules:as-if-foss:
rules:
# We extend the `rules` array with an array of hashes directly
- !reference [".strict-ee-only-rules", rules]
# We extend a single array entry with a hash
- <<: *if-security-merge-request
# `changes` is an array, so we pass it an entire array
changes: *code-qa-patterns
qa:selectors-as-if-foss:
# We include the rules from .qa:rules:as-if-foss in this job
extends:
- .qa:rules:as-if-foss
```
### Extend the `.fast-no-clone-job` job
Downloading the branch for the canonical project takes between 20 and 30 seconds.
Some jobs only need a limited number of files, which we can download via the GitLab API.
You can skip the `git clone`/`git fetch` in a job by adding the following pattern to it.
#### Scenario 1: no `before_script` is defined in the job
This applies to the parent sections the job extends from as well.
You can just extend the `.fast-no-clone-job`:
**Before**:
```yaml
# Note: No `extends:` is present in the job
a-job:
script:
- source scripts/rspec_helpers.sh scripts/slack
- echo "No need for a git clone!"
```
**After**:
```yaml
# Note: No `extends:` is present in the job
a-job:
extends:
- .fast-no-clone-job
variables:
FILES_TO_DOWNLOAD: >
scripts/rspec_helpers.sh
scripts/slack
script:
- source scripts/rspec_helpers.sh scripts/slack
- echo "No need for a git clone!"
```
#### Scenario 2: a `before_script` block is already defined in the job (or in jobs it extends)
For this scenario, you have to:
1. Extend the `.fast-no-clone-job` as in the first scenario (this will merge the `FILES_TO_DOWNLOAD` variable with the other variables)
1. Make sure the `before_script` section from `.fast-no-clone-job` is referenced in the `before_script` we use for this job.
**Before**:
```yaml
.base-job:
before_script:
echo "Hello from .base-job"
a-job:
extends:
- .base-job
script:
- source scripts/rspec_helpers.sh scripts/slack
- echo "No need for a git clone!"
```
**After**:
```yaml
.base-job:
before_script:
echo "Hello from .base-job"
a-job:
extends:
- .base-job
- .fast-no-clone-job
variables:
FILES_TO_DOWNLOAD: >
scripts/rspec_helpers.sh
scripts/slack
before_script:
- !reference [".fast-no-clone-job", before_script]
- !reference [".base-job", before_script]
script:
- source scripts/rspec_helpers.sh scripts/slack
- echo "No need for a git clone!"
```
#### Caveats
- This pattern does not work if a script relies on `git` to access the repository, because we don't have the repository without cloning or fetching.
- The job using this pattern needs to have `curl` available.
- If you need to run `bundle install` in the job (even using `BUNDLE_ONLY`), you need to:
- Download the gems that are stored in the `gitlab-org/gitlab` project.
- You can use the `download_local_gems` shell command for that purpose.
- Include the `Gemfile`, `Gemfile.lock` and `Gemfile.checksum` (if applicable)
#### Where is this pattern used?
- For now, we use this pattern for the following jobs, and those do not block private repositories:
- `review-build-cng-env` for:
- `GITALY_SERVER_VERSION`
- `GITLAB_ELASTICSEARCH_INDEXER_VERSION`
- `GITLAB_KAS_VERSION`
- `GITLAB_PAGES_VERSION`
- `GITLAB_SHELL_VERSION`
- `scripts/trigger-build.rb`
- `VERSION`
- `review-deploy` for:
- `GITALY_SERVER_VERSION`
- `GITLAB_SHELL_VERSION`
- `scripts/review_apps/review-apps.sh`
- `scripts/review_apps/seed-dast-test-data.sh`
- `VERSION`
- `rspec:coverage` for:
- `config/bundler_setup.rb`
- `Gemfile`
- `Gemfile.checksum`
- `Gemfile.lock`
- `scripts/merge-simplecov`
- `spec/simplecov_env_core.rb`
- `spec/simplecov_env.rb`
- `prepare-as-if-foss-env` for:
- `scripts/setup/generate-as-if-foss-env.rb`
Additionally, `scripts/utils.sh` is always downloaded from the API when this pattern is used (this file contains the code for `.fast-no-clone-job`).
### Runner tags
On GitLab.com, both unprivileged and privileged runners are
available. For projects in the `gitlab-org` group and forks of those
projects, only one of the following tags should be added to a job:
- `gitlab-org`: Jobs randomly use privileged and unprivileged runners.
- `gitlab-org-docker`: Jobs must use a privileged runner. If you need [Docker-in-Docker support](../../ci/docker/using_docker_build.md#use-docker-in-docker),
use `gitlab-org-docker` instead of `gitlab-org`.
The `gitlab-org-docker` tag is added by the `.use-docker-in-docker` job
definition above.
To ensure compatibility with forks, avoid using both `gitlab-org` and
`gitlab-org-docker` simultaneously. No instance runners
have both `gitlab-org` and `gitlab-org-docker` tags. For forks of
`gitlab-org` projects, jobs will get stuck if both tags are supplied because
no matching runners are available.
See [the GitLab Repositories handbook page](https://handbook.gitlab.com/handbook/engineering/gitlab-repositories/#cicd-configuration)
for more information.
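In practice, this means a job picks exactly one of the tags, for example (job names are illustrative):

```yaml
# Regular job: can run on either privileged or unprivileged runners.
a-regular-job:
  tags:
    - gitlab-org

# Docker-in-Docker job: must run on a privileged runner.
a-docker-build-job:
  extends:
    - .use-docker-in-docker   # this definition adds the gitlab-org-docker tag
  script:
    - docker build .
```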
### Using the `gitlab` Ruby gem in the canonical project
When calling `require 'gitlab'` in the canonical project, it will require the `lib/gitlab.rb` file when `$LOAD_PATH` has `lib`, which happens when we're loading the application (`config/application.rb`) or tests (`spec/spec_helper.rb`).
This means we're not able to load the `gitlab` gem under the above conditions and even if we can, the constant name will conflict, breaking internal assumptions and causing random errors.
If you are working on a script that is using [the `gitlab` Ruby gem](https://github.com/NARKOZ/gitlab), you will need to take a few precautions:
#### 1 - Conditional require of the gem
To avoid potential conflicts, only require the `gitlab` gem if the `Gitlab` constant isn't defined:
```ruby
# Bad
require 'gitlab'
# Good
if Object.const_defined?(:RSpec)
# Ok, we're testing, we know we're going to stub `Gitlab`, so we just ignore
else
require 'gitlab'
if Gitlab.singleton_class.method_defined?(:com?)
abort 'lib/gitlab.rb is loaded, and this means we can no longer load the client and we cannot proceed'
end
end
```
#### 2 - Mock the `gitlab` gem entirely in your specs
In your specs, `require 'gitlab'` will reference the `lib/gitlab.rb` file:
```ruby
# Bad
allow(GitLab).to receive(:a_method).and_return(...)
# Good
client = double('GitLab')
# To easily stub the client, consider using a method that returns the client.
# We can then stub that method to return our fake client, whose methods we can stub further.
#
# This is the pattern followed below
let(:instance) { described_class.new }
allow(instance).to receive(:gitlab).and_return(client)
allow(client).to receive(:a_method).and_return(...)
```
If you need to query jobs, for instance, the following snippet is useful:
```ruby
# Bad
allow(GitLab).to receive(:pipeline_jobs).and_return(...)
# Good
#
# rubocop:disable RSpec/VerifiedDoubles -- We do not load the Gitlab client directly
client = double('GitLab')
allow(instance).to receive(:gitlab).and_return(client)
jobs = ['job1', 'job2']
allow(client).to yield_jobs(:pipeline_jobs, jobs)
def yield_jobs(api_method, jobs)
messages = receive_message_chain(api_method, :auto_paginate)
jobs.inject(messages) do |stub, job_name|
stub.and_yield(double(name: job_name))
end
end
# rubocop:enable RSpec/VerifiedDoubles
```
#### 3 - Do not call your script with `bundle exec`
Executing with `bundle exec` will change the `$LOAD_PATH` for Ruby, and it will load `lib/gitlab.rb` when calling `require 'gitlab'`:
```shell
# Bad
bundle exec scripts/my-script.rb
# Good
scripts/my-script.rb
```
## CI Configuration Testing
We now have RSpec tests to verify changes to the CI configuration by simulating pipeline creation with the updated YAML files. You can find these tests and documentation of the current test coverage in [`spec/dot_gitlab_ci/job_dependency_spec.rb`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/spec/dot_gitlab_ci/job_dependency_spec.rb).
### How Do the Tests Work
With the help of `Ci::CreatePipelineService`, we are able to simulate pipeline creation with different attributes such as branch name, MR labels, pipeline source (scheduled vs. push), and pipeline type (merge train vs. merged results). This is the same service used by the GitLab CI Lint API for validating CI/CD configurations.
These tests will automatically run for merge requests that update CI configurations. However, team members can opt to skip these tests by adding the label ~"pipeline:skip-ci-validation" to their merge requests.
Running these tests locally is encouraged, as it provides the fastest feedback.
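For example, one way to run them locally is to invoke RSpec directly on the spec file (assuming a working GDK/Rails development environment):

```shell
bundle exec rspec spec/dot_gitlab_ci/job_dependency_spec.rb
```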
---
stage: none
group: unassigned
info: Any user with at least the Maintainer role can merge updates to this content.
For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Design Patterns
---
This page covers suggested design patterns and also anti-patterns.
{{< alert type="note" >}}
When adding a design pattern to this document, be sure to clearly state the **problem it solves**.
When adding a design anti-pattern, clearly state **the problem it prevents**.
{{< /alert >}}
## Patterns
The following design patterns are suggested approaches for solving common problems. Use discretion when evaluating
if a certain pattern makes sense in your situation. Just because it is a pattern, doesn't mean it is a good one for your problem.
## Anti-patterns
Anti-patterns may seem like good approaches at first, but it has been shown that they bring more ills than benefits. These should
generally be avoided.
Throughout the GitLab codebase, there may be historic uses of these anti-patterns. [Use discretion](https://handbook.gitlab.com/handbook/engineering/development/principles/#balance-refactoring-and-velocity)
when figuring out whether or not to refactor, when touching code that uses one of these legacy patterns.
{{< alert type="note" >}}
For new features, anti-patterns are not necessarily prohibited, but it is **strongly suggested** to find another approach.
{{< /alert >}}
### Shared Global Object
A shared global object is an instance of something that can be accessed from anywhere and therefore has no clear owner.
Here's an example of this pattern applied to a Vuex Store:
```javascript
const createStore = () => new Vuex.Store({
actions,
state,
mutations
});
// Notice that we are forcing all references to this module to use the same single instance of the store.
// We are also creating the store at import-time and there is nothing which can automatically dispose of it.
//
// As an alternative, we should export the `createStore` and let the client manage the
// lifecycle and instance of the store.
export default createStore();
```
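A minimal sketch of the alternative suggested in the comment above, assuming a hypothetical page entrypoint that creates and owns the store instance:

```javascript
// store/index.js - export the factory instead of a shared instance.
// `actions`, `state`, and `mutations` come from this module, as in the example above.
export const createStore = () => new Vuex.Store({ actions, state, mutations });

// pages/my_page/index.js - hypothetical entrypoint that owns the store lifecycle.
import Vue from 'vue';
import MyApp from './components/my_app.vue';
import { createStore } from './store';

export default (el) => {
  const store = createStore();
  return new Vue({ el, store, render: (h) => h(MyApp) });
};
```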
#### What problems do Shared Global Objects cause?
Shared Global Objects are convenient because they can be accessed from anywhere. However,
the convenience does not always outweigh their heavy cost:
- **No ownership.** There is no clear owner to these objects and therefore they assume a non-deterministic
and permanent lifecycle. This can be especially problematic for tests.
- **No access control.** When Shared Global Objects manage some state, this can create some very buggy and difficult
coupling situations because there is no access control to this object.
- **Possible circular references.** Shared Global Objects can also create some circular referencing situations since submodules
of the Shared Global Object can reference modules that reference itself (see
[this MR for an example](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/33366)).
Here are some historic examples where this pattern was identified to be problematic:
- [Reference to global Vuex store in IDE](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/36401)
- [Docs update to discourage singleton Vuex store](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/36952)
#### When could the Shared Global Object pattern be actually appropriate?
Shared Global Objects solve the problem of making something globally accessible. This pattern
could be appropriate:
- When a responsibility is truly global and should be referenced across the application
(for example, an application-wide Event Bus).
Even in these scenarios, consider avoiding the Shared Global Object pattern because the
side-effects can be notoriously difficult to reason about.
#### References
For more information, see [Global Variables Are Bad on the C2 wiki](https://wiki.c2.com/?GlobalVariablesAreBad).
### Singleton
The classic [Singleton pattern](https://en.wikipedia.org/wiki/Singleton_pattern) is an approach to ensure that only one
instance of a thing exists.
Here's an example of this pattern:
```javascript
class MyThing {
constructor() {
// ...
}
// ...
}
MyThing.instance = null;
export const getThingInstance = () => {
if (MyThing.instance) {
return MyThing.instance;
}
const instance = new MyThing();
MyThing.instance = instance;
return instance;
};
```
#### What problems do Singletons cause?
It is a big assumption that only one instance of a thing should exist. More often than not,
a Singleton is misused and causes very tight coupling amongst itself and the modules that reference it.
Here are some historic examples where this pattern was identified to be problematic:
- [Test issues caused by singleton class in IDE](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/30398#note_331174190)
- [Implicit Singleton created by module's shared variables](https://gitlab.com/gitlab-org/gitlab-vscode-extension/-/merge_requests/97#note_417515776)
- [Complexity caused by Singletons](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/29461#note_324585814)
Here are some ills that Singletons often produce:
1. **Non-deterministic tests.** Singletons encourage non-deterministic tests because the single instance is shared across
individual tests, often causing the state of one test to bleed into another.
1. **High coupling.** Under the hood, clients of a singleton class all share a single specific
instance of an object, which means this pattern inherits all the [problems of Shared Global Object](#what-problems-do-shared-global-objects-cause)
such as no clear ownership and no access control. This leads to high-coupling situations that can
be buggy and difficult to untangle.
1. **Infectious.** Singletons are infectious, especially when they manage state. Consider the component
[RepoEditor](https://gitlab.com/gitlab-org/gitlab/-/blob/27ad6cb7b76430fbcbaf850df68c338d6719ed2b/app%2Fassets%2Fjavascripts%2Fide%2Fcomponents%2Frepo_editor.vue#L0-1)
used in the Web IDE. This component interfaces with a Singleton [Editor](https://gitlab.com/gitlab-org/gitlab/-/blob/862ad57c44ec758ef3942ac2e7a2bd40a37a9c59/app%2Fassets%2Fjavascripts%2Fide%2Flib%2Feditor.js#L21)
which manages some state for working with Monaco. Because of the Singleton nature of the Editor class,
the component `RepoEditor` is now forced to be a Singleton as well. Multiple instances of this component
would cause production issues because no one truly owns the instance of `Editor`.
#### Why is the Singleton pattern popular in other languages like Java?
This is because of the limitations of languages like Java, where everything has to be wrapped
in a class. In JavaScript, we have object and function literals, so we can solve
many problems with a module that exports utility functions.
#### When could the Singleton pattern be actually appropriate?
Singletons solve the problem of enforcing that only one instance of a thing exists. It's possible
that a Singleton could be appropriate in the following rare cases:
- We need to manage some resource that **MUST** have just 1 instance (that is, some hardware restriction).
- There is a real [cross-cutting concern](https://en.wikipedia.org/wiki/Cross-cutting_concern) (for example, logging) and a Singleton provides the simplest API.
Even in these scenarios, consider avoiding the Singleton pattern.
#### What alternatives are there to the Singleton pattern?
##### Utility Functions
When no state needs to be managed, we can export utility functions from a module without
messing with any class instantiation.
```javascript
// bad - Singleton
export class ThingUtils {
static create() {
if(this.instance) {
return this.instance;
}
this.instance = new ThingUtils();
return this.instance;
}
bar() { /* ... */ }
fuzzify(id) { /* ... */ }
}
// good - Utility functions
export const bar = () => { /* ... */ };
export const fuzzify = (id) => { /* ... */ };
```
##### Dependency Injection
[Dependency Injection](https://en.wikipedia.org/wiki/Dependency_injection) is an approach which breaks
coupling by declaring a module's dependencies to be injected from outside the module (for example, through constructor parameters, a bona fide Dependency Injection framework, or even Vue's `provide/inject`).
```javascript
// bad - Vue component coupled to Singleton
export default {
created() {
this.mediator = MyFooMediator.getInstance();
},
};
// good - Vue component declares dependency
export default {
inject: ['mediator']
};
```
```javascript
// bad - We're not sure where the singleton is in its lifecycle, so we initialize it here.
export class Foo {
constructor() {
Bar.getInstance().init();
}
stuff() {
return Bar.getInstance().doStuff();
}
}
// good - Let's receive this dependency as a constructor argument.
// It's also not our responsibility to manage the lifecycle.
export class Foo {
constructor(bar) {
this.bar = bar;
}
stuff() {
return this.bar.doStuff();
}
}
```
In this example, the lifecycle and implementation details of `mediator` are all managed
**outside** the component (most likely the page entrypoint).
---
stage: Monitor
group: Platform Insights
info: To determine the technical writer assigned to the Stage/Group associated with
this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Dashboard layout framework
---
{{< details >}}
- Tier: Free, Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated
{{< /details >}}
{{< history >}}
- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/191174) in GitLab 18.1.
{{< /history >}}
The dashboard layout framework is part of a broader effort to standardize dashboards across the platform
as described in [Epic #13801](https://gitlab.com/groups/gitlab-org/-/epics/13801).
For more in-depth details on the dashboard layout framework, see the [architecture design document](https://handbook.gitlab.com/handbook/engineering/architecture/design-documents/dashboard_layout_framework/).
## Rendering dashboards
To render dashboard layouts, it's recommended to use the [GlDashboardLayout](https://design.gitlab.com/storybook/?path=/docs/dashboards-dashboards-layout--docs)
component. It provides an easy way to render dashboards using
a configuration that aligns with our [Pajamas guidelines](https://design.gitlab.com/patterns/dashboards/).
### Panel guidelines
You are free to
choose whichever panel component best suits your needs. However, to ensure consistency
with our design patterns, it's strongly recommended that you use one of the
following components:
- [GlDashboardPanel](https://gitlab-org.gitlab.io/gitlab-ui/?path=/docs/dashboards-dashboards-panel--docs): The official Pajamas dashboard panel
- [`extended_dashboard_panel.vue`](https://gitlab-org.gitlab.io/gitlab/storybook/?path=/docs/vue-shared-components-extended-dashboard-panel--docs): Extends `GlDashboardPanel` with easy alert styling and i18n strings
## Migration guide
Migrating an existing dashboard to GlDashboardLayout should be relatively
straightforward in most cases, because you only need to replace the dashboard shell
and can keep the existing visualizations. A typical migration path could look like this
(a sketch of the first step follows the list):
1. Create a feature flag to conditionally render your new dashboard.
1. Create a new dashboard using GlDashboardLayout and `extended_dashboard_panel.vue`.
1. Create a dashboard config object that mimics your old dashboard layout.
1. Optionally, use GlDashboardLayout's slots to render your dashboard's
filters, actions, or custom title or description.
1. Ensure your new dashboard, panels, and visualizations render correctly.
1. Remove the feature flag and your old dashboard.
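A rough sketch of the first step, using the existing feature-flags mixin. The flag name and component names are hypothetical:

```javascript
// my_dashboard_app.vue (script section) - render the new dashboard only when
// the feature flag is enabled, keeping the legacy dashboard as the fallback.
import glFeatureFlagsMixin from '~/vue_shared/mixins/gl_feature_flags_mixin';
import LegacyDashboard from './legacy_dashboard.vue';
import NewDashboard from './new_dashboard.vue';

export default {
  components: { LegacyDashboard, NewDashboard },
  mixins: [glFeatureFlagsMixin()],
  computed: {
    dashboardComponent() {
      // `myNewDashboardLayout` is a hypothetical feature flag name.
      return this.glFeatures.myNewDashboardLayout ? NewDashboard : LegacyDashboard;
    },
  },
};
```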
See the basic implementation on [GitLab UI](https://design.gitlab.com/storybook/?path=/docs/dashboards-dashboards-layout--docs)
for an example on how to render existing visualization components using the dashboard layout component.
### Example implementations
Real world implementations and migrations using the GlDashboardLayout component:
- New group security dashboard added in MR [!191974](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/191974)
- New project security dashboard added in MR [!197626](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/197626)
- New compliance center added in MR [!195759](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/195759)
---
stage: none
group: unassigned
info: Any user with at least the Maintainer role can merge updates to this content.
For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Type hinting overview
---
The frontend codebase of the GitLab project currently does not require nor enforce types. Adding
type annotations is optional, and we don't currently enforce any type safety in the JavaScript
codebase. However, type annotations can be very helpful in adding clarity to the codebase,
especially in shared utility code. This document covers how type hinting currently works,
how to add new type annotations, and how to set up type hinting in the GitLab project.
## JSDoc
[JSDoc](https://jsdoc.app/) is a tool to document and describe types in JavaScript code, using
specially formed comments. JSDoc's types vocabulary is relatively limited, but it is widely
supported [by many IDEs](https://en.wikipedia.org/wiki/JSDoc#JSDoc_in_use).
### Examples
#### Describing functions
Use [`@param`](https://jsdoc.app/tags-param) and [`@returns`](https://jsdoc.app/tags-returns)
to describe a function's parameter and return types:
```javascript
/**
* Adds two numbers
* @param {number} a first number
* @param {number} b second number
* @returns {number} sum of two numbers
*/
function add(a, b) {
return a + b;
}
```
##### Optional parameters
Use square brackets `[]` around a parameter name to mark it as optional. A default value can be
provided by using the `[name=value]` syntax:
```javascript
/**
 * Increments a number
 * @param {number} a the value to increment
 * @param {number} [b=1] optional increment, defaults to 1
 * @returns {number} the incremented value
 */
function increment(a, b = 1) {
  return a + b;
}
```
##### Object parameters
Functions that accept objects can be typed by using `object.field` notation in `@param` names:
```javascript
/**
 * Builds a URL from a config object
 * @param {object} config
 * @param {string} config.path path
 * @param {string} [config.anchor] anchor
 * @returns {string}
 */
function createUrl(config) {
  if (config.anchor) {
    return config.path + '#' + config.anchor;
  }
  return config.path;
}
```
#### Annotating types of variables that are not immediately assigned a value
It's hard for tools and IDEs to infer the type of a variable that doesn't immediately receive a value.
We can use the [`@type`](https://jsdoc.app/tags-type) notation to assign a type to such variables:
```javascript
/** @type {number} */
let value;
```
Consult [JSDoc official website](https://jsdoc.app/) for more syntax details.
### Tips for using JSDoc
#### Use lowercase names for basic types
Both uppercase and lowercase are acceptable, but in most cases use lowercase
for a primitive or an object: `boolean`, `number`, `string`, `symbol`, or `object`.
```javascript
/**
* Translates `text`.
* @param {string} text - The text to be translated
* @returns {string} The translated text
*/
const gettext = (text) => locale.gettext(ensureSingleLine(text));
```
#### Use well-known types
Well-known types, like `HTMLDivElement` or `Intl` are available and can be used directly:
```javascript
/** @type {HTMLDivElement} */
let element;
```
```javascript
/**
* Creates an instance of Intl.DateTimeFormat for the current locale.
* @param {Intl.DateTimeFormatOptions} [formatOptions] - for available options, please see https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/DateTimeFormat
* @returns {Intl.DateTimeFormat}
*/
const createDateTimeFormat = (formatOptions) =>
Intl.DateTimeFormat(getPreferredLocales(), formatOptions);
```
#### Import existing type definitions via `import('path/to/module')`
Here are examples of how to annotate a type of the Vue Test Utils Wrapper variables, that are not
immediately defined:
```javascript
/** @type {import('helpers/vue_test_utils_helper').ExtendedWrapper} */
let wrapper;
// ...
wrapper = mountExtended(/* ... */);
```
```javascript
/** @type {import('@vue/test-utils').Wrapper} */
let wrapper;
// ...
wrapper = shallowMount(/* ... */);
```
{{< alert type="note" >}}
`import()` is [not a native JSDoc construct](https://github.com/jsdoc/jsdoc/issues/1645), but it is
recognized by many IDEs and tools. In this case we're aiming for better clarity in the code and
improved Developer Experience with an IDE.
{{< /alert >}}
#### JSDoc is limited
As stated above, JSDoc has a limited vocabulary, so using it might not describe a type fully.
Sometimes, however, it's possible to use a third-party library's type definitions to make type inference
work for our code. Here's an example of such an approach:
```diff
- export const mountExtended = (...args) => extendedWrapper(mount(...args));
+ import { compose } from 'lodash/fp';
+ export const mountExtended = compose(extendedWrapper, mount);
```
Here we use the TypeScript type definitions of the `compose` function to add inferred type definitions to
the `mountExtended` function. In this case, the `mountExtended` arguments are of the same type as the `mount`
arguments, and the return type is the same as the `extendedWrapper` return type.
We can still use JSDoc's syntax to add description to the function, for example:
```javascript
/** Mounts a component and returns an extended wrapper for it */
export const mountExtended = compose(extendedWrapper, mount);
```
## System requirements
A setup might be required for type definitions from the GitLab codebase and from third-party packages to
be properly displayed in IDEs and tools.
### VS Code settings
If you are having trouble getting VS Code IntelliSense working you may need to increase the amount of
memory the TS server is allowed to use. To do this, add the following to your `settings.json` file:
```json
{
"typescript.tsserver.maxTsServerMemory": 8192,
"typescript.tsserver.nodePath": "node"
}
```
### Aliases
Our codebase uses many aliases for imports. For example, `import Api from '~/api';` imports the
`app/assets/javascripts/api.js` file. But IDEs might not know that alias, and so might not know the
type of `Api`. To fix that for most IDEs, we need to create a
[`jsconfig.json`](https://code.visualstudio.com/docs/languages/jsconfig) file.
There is a script in the GitLab project that can generate a `jsconfig.json` file based on webpack
configuration and current environment variables. To generate or update the `jsconfig.json` file,
run from the GitLab project root:
```shell
node scripts/frontend/create_jsconfig.js
```
`jsconfig.json` is added to the gitignore list, so creating or changing it does not cause Git changes in
the GitLab project. This also means it is not included in Git pulls, so it has to be manually
generated or updated.
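Once the alias is resolved, `import()` type annotations can reference aliased modules as well. Here's a minimal sketch; the module path and `MyWidget` class are hypothetical and shown only for illustration:
```javascript
// `~/my_feature/my_widget` and `MyWidget` are hypothetical names used for illustration.
// With `jsconfig.json` in place, the IDE can resolve the aliased path and the type.
/** @type {import('~/my_feature/my_widget').MyWidget} */
let widget;
```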
### 3rd party TypeScript definitions
While more and more libraries use TypeScript for type definitions, some still might have JSDoc-annotated
types or no types at all. To cover that gap, the TypeScript community started the
[DefinitelyTyped](https://github.com/DefinitelyTyped/DefinitelyTyped) initiative, which creates and
supports standalone type definitions for popular JavaScript libraries. We can use those definitions
by either explicitly installing the type packages (`yarn add -D "@types/lodash"`) or by using a
feature called [Automatic Type Acquisition (ATA)](https://www.typescriptlang.org/tsconfig/#typeAcquisition),
which is available in some Language Services
(for example, [ATA in VS Code](https://github.com/microsoft/TypeScript/wiki/JavaScript-Language-Service-in-Visual-Studio#user-content--automatic-acquisition-of-type-definitions)).
Automatic Type Acquisition (ATA) automatically fetches type definitions from the DefinitelyTyped
list. But for ATA to work, a globally installed `npm` might be required. IDEs can provide fallback
configuration options to set the location of the `npm` executables. Consult your IDE documentation for
details.
Because ATA is not guaranteed to work, and Lodash is a backbone for many of our utility functions,
we have the [DefinitelyTyped definitions for Lodash](https://www.npmjs.com/package/@types/lodash)
explicitly added to our `devDependencies` in the `package.json`. This ensures that everyone gets
type hints for `lodash`-based functions out of the box.
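For example, a lodash-based helper picks up its parameter and return types without any extra JSDoc annotations; a minimal sketch (the `search` helper below is hypothetical):
```javascript
import { debounce } from 'lodash';

/**
 * A small example helper; in real code this might trigger an API request.
 * @param {string} term
 */
function search(term) {
  console.log(`searching for ${term}`);
}

// Thanks to `@types/lodash`, IDEs infer that `debouncedSearch` accepts a string,
// without any additional annotations.
const debouncedSearch = debounce(search, 250);

debouncedSearch('gitlab');
```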
# GraphQL
## Getting Started
### Helpful Resources
**General resources**:
- [📚 Official Introduction to GraphQL](https://graphql.org/learn/)
- [📚 Official Introduction to Apollo](https://www.apollographql.com/tutorials/fullstack-quickstart/01-introduction)
**GraphQL at GitLab**:
<!-- vale gitlab_base.Spelling = NO -->
- <i class="fa fa-youtube-play youtube" aria-hidden="true"></i> [GitLab Unfiltered GraphQL playlist](https://www.youtube.com/watch?v=wHPKZBDMfxE&list=PL05JrBw4t0KpcjeHjaRMB7IGB2oDWyJzv)
- <i class="fa fa-youtube-play youtube" aria-hidden="true"></i> [GraphQL at GitLab: Deep Dive](../api_graphql_styleguide.md#deep-dive) (video) by Nick Thomas
- An overview of the history of GraphQL at GitLab (not frontend-specific)
- <i class="fa fa-youtube-play youtube" aria-hidden="true"></i> [GitLab Feature Walkthrough with GraphQL and Vue Apollo](https://www.youtube.com/watch?v=6yYp2zB7FrM) (video) by Natalia Tepluhina
- A real-life example of implementing a frontend feature in GitLab using GraphQL
- <i class="fa fa-youtube-play youtube" aria-hidden="true"></i> [History of client-side GraphQL at GitLab](https://www.youtube.com/watch?v=mCKRJxvMnf0) (video) by Illya Klymov and Natalia Tepluhina
- <i class="fa fa-youtube-play youtube" aria-hidden="true"></i> [From Vuex to Apollo](https://www.youtube.com/watch?v=9knwu87IfU8) (video) by Natalia Tepluhina
- An overview of when Apollo might be a better choice than Vuex, and how one could go about the transition
- [🛠 Vuex -> Apollo Migration: a proof-of-concept project](https://gitlab.com/ntepluhina/vuex-to-apollo/blob/master/README.md)
- A collection of examples that show the possible approaches for state management with Vue+GraphQL+(Vuex or Apollo) apps
<!-- vale gitlab_base.Spelling = YES -->
### Libraries
We use [Apollo](https://www.apollographql.com/) (specifically [Apollo Client](https://www.apollographql.com/docs/react/)) and [Vue Apollo](https://github.com/vuejs/vue-apollo)
when using GraphQL for frontend development.
If you are using GraphQL in a Vue application, the [Usage in Vue](#usage-in-vue) section
can help you learn how to integrate Vue Apollo.
For other use cases, check out the [Usage outside of Vue](#usage-outside-of-vue) section.
We use [Immer](https://immerjs.github.io/immer/) for immutable cache updates;
see [Immutability and cache updates](#immutability-and-cache-updates) for more information.
### Tooling
<!-- vale gitlab_base.Spelling = NO -->
- [Apollo Client Devtools](https://github.com/apollographql/apollo-client-devtools)
<!-- vale gitlab_base.Spelling = YES -->
#### Apollo GraphQL VS Code extension
If you use VS Code, the [Apollo GraphQL extension](https://marketplace.visualstudio.com/items?itemName=apollographql.vscode-apollo) supports autocompletion in `.graphql` files. To set up
the GraphQL extension, follow these steps:
1. Generate the schema: `bundle exec rake gitlab:graphql:schema:dump`
1. Add an `apollo.config.js` file to the root of your `gitlab` local directory.
1. Populate the file with the following content:
```javascript
module.exports = {
client: {
includes: ['./app/assets/javascripts/**/*.graphql', './ee/app/assets/javascripts/**/*.graphql'],
service: {
name: 'GitLab',
localSchemaFile: './tmp/tests/graphql/gitlab_schema.graphql',
},
},
};
```
1. Restart VS Code.
### Exploring the GraphQL API
Our GraphQL API can be explored via GraphiQL at your instance's
`/-/graphql-explorer` or at [GitLab.com](https://gitlab.com/-/graphql-explorer). Consult the
[GitLab GraphQL API Reference documentation](../../api/graphql/reference/_index.md)
where needed.
To check all existing queries and mutations, on the right side of GraphiQL, select **Documentation explorer**.
To check the execution of the queries and mutations you've written, in the upper-left corner, select **Execute query**.

## Apollo Client
To avoid duplicated clients being created in different apps, we have a
[default client](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/assets/javascripts/lib/graphql.js) that should be used. This sets up the
Apollo client with the correct URL and also sets the CSRF headers.
Default client accepts two parameters: `resolvers` and `config`.
- `resolvers` parameter is created to accept an object of resolvers for [local state management](#local-state-with-apollo) queries and mutations
- `config` parameter takes an object of configuration settings:
- `cacheConfig` field accepts an optional object of settings to [customize Apollo cache](https://www.apollographql.com/docs/react/caching/cache-configuration/#configuring-the-cache)
- `baseUrl` allows us to pass a URL for GraphQL endpoint different from our main endpoint (for example, `${gon.relative_url_root}/api/graphql`)
- `fetchPolicy` determines how you want your component to interact with the Apollo cache. Defaults to "cache-first".
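For example, a client that overrides these defaults might be created like this; a minimal sketch, where the endpoint and fetch-policy values are illustrative only:
```javascript
import createDefaultClient from '~/lib/graphql';

// No local resolvers are needed here, so the first argument is an empty object.
const defaultClient = createDefaultClient(
  {},
  {
    baseUrl: `${gon.relative_url_root}/api/graphql`,
    fetchPolicy: 'cache-and-network',
    cacheConfig: {
      // Optional Apollo cache customizations go here.
    },
  },
);
```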
### Multiple client queries for the same object
If you are making multiple queries to the same Apollo client object, you might encounter the following error: `Cache data may be lost when replacing the someProperty field of a Query object. To address this problem, either ensure all objects of type SomeEntity have an ID or a custom merge function`. We already check `id` presence for every GraphQL type that has an `id`, so this shouldn't be the case (unless you see this warning when running unit tests; in this case ensure your mocked responses contain an `id` whenever it's requested).
When the `SomeEntity` type doesn't have an `id` property in the GraphQL schema, to fix this warning we need to define a custom merge function.
We have some client-wide types with `merge: true` defined in the default client as [`typePolicies`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/assets/javascripts/lib/graphql.js) (this means that Apollo merges existing and incoming responses in the case of subsequent queries). Consider adding `SomeEntity` there or defining a custom merge function for it.
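For example, a custom merge behavior for a type without an `id` could be configured when creating the client; a minimal sketch, where `SomeEntity` is a placeholder type name:
```javascript
import createDefaultClient from '~/lib/graphql';

const defaultClient = createDefaultClient(
  {},
  {
    cacheConfig: {
      typePolicies: {
        // `SomeEntity` is a placeholder; use the GraphQL type that triggers the warning.
        SomeEntity: {
          // Shorthand for "merge existing and incoming objects field by field".
          merge: true,
        },
      },
    },
  },
);
```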
## GraphQL Queries
To save query compilation at runtime, webpack can directly import `.graphql`
files. This allows webpack to pre-process the query at compile time instead
of the client compiling the queries at runtime.
To distinguish queries from mutations and fragments, the following naming convention is recommended:
- `all_users.query.graphql` for queries;
- `add_user.mutation.graphql` for mutations;
- `basic_user.fragment.graphql` for fragments.
If you are using queries for the [CustomersDot GraphQL endpoint](https://gitlab.com/gitlab-org/gitlab/-/blob/be78ccd832fd40315c5e63bb48ee1596ae146f56/app/controllers/customers_dot/proxy_controller.rb), end the filename with `.customer.query.graphql`, `.customer.mutation.graphql`, or `.customer.fragment.graphql`.
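Following this convention, a component can import the pre-processed documents directly; a minimal sketch, where the file paths and field names are hypothetical:
```javascript
// Hypothetical file paths following the naming convention above.
import allUsersQuery from '~/users/graphql/all_users.query.graphql';
import addUserMutation from '~/users/graphql/add_user.mutation.graphql';

export default {
  apollo: {
    // Runs the query automatically when the component is created.
    users: {
      query: allUsersQuery,
    },
  },
  methods: {
    addUser(name) {
      return this.$apollo.mutate({
        mutation: addUserMutation,
        variables: { name },
      });
    },
  },
};
```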
### Fragments
[Fragments](https://graphql.org/learn/queries/#fragments) are a way to make your complex GraphQL queries more readable and re-usable. Here is an example of a GraphQL fragment:
```javascript
fragment DesignListItem on Design {
id
image
event
filename
notesCount
}
```
Fragments can be stored in separate files, imported and used in queries, mutations, or other fragments.
```javascript
#import "./design_list.fragment.graphql"
#import "./diff_refs.fragment.graphql"
fragment DesignItem on Design {
...DesignListItem
fullPath
diffRefs {
...DesignDiffRefs
}
}
```
More about fragments:
[GraphQL documentation](https://graphql.org/learn/queries/#fragments)
## Global IDs
The GitLab GraphQL API expresses `id` fields as Global IDs rather than the PostgreSQL
primary key `id`. Global ID is [a convention](https://graphql.org/learn/global-object-identification/)
used for caching and fetching in client-side libraries.
To convert a Global ID to the primary key `id`, you can use `getIdFromGraphQLId`:
```javascript
import { getIdFromGraphQLId } from '~/graphql_shared/utils';
const primaryKeyId = getIdFromGraphQLId(data.id);
```
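For the reverse direction, a `convertToGraphQLId` helper also lives in `~/graphql_shared/utils` and builds a Global ID from a type name and a primary key. A short sketch follows; treat the exact signature as an assumption and verify it in `graphql_shared/utils`:
```javascript
// Assumption: `convertToGraphQLId(typeName, id)` is exported from `~/graphql_shared/utils`.
import { convertToGraphQLId } from '~/graphql_shared/utils';

// Builds something like 'gid://gitlab/Project/1' for use as a GraphQL variable.
const globalId = convertToGraphQLId('Project', 1);
```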
**It is required** to query global `id` for every GraphQL type that has an `id` in the schema:
```javascript
query allReleases(...) {
project(...) {
id // Project has an ID in GraphQL schema so should fetch it
releases(...) {
nodes {
// Release has no ID property in GraphQL schema
name
tagName
tagPath
assets {
count
links {
nodes {
id // Link has an ID in GraphQL schema so should fetch it
name
}
}
}
}
pageInfo {
// PageInfo has no ID property in the GraphQL schema
startCursor
hasPreviousPage
hasNextPage
endCursor
}
}
}
}
```
## Skip query with async variables
Whenever a query has one or more variables that require another query to have executed before it can run, it is **vital** to add a `skip()` property to the query with all relations.
Failing to do so results in the query executing twice: once with the default value (whatever was defined on the `data` property, or `undefined`), and again once the initial query resolves, triggering a new variable value to be injected into the smart query, which is then refetched by Apollo.
```javascript
data() {
return {
// Define data properties for all apollo queries
project: null,
issues: null
}
},
apollo: {
project: {
query: getProject,
variables() {
return {
projectId: this.projectId
}
}
},
releaseName: {
query: getReleaseName,
// Without this skip, the query would run initially with `projectName: null`
// Then when `getProject` resolves, it will run again.
skip() {
return !this.project?.name
},
variables() {
return {
projectName: this.project?.name
}
}
}
}
```
## Splitting queries in GraphQL
Splitting queries in Apollo is often done to optimize data fetching by breaking down larger, monolithic queries into smaller, more manageable pieces.
### Why split queries in GraphQL
1. **Increased query complexity**: We have [limits](../../api/graphql#limits) for GraphQL queries, which should be adhered to.
1. **Performance**: Smaller, targeted queries often result in faster response times from the server, which directly benefits the frontend by getting data to the client sooner.
1. **Better component decoupling and maintainability**: Each component can handle its own data needs, making it easier to reuse components across your app without requiring access to a large, shared query.
### How to split queries
1. Define multiple queries and use them independently in various parts of your component hierarchy. This way, each component fetches only the data it needs.
If you look at the [work item query architecture](../work_items_widgets.md#frontend-architecture), we have [split the queries](../work_items_widgets.md#widget-responsibility-and-structure) for most of the widgets for the same reasons of query complexity and splitting of concerned data.
```javascript
#import "ee_else_ce/work_items/graphql/work_item_development.fragment.graphql"
query workItemDevelopment($id: WorkItemID!) {
workItem(id: $id) {
id
iid
namespace {
id
}
widgets {
... on WorkItemWidgetDevelopment {
...WorkItemDevelopmentFragment
}
}
}
}
```
```javascript
#import "~/graphql_shared/fragments/user.fragment.graphql"
query workItemParticipants($fullPath: ID!, $iid: String!) {
workspace: namespace(fullPath: $fullPath) {
id
workItem(iid: $iid) {
id
widgets {
... on WorkItemWidgetParticipants {
type
participants {
nodes {
...User
}
}
}
}
}
}
}
```
1. Conditional Queries Using the `@include` and `@skip` Directives
Apollo supports conditional queries using these directives, allowing you to split queries based on a component's state or other conditions:
```javascript
query projectWorkItems(
$searchTerm: String
$fullPath: ID!
$types: [IssueType!]
$in: [IssuableSearchableField!]
$iid: String = null
$searchByIid: Boolean = false
$searchByText: Boolean = true
) {
workspace: project(fullPath: $fullPath) {
id
workItems(search: $searchTerm, types: $types, in: $in) @include(if: $searchByText) {
nodes {
...
}
}
workItemsByIid: workItems(iid: $iid, types: $types) @include(if: $searchByIid) {
nodes {
...
}
}
}
}
```
```javascript
#import "../fragments/user.fragment.graphql"
#import "~/graphql_shared/fragments/user_availability.fragment.graphql"
query workspaceAutocompleteUsersSearch(
$search: String!
$fullPath: ID!
$isProject: Boolean = true
) {
groupWorkspace: group(fullPath: $fullPath) @skip(if: $isProject) {
id
users: autocompleteUsers(search: $search) {
...
}
}
workspace: project(fullPath: $fullPath) {
id
users: autocompleteUsers(search: $search) {
...
}
}
}
```
**CAUTION**: When splitting queries, we have to be careful not to invalidate the existing GraphQL queries. Check the network inspector to ensure the same queries are not called multiple times after the split.
## Immutability and cache updates
From Apollo version 3.0.0, all cache updates need to be immutable. The cached object needs to be replaced entirely
with a **new and updated** object.
To facilitate the process of updating the cache and returning the new object we
use the library [Immer](https://immerjs.github.io/immer/).
Follow these conventions:
- The updated cache is named `data`.
- The original cache data is named `sourceData`.
A typical update process looks like this:
```javascript
...
const sourceData = client.readQuery({ query });
const data = produce(sourceData, draftState => {
draftState.commits.push(newCommit);
});
client.writeQuery({
query,
data,
});
...
```
As shown in the code example, by using `produce` we can perform any kind of direct manipulation on the
`draftState`. Immer then guarantees that a new state, which includes the changes to `draftState`, is generated.
## Usage in Vue
To use Vue Apollo, import the [Vue Apollo](https://github.com/vuejs/vue-apollo) plugin as well
as the default client. This should be created at the same point
the Vue application is mounted.
```javascript
import Vue from 'vue';
import VueApollo from 'vue-apollo';
import createDefaultClient from '~/lib/graphql';
Vue.use(VueApollo);
const apolloProvider = new VueApollo({
defaultClient: createDefaultClient(),
});
new Vue({
...,
apolloProvider,
...
});
```
Read more about [Vue Apollo](https://github.com/vuejs/vue-apollo) in the [Vue Apollo documentation](https://vue-apollo.netlify.app/guide/).
### Local state with Apollo
It is possible to manage an application state with Apollo when creating your default client.
#### Using client-side resolvers
The default state can be set by writing to the cache after setting up the default client. In the
example below, we are using query with `@client` Apollo directive to write the initial data to
Apollo cache and then get this state in the Vue component:
```javascript
// user.query.graphql
query User {
user @client {
name
surname
age
}
}
```
```javascript
// index.js
import Vue from 'vue';
import VueApollo from 'vue-apollo';
import createDefaultClient from '~/lib/graphql';
import userQuery from '~/user/user.query.graphql'
Vue.use(VueApollo);
const defaultClient = createDefaultClient();
defaultClient.cache.writeQuery({
query: userQuery,
data: {
user: {
name: 'John',
surname: 'Doe',
age: 30
},
},
});
const apolloProvider = new VueApollo({
defaultClient,
});
```
```javascript
// App.vue
import userQuery from '~/user/user.query.graphql'
export default {
apollo: {
user: {
query: userQuery
}
}
}
```
Instead of using `writeQuery`, we can create a type policy that will return `user` on every attempt of reading the `userQuery` from the cache:
```javascript
const defaultClient = createDefaultClient({}, {
cacheConfig: {
typePolicies: {
Query: {
fields: {
user: {
read(data) {
return data || {
user: {
name: 'John',
surname: 'Doe',
age: 30
},
}
}
}
}
}
}
}
});
```
Along with creating local data, we can also extend existing GraphQL types with `@client` fields. This is extremely helpful when we need to mock an API response for fields not yet added to our GraphQL API.
##### Mocking API response with local Apollo cache
Using local Apollo Cache is helpful when we have a reason to mock some GraphQL API responses, queries, or mutations locally (such as when they're still not added to our actual API).
For example, we have a [fragment](#fragments) on `DesignVersion` used in our queries:
```javascript
fragment VersionListItem on DesignVersion {
id
sha
}
```
We also must fetch the version author and the `created at` property to display in the versions dropdown list. But, these changes are still not implemented in our API. We can change the existing fragment to get a mocked response for these new fields:
```javascript
fragment VersionListItem on DesignVersion {
id
sha
author @client {
avatarUrl
name
}
createdAt @client
}
```
Now Apollo tries to find a _resolver_ for every field marked with the `@client` directive. Let's create a resolver for the `DesignVersion` type (why `DesignVersion`? Because our fragment was created on this type).
```javascript
// resolvers.js
const resolvers = {
DesignVersion: {
author: () => ({
avatarUrl:
'https://www.gravatar.com/avatar/e64c7d89f26bd1972efa854d13d7dd61?s=80&d=identicon',
name: 'Administrator',
__typename: 'User',
}),
createdAt: () => '2019-11-13T16:08:11Z',
},
};
export default resolvers;
```
We need to pass a resolvers object to our existing Apollo Client:
```javascript
// graphql.js
import createDefaultClient from '~/lib/graphql';
import resolvers from './graphql/resolvers';
const defaultClient = createDefaultClient(resolvers);
```
For each attempt to fetch a version, our client fetches `id` and `sha` from the remote API endpoint. It then assigns our hardcoded values to the `author` and `createdAt` version properties. With this data, frontend developers are able to work on their UI without being blocked by the backend. When the response is added to the API, our custom local resolver can be removed. The only change to the query/fragment is to remove the `@client` directive.
Read more about local state management with Apollo in the [Vue Apollo documentation](https://vue-apollo.netlify.app/guide/local-state.html#local-state).
### Using with Pinia
Combining [Pinia](pinia.md) and Apollo in a single Vue application is generally discouraged.
[Learn about the restrictions and circumstances around combining Apollo and Pinia](state_management.md#combining-pinia-and-apollo).
### Using with Vuex
We do not recommend combining Vuex and Apollo Client. [Vuex is deprecated in GitLab](vuex.md#deprecated).
If you have an existing Vuex store that's used alongside Apollo we strongly recommend [migrating away from Vuex entirely](migrating_from_vuex.md).
[Learn more about state management in GitLab](state_management.md).
### Working on GraphQL-based features when frontend and backend are not in sync
Any feature that requires GraphQL queries/mutations to be created or updated should be carefully
planned. Frontend and backend counterparts should agree on a schema that satisfies both client-side and
server-side requirements. This enables both departments to start implementing their parts without
blocking each other.
Ideally, the backend implementation should be done prior to the frontend so that the client can
immediately start querying the API with minimal back and forth between departments. However, we
recognize that priorities don't always align. For the sake of iteration and
delivering work we're committed to, it might be necessary for the frontend to be implemented ahead
of the backend.
#### Implementing frontend queries and mutations ahead of the backend
In such a case, the frontend defines GraphQL schemas or fields that do not correspond to any
backend resolver yet. This is fine as long as the implementation is properly feature-flagged so it
does not translate to public-facing errors in the product. However, we do validate client-side
queries/mutations against the backend GraphQL schema with the `graphql-verify` CI job.
You must confirm your changes pass the validation if they are to be merged before the
backend actually supports them. Below are a few suggestions to go about this.
##### Using the `@client` directive
The preferred approach is to use the `@client` directive on any new query, mutation, or field that
isn't yet supported by the backend. Any entity with the directive is skipped by the
`graphql-verify` validation job.
Additionally Apollo attempts to resolve them client-side, which can be used in conjunction with
[Mocking API response with local Apollo cache](#mocking-api-response-with-local-apollo-cache). This
provides a convenient way of testing your feature with fake data defined client-side.
When opening a merge request for your changes, it can be a good idea to provide local resolvers as a
patch that reviewers can apply in their GDK to easily smoke-test your work.
Make sure to track the removal of the directive in a follow-up issue, or as part of the backend
implementation plan.
##### Adding an exception to the list of known failures
GraphQL queries/mutations validation can be completely turned off for specific files by adding their
paths to the
[`config/known_invalid_graphql_queries.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/config/known_invalid_graphql_queries.yml)
file, much like you would disable ESLint for some files via an `.eslintignore` file.
Bear in mind that any file listed in here is not validated at all. So if you're only adding
fields to an existing query, use the `@client` directive approach so that the rest of the query
is still validated.
Again, make sure that those overrides are as short-lived as possible by tracking their removal in
the appropriate issue.
#### Feature-flagged queries
In cases where the backend is complete and the frontend is being implemented behind a feature flag,
a couple options are available to leverage the feature flag in the GraphQL queries.
##### The `@include` directive
The `@include` (or its opposite, `@skip`) can be used to control whether an entity should be
included in the query. If the `@include` directive evaluates to `false`, the entity's resolver is
not hit and the entity is excluded from the response. For example:
```graphql
query getAuthorData($authorNameEnabled: Boolean = false) {
username
name @include(if: $authorNameEnabled)
}
```
Then in the Vue (or JavaScript) call to the query we can pass in our feature flag. This feature
flag needs to be already set up correctly. See the [feature flag documentation](../feature_flags/_index.md)
for the correct way to do this.
```javascript
export default {
apollo: {
user: {
query: QUERY_IMPORT,
variables() {
return {
authorNameEnabled: gon?.features?.authorNameEnabled,
};
},
}
},
};
```
Note that, even if the directive evaluates to `false`, the guarded entity is sent to the backend and
matched against the GraphQL schema. So this approach requires that the feature-flagged entity
exists in the schema, even if the feature flag is disabled. When the feature flag is turned off, it
is recommended that the resolver returns `null` at the very least, using the same feature flag as the frontend. See the [API GraphQL guide](../api_graphql_styleguide.md#feature-flags).
##### Different versions of a query
There's another approach that involves duplicating the standard query, and it should be avoided. The copy includes the new entities
while the original remains unchanged. It is up to the production code to trigger the right query
based on the feature flag's status. For example:
```javascript
export default {
apollo: {
user: {
query() {
return this.glFeatures.authorNameEnabled ? NEW_QUERY : ORIGINAL_QUERY;
}
}
},
};
```
##### Avoiding multiple query versions
The multiple version approach is not recommended as it results in bigger merge requests and requires maintaining
two similar queries for as long as the feature flag exists. Multiple versions can be used in cases where the new
GraphQL entities are not yet part of the schema, or if they are feature-flagged at the schema level
(`new_entity: :feature_flag`).
### Manually triggering queries
Queries on a component's `apollo` property are made automatically when the component is created.
Some components instead want the network request made on-demand, for example a dropdown list with lazy-loaded items.
There are two ways to do this:
1. Use the `skip` property
```javascript
export default {
apollo: {
user: {
query: QUERY_IMPORT,
skip() {
// only make the query when dropdown is open
return !this.isOpen;
},
}
},
};
```
1. Using `addSmartQuery`
You can manually create the Smart Query in your method.
```javascript
handleClick() {
this.$apollo.addSmartQuery('user', {
// this takes the same values as you'd have in the `apollo` section
query: QUERY_IMPORT,
});
},
```
### Working with pagination
The GitLab GraphQL API uses [Relay-style cursor pagination](https://www.apollographql.com/docs/react/pagination/overview/#cursor-based)
for connection types. This means a "cursor" is used to keep track of where in the data
set the next items should be fetched from. [GraphQL Ruby Connection Concepts](https://graphql-ruby.org/pagination/connection_concepts.html)
is a good overview and introduction to connections.
Every connection type (for example, `DesignConnection` and `DiscussionConnection`) has a field `pageInfo` that contains the information required for pagination:
```javascript
pageInfo {
endCursor
hasNextPage
hasPreviousPage
startCursor
}
```
Here:
- `startCursor` displays the cursor of the first item and `endCursor` displays the cursor of the last item.
- `hasPreviousPage` and `hasNextPage` allow us to check if there are more pages
available before or after the current page.
When we fetch data with a connection type, we can pass a cursor as the `after` or `before`
parameter, indicating a starting or ending point of our pagination. It should be
accompanied by a `first` or `last` parameter to indicate _how many_ items
we want to fetch after or before the given cursor.
For example, here we're fetching 10 designs after a cursor (let us call this `projectQuery`):
```javascript
#import "~/graphql_shared/fragments/page_info.fragment.graphql"
query {
project(fullPath: "root/my-project") {
id
issue(iid: "42") {
designCollection {
designs(atVersion: null, after: "Ihwffmde0i", first: 10) {
edges {
node {
id
}
}
pageInfo {
...PageInfo
}
}
}
}
}
}
```
Note that we are using the [`page_info.fragment.graphql`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/assets/javascripts/graphql_shared/fragments/page_info.fragment.graphql) to populate the `pageInfo` information.
#### Using `fetchMore` method in components
This approach makes sense with user-handled pagination, for example when scrolling to fetch more data or explicitly clicking a **Next Page** button.
When we need to fetch all the data initially, it is recommended to use [a (non-smart) query instead](#using-a-recursive-query-in-components).
When making an initial fetch, we usually want to start a pagination from the beginning.
In this case, we can either:
- Skip passing a cursor.
- Pass `null` explicitly to `after`.
After data is fetched, we can use the `update`-hook as an opportunity
[to customize the data that is set in the Vue component property](https://apollo.vuejs.org/api/smart-query.html#options).
This allows us to get a hold of the `pageInfo` object among other data.
In the `result`-hook, we can inspect the `pageInfo` object to see if we need to fetch
the next page. Note that we also keep a `requestCount` to ensure that the application
does not keep requesting the next page, indefinitely:
```javascript
data() {
return {
pageInfo: null,
requestCount: 0,
}
},
apollo: {
designs: {
query: projectQuery,
variables() {
return {
// ... The rest of the design variables
first: 10,
};
},
update(data) {
const { id = null, issue = {} } = data.project || {};
const { edges = [], pageInfo } = issue.designCollection?.designs || {};
return {
id,
edges,
pageInfo,
};
},
result() {
const { pageInfo } = this.designs;
// Increment the request count with each new result
this.requestCount += 1;
// Only fetch next page if we have more requests and there is a next page to fetch
if (this.requestCount < MAX_REQUEST_COUNT && pageInfo?.hasNextPage) {
this.fetchNextPage(pageInfo.endCursor);
}
},
},
},
```
When we want to move to the next page, we use an Apollo `fetchMore` method, passing a
new cursor (and, optionally, new variables) there.
```javascript
fetchNextPage(endCursor) {
this.$apollo.queries.designs.fetchMore({
variables: {
// ... The rest of the design variables
first: 10,
after: endCursor,
},
});
}
```
##### Defining field merge policy
We also need to define a field policy to specify how we want to merge the existing results with the incoming results. For example, if we have **Previous/Next** buttons, it makes sense to replace the existing result with the incoming one:
```javascript
const apolloProvider = new VueApollo({
defaultClient: createDefaultClient(
{},
{
cacheConfig: {
typePolicies: {
DesignCollection: {
fields: {
designs: {
merge(existing, incoming) {
if (!incoming) return existing;
if (!existing) return incoming;
// We want to save only incoming nodes and replace existing ones
return incoming
}
}
}
}
}
},
},
),
});
```
When we have infinite scroll, it makes sense to append the incoming `designs` nodes to the existing ones instead of replacing them. In this case, the merge function is slightly different:
```javascript
const apolloProvider = new VueApollo({
defaultClient: createDefaultClient(
{},
{
cacheConfig: {
typePolicies: {
DesignCollection: {
fields: {
designs: {
merge(existing, incoming) {
if (!incoming) return existing;
if (!existing) return incoming;
const { nodes, ...rest } = incoming;
// We only need to merge the nodes array.
// The rest of the fields (pagination) should always be overwritten by incoming
let result = rest;
result.nodes = [...existing.nodes, ...nodes];
return result;
}
}
}
}
}
},
},
),
});
```
`apollo-client` [provides](https://github.com/apollographql/apollo-client/blob/212b1e686359a3489b48d7e5d38a256312f81fde/src/utilities/policies/pagination.ts)
a few field policies to be used with paginated queries. Here's another way to achieve infinite
scroll pagination with the `concatPagination` policy:
```javascript
import { concatPagination } from '@apollo/client/utilities';
import Vue from 'vue';
import VueApollo from 'vue-apollo';
import createDefaultClient from '~/lib/graphql';
Vue.use(VueApollo);
export default new VueApollo({
defaultClient: createDefaultClient(
{},
{
cacheConfig: {
typePolicies: {
Project: {
fields: {
dastSiteProfiles: {
keyArgs: ['fullPath'], // You might need to set the keyArgs option to enforce the cache's integrity
},
},
},
DastSiteProfileConnection: {
fields: {
nodes: concatPagination(),
},
},
},
},
},
),
});
```
This is similar to the `DesignCollection` example above as new page results are appended to the
previous ones.
For some cases, it's hard to define the correct `keyArgs` for the field because all
the fields are updated. In this case, we can set `keyArgs` to `false`. This instructs
Apollo Client to not perform any automatic merge, and fully rely on the logic we
put into the `merge` function.
For example, we have a query like this:
```javascript
query searchGroupsWhereUserCanTransfer {
currentUser {
id
groups(after: 'somecursor') {
nodes {
id
fullName
}
pageInfo {
...PageInfo
}
}
}
}
```
Here, the `groups` field doesn't have a good candidate for `keyArgs`: we don't want to account for the `after` argument because it changes when requesting subsequent pages. Setting `keyArgs` to `false` makes the update work as intended:
```javascript
typePolicies: {
UserCore: {
fields: {
groups: {
keyArgs: false,
},
},
},
GroupConnection: {
fields: {
nodes: concatPagination(),
},
},
}
```
#### Using a recursive query in components
When it is necessary to fetch all paginated data initially, an Apollo query can do the trick for us.
If we need to fetch the next page based on user interactions, it is recommended to use a [`smartQuery`](https://apollo.vuejs.org/api/smart-query.html) along with the [`fetchMore`-hook](#using-fetchmore-method-in-components).
When the query resolves we can update the component data and inspect the `pageInfo` object. This allows us
to see if we need to fetch the next page, calling the method recursively.
Note that we also keep a `requestCount` to ensure that the application does not keep
requesting the next page, indefinitely.
```javascript
data() {
return {
requestCount: 0,
isLoading: false,
designs: {
edges: [],
pageInfo: null,
},
}
},
created() {
this.fetchDesigns();
},
methods: {
handleError(error) {
this.isLoading = false;
// Do something with `error`
},
fetchDesigns(endCursor) {
this.isLoading = true;
return this.$apollo
.query({
query: projectQuery,
variables: {
// ... The rest of the design variables
first: 10,
endCursor,
},
})
.then(({ data }) => {
const { id = null, issue = {} } = data.project || {};
const { edges = [], pageInfo } = issue.designCollection?.designs || {};
// Update data
this.designs = {
id,
edges: [...this.designs.edges, ...edges],
pageInfo,
};
// Increment the request count with each new result
this.requestCount += 1;
// Only fetch next page if we have more requests and there is a next page to fetch
if (this.requestCount < MAX_REQUEST_COUNT && pageInfo?.hasNextPage) {
this.fetchDesigns(pageInfo.endCursor);
} else {
this.isLoading = false;
}
})
.catch(this.handleError);
},
},
```
#### Pagination and optimistic updates
When Apollo caches paginated data client-side, it includes `pageInfo` variables in the cache key.
If you wanted to optimistically update that data, you'd have to provide `pageInfo` variables
when interacting with the cache via [`.readQuery()`](https://www.apollographql.com/docs/react/v2/api/apollo-client/#ApolloClient.readQuery)
or [`.writeQuery()`](https://www.apollographql.com/docs/react/v2/api/apollo-client/#ApolloClient.writeQuery).
This can be tedious and counter-intuitive.
To make it easier to deal with cached paginated queries, Apollo provides the `@connection` directive.
The directive accepts a `key` parameter that is used as a static key when caching the data.
You'd then be able to retrieve the data without providing any pagination-specific variables.
Here's an example of a query using the `@connection` directive:
```graphql
#import "~/graphql_shared/fragments/page_info.fragment.graphql"
query DastSiteProfiles($fullPath: ID!, $after: String, $before: String, $first: Int, $last: Int) {
project(fullPath: $fullPath) {
siteProfiles: dastSiteProfiles(after: $after, before: $before, first: $first, last: $last)
@connection(key: "dastSiteProfiles") {
pageInfo {
...PageInfo
}
edges {
cursor
node {
id
# ...
}
}
}
}
}
```
In this example, Apollo stores the data with the stable `dastSiteProfiles` cache key.
To retrieve that data from the cache, you'd then only need to provide the `$fullPath` variable,
omitting pagination-specific variables like `after` or `before`:
```javascript
const data = store.readQuery({
query: dastSiteProfilesQuery,
variables: {
fullPath: 'namespace/project',
},
});
```
Read more about the `@connection` directive in [Apollo's documentation](https://www.apollographql.com/docs/react/caching/advanced-topics/#the-connection-directive).
### Batching similar queries
By default, the Apollo client sends one HTTP request from the browser per query. You can choose to
batch several queries in a single outgoing request and lower the number of requests by defining a
[batchKey](https://www.apollographql.com/docs/react/api/link/apollo-link-batch-http/#batchkey).
This can be helpful when a query is called multiple times from the same component but you
want to update the UI once. In this example we use the component name as the key:
```javascript
export default {
name: 'MyComponent'
apollo: {
user: {
query: QUERY_IMPORT,
context: {
batchKey: 'MyComponent',
},
}
},
};
```
The batch key can be the name of the component.
#### Polling and Performance
While the Apollo client has support for simple polling, for performance reasons, our [ETag-based caching](../polling.md) is preferred to hitting the database each time.
After the ETag resource is set up to be cached on the backend, there are a few changes to make on the frontend.
First, get your ETag resource from the backend, which should be in the form of a URL path. In the example of the pipelines graph, this is called the `graphql_resource_etag`, which is used to create new headers to add to the Apollo context:
```javascript
/* pipelines/components/graph/utils.js */
/* eslint-disable @gitlab/require-i18n-strings */
const getQueryHeaders = (etagResource) => {
return {
fetchOptions: {
method: 'GET',
},
headers: {
/* This will depend on your feature */
'X-GITLAB-GRAPHQL-FEATURE-CORRELATION': 'verify/ci/pipeline-graph',
'X-GITLAB-GRAPHQL-RESOURCE-ETAG': etagResource,
'X-REQUESTED-WITH': 'XMLHttpRequest',
},
};
};
/* eslint-enable @gitlab/require-i18n-strings */
/* component.vue */
apollo: {
pipeline: {
context() {
return getQueryHeaders(this.graphqlResourceEtag);
},
query: getPipelineDetails,
pollInterval: 10000,
// ...
},
},
```
Here, the Apollo query watches for changes in `graphqlResourceEtag`. If your ETag resource changes dynamically, make sure the resource you are sending in the query headers is also updated. To do this, you can store and update the ETag resource dynamically in the local cache.
You can see an example of this in the pipeline status of the pipeline editor. The pipeline editor watches for changes in the latest pipeline. When the user creates a new commit, we update the pipeline query to poll for changes in the new pipeline.
```graphql
# pipeline_etag.query.graphql
query getPipelineEtag {
pipelineEtag @client
}
```
```javascript
/* pipeline_editor/components/header/pipeline_editor_header.vue */
import getPipelineEtag from '~/ci/pipeline_editor/graphql/queries/client/pipeline_etag.query.graphql';
apollo: {
pipelineEtag: {
query: getPipelineEtag,
},
pipeline: {
context() {
return getQueryHeaders(this.pipelineEtag);
},
query: getPipelineIidQuery,
pollInterval: PIPELINE_POLL_INTERVAL,
},
}
/* pipeline_editor/components/commit/commit_section.vue */
await this.$apollo.mutate({
mutation: commitCIFile,
update(store, { data }) {
const pipelineEtag = data?.commitCreate?.commit?.commitPipelinePath;
if (pipelineEtag) {
store.writeQuery({ query: getPipelineEtag, data: { pipelineEtag } });
}
},
});
```
Finally, we can add a visibility check so that the component pauses polling when the browser tab is not active. This should lessen the request load on the page.
```javascript
/* component.vue */
import { toggleQueryPollingByVisibility } from '~/pipelines/components/graph/utils';
export default {
mounted() {
toggleQueryPollingByVisibility(this.$apollo.queries.pipeline, POLL_INTERVAL);
},
};
```
You can use [this MR](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/59672/) as a reference on how to fully implement ETag caching on the frontend.
Once subscriptions are mature, this process can be replaced by using them and we can remove the separate link library and return to batching queries.
##### How to test ETag caching
You can test that your implementation works by checking requests on the network tab. If there are no changes in your ETag resource, all polled requests should:
- Be `GET` requests instead of `POST` requests.
- Have an HTTP status of `304` instead of `200`.
Make sure that caching is not disabled in your developer tools when testing.
If you are using Chrome and keep seeing `200` HTTP status codes, it might be this bug: [Developer tools show 200 instead of 304](https://bugs.chromium.org/p/chromium/issues/detail?id=1269602). In this case, inspect the response headers' source to confirm that the request was actually cached and did return with a `304` status code.
#### Subscriptions
We use [subscriptions](https://www.apollographql.com/docs/react/data/subscriptions/) to receive real-time updates from the GraphQL API via websockets. Currently, the number of existing subscriptions is limited; you can check the list of available ones in the [GraphiQL explorer](https://gitlab.com/-/graphql-explorer).
Refer to the [Real-time widgets developer guide](../real_time.md) for a comprehensive introduction to subscriptions.
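In a Vue component, Vue Apollo exposes subscriptions through the `$subscribe` option; here is a minimal sketch, where the subscription document and field names are hypothetical:
```javascript
// Hypothetical subscription document; real ones live in `.graphql` files as usual.
import issuableUpdatedSubscription from './graphql/issuable_updated.subscription.graphql';

export default {
  apollo: {
    $subscribe: {
      // The key is a label for this subscription within the component.
      issuableUpdated: {
        query: issuableUpdatedSubscription,
        variables() {
          return { issuableId: this.issuableId };
        },
        result({ data }) {
          // React to the pushed update, for example by patching local component state.
          this.issuable = data.issuableUpdated;
        },
      },
    },
  },
};
```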
### Best Practices
#### When to use (and not use) `update` hook in mutations
Apollo Client's [`.mutate()`](https://www.apollographql.com/docs/react/api/core/ApolloClient/#ApolloClient.mutate)
method exposes an `update` hook that is invoked twice during the mutation lifecycle:
- Once at the beginning. That is, before the mutation has completed.
- Once after the mutation has completed.
You should use this hook only if you're adding or removing an item from the store
(that is, ApolloCache). If you're _updating_ an existing item, it is usually represented by
a global `id`.
In that case, the presence of this `id` in your mutation query definition makes the store update
automatically. Here's an example of a typical mutation query with `id` present in it:
```graphql
mutation issueSetWeight($input: IssueSetWeightInput!) {
issuableSetWeight: issueSetWeight(input: $input) {
issuable: issue {
id
weight
}
errors
}
}
```
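When you do add or remove an item, a typical `update` hook reads the cached query, modifies it immutably with Immer, and writes it back. Here is a minimal sketch with hypothetical query and mutation documents:
```javascript
import { produce } from 'immer';
// Hypothetical documents, shown for illustration only.
import createCommentMutation from './graphql/create_comment.mutation.graphql';
import commentsQuery from './graphql/comments.query.graphql';

export default {
  methods: {
    addComment(body) {
      return this.$apollo.mutate({
        mutation: createCommentMutation,
        variables: { body },
        update(store, { data: { createComment } }) {
          const sourceData = store.readQuery({ query: commentsQuery });
          // Add the newly created item to the cached list immutably.
          const data = produce(sourceData, (draftState) => {
            draftState.comments.nodes.push(createComment.comment);
          });
          store.writeQuery({ query: commentsQuery, data });
        },
      });
    },
  },
};
```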
### Testing
#### Generating the GraphQL schema
Some of our tests load the schema JSON files. To generate these files, run:
```shell
bundle exec rake gitlab:graphql:schema:dump
```
You should run this task after pulling from upstream, or when rebasing your
branch. This is run automatically as part of `gdk update`.
{{< alert type="note" >}}
If you use the RubyMine IDE, and have marked the `tmp` directory as
"Excluded", you should "Mark Directory As -> Not Excluded" for
`gitlab/tmp/tests/graphql`. This will allow the **JS GraphQL** plugin to
automatically find and index the schema.
{{< /alert >}}
#### Mocking Apollo Client
To test the components with Apollo operations, we need to mock an Apollo Client in our unit tests. We use [`mock-apollo-client`](https://www.npmjs.com/package/mock-apollo-client) library to mock Apollo client and [`createMockApollo` helper](https://gitlab.com/gitlab-org/gitlab/-/blob/master/spec/frontend/__helpers__/mock_apollo_helper.js) we created on top of it.
We need to inject `VueApollo` into the Vue instance by calling `Vue.use(VueApollo)`. This will install `VueApollo` globally for all the tests in the file. It is recommended to call `Vue.use(VueApollo)` just after the imports.
```javascript
import VueApollo from 'vue-apollo';
import Vue from 'vue';
Vue.use(VueApollo);
describe('Some component with Apollo mock', () => {
let wrapper;
function createComponent(options = {}) {
wrapper = shallowMount(...);
}
})
```
After this, we need to create a mocked Apollo provider:
```javascript
import createMockApollo from 'helpers/mock_apollo_helper';
describe('Some component with Apollo mock', () => {
let wrapper;
let mockApollo;
function createComponent(options = {}) {
mockApollo = createMockApollo(...)
wrapper = shallowMount(SomeComponent, {
apolloProvider: mockApollo
});
}
afterEach(() => {
// we need to ensure we don't have provider persisted between tests
mockApollo = null
})
})
```
Now, we need to define an array of _handlers_ for every query or mutation. Handlers should be mock functions that return either a correct query response, or an error:
```javascript
import getDesignListQuery from '~/design_management/graphql/queries/get_design_list.query.graphql';
import permissionsQuery from '~/design_management/graphql/queries/design_permissions.query.graphql';
import moveDesignMutation from '~/design_management/graphql/mutations/move_design.mutation.graphql';
describe('Some component with Apollo mock', () => {
let wrapper;
let mockApollo;
function createComponent(options = {
designListHandler: jest.fn().mockResolvedValue(designListQueryResponse)
}) {
mockApollo = createMockApollo([
[getDesignListQuery, options.designListHandler],
[permissionsQuery, jest.fn().mockResolvedValue(permissionsQueryResponse)],
[moveDesignMutation, jest.fn().mockResolvedValue(moveDesignMutationResponse)],
])
wrapper = shallowMount(SomeComponent, {
apolloProvider: mockApollo
});
}
})
```
When mocking resolved values, ensure the structure of the response is the same
as the actual API response. For example, the root property should be `data`:
```javascript
const designListQueryResponse = {
data: {
project: {
id: '1',
issue: {
id: 'issue-1',
designCollection: {
copyState: 'READY',
designs: {
nodes: [
{
id: '3',
event: 'NONE',
filename: 'fox_3.jpg',
notesCount: 1,
image: 'image-3',
imageV432x230: 'image-3',
currentUserTodos: {
nodes: [],
},
},
],
},
versions: {
nodes: [],
},
},
},
},
},
};
```
When testing queries, keep in mind they are promises, so they need to be _resolved_ to render a result. Without resolving, we can check the `loading` state of the query:
```javascript
it('renders a loading state', () => {
const wrapper = createComponent();
expect(wrapper.findComponent(LoadingSpinner).exists()).toBe(true)
});
it('renders designs list', async () => {
const wrapper = createComponent();
await waitForPromises()
expect(findDesigns()).toHaveLength(3);
});
```
If we need to test a query error, we need to mock a rejected value as the request handler:
```javascript
it('renders error if query fails', async () => {
const wrapper = createComponent({
designListHandler: jest.fn().mockRejectedValue('Houston, we have a problem!')
});
await waitForPromises()
expect(wrapper.find('.test-error').exists()).toBe(true)
})
```
Mutations can be tested the same way:
```javascript
const moveDesignHandlerSuccess = jest.fn().mockResolvedValue(moveDesignMutationResponse)
function createComponent(options = {
designListHandler: jest.fn().mockResolvedValue(designListQueryResponse),
moveDesignHandler: moveDesignHandlerSuccess
}) {
mockApollo = createMockApollo([
[getDesignListQuery, options.designListHandler],
[permissionsQuery, jest.fn().mockResolvedValue(permissionsQueryResponse)],
[moveDesignMutation, options.moveDesignHandler],
])
wrapper = shallowMount(SomeComponent, {
apolloProvider: mockApollo
});
}
it('calls a mutation with correct parameters and reorders designs', async () => {
const wrapper = createComponent();
wrapper.find(VueDraggable).vm.$emit('change', {
moved: {
newIndex: 0,
element: designToMove,
},
});
expect(moveDesignHandlerSuccess).toHaveBeenCalled();
await waitForPromises();
expect(
findDesigns()
.at(0)
.props('id'),
).toBe('2');
});
```
To mock multiple query response states (success and failure), Apollo Client's native retry behavior can be combined with Jest's mock functions to create a series of responses. These do not need to be advanced manually, but they do need to be awaited in a specific fashion.
```javascript
describe('when query times out', () => {
const advanceApolloTimers = async () => {
jest.runOnlyPendingTimers();
await waitForPromises()
};
beforeEach(async () => {
const failSucceedFail = jest
.fn()
.mockResolvedValueOnce({ errors: [{ message: 'timeout' }] })
.mockResolvedValueOnce(mockPipelineResponse)
.mockResolvedValueOnce({ errors: [{ message: 'timeout' }] });
createComponentWithApollo(failSucceedFail);
await waitForPromises();
});
it('shows correct errors and does not overwrite populated data when data is empty', async () => {
/* fails at first, shows error, no data yet */
expect(getAlert().exists()).toBe(true);
expect(getGraph().exists()).toBe(false);
/* succeeds, clears error, shows graph */
await advanceApolloTimers();
expect(getAlert().exists()).toBe(false);
expect(getGraph().exists()).toBe(true);
/* fails again, alert returns but data persists */
await advanceApolloTimers();
expect(getAlert().exists()).toBe(true);
expect(getGraph().exists()).toBe(true);
});
});
```
Previously, we used `{ mocks: { $apollo ...}}` on `mount` to test Apollo functionality. This approach is discouraged because mocking `$apollo` directly leaks a lot of implementation details into the tests. Consider replacing it with a mocked Apollo provider:
```javascript
wrapper = mount(SomeComponent, {
mocks: {
// avoid! Mock real graphql queries and mutations instead
$apollo: {
mutate: jest.fn(),
queries: {
groups: {
loading,
},
},
},
},
});
```
#### Testing subscriptions
When testing subscriptions, be aware that the default behavior for subscriptions in `vue-apollo@4` is to re-subscribe and immediately issue a new request on error (unless the value of `skip` prevents it):
```javascript
import waitForPromises from 'helpers/wait_for_promises';
// subscriptionMock is registered as the handler function for the subscription
// in our helper
let subscriptionMock = jest.fn().mockResolvedValue(okResponse);
// ...
it('testing error state', async () => {
// Avoid: will get stuck below!
subscriptionMock = jest.fn().mockRejectedValue({ errors: [] });
// component calls the subscription mock as part of mounting
createComponent();
// will be stuck forever:
// * rejected promise will trigger resubscription
// * re-subscription will call subscriptionMock again, resulting in rejected promise
// * rejected promise will trigger next re-subscription,
await waitForPromises();
// ...
})
```
To avoid such infinite loops when using `vue@3` and `vue-apollo@4`, consider using one-time rejections:
```javascript
it('testing failure', async () => {
// OK: subscription will fail once
subscriptionMock.mockRejectedValueOnce({ errors: [] });
// component calls the subscription mock as part of mounting
createComponent();
await waitForPromises();
// code below now will be executed
})
```
#### Testing `@client` queries
##### Using mock resolvers
If your application contains `@client` queries, you get
the following Apollo Client warning when passing only handlers:
```shell
Unexpected call of console.warn() with:
Warning: mock-apollo-client - The query is entirely client-side (using @client directives) and resolvers have been configured. The request handler will not be called.
```
To fix this, define mock `resolvers` instead of
mock `handlers`. For example, given the following `@client` query:
```graphql
query getBlobContent($path: String, $ref: String!) {
blobContent(path: $path, ref: $ref) @client {
rawData
}
}
```
And its actual client-side resolvers:
```javascript
import Api from '~/api';
export const resolvers = {
Query: {
blobContent(_, { path, ref }) {
return {
__typename: 'BlobContent',
rawData: Api.getRawFile(path, { ref }).then(({ data }) => {
return data;
}),
};
},
},
};
export default resolvers;
```
We can use a **mock resolver** that returns data with the
same shape, while mocking the result with a mock function:
```javascript
let mockApollo;
let mockBlobContentData; // mock function, jest.fn();
const mockResolvers = {
Query: {
blobContent() {
return {
__typename: 'BlobContent',
rawData: mockBlobContentData(), // the mock function can resolve mock data
};
},
},
};
const createComponentWithApollo = ({ props = {} } = {}) => {
mockApollo = createMockApollo([], mockResolvers); // resolvers are the second parameter
wrapper = shallowMount(MyComponent, {
propsData: {},
apolloProvider: mockApollo,
// ...
})
};
```
You can then resolve or reject the value as needed.
```javascript
beforeEach(() => {
mockBlobContentData = jest.fn();
});
it('shows data', async() => {
mockBlobContentData.mockResolvedValue(data); // you may resolve or reject to mock the result
createComponentWithApollo();
await waitForPromises(); // wait on the resolver mock to execute
expect(findContent().text()).toBe(mockCiYml);
});
```
##### Using `cache.writeQuery`
Sometimes we want to test the `result` hook of a local query. To trigger it, we need to populate the cache with the correct data to be fetched with this query:
```javascript
query fetchLocalUser {
fetchLocalUser @client {
name
}
}
```
```javascript
import fetchLocalUserQuery from '~/design_management/graphql/queries/fetch_local_user.query.graphql';
describe('Some component with Apollo mock', () => {
let wrapper;
let mockApollo;
function createComponent(options = {
designListHandler: jest.fn().mockResolvedValue(designListQueryResponse)
}) {
mockApollo = createMockApollo([...])
mockApollo.clients.defaultClient.cache.writeQuery({
query: fetchLocalUserQuery,
data: {
fetchLocalUser: {
__typename: 'User',
name: 'Test',
},
},
});
wrapper = shallowMount(SomeComponent, {
apolloProvider: mockApollo
});
}
})
```
When you need to configure the mocked Apollo client's caching behavior,
provide additional cache options when creating the mocked client instance. The provided options are merged with the default cache options:
```javascript
const defaultCacheOptions = {
fragmentMatcher: { match: () => true },
addTypename: false,
};
```
```javascript
mockApollo = createMockApollo(
requestHandlers,
{},
{
dataIdFromObject: (object) =>
// eslint-disable-next-line no-underscore-dangle
object.__typename === 'Requirement' ? object.iid : defaultDataIdFromObject(object),
},
);
```
## Handling errors
The GitLab GraphQL mutations have two distinct error modes: [Top-level](#top-level-errors) and [errors-as-data](#errors-as-data).
When using a GraphQL mutation, consider handling **both of these error modes** to ensure that the user receives the appropriate feedback when an error occurs.
### Top-level errors
These errors are located at the "top level" of a GraphQL response. These are non-recoverable errors including argument errors and syntax errors, and should not be presented directly to the user.
#### Handling top-level errors
Apollo is aware of top-level errors, so we are able to leverage Apollo's various error-handling mechanisms to handle these errors. For example, handling Promise rejections after invoking the [`mutate`](https://www.apollographql.com/docs/react/api/core/ApolloClient/#ApolloClient.mutate) method, or handling the `error` event emitted from the [`ApolloMutation`](https://apollo.vuejs.org/api/apollo-mutation.html#events) component.
Because these errors are not intended for users, error messages for top-level errors should be defined client-side.
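For example, here is a minimal sketch of catching a top-level error from a mutation Promise and showing a client-side message. The mutation document, component context, and alert message are illustrative assumptions:
```javascript
import { __ } from '~/locale';
import { createAlert } from '~/alert';
import createNoteMutation from './graphql/create_note.mutation.graphql';

export default {
  methods: {
    async saveNote(input) {
      try {
        await this.$apollo.mutate({ mutation: createNoteMutation, variables: { input } });
      } catch (error) {
        // Top-level errors are not meant for users, so show a message defined client-side.
        createAlert({ message: __('Something went wrong while saving your note. Please try again.') });
      }
    },
  },
};
```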
### Errors-as-data
These errors are nested in the `data` object of a GraphQL response. These are recoverable errors that, ideally, can be presented directly to the user.
#### Handling errors-as-data
First, we must add `errors` to our mutation object:
```diff
mutation createNoteMutation($input: String!) {
  createNoteMutation(input: $input) {
    note {
      id
    }
+   errors
  }
}
```
Now, when we commit this mutation and errors occur, the response includes `errors` for us to handle:
```javascript
{
data: {
mutationName: {
errors: ["Sorry, we were not able to update the note."]
}
}
}
```
When handling errors-as-data, use your best judgement to determine whether to present the error message in the response, or another message defined client-side, to the user.
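Continuing the sketch from the top-level errors example above, after the mutation resolves you can inspect the `errors` array returned as data (names remain illustrative):
```javascript
async saveNote(input) {
  const { data } = await this.$apollo.mutate({
    mutation: createNoteMutation,
    variables: { input },
  });

  const [errorMessage] = data.createNoteMutation.errors ?? [];
  if (errorMessage) {
    // Errors-as-data are recoverable; here we assume the message is safe to show to the user.
    createAlert({ message: errorMessage });
  }
},
```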
## Usage outside of Vue
It is also possible to use GraphQL outside of Vue by directly importing
and using the default client with queries.
```javascript
import createDefaultClient from '~/lib/graphql';
import query from './query.graphql';
const defaultClient = createDefaultClient();
defaultClient.query({ query })
.then(result => console.log(result));
```
When [using Vuex](#using-with-vuex), disable the cache when:
- The data is being cached elsewhere.
- The use case does not need caching.
```javascript
import createDefaultClient, { fetchPolicies } from '~/lib/graphql';
const defaultClient = createDefaultClient(
{},
{
fetchPolicy: fetchPolicies.NO_CACHE,
},
);
```
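Mutations can be performed with the default client in the same way; a minimal sketch, where the mutation document and ID are illustrative:
```javascript
import createDefaultClient from '~/lib/graphql';
import markAsDoneMutation from './mark_as_done.mutation.graphql';

const defaultClient = createDefaultClient();

defaultClient
  .mutate({
    mutation: markAsDoneMutation,
    variables: { id: 'gid://gitlab/Todo/1' },
  })
  .then(({ data }) => {
    // Handle the mutation payload, including its `errors` array.
    console.log(data);
  });
```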
## Making initial queries early with GraphQL startup calls
To improve performance, sometimes we want to make initial GraphQL queries early. In order to do this, we can add them to **startup calls** with the following steps:
- Move all the queries you need initially in your application to `app/graphql/queries`;
- Add `__typename` property to every nested query level:
```javascript
query getPermissions($projectPath: ID!) {
project(fullPath: $projectPath) {
__typename
userPermissions {
__typename
pushCode
forkProject
createMergeRequestIn
}
}
}
```
- If queries contain fragments, you need to move fragments to the query file directly instead of importing them:
```javascript
fragment PageInfo on PageInfo {
__typename
hasNextPage
hasPreviousPage
startCursor
endCursor
}
query getFiles(
$projectPath: ID!
$path: String
$ref: String!
) {
project(fullPath: $projectPath) {
__typename
repository {
__typename
tree(path: $path, ref: $ref) {
__typename
pageInfo {
...PageInfo
}
}
}
}
}
```
- If the fragment is used only once, we can also remove the fragment altogether:
```javascript
query getFiles(
$projectPath: ID!
$path: String
$ref: String!
) {
project(fullPath: $projectPath) {
__typename
repository {
__typename
tree(path: $path, ref: $ref) {
__typename
pageInfo {
__typename
hasNextPage
hasPreviousPage
startCursor
endCursor
}
}
}
}
}
```
- Add startup calls with correct variables to the HAML file that serves as a view
for your application. To add GraphQL startup calls, we use
`add_page_startup_graphql_call` helper where the first parameter is a path to the
query, the second one is an object containing query variables. Path to the query is
relative to `app/graphql/queries` folder: for example, if we need a
`app/graphql/queries/repository/files.query.graphql` query, the path is
`repository/files`.
```haml
- current_route_path = request.fullpath.match(/-\/tree\/[^\/]+\/(.+$)/).to_a[1]
- add_page_startup_graphql_call('repository/path_last_commit', { projectPath: @project.full_path, ref: current_ref, path: current_route_path || "" })
- add_page_startup_graphql_call('repository/permissions', { projectPath: @project.full_path })
- add_page_startup_graphql_call('repository/files', { nextPageCursor: "", pageSize: 100, projectPath: @project.full_path, ref: current_ref, path: current_route_path || "/"})
```
## Troubleshooting
### Mocked client returns empty objects instead of mock response
If your unit test is failing because the response contains empty objects instead of mock data, add
the `__typename` field to the mocked responses.
Alternatively, [GraphQL query fixtures](../testing_guide/frontend_testing.md#graphql-query-fixtures)
automatically add the `__typename` for you upon generation.
### Warning about losing cache data
Sometimes you can see a warning in the console: `Cache data may be lost when replacing the someProperty field of a Query object. To address this problem, either ensure all objects of SomeEntity have an id or a custom merge function`. See the section about [multiple queries](#multiple-client-queries-for-the-same-object) to resolve this issue.
---
stage: none
group: unassigned
info: Any user with at least the Maintainer role can merge updates to this content. For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: GraphQL
breadcrumbs:
- doc
- development
- fe_guide
---
## Getting Started
### Helpful Resources
**General resources**:
- [📚 Official Introduction to GraphQL](https://graphql.org/learn/)
- [📚 Official Introduction to Apollo](https://www.apollographql.com/tutorials/fullstack-quickstart/01-introduction)
**GraphQL at GitLab**:
<!-- vale gitlab_base.Spelling = NO -->
- <i class="fa fa-youtube-play youtube" aria-hidden="true"></i> [GitLab Unfiltered GraphQL playlist](https://www.youtube.com/watch?v=wHPKZBDMfxE&list=PL05JrBw4t0KpcjeHjaRMB7IGB2oDWyJzv)
- <i class="fa fa-youtube-play youtube" aria-hidden="true"></i> [GraphQL at GitLab: Deep Dive](../api_graphql_styleguide.md#deep-dive) (video) by Nick Thomas
- An overview of the history of GraphQL at GitLab (not frontend-specific)
- <i class="fa fa-youtube-play youtube" aria-hidden="true"></i> [GitLab Feature Walkthrough with GraphQL and Vue Apollo](https://www.youtube.com/watch?v=6yYp2zB7FrM) (video) by Natalia Tepluhina
- A real-life example of implementing a frontend feature in GitLab using GraphQL
- <i class="fa fa-youtube-play youtube" aria-hidden="true"></i> [History of client-side GraphQL at GitLab](https://www.youtube.com/watch?v=mCKRJxvMnf0) (video) Illya Klymov and Natalia Tepluhina
- <i class="fa fa-youtube-play youtube" aria-hidden="true"></i> [From Vuex to Apollo](https://www.youtube.com/watch?v=9knwu87IfU8) (video) by Natalia Tepluhina
- An overview of when Apollo might be a better choice than Vuex, and how one could go about the transition
- [🛠 Vuex -> Apollo Migration: a proof-of-concept project](https://gitlab.com/ntepluhina/vuex-to-apollo/blob/master/README.md)
- A collection of examples that show the possible approaches for state management with Vue+GraphQL+(Vuex or Apollo) apps
<!-- vale gitlab_base.Spelling = YES -->
### Libraries
We use [Apollo](https://www.apollographql.com/) (specifically [Apollo Client](https://www.apollographql.com/docs/react/)) and [Vue Apollo](https://github.com/vuejs/vue-apollo)
when using GraphQL for frontend development.
If you are using GraphQL in a Vue application, the [Usage in Vue](#usage-in-vue) section
can help you learn how to integrate Vue Apollo.
For other use cases, check out the [Usage outside of Vue](#usage-outside-of-vue) section.
We use [Immer](https://immerjs.github.io/immer/) for immutable cache updates;
see [Immutability and cache updates](#immutability-and-cache-updates) for more information.
### Tooling
<!-- vale gitlab_base.Spelling = NO -->
- [Apollo Client Devtools](https://github.com/apollographql/apollo-client-devtools)
<!-- vale gitlab_base.Spelling = YES -->
#### Apollo GraphQL VS Code extension
If you use VS Code, the [Apollo GraphQL extension](https://marketplace.visualstudio.com/items?itemName=apollographql.vscode-apollo) supports autocompletion in `.graphql` files. To set up
the GraphQL extension, follow these steps:
1. Generate the schema: `bundle exec rake gitlab:graphql:schema:dump`
1. Add an `apollo.config.js` file to the root of your `gitlab` local directory.
1. Populate the file with the following content:
```javascript
module.exports = {
client: {
includes: ['./app/assets/javascripts/**/*.graphql', './ee/app/assets/javascripts/**/*.graphql'],
service: {
name: 'GitLab',
localSchemaFile: './tmp/tests/graphql/gitlab_schema.graphql',
},
},
};
```
1. Restart VS Code.
### Exploring the GraphQL API
Our GraphQL API can be explored via GraphiQL at your instance's
`/-/graphql-explorer` or at [GitLab.com](https://gitlab.com/-/graphql-explorer). Consult the
[GitLab GraphQL API Reference documentation](../../api/graphql/reference/_index.md)
where needed.
To check all existing queries and mutations, on the right side of GraphiQL, select **Documentation explorer**.
To check the execution of the queries and mutations you've written, in the upper-left corner, select **Execute query**.

## Apollo Client
To avoid duplicate clients getting created in different apps, we have a
[default client](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/assets/javascripts/lib/graphql.js) that should be used. It sets up the
Apollo client with the correct URL and also sets the CSRF headers.
The default client accepts two parameters: `resolvers` and `config`.
- `resolvers` parameter is created to accept an object of resolvers for [local state management](#local-state-with-apollo) queries and mutations
- `config` parameter takes an object of configuration settings:
- `cacheConfig` field accepts an optional object of settings to [customize Apollo cache](https://www.apollographql.com/docs/react/caching/cache-configuration/#configuring-the-cache)
- `baseUrl` allows us to pass a URL for GraphQL endpoint different from our main endpoint (for example, `${gon.relative_url_root}/api/graphql`)
- `fetchPolicy` determines how you want your component to interact with the Apollo cache. Defaults to "cache-first".
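Put together, here is a minimal sketch of creating a client with both parameters; the resolvers import and the alternative endpoint are illustrative assumptions:
```javascript
import createDefaultClient, { fetchPolicies } from '~/lib/graphql';
import resolvers from './graphql/resolvers';

const defaultClient = createDefaultClient(resolvers, {
  // Point the client at a non-default GraphQL endpoint.
  baseUrl: `${gon.relative_url_root}/api/graphql`,
  // Customize the Apollo cache.
  cacheConfig: {
    typePolicies: {},
  },
  // Opt out of caching for this client.
  fetchPolicy: fetchPolicies.NO_CACHE,
});
```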
### Multiple client queries for the same object
If you are making multiple queries to the same Apollo client object, you might encounter the following error: `Cache data may be lost when replacing the someProperty field of a Query object. To address this problem, either ensure all objects of SomeEntity have an id or a custom merge function`. We are already checking `id` presence for every GraphQL type that has an `id`, so this shouldn't be the case (unless you see this warning when running unit tests; in that case, ensure your mocked responses contain an `id` whenever it's requested).
When `SomeEntity` type doesn't have an `id` property in the GraphQL schema, to fix this warning we need to define a custom merge function.
We have some client-wide types with `merge: true` defined in the default client as [`typePolicies`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/assets/javascripts/lib/graphql.js) (this means that Apollo will merge existing and incoming responses in the case of subsequent queries). Consider adding `SomeEntity` there or defining a custom merge function for it.
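A minimal sketch of such a type policy, assuming `SomeEntity` is a placeholder for the type without an `id`:
```javascript
import createDefaultClient from '~/lib/graphql';

const defaultClient = createDefaultClient({}, {
  cacheConfig: {
    typePolicies: {
      SomeEntity: {
        // Merge existing and incoming objects for subsequent queries.
        merge: true,
      },
    },
  },
});
```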
## GraphQL Queries
To save query compilation at runtime, webpack can directly import `.graphql`
files. This allows webpack to pre-process the query at compile time instead
of the client doing compilation of queries.
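For example, an imported query document is already parsed and can be passed straight to Apollo; the path below is illustrative:
```javascript
import createDefaultClient from '~/lib/graphql';
import allUsersQuery from '~/users/graphql/queries/all_users.query.graphql';

const defaultClient = createDefaultClient();

// No runtime compilation step: the webpack loader has already parsed the document.
defaultClient.query({ query: allUsersQuery }).then(({ data }) => {
  // ...
});
```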
To distinguish queries from mutations and fragments, the following naming convention is recommended:
- `all_users.query.graphql` for queries;
- `add_user.mutation.graphql` for mutations;
- `basic_user.fragment.graphql` for fragments.
If you are using queries for the [CustomersDot GraphQL endpoint](https://gitlab.com/gitlab-org/gitlab/-/blob/be78ccd832fd40315c5e63bb48ee1596ae146f56/app/controllers/customers_dot/proxy_controller.rb), end the filename with `.customer.query.graphql`, `.customer.mutation.graphql`, or `.customer.fragment.graphql`.
### Fragments
[Fragments](https://graphql.org/learn/queries/#fragments) are a way to make your complex GraphQL queries more readable and reusable. Here is an example of a GraphQL fragment:
```javascript
fragment DesignListItem on Design {
id
image
event
filename
notesCount
}
```
Fragments can be stored in separate files, imported and used in queries, mutations, or other fragments.
```javascript
#import "./design_list.fragment.graphql"
#import "./diff_refs.fragment.graphql"
fragment DesignItem on Design {
...DesignListItem
fullPath
diffRefs {
...DesignDiffRefs
}
}
```
More about fragments:
[GraphQL documentation](https://graphql.org/learn/queries/#fragments)
## Global IDs
The GitLab GraphQL API expresses `id` fields as Global IDs rather than the PostgreSQL
primary key `id`. Global ID is [a convention](https://graphql.org/learn/global-object-identification/)
used for caching and fetching in client-side libraries.
To convert a Global ID to the primary key `id`, you can use `getIdFromGraphQLId`:
```javascript
import { getIdFromGraphQLId } from '~/graphql_shared/utils';
const primaryKeyId = getIdFromGraphQLId(data.id);
```
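Going in the other direction, when an argument expects a Global ID you can build one from the numeric `id`. A sketch assuming the `convertToGraphQLId` helper and the `TYPENAME_PROJECT` constant from `~/graphql_shared`:
```javascript
import { convertToGraphQLId } from '~/graphql_shared/utils';
import { TYPENAME_PROJECT } from '~/graphql_shared/constants';

// Produces a Global ID such as "gid://gitlab/Project/42".
const globalId = convertToGraphQLId(TYPENAME_PROJECT, 42);
```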
**It is required** to query global `id` for every GraphQL type that has an `id` in the schema:
```javascript
query allReleases(...) {
project(...) {
id // Project has an ID in GraphQL schema so should fetch it
releases(...) {
nodes {
// Release has no ID property in GraphQL schema
name
tagName
tagPath
assets {
count
links {
nodes {
id // Link has an ID in GraphQL schema so should fetch it
name
}
}
}
}
pageInfo {
// PageInfo has no ID property in GraphQL schema
startCursor
hasPreviousPage
hasNextPage
endCursor
}
}
}
}
```
## Skip query with async variables
Whenever a query has one or more variables that require another query to have executed before it can run, it is **vital** to add a `skip()` property to the query with all relations.
Failing to do so results in the query executing twice: once with the default value (whatever was defined on the `data` property, or `undefined`) and again once the initial query resolves, which injects a new variable value into the smart query and triggers a refetch by Apollo.
```javascript
data() {
return {
// Define data properties for all apollo queries
project: null,
issues: null
}
},
apollo: {
project: {
query: getProject,
variables() {
return {
projectId: this.projectId
}
}
},
releaseName: {
query: getReleaseName,
// Without this skip, the query would run initially with `projectName: null`
// Then when `getProject` resolves, it will run again.
skip() {
return !this.project?.name
},
variables() {
return {
projectName: this.project?.name
}
}
}
}
```
## Splitting queries in GraphQL
Splitting queries in Apollo is often done to optimize data fetching by breaking down larger, monolithic queries into smaller, more manageable pieces.
### Why split queries in GraphQL
1. **Increased query complexity** We have [limits](../../api/graphql#limits) for GraphQL queries which should be adhered to.
1. **Performance** Smaller, targeted queries often result in faster response times from the server, which directly benefits the frontend by getting data to the client sooner.
1. **Better Component Decoupling and Maintainability** Each component can handle its own data needs, making it easier to reuse components across your app without requiring access to a large, shared query.
### How to split queries
1. Define multiple queries and use them independently in various parts of your component hierarchy. This way, each component fetches only the data it needs.
If you look at the [work item query architecture](../work_items_widgets.md#frontend-architecture), you can see that we have [split the queries](../work_items_widgets.md#widget-responsibility-and-structure) for most of the widgets for the same reasons: query complexity and separation of concerns.
```javascript
#import "ee_else_ce/work_items/graphql/work_item_development.fragment.graphql"
query workItemDevelopment($id: WorkItemID!) {
workItem(id: $id) {
id
iid
namespace {
id
}
widgets {
... on WorkItemWidgetDevelopment {
...WorkItemDevelopmentFragment
}
}
}
}
```
```javascript
#import "~/graphql_shared/fragments/user.fragment.graphql"
query workItemParticipants($fullPath: ID!, $iid: String!) {
workspace: namespace(fullPath: $fullPath) {
id
workItem(iid: $iid) {
id
widgets {
... on WorkItemWidgetParticipants {
type
participants {
nodes {
...User
}
}
}
}
}
}
}
```
1. Conditional queries using the `@include` and `@skip` directives
Apollo supports conditional queries using these directives, allowing you to split queries based on a component's state or other conditions:
```javascript
query projectWorkItems(
$searchTerm: String
$fullPath: ID!
$types: [IssueType!]
$in: [IssuableSearchableField!]
$iid: String = null
$searchByIid: Boolean = false
$searchByText: Boolean = true
) {
workspace: project(fullPath: $fullPath) {
id
workItems(search: $searchTerm, types: $types, in: $in) @include(if: $searchByText) {
nodes {
...
}
}
workItemsByIid: workItems(iid: $iid, types: $types) @include(if: $searchByIid) {
nodes {
...
}
}
}
}
```
```javascript
#import "../fragments/user.fragment.graphql"
#import "~/graphql_shared/fragments/user_availability.fragment.graphql"
query workspaceAutocompleteUsersSearch(
$search: String!
$fullPath: ID!
$isProject: Boolean = true
) {
groupWorkspace: group(fullPath: $fullPath) @skip(if: $isProject) {
id
users: autocompleteUsers(search: $search) {
...
}
}
workspace: project(fullPath: $fullPath) {
id
users: autocompleteUsers(search: $search) {
...
}
}
}
```
**Caution:** Be careful not to invalidate existing GraphQL queries when splitting them. Check the network inspector to make sure the same queries are not called multiple times after a split.
## Immutability and cache updates
From Apollo version 3.0.0, all cache updates need to be immutable. The cached object needs to be replaced entirely
with a **new and updated** object.
To facilitate the process of updating the cache and returning the new object we
use the library [Immer](https://immerjs.github.io/immer/).
Follow these conventions:
- The updated cache is named `data`.
- The original cache data is named `sourceData`.
A typical update process looks like this:
```javascript
...
const sourceData = client.readQuery({ query });
const data = produce(sourceData, draftState => {
draftState.commits.push(newCommit);
});
client.writeQuery({
query,
data,
});
...
```
As shown in the code example, by using `produce` we can perform any kind of direct manipulation of
`draftState`, and Immer guarantees that a new state including the changes to `draftState` is generated.
## Usage in Vue
To use Vue Apollo, import the [Vue Apollo](https://github.com/vuejs/vue-apollo) plugin as well
as the default client. This should be created at the same point
the Vue application is mounted.
```javascript
import Vue from 'vue';
import VueApollo from 'vue-apollo';
import createDefaultClient from '~/lib/graphql';
Vue.use(VueApollo);
const apolloProvider = new VueApollo({
defaultClient: createDefaultClient(),
});
new Vue({
...,
apolloProvider,
...
});
```
Read more about [Vue Apollo](https://github.com/vuejs/vue-apollo) in the [Vue Apollo documentation](https://vue-apollo.netlify.app/guide/).
### Local state with Apollo
It is possible to manage application state with Apollo when creating your default client.
#### Using client-side resolvers
The default state can be set by writing to the cache after setting up the default client. In the
example below, we are using a query with the `@client` Apollo directive to write the initial data to
the Apollo cache and then read this state in the Vue component:
```javascript
// user.query.graphql
query User {
user @client {
name
surname
age
}
}
```
```javascript
// index.js
import Vue from 'vue';
import VueApollo from 'vue-apollo';
import createDefaultClient from '~/lib/graphql';
import userQuery from '~/user/user.query.graphql'
Vue.use(VueApollo);
const defaultClient = createDefaultClient();
defaultClient.cache.writeQuery({
query: userQuery,
data: {
user: {
name: 'John',
surname: 'Doe',
age: 30
},
},
});
const apolloProvider = new VueApollo({
defaultClient,
});
```
```javascript
// App.vue
import userQuery from '~/user/user.query.graphql'
export default {
apollo: {
user: {
query: userQuery
}
}
}
```
Instead of using `writeQuery`, we can create a type policy that returns `user` on every attempt to read `userQuery` from the cache:
```javascript
const defaultClient = createDefaultClient({}, {
cacheConfig: {
typePolicies: {
Query: {
fields: {
user: {
read(data) {
return data || {
name: 'John',
surname: 'Doe',
age: 30
}
}
}
}
}
}
}
});
```
Along with creating local data, we can also extend existing GraphQL types with `@client` fields. This is extremely helpful when we need to mock an API response for fields not yet added to our GraphQL API.
##### Mocking API response with local Apollo cache
Using local Apollo Cache is helpful when we have a reason to mock some GraphQL API responses, queries, or mutations locally (such as when they're still not added to our actual API).
For example, we have a [fragment](#fragments) on `DesignVersion` used in our queries:
```javascript
fragment VersionListItem on DesignVersion {
id
sha
}
```
We must also fetch the version author and the `created at` property to display in the versions dropdown list. But these changes are not yet implemented in our API. We can change the existing fragment to get a mocked response for these new fields:
```javascript
fragment VersionListItem on DesignVersion {
id
sha
author @client {
avatarUrl
name
}
createdAt @client
}
```
Now Apollo tries to find a _resolver_ for every field marked with the `@client` directive. Let's create a resolver for the `DesignVersion` type (why `DesignVersion`? Because our fragment was created on this type).
```javascript
// resolvers.js
const resolvers = {
DesignVersion: {
author: () => ({
avatarUrl:
'https://www.gravatar.com/avatar/e64c7d89f26bd1972efa854d13d7dd61?s=80&d=identicon',
name: 'Administrator',
__typename: 'User',
}),
createdAt: () => '2019-11-13T16:08:11Z',
},
};
export default resolvers;
```
We need to pass a resolvers object to our existing Apollo Client:
```javascript
// graphql.js
import createDefaultClient from '~/lib/graphql';
import resolvers from './graphql/resolvers';
const defaultClient = createDefaultClient(resolvers);
```
For each attempt to fetch a version, our client fetches `id` and `sha` from the remote API endpoint. It then assigns our hardcoded values to the `author` and `createdAt` version properties. With this data, frontend developers are able to work on their UI without being blocked by the backend. When the response is added to the API, our custom local resolver can be removed. The only change to the query/fragment is to remove the `@client` directive.
Read more about local state management with Apollo in the [Vue Apollo documentation](https://vue-apollo.netlify.app/guide/local-state.html#local-state).
### Using with Pinia
Combining [Pinia](pinia.md) and Apollo in a single Vue application is generally discouraged.
[Learn about the restrictions and circumstances around combining Apollo and Pinia](state_management.md#combining-pinia-and-apollo).
### Using with Vuex
We do not recommend combining Vuex and Apollo Client. [Vuex is deprecated in GitLab](vuex.md#deprecated).
If you have an existing Vuex store that's used alongside Apollo we strongly recommend [migrating away from Vuex entirely](migrating_from_vuex.md).
[Learn more about state management in GitLab](state_management.md).
### Working on GraphQL-based features when frontend and backend are not in sync
Any feature that requires GraphQL queries/mutations to be created or updated should be carefully
planned. Frontend and backend counterparts should agree on a schema that satisfies both client-side and
server-side requirements. This enables both departments to start implementing their parts without
blocking each other.
Ideally, the backend implementation should be done prior to the frontend so that the client can
immediately start querying the API with minimal back and forth between departments. However, we
recognize that priorities don't always align. For the sake of iteration and
delivering work we're committed to, it might be necessary for the frontend to be implemented ahead
of the backend.
#### Implementing frontend queries and mutations ahead of the backend
In such a case, the frontend defines GraphQL schemas or fields that do not correspond to any
backend resolver yet. This is fine as long as the implementation is properly feature-flagged so it
does not translate to public-facing errors in the product. However, we do validate client-side
queries/mutations against the backend GraphQL schema with the `graphql-verify` CI job.
You must confirm your changes pass the validation if they are to be merged before the
backend actually supports them. Below are a few suggestions to go about this.
##### Using the `@client` directive
The preferred approach is to use the `@client` directive on any new query, mutation, or field that
isn't yet supported by the backend. Any entity with the directive is skipped by the
`graphql-verify` validation job.
Additionally Apollo attempts to resolve them client-side, which can be used in conjunction with
[Mocking API response with local Apollo cache](#mocking-api-response-with-local-apollo-cache). This
provides a convenient way of testing your feature with fake data defined client-side.
When opening a merge request for your changes, it can be a good idea to provide local resolvers as a
patch that reviewers can apply in their GDK to easily smoke-test your work.
Make sure to track the removal of the directive in a follow-up issue, or as part of the backend
implementation plan.
##### Adding an exception to the list of known failures
GraphQL queries/mutations validation can be completely turned off for specific files by adding their
paths to the
[`config/known_invalid_graphql_queries.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/config/known_invalid_graphql_queries.yml)
file, much like you would disable ESLint for some files via an `.eslintignore` file.
Bear in mind that any file listed in here is not validated at all. So if you're only adding
fields to an existing query, use the `@client` directive approach so that the rest of the query
is still validated.
Again, make sure that those overrides are as short-lived as possible by tracking their removal in
the appropriate issue.
#### Feature-flagged queries
In cases where the backend is complete and the frontend is being implemented behind a feature flag,
a couple options are available to leverage the feature flag in the GraphQL queries.
##### The `@include` directive
The `@include` (or its opposite, `@skip`) can be used to control whether an entity should be
included in the query. If the `@include` directive evaluates to `false`, the entity's resolver is
not hit and the entity is excluded from the response. For example:
```graphql
query getAuthorData($authorNameEnabled: Boolean = false) {
username
name @include(if: $authorNameEnabled)
}
```
Then in the Vue (or JavaScript) call to the query we can pass in our feature flag. This feature
flag needs to be already set up correctly. See the [feature flag documentation](../feature_flags/_index.md)
for the correct way to do this.
```javascript
export default {
apollo: {
user: {
query: QUERY_IMPORT,
variables() {
return {
authorNameEnabled: gon?.features?.authorNameEnabled,
};
},
}
},
};
```
Note that, even if the directive evaluates to `false`, the guarded entity is sent to the backend and
matched against the GraphQL schema. So this approach requires that the feature-flagged entity
exists in the schema, even if the feature flag is disabled. When the feature flag is turned off, it
is recommended that the resolver returns `null` at the very least, using the same feature flag as the frontend. See the [API GraphQL guide](../api_graphql_styleguide.md#feature-flags).
##### Different versions of a query
There's another approach that involves duplicating the standard query, and it should be avoided. The copy includes the new entities
while the original remains unchanged. It is up to the production code to trigger the right query
based on the feature flag's status. For example:
```javascript
export default {
apollo: {
user: {
query() {
return this.glFeatures.authorNameEnabled ? NEW_QUERY : ORIGINAL_QUERY;
}
}
},
};
```
##### Avoiding multiple query versions
The multiple version approach is not recommended as it results in bigger merge requests and requires maintaining
two similar queries for as long as the feature flag exists. Multiple versions can be used in cases where the new
GraphQL entities are not yet part of the schema, or if they are feature-flagged at the schema level
(`new_entity: :feature_flag`).
### Manually triggering queries
Queries on a component's `apollo` property are made automatically when the component is created.
Some components instead want the network request made on-demand, for example a dropdown list with lazy-loaded items.
There are two ways to do this:
1. Use the `skip` property
```javascript
export default {
apollo: {
user: {
query: QUERY_IMPORT,
skip() {
// only make the query when dropdown is open
return !this.isOpen;
},
}
},
};
```
1. Using `addSmartQuery`
You can manually create the Smart Query in your method.
```javascript
handleClick() {
this.$apollo.addSmartQuery('user', {
// this takes the same values as you'd have in the `apollo` section
query: QUERY_IMPORT,
});
},
```
### Working with pagination
The GitLab GraphQL API uses [Relay-style cursor pagination](https://www.apollographql.com/docs/react/pagination/overview/#cursor-based)
for connection types. This means a "cursor" is used to keep track of where in the data
set the next items should be fetched from. [GraphQL Ruby Connection Concepts](https://graphql-ruby.org/pagination/connection_concepts.html)
is a good overview and introduction to connections.
Every connection type (for example, `DesignConnection` and `DiscussionConnection`) has a `pageInfo` field that contains the information required for pagination:
```javascript
pageInfo {
endCursor
hasNextPage
hasPreviousPage
startCursor
}
```
Here:
- `startCursor` holds the cursor of the first item and `endCursor` the cursor of the last item on the page.
- `hasPreviousPage` and `hasNextPage` allow us to check if there are more pages
available before or after the current page.
When we fetch data with a connection type, we can pass a cursor as the `after` or `before`
parameter, indicating the starting or ending point of our pagination. It should be
accompanied by the `first` or `last` parameter to indicate _how many_ items
we want to fetch after or before the given cursor.
For example, here we're fetching 10 designs after a cursor (let us call this `projectQuery`):
```javascript
#import "~/graphql_shared/fragments/page_info.fragment.graphql"
query {
project(fullPath: "root/my-project") {
id
issue(iid: "42") {
designCollection {
designs(atVersion: null, after: "Ihwffmde0i", first: 10) {
edges {
node {
id
}
}
pageInfo {
...PageInfo
}
}
}
}
}
}
```
Note that we are using the [`page_info.fragment.graphql`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/assets/javascripts/graphql_shared/fragments/page_info.fragment.graphql) to populate the `pageInfo` information.
#### Using `fetchMore` method in components
This approach makes sense for user-driven pagination. For example, when scrolling to fetch more data or explicitly clicking a **Next Page** button.
When we need to fetch all the data initially, it is recommended to use [a recursive (non-smart) query](#using-a-recursive-query-in-components) instead.
When making an initial fetch, we usually want to start a pagination from the beginning.
In this case, we can either:
- Skip passing a cursor.
- Pass `null` explicitly to `after`.
After data is fetched, we can use the `update`-hook as an opportunity
[to customize the data that is set in the Vue component property](https://apollo.vuejs.org/api/smart-query.html#options).
This allows us to get a hold of the `pageInfo` object among other data.
In the `result`-hook, we can inspect the `pageInfo` object to see if we need to fetch
the next page. Note that we also keep a `requestCount` to ensure that the application
does not keep requesting the next page, indefinitely:
```javascript
data() {
return {
pageInfo: null,
requestCount: 0,
}
},
apollo: {
designs: {
query: projectQuery,
variables() {
return {
// ... The rest of the design variables
first: 10,
};
},
update(data) {
const { id = null, issue = {} } = data.project || {};
const { edges = [], pageInfo } = issue.designCollection?.designs || {};
return {
id,
edges,
pageInfo,
};
},
result() {
const { pageInfo } = this.designs;
// Increment the request count with each new result
this.requestCount += 1;
// Only fetch next page if we have more requests and there is a next page to fetch
if (this.requestCount < MAX_REQUEST_COUNT && pageInfo?.hasNextPage) {
this.fetchNextPage(pageInfo.endCursor);
}
},
},
},
```
When we want to move to the next page, we use an Apollo `fetchMore` method, passing a
new cursor (and, optionally, new variables) there.
```javascript
fetchNextPage(endCursor) {
this.$apollo.queries.designs.fetchMore({
variables: {
// ... The rest of the design variables
first: 10,
after: endCursor,
},
});
}
```
##### Defining field merge policy
We also need to define a field policy to specify how we want to merge the existing results with the incoming results. For example, if we have **Previous/Next** buttons, it makes sense to replace the existing result with the incoming one:
```javascript
const apolloProvider = new VueApollo({
defaultClient: createDefaultClient(
{},
{
cacheConfig: {
typePolicies: {
DesignCollection: {
fields: {
designs: {
merge(existing, incoming) {
if (!incoming) return existing;
if (!existing) return incoming;
// We want to save only incoming nodes and replace existing ones
return incoming
}
}
}
}
}
},
},
),
});
```
When we have infinite scroll, it makes more sense to append the incoming `designs` nodes to the existing ones instead of replacing them. In this case, the merge function is slightly different:
```javascript
const apolloProvider = new VueApollo({
defaultClient: createDefaultClient(
{},
{
cacheConfig: {
typePolicies: {
DesignCollection: {
fields: {
designs: {
merge(existing, incoming) {
if (!incoming) return existing;
if (!existing) return incoming;
const { nodes, ...rest } = incoming;
// We only need to merge the nodes array.
// The rest of the fields (pagination) should always be overwritten by incoming
let result = rest;
result.nodes = [...existing.nodes, ...nodes];
return result;
}
}
}
}
}
},
},
),
});
```
`apollo-client` [provides](https://github.com/apollographql/apollo-client/blob/212b1e686359a3489b48d7e5d38a256312f81fde/src/utilities/policies/pagination.ts)
a few field policies to be used with paginated queries. Here's another way to achieve infinite
scroll pagination with the `concatPagination` policy:
```javascript
import { concatPagination } from '@apollo/client/utilities';
import Vue from 'vue';
import VueApollo from 'vue-apollo';
import createDefaultClient from '~/lib/graphql';
Vue.use(VueApollo);
export default new VueApollo({
defaultClient: createDefaultClient(
{},
{
cacheConfig: {
typePolicies: {
Project: {
fields: {
dastSiteProfiles: {
keyArgs: ['fullPath'], // You might need to set the keyArgs option to enforce the cache's integrity
},
},
},
DastSiteProfileConnection: {
fields: {
nodes: concatPagination(),
},
},
},
},
},
),
});
```
This is similar to the `DesignCollection` example above as new page results are appended to the
previous ones.
For some cases, it's hard to define the correct `keyArgs` for the field because all
the fields are updated. In this case, we can set `keyArgs` to `false`. This instructs
Apollo Client to not perform any automatic merge, and fully rely on the logic we
put into the `merge` function.
For example, we have a query like this:
```javascript
query searchGroupsWhereUserCanTransfer {
currentUser {
id
groups(after: 'somecursor') {
nodes {
id
fullName
}
pageInfo {
...PageInfo
}
}
}
}
```
Here, the `groups` field doesn't have a good candidate for `keyArgs`: we don't want to account for the `after` argument because it changes when requesting subsequent pages. Setting `keyArgs` to `false` makes the update work as intended:
```javascript
typePolicies: {
UserCore: {
fields: {
groups: {
keyArgs: false,
},
},
},
GroupConnection: {
fields: {
nodes: concatPagination(),
},
},
}
```
#### Using a recursive query in components
When it is necessary to fetch all paginated data initially, a plain Apollo query can do the trick for us.
If we need to fetch the next page based on user interactions, it is recommended to use a [`smartQuery`](https://apollo.vuejs.org/api/smart-query.html) along with the [`fetchMore` hook](#using-fetchmore-method-in-components).
When the query resolves we can update the component data and inspect the `pageInfo` object. This allows us
to see if we need to fetch the next page, calling the method recursively.
Note that we also keep a `requestCount` to ensure that the application does not keep
requesting the next page, indefinitely.
```javascript
data() {
return {
requestCount: 0,
isLoading: false,
designs: {
edges: [],
pageInfo: null,
},
}
},
created() {
this.fetchDesigns();
},
methods: {
handleError(error) {
this.isLoading = false;
// Do something with `error`
},
fetchDesigns(endCursor) {
this.isLoading = true;
return this.$apollo
.query({
query: projectQuery,
variables: {
// ... The rest of the design variables
first: 10,
after: endCursor,
},
})
.then(({ data }) => {
const { id = null, issue = {} } = data.project || {};
const { edges = [], pageInfo } = issue.designCollection?.designs || {};
// Update data
this.designs = {
id,
edges: [...this.designs.edges, ...edges],
pageInfo,
};
// Increment the request count with each new result
this.requestCount += 1;
// Only fetch next page if we have more requests and there is a next page to fetch
if (this.requestCount < MAX_REQUEST_COUNT && pageInfo?.hasNextPage) {
this.fetchDesigns(pageInfo.endCursor);
} else {
this.isLoading = false;
}
})
.catch(this.handleError);
},
},
```
#### Pagination and optimistic updates
When Apollo caches paginated data client-side, it includes the pagination variables in the cache key.
If you wanted to optimistically update that data, you'd have to provide those pagination variables
when interacting with the cache via [`.readQuery()`](https://www.apollographql.com/docs/react/v2/api/apollo-client/#ApolloClient.readQuery)
or [`.writeQuery()`](https://www.apollographql.com/docs/react/v2/api/apollo-client/#ApolloClient.writeQuery).
This can be tedious and counter-intuitive.
To make it easier to deal with cached paginated queries, Apollo provides the `@connection` directive.
The directive accepts a `key` parameter that is used as a static key when caching the data.
You'd then be able to retrieve the data without providing any pagination-specific variables.
Here's an example of a query using the `@connection` directive:
```graphql
#import "~/graphql_shared/fragments/page_info.fragment.graphql"
query DastSiteProfiles($fullPath: ID!, $after: String, $before: String, $first: Int, $last: Int) {
project(fullPath: $fullPath) {
siteProfiles: dastSiteProfiles(after: $after, before: $before, first: $first, last: $last)
@connection(key: "dastSiteProfiles") {
pageInfo {
...PageInfo
}
edges {
cursor
node {
id
# ...
}
}
}
}
}
```
In this example, Apollo stores the data with the stable `dastSiteProfiles` cache key.
To retrieve that data from the cache, you'd then only need to provide the `$fullPath` variable,
omitting pagination-specific variables like `after` or `before`:
```javascript
const data = store.readQuery({
query: dastSiteProfilesQuery,
variables: {
fullPath: 'namespace/project',
},
});
```
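Writing optimistically updated data back to the cache works the same way; a sketch, where `updatedSiteProfiles` is an assumed, already-modified copy of the connection:
```javascript
store.writeQuery({
  query: dastSiteProfilesQuery,
  variables: {
    fullPath: 'namespace/project',
  },
  data: {
    ...data,
    project: {
      ...data.project,
      siteProfiles: updatedSiteProfiles,
    },
  },
});
```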
Read more about the `@connection` directive in [Apollo's documentation](https://www.apollographql.com/docs/react/caching/advanced-topics/#the-connection-directive).
### Batching similar queries
By default, the Apollo client sends one HTTP request from the browser per query. You can choose to
batch several queries in a single outgoing request and lower the number of requests by defining a
[batchKey](https://www.apollographql.com/docs/react/api/link/apollo-link-batch-http/#batchkey).
This can be helpful when a query is called multiple times from the same component but you
want to update the UI once. In this example we use the component name as the key:
```javascript
export default {
name: 'MyComponent'
apollo: {
user: {
query: QUERY_IMPORT,
context: {
batchKey: 'MyComponent',
},
}
},
};
```
The batch key can be the name of the component.
#### Polling and Performance
While the Apollo client has support for simple polling, for performance reasons, our [ETag-based caching](../polling.md) is preferred to hitting the database each time.
After the ETag resource is set up to be cached from the backend, there are a few changes to make on the frontend.
First, get your ETag resource from the backend, which should be in the form of a URL path. In the example of the pipelines graph, this is called the `graphql_resource_etag`, which is used to create new headers to add to the Apollo context:
```javascript
/* pipelines/components/graph/utils.js */
/* eslint-disable @gitlab/require-i18n-strings */
const getQueryHeaders = (etagResource) => {
return {
fetchOptions: {
method: 'GET',
},
headers: {
/* This will depend on your feature */
'X-GITLAB-GRAPHQL-FEATURE-CORRELATION': 'verify/ci/pipeline-graph',
'X-GITLAB-GRAPHQL-RESOURCE-ETAG': etagResource,
'X-REQUESTED-WITH': 'XMLHttpRequest',
},
};
};
/* eslint-enable @gitlab/require-i18n-strings */
/* component.vue */
apollo: {
pipeline: {
context() {
return getQueryHeaders(this.graphqlResourceEtag);
},
query: getPipelineDetails,
pollInterval: 10000,
// ...
},
},
```
Here, the Apollo query watches for changes in `graphqlResourceEtag`. If your ETag resource changes dynamically, make sure the resource you send in the query headers is also updated. To do this, you can store and update the ETag resource dynamically in the local cache.
You can see an example of this in the pipeline status of the pipeline editor. The pipeline editor watches for changes in the latest pipeline. When the user creates a new commit, we update the pipeline query to poll for changes in the new pipeline.
```graphql
# pipeline_etag.query.graphql
query getPipelineEtag {
pipelineEtag @client
}
```
```javascript
/* pipeline_editor/components/header/pipeline_editor_header.vue */
import getPipelineEtag from '~/ci/pipeline_editor/graphql/queries/client/pipeline_etag.query.graphql';
apollo: {
pipelineEtag: {
query: getPipelineEtag,
},
pipeline: {
context() {
return getQueryHeaders(this.pipelineEtag);
},
query: getPipelineIidQuery,
pollInterval: PIPELINE_POLL_INTERVAL,
},
}
/* pipeline_editor/components/commit/commit_section.vue */
await this.$apollo.mutate({
mutation: commitCIFile,
update(store, { data }) {
const pipelineEtag = data?.commitCreate?.commit?.commitPipelinePath;
if (pipelineEtag) {
store.writeQuery({ query: getPipelineEtag, data: { pipelineEtag } });
}
},
});
```
Finally, we can add a visibility check so that the component pauses polling when the browser tab is not active. This should lessen the request load on the page.
```javascript
/* component.vue */
import { toggleQueryPollingByVisibility } from '~/pipelines/components/graph/utils';
export default {
mounted() {
toggleQueryPollingByVisibility(this.$apollo.queries.pipeline, POLL_INTERVAL);
},
};
```
You can use [this MR](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/59672/) as a reference on how to fully implement ETag caching on the frontend.
Once subscriptions are mature, this process can be replaced by using them and we can remove the separate link library and return to batching queries.
##### How to test ETag caching
You can test that your implementation works by checking requests on the network tab. If there are no changes in your ETag resource, all polled requests should:
- Be `GET` requests instead of `POST` requests.
- Have an HTTP status of `304` instead of `200`.
Make sure that caching is not disabled in your developer tools when testing.
If you are using Chrome and keep seeing `200` HTTP status codes, it might be this bug: [Developer tools show 200 instead of 304](https://bugs.chromium.org/p/chromium/issues/detail?id=1269602). In this case, inspect the response headers' source to confirm that the request was actually cached and did return with a `304` status code.
#### Subscriptions
We use [subscriptions](https://www.apollographql.com/docs/react/data/subscriptions/) to receive real-time updates from the GraphQL API over WebSockets. The number of existing subscriptions is currently limited; you can check the list of available ones in the [GraphiQL explorer](https://gitlab.com/-/graphql-explorer).
Refer to the [Real-time widgets developer guide](../real_time.md) for a comprehensive introduction to subscriptions.
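In a Vue component, a subscription can be declared under the `$subscribe` key of the `apollo` options; a minimal sketch, where the subscription document, variables, and payload shape are illustrative assumptions:
```javascript
import issuableAssigneesUpdated from './graphql/issuable_assignees_updated.subscription.graphql';

export default {
  apollo: {
    $subscribe: {
      assigneesUpdated: {
        query: issuableAssigneesUpdated,
        variables() {
          return { issuableId: this.issuableId };
        },
        // Called every time the backend pushes a new payload over the websocket.
        result({ data }) {
          this.assignees = data.issuableAssigneesUpdated?.assignees ?? this.assignees;
        },
      },
    },
  },
};
```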
### Best Practices
#### When to use (and not use) `update` hook in mutations
Apollo Client's [`.mutate()`](https://www.apollographql.com/docs/react/api/core/ApolloClient/#ApolloClient.mutate)
method exposes an `update` hook that is invoked twice during the mutation lifecycle:
- Once at the beginning. That is, before the mutation has completed.
- Once after the mutation has completed.
You should use this hook only if you're adding or removing an item from the store
(that is, ApolloCache). If you're _updating_ an existing item, it is usually represented by
a global `id`.
In that case, the presence of this `id` in your mutation query definition makes the store update
automatically. Here's an example of a typical mutation query with `id` present in it:
```graphql
mutation issueSetWeight($input: IssueSetWeightInput!) {
issuableSetWeight: issueSetWeight(input: $input) {
issuable: issue {
id
weight
}
errors
}
}
```
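For contrast, here is a minimal sketch of a case where the `update` hook is appropriate: removing a deleted item from a cached list. The `getDesignsQuery` and `destroyDesignMutation` documents and the cached `designs` shape are assumptions made for this example:
```javascript
// Sketch only: the query/mutation documents and the cached data shape are assumptions.
import getDesignsQuery from './graphql/get_designs.query.graphql';
import destroyDesignMutation from './graphql/destroy_design.mutation.graphql';

export default {
  methods: {
    destroyDesign(deletedId) {
      return this.$apollo.mutate({
        mutation: destroyDesignMutation,
        variables: { id: deletedId },
        update(cache, { data }) {
          // Bail out if the mutation reported errors-as-data.
          if (data?.destroyDesign?.errors?.length) return;

          // Removing an item is not handled automatically by the global `id`,
          // so rewrite the cached list without the deleted design.
          const cached = cache.readQuery({ query: getDesignsQuery });
          cache.writeQuery({
            query: getDesignsQuery,
            data: {
              ...cached,
              designs: cached.designs.filter(({ id }) => id !== deletedId),
            },
          });
        },
      });
    },
  },
};
```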
### Testing
#### Generating the GraphQL schema
Some of our tests load the schema JSON files. To generate these files, run:
```shell
bundle exec rake gitlab:graphql:schema:dump
```
You should run this task after pulling from upstream, or when rebasing your
branch. This is run automatically as part of `gdk update`.
{{< alert type="note" >}}
If you use the RubyMine IDE, and have marked the `tmp` directory as
"Excluded", you should "Mark Directory As -> Not Excluded" for
`gitlab/tmp/tests/graphql`. This will allow the **JS GraphQL** plugin to
automatically find and index the schema.
{{< /alert >}}
#### Mocking Apollo Client
To test the components with Apollo operations, we need to mock an Apollo Client in our unit tests. We use [`mock-apollo-client`](https://www.npmjs.com/package/mock-apollo-client) library to mock Apollo client and [`createMockApollo` helper](https://gitlab.com/gitlab-org/gitlab/-/blob/master/spec/frontend/__helpers__/mock_apollo_helper.js) we created on top of it.
We need to inject `VueApollo` into the Vue instance by calling `Vue.use(VueApollo)`. This will install `VueApollo` globally for all the tests in the file. It is recommended to call `Vue.use(VueApollo)` just after the imports.
```javascript
import VueApollo from 'vue-apollo';
import Vue from 'vue';
import { shallowMount } from '@vue/test-utils';
Vue.use(VueApollo);
describe('Some component with Apollo mock', () => {
let wrapper;
function createComponent(options = {}) {
wrapper = shallowMount(...);
}
})
```
After this, we need to create a mocked Apollo provider:
```javascript
import createMockApollo from 'helpers/mock_apollo_helper';
describe('Some component with Apollo mock', () => {
let wrapper;
let mockApollo;
function createComponent(options = {}) {
mockApollo = createMockApollo(...)
wrapper = shallowMount(SomeComponent, {
apolloProvider: mockApollo
});
}
afterEach(() => {
// we need to ensure we don't have provider persisted between tests
mockApollo = null
})
})
```
Now, we need to define an array of _handlers_ for every query or mutation. Handlers should be mock functions that return either a correct query response, or an error:
```javascript
import getDesignListQuery from '~/design_management/graphql/queries/get_design_list.query.graphql';
import permissionsQuery from '~/design_management/graphql/queries/design_permissions.query.graphql';
import moveDesignMutation from '~/design_management/graphql/mutations/move_design.mutation.graphql';
describe('Some component with Apollo mock', () => {
let wrapper;
let mockApollo;
function createComponent(options = {
designListHandler: jest.fn().mockResolvedValue(designListQueryResponse)
}) {
mockApollo = createMockApollo([
[getDesignListQuery, options.designListHandler],
[permissionsQuery, jest.fn().mockResolvedValue(permissionsQueryResponse)],
[moveDesignMutation, jest.fn().mockResolvedValue(moveDesignMutationResponse)],
])
wrapper = shallowMount(SomeComponent, {
apolloProvider: mockApollo
});
}
})
```
When mocking resolved values, ensure the structure of the response is the same
as the actual API response. For example, the root property should be `data`:
```javascript
const designListQueryResponse = {
data: {
project: {
id: '1',
issue: {
id: 'issue-1',
designCollection: {
copyState: 'READY',
designs: {
nodes: [
{
id: '3',
event: 'NONE',
filename: 'fox_3.jpg',
notesCount: 1,
image: 'image-3',
imageV432x230: 'image-3',
currentUserTodos: {
nodes: [],
},
},
],
},
versions: {
nodes: [],
},
},
},
},
},
};
```
When testing queries, keep in mind they are promises, so they need to be _resolved_ to render a result. Without resolving, we can check the `loading` state of the query:
```javascript
it('renders a loading state', () => {
createComponent();
expect(wrapper.findComponent(LoadingSpinner).exists()).toBe(true)
});
it('renders designs list', async () => {
createComponent();
await waitForPromises()
expect(findDesigns()).toHaveLength(3);
});
```
If we need to test a query error, we need to mock a rejected value as the request handler:
```javascript
it('renders error if query fails', async () => {
createComponent({
designListHandler: jest.fn().mockRejectedValue('Houston, we have a problem!')
});
await waitForPromises()
expect(wrapper.find('.test-error').exists()).toBe(true)
})
```
Mutations can be tested the same way:
```javascript
const moveDesignHandlerSuccess = jest.fn().mockResolvedValue(moveDesignMutationResponse)
function createComponent(options = {
designListHandler: jest.fn().mockResolvedValue(designListQueryResponse),
moveDesignHandler: moveDesignHandlerSuccess
}) {
mockApollo = createMockApollo([
[getDesignListQuery, options.designListHandler],
[permissionsQuery, jest.fn().mockResolvedValue(permissionsQueryResponse)],
[moveDesignMutation, options.moveDesignHandler],
])
wrapper = shallowMount(SomeComponent, {
apolloProvider: mockApollo
});
}
it('calls a mutation with correct parameters and reorders designs', async () => {
createComponent();
wrapper.find(VueDraggable).vm.$emit('change', {
moved: {
newIndex: 0,
element: designToMove,
},
});
expect(moveDesignHandlerSuccess).toHaveBeenCalled();
await waitForPromises();
expect(
findDesigns()
.at(0)
.props('id'),
).toBe('2');
});
```
To mock multiple query response states, such as success followed by failure, Apollo Client's native retry behavior can be combined with Jest's mock functions to create a series of responses. These do not need to be advanced manually, but they do need to be awaited in a specific fashion.
```javascript
describe('when query times out', () => {
const advanceApolloTimers = async () => {
jest.runOnlyPendingTimers();
await waitForPromises()
};
beforeEach(async () => {
const failSucceedFail = jest
.fn()
.mockResolvedValueOnce({ errors: [{ message: 'timeout' }] })
.mockResolvedValueOnce(mockPipelineResponse)
.mockResolvedValueOnce({ errors: [{ message: 'timeout' }] });
createComponentWithApollo(failSucceedFail);
await waitForPromises();
});
it('shows correct errors and does not overwrite populated data when data is empty', async () => {
/* fails at first, shows error, no data yet */
expect(getAlert().exists()).toBe(true);
expect(getGraph().exists()).toBe(false);
/* succeeds, clears error, shows graph */
await advanceApolloTimers();
expect(getAlert().exists()).toBe(false);
expect(getGraph().exists()).toBe(true);
/* fails again, alert returns but data persists */
await advanceApolloTimers();
expect(getAlert().exists()).toBe(true);
expect(getGraph().exists()).toBe(true);
});
});
```
Previously, we used `{ mocks: { $apollo ...}}` on `mount` to test Apollo functionality. This approach is discouraged because mocking `$apollo` directly leaks a lot of implementation details into the tests. Consider replacing it with a mocked Apollo provider:
```javascript
wrapper = mount(SomeComponent, {
mocks: {
// avoid! Mock real graphql queries and mutations instead
$apollo: {
mutate: jest.fn(),
queries: {
groups: {
loading,
},
},
},
},
});
```
#### Testing subscriptions
When testing subscriptions, be aware that the default behavior for subscriptions in `vue-apollo@4` is to re-subscribe and immediately issue a new request on error (unless the value of `skip` prevents it):
```javascript
import waitForPromises from 'helpers/wait_for_promises';
// subscriptionMock is registered as handler function for subscription
// in our helper
let subscriptionMock = jest.fn().mockResolvedValue(okResponse);
// ...
it('testing error state', async () => {
// Avoid: this will get stuck below!
subscriptionMock = jest.fn().mockRejectedValue({ errors: [] });
// component calls the subscription mock as part of its setup
createComponent();
// will be stuck forever:
// * rejected promise will trigger resubscription
// * re-subscription will call subscriptionMock again, resulting in rejected promise
// * rejected promise will trigger next re-subscription,
await waitForPromises();
// ...
})
```
To avoid such infinite loops when using `vue@3` and `vue-apollo@4`, consider using one-time rejections:
```javascript
it('testing failure', async () => {
// OK: subscription will fail once
subscriptionMock.mockRejectedValueOnce({ errors: [] });
// component calls subscription mock as part of
createComponent();
await waitForPromises();
// code below now will be executed
})
```
#### Testing `@client` queries
##### Using mock resolvers
If your application contains `@client` queries, you get
the following Apollo Client warning when passing only handlers:
```shell
Unexpected call of console.warn() with:
Warning: mock-apollo-client - The query is entirely client-side (using @client directives) and resolvers have been configured. The request handler will not be called.
```
To fix this you should define mock `resolvers` instead of
mock `handlers`. For example, given the following `@client` query:
```graphql
query getBlobContent($path: String, $ref: String!) {
blobContent(path: $path, ref: $ref) @client {
rawData
}
}
```
And its actual client-side resolvers:
```javascript
import Api from '~/api';
export const resolvers = {
Query: {
blobContent(_, { path, ref }) {
return {
__typename: 'BlobContent',
rawData: Api.getRawFile(path, { ref }).then(({ data }) => {
return data;
}),
};
},
},
};
export default resolvers;
```
We can use a **mock resolver** that returns data with the
same shape, while mocking the result with a mock function:
```javascript
let mockApollo;
let mockBlobContentData; // mock function, jest.fn();
const mockResolvers = {
Query: {
blobContent() {
return {
__typename: 'BlobContent',
rawData: mockBlobContentData(), // the mock function can resolve mock data
};
},
},
};
const createComponentWithApollo = ({ props = {} } = {}) => {
mockApollo = createMockApollo([], mockResolvers); // resolvers are the second parameter
wrapper = shallowMount(MyComponent, {
propsData: {},
apolloProvider: mockApollo,
// ...
})
};
```
After that, you can resolve or reject the value as needed.
```javascript
beforeEach(() => {
mockBlobContentData = jest.fn();
});
it('shows data', async() => {
mockBlobContentData.mockResolvedValue(data); // you may resolve or reject to mock the result
createComponentWithApollo();
await waitForPromises(); // wait on the resolver mock to execute
expect(findContent().text()).toBe(mockCiYml);
});
```
##### Using `cache.writeQuery`
Sometimes we want to test a `result` hook of the local query. To have it triggered, we need to populate the cache with the correct data to be fetched with this query:
```graphql
query fetchLocalUser {
fetchLocalUser @client {
name
}
}
```
```javascript
import fetchLocalUserQuery from '~/design_management/graphql/queries/fetch_local_user.query.graphql';
describe('Some component with Apollo mock', () => {
let wrapper;
let mockApollo;
function createComponent(options = {
designListHandler: jest.fn().mockResolvedValue(designListQueryResponse)
}) {
mockApollo = createMockApollo([...])
mockApollo.clients.defaultClient.cache.writeQuery({
query: fetchLocalUserQuery,
data: {
fetchLocalUser: {
__typename: 'User',
name: 'Test',
},
},
});
wrapper = shallowMount(SomeComponent, {
apolloProvider: mockApollo
});
}
})
```
When you need to configure the mocked Apollo client's caching behavior,
provide additional cache options when creating the mocked client instance. The provided options are merged with the default cache options:
```javascript
const defaultCacheOptions = {
fragmentMatcher: { match: () => true },
addTypename: false,
};
```
```javascript
mockApollo = createMockApollo(
requestHandlers,
{},
{
dataIdFromObject: (object) =>
// eslint-disable-next-line no-underscore-dangle
object.__typename === 'Requirement' ? object.iid : defaultDataIdFromObject(object),
},
);
```
## Handling errors
The GitLab GraphQL mutations have two distinct error modes: [Top-level](#top-level-errors) and [errors-as-data](#errors-as-data).
When using a GraphQL mutation, consider handling **both of these error modes** to ensure that the user receives the appropriate feedback when an error occurs.
### Top-level errors
These errors are located at the "top level" of a GraphQL response. These are non-recoverable errors including argument errors and syntax errors, and should not be presented directly to the user.
#### Handling top-level errors
Apollo is aware of top-level errors, so we are able to leverage Apollo's various error-handling mechanisms to handle these errors. For example, handling Promise rejections after invoking the [`mutate`](https://www.apollographql.com/docs/react/api/core/ApolloClient/#ApolloClient.mutate) method, or handling the `error` event emitted from the [`ApolloMutation`](https://apollo.vuejs.org/api/apollo-mutation.html#events) component.
Because these errors are not intended for users, error messages for top-level errors should be defined client-side.
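For example, here is a minimal sketch of catching a top-level error and showing a message defined client-side. It assumes the `createAlert` helper from `~/alert` and a hypothetical `createNoteMutation` document:
```javascript
// Sketch only: the mutation document and the exact alert helper are assumptions.
import { createAlert } from '~/alert';
import { __ } from '~/locale';
import createNoteMutation from './graphql/create_note.mutation.graphql';

export default {
  methods: {
    saveNote(input) {
      return this.$apollo
        .mutate({ mutation: createNoteMutation, variables: { input } })
        .catch(() => {
          // Top-level errors reject the promise and are not meant for users,
          // so show a message defined client-side instead of the raw error.
          createAlert({ message: __('Something went wrong while saving the note. Please try again.') });
        });
    },
  },
};
```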
### Errors-as-data
These errors are nested in the `data` object of a GraphQL response. These are recoverable errors that, ideally, can be presented directly to the user.
#### Handling errors-as-data
First, we must add `errors` to our mutation object:
```diff
mutation createNoteMutation($input: String!) {
  createNoteMutation(input: $input) {
    note {
      id
    }
+   errors
  }
}
```
Now, when we commit this mutation and errors occur, the response includes `errors` for us to handle:
```javascript
{
data: {
mutationName: {
errors: ["Sorry, we were not able to update the note."]
}
}
}
```
When handling errors-as-data, use your best judgement to determine whether to present the error message in the response, or another message defined client-side, to the user.
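Continuing the example above, here is a minimal sketch of checking the `errors` array after the mutation resolves. The mutation document and field names are assumed to match the hypothetical `createNoteMutation` shown earlier:
```javascript
// Sketch only: mutation document and field names follow the example above.
import { createAlert } from '~/alert';
import createNoteMutation from './graphql/create_note.mutation.graphql';

export default {
  methods: {
    async saveNote(input) {
      const { data } = await this.$apollo.mutate({
        mutation: createNoteMutation,
        variables: { input },
      });

      const errors = data?.createNoteMutation?.errors ?? [];

      if (errors.length > 0) {
        // Errors-as-data are recoverable and often written with users in mind,
        // so they can usually be shown directly.
        createAlert({ message: errors[0] });
      }
    },
  },
};
```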
## Usage outside of Vue
It is also possible to use GraphQL outside of Vue by directly importing
and using the default client with queries.
```javascript
import createDefaultClient from '~/lib/graphql';
import query from './query.graphql';
const defaultClient = createDefaultClient();
defaultClient.query({ query })
.then(result => console.log(result));
```
When [using Vuex](#using-with-vuex), disable the cache when:
- The data is being cached elsewhere.
- The given use case does not need caching.
```javascript
import createDefaultClient, { fetchPolicies } from '~/lib/graphql';
const defaultClient = createDefaultClient(
{},
{
fetchPolicy: fetchPolicies.NO_CACHE,
},
);
```
## Making initial queries early with GraphQL startup calls
To improve performance, sometimes we want to make initial GraphQL queries early. In order to do this, we can add them to **startup calls** with the following steps:
- Move all the queries you need initially in your application to `app/graphql/queries`.
- Add a `__typename` property to every nested query level:
```graphql
query getPermissions($projectPath: ID!) {
project(fullPath: $projectPath) {
__typename
userPermissions {
__typename
pushCode
forkProject
createMergeRequestIn
}
}
}
```
- If queries contain fragments, you need to move fragments to the query file directly instead of importing them:
```javascript
fragment PageInfo on PageInfo {
__typename
hasNextPage
hasPreviousPage
startCursor
endCursor
}
query getFiles(
$projectPath: ID!
$path: String
$ref: String!
) {
project(fullPath: $projectPath) {
__typename
repository {
__typename
tree(path: $path, ref: $ref) {
__typename
pageInfo {
...PageInfo
}
}
}
}
}
```
- If the fragment is used only once, we can also remove the fragment altogether:
```graphql
query getFiles(
$projectPath: ID!
$path: String
$ref: String!
) {
project(fullPath: $projectPath) {
__typename
repository {
__typename
tree(path: $path, ref: $ref) {
__typename
pageInfo {
__typename
hasNextPage
hasPreviousPage
startCursor
endCursor
}
}
}
}
}
```
- Add startup calls with the correct variables to the HAML file that serves as a view
for your application. To add GraphQL startup calls, we use the
`add_page_startup_graphql_call` helper, where the first parameter is the path to the
query and the second is an object containing the query variables. The path is
relative to the `app/graphql/queries` folder: for example, if we need the
`app/graphql/queries/repository/files.query.graphql` query, the path is
`repository/files`, as in the following example:
```haml
- current_route_path = request.fullpath.match(/-\/tree\/[^\/]+\/(.+$)/).to_a[1]
- add_page_startup_graphql_call('repository/path_last_commit', { projectPath: @project.full_path, ref: current_ref, path: current_route_path || "" })
- add_page_startup_graphql_call('repository/permissions', { projectPath: @project.full_path })
- add_page_startup_graphql_call('repository/files', { nextPageCursor: "", pageSize: 100, projectPath: @project.full_path, ref: current_ref, path: current_route_path || "/"})
```
## Troubleshooting
### Mocked client returns empty objects instead of mock response
If your unit test is failing because the response contains empty objects instead of mock data, add
the `__typename` field to the mocked responses.
Alternatively, [GraphQL query fixtures](../testing_guide/frontend_testing.md#graphql-query-fixtures)
add the `__typename` for you automatically upon generation.
### Warning about losing cache data
Sometimes you can see a warning in the console: `Cache data may be lost when replacing the someProperty field of a Query object. To address this problem, either ensure all objects of SomeEntity have an id or a custom merge function`. Check the section about [multiple queries](#multiple-client-queries-for-the-same-object) to resolve the issue.
# Tooling
## ESLint
We use ESLint to encapsulate and enforce frontend code standards. Our configuration may be found in the [`gitlab-eslint-config`](https://gitlab.com/gitlab-org/gitlab-eslint-config) project.
### Yarn Script
This section describes yarn scripts that are available to validate and apply automatic fixes to files using ESLint.
To check all staged files (based on `git diff`) with ESLint, run the following script:
```shell
yarn run lint:eslint:staged
```
A list of any problems found is logged to the console.
To apply automatic ESLint fixes to all staged files (based on `git diff`), run the following script:
```shell
yarn run lint:eslint:staged:fix
```
If manual changes are required, a list of changes is sent to the console.
To check a specific file in the repository with ESLint, run the following script (replacing `$PATH_TO_FILE`):
```shell
yarn run lint:eslint $PATH_TO_FILE
```
To check **all** files in the repository with ESLint, run the following script:
```shell
yarn run lint:eslint:all
```
A list of any problems found is logged to the console.
To apply automatic ESLint fixes to **all** files in the repository, run the following script:
```shell
yarn run lint:eslint:all:fix
```
If manual changes are required, a list of changes is sent to the console.
{{< alert type="warning" >}}
Limit use to global rule updates. Otherwise, the changes can lead to huge Merge Requests.
{{< /alert >}}
### Disabling ESLint in new files
Do not disable ESLint when creating new files. Existing files may have rules
disabled for legacy compatibility reasons, but those files are in the process of being refactored.
Do not disable specific ESLint rules. To avoid introducing technical debt, you may disable the following
rules only if you are invoking or instantiating existing code modules:
- [`no-new`](https://eslint.org/docs/latest/rules/no-new)
- [`class-methods-use-this`](https://eslint.org/docs/latest/rules/class-methods-use-this)
Disable these rules on a per-line basis. This makes it easier to refactor in the
future. For example, use `eslint-disable-next-line` or `eslint-disable-line`.
### Disabling ESLint for a single violation
If you do need to disable a rule for a single violation, disable it for the smallest amount of code necessary:
```javascript
// bad
/* eslint-disable no-new */
import Foo from 'foo';
new Foo();
// better
import Foo from 'foo';
// eslint-disable-next-line no-new
new Foo();
```
### Generating todo files
When enabling a new ESLint rule that uncovers many offenses across the codebase, it might be easier
to generate a todo file to temporarily ignore those offenses. This approach has some pros and cons:
**Pros**:
- A single source of truth for all the files that violate a specific rule. This can make it easier
to track the work necessary to pay the incurred technical debt.
- A smaller changeset when initially enabling the rule as you don't need to modify every offending
file.
**Cons**:
- Disabling the rule for entire files means that more offenses of the same type can be introduced in
those files.
- When fixing offenses over multiple concurrent merge requests, conflicts can often arise in the todo files,
requiring MR authors to rebase their branches.
To generate a todo file, run the `scripts/frontend/generate_eslint_todo_list.mjs` script:
```shell
node scripts/frontend/generate_eslint_todo_list.mjs <rule_name>
```
For example, generating a todo file for the `vue/no-unused-properties` rule:
```shell
node scripts/frontend/generate_eslint_todo_list.mjs vue/no-unused-properties
```
This creates an ESLint configuration in `.eslint_todo/vue-no-unused-properties.mjs` which gets
automatically added to the global configuration.
Once a todo file has been created for a given rule, make sure to plan for the work necessary to
address those violations. Todo files should be as short lived as possible. If some offenses cannot
be addressed, switch to inline ignores by [disabling ESLint for a single violation](#disabling-eslint-for-a-single-violation).
When all offending files have been fixed, the todo file should be removed along with the `export`
statement in `.eslint_todo/index.mjs`.
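Conceptually, a todo file is just a rule override scoped to the currently offending files. The following is only a hypothetical sketch of that idea; the generated file's exact shape may differ, so rely on the generator script rather than writing these by hand:
```javascript
// .eslint_todo/vue-no-unused-properties.mjs (hypothetical sketch, not the generated format)
export default {
  files: [
    // Offending files recorded when the rule was enabled; remove entries as they are fixed.
    'app/assets/javascripts/some_feature/component_a.vue',
    'ee/app/assets/javascripts/other_feature/component_b.vue',
  ],
  rules: {
    'vue/no-unused-properties': 'off',
  },
};
```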
### The `no-undef` rule and declaring globals
**Never** disable the `no-undef` rule. Declare globals with `/* global Foo */` instead.
When declaring multiple globals, always use one `/* global [name] */` line per variable.
```javascript
// bad
/* globals Flash, Cookies, jQuery */
// good
/* global Flash */
/* global Cookies */
/* global jQuery */
```
### Deprecating functions with `import/no-deprecated`
Our `@gitlab/eslint-plugin` Node module contains the [`eslint-plugin-import`](https://gitlab.com/gitlab-org/frontend/eslint-plugin) package.
We can use the [`import/no-deprecated`](https://github.com/benmosher/eslint-plugin-import/blob/HEAD/docs/rules/no-deprecated.md) rule to deprecate functions using a JSDoc block with a `@deprecated` tag:
```javascript
/**
* Convert search query into an object
*
* @param {String} query from "document.location.search"
* @param {Object} options
* @param {Boolean} options.gatherArrays - gather array values into an Array
* @returns {Object}
*
* For example: "?one=1&two=2" into {one: 1, two: 2}
* @deprecated Please use `queryToObject` instead. See https://gitlab.com/gitlab-org/gitlab/-/issues/283982 for more information
*/
export function queryToObject(query, options = {}) {
...
}
```
It is strongly encouraged that you:
- Put in an **alternative path for developers** looking to use this function.
- **Provide a link to the issue** that tracks the migration process.
{{< alert type="note" >}}
Uses are detected if you import the deprecated function into another file. They are not detected when the function is used in the same file.
{{< /alert >}}
Running `$ yarn eslint` after this will give us the list of deprecated usages:
```shell
$ yarn eslint
./app/assets/javascripts/issuable_form.js
9:10 error Deprecated: Please use `queryToObject` instead. See https://gitlab.com/gitlab-org/gitlab/-/issues/283982 for more information import/no-deprecated
33:23 error Deprecated: Please use `queryToObject` instead. See https://gitlab.com/gitlab-org/gitlab/-/issues/283982 for more information import/no-deprecated
...
```
Grep for disabled cases of this rule to generate a working list to create issues from, so you can track the effort of removing deprecated uses:
```shell
$ grep "eslint-disable.*import/no-deprecated" -r .
./app/assets/javascripts/issuable_form.js:import { queryToObject, objectToQuery } from './lib/utils/url_utility'; // eslint-disable-line import/no-deprecate
./app/assets/javascripts/issuable_form.js: // eslint-disable-next-line import/no-deprecated
```
### `vue/multi-word-component-names` is disabled in my file
Single name components are discouraged by the
[Vue style guide](https://vuejs.org/style-guide/rules-essential.html#use-multi-word-component-names).
They are problematic because they can conflict with existing or future HTML elements: if we named a
component `<table>`, it would stop rendering the HTML `<table>` element.
To solve this, you should rename the `.vue` file and its references to use at least two words,
for example:
- `user/table.vue` could be renamed to `user/users_table.vue` and be imported as `UsersTable` and used with `<users-table />`.
### GraphQL schema and operations validation
We use [`@graphql-eslint/eslint-plugin`](https://www.npmjs.com/package/@graphql-eslint/eslint-plugin)
to lint GraphQL schema and operations. This plugin requires the entire schema to function properly.
It is thus recommended to generate an up-to-date dump of the schema when running ESLint locally.
You can do this by running the `./scripts/dump_graphql_schema` script.
## Formatting with Prettier
Our code is automatically formatted with [Prettier](https://prettier.io) to follow our style guides. Prettier takes care of formatting `.js`, `.vue`, `.graphql`, and `.scss` files based on the standard Prettier rules. You can find all settings for Prettier in `.prettierrc`.
### Editor
The recommended method to include Prettier in your workflow is to set up your
preferred editor (all major editors are supported) accordingly. We suggest
setting up Prettier to run when each file is saved. For instructions about using
Prettier in your preferred editor, see the [Prettier documentation](https://prettier.io/docs/en/editors.html).
Take care that you only let Prettier format the same file types as the global Yarn script does (`.js`, `.vue`, `.graphql`, and `.scss`). For example, you can exclude file formats in your Visual Studio Code settings file:
```json
"prettier.disableLanguages": [
"json",
"markdown"
]
```
### Yarn Script
The following yarn scripts are available to do global formatting:
```shell
yarn run lint:prettier:staged:fix
```
Updates all staged files (based on `git diff`) with Prettier and saves the needed changes.
```shell
yarn run lint:prettier:staged
```
Checks all staged files (based on `git diff`) with Prettier and logs which files need manual updating to the console.
```shell
yarn run lint:prettier
```
Checks all files with Prettier and logs which files need manual updating to the console.
```shell
yarn run lint:prettier:fix
```
Formats all files in the repository with Prettier.
### VS Code Settings
#### Select Prettier as default formatter
To select Prettier as a formatter, add the following properties to your User or Workspace Settings:
```javascript
{
"[html]": {
"editor.defaultFormatter": "esbenp.prettier-vscode"
},
"[javascript]": {
"editor.defaultFormatter": "esbenp.prettier-vscode"
},
"[vue]": {
"editor.defaultFormatter": "esbenp.prettier-vscode"
},
"[graphql]": {
"editor.defaultFormatter": "esbenp.prettier-vscode"
}
}
```
#### Format on Save
To automatically format your files with Prettier, add the following properties to your User or Workspace Settings:
```javascript
{
"[html]": {
"editor.formatOnSave": true
},
"[javascript]": {
"editor.formatOnSave": true
},
"[vue]": {
"editor.formatOnSave": true
},
"[graphql]": {
"editor.formatOnSave": true
},
}
```
# Merge request widgets
Merge request widgets enable you to add new features that match the design framework.
With these widgets we get a lot of benefits out of the box without much effort required, like:
- A consistent look and feel.
- Tracking when the widget is opened.
- Virtual scrolling for performance.
## Usage
The widgets are regular Vue components that make use of the
`~/vue_merge_request_widget/components/widget/widget.vue` component.
Depending on the complexity of the use case, it is possible to pass down
configuration objects, or extend the component through `slot`s.
For an example that uses slots, refer to the following file:
`ee/app/assets/javascripts/vue_merge_request_widget/widgets/security_reports/mr_widget_security_reports.vue`
For an example that uses data objects, refer to the following file:
`ee/app/assets/javascripts/vue_merge_request_widget/widgets/metrics/index.vue`
Here is a minimal example that renders a Hello World widget:
```vue
<script>
import MrWidget from '~/vue_merge_request_widget/components/widget/widget.vue';
import { __ } from '~/locale';
export default {
name: 'WidgetHelloWorld',
components: {
MrWidget,
},
computed: {
summary() {
return { title: __('Hello World') };
},
},
};
</script>
<template>
<mr-widget :summary="summary" :is-collapsible="false" :widget-name="$options.name" />
</template>
```
### Registering widgets
The example above won't be rendered anywhere on the page. To mount it in the merge request
widget section, we have to register the widget in one or both of these locations:
- `app/assets/javascripts/vue_merge_request_widget/components/widget/app.vue` (for CE widgets)
- `ee/app/assets/javascripts/vue_merge_request_widget/components/widget/app.vue` (for CE and EE widgets)
Defining the component in the components list and adding the name to the `widgets` computed property
will mount the widget:
```vue
<script>
export default {
components: {
MrHelloWorldWidget: () =>
import('ee/vue_merge_request_widget/widgets/hello_world/index.vue'),
},
computed: {
mrHelloWorldWidget() {
return this.mr.shouldRenderHelloWorldWidget ? 'MrHelloWorldWidget' : undefined;
},
widgets() {
return [
this.mrHelloWorldWidget,
].filter((w) => w);
},
},
};
</script>
```
## Data fetching
To fetch data when the widget is mounted, pass the `:fetch-collapsed-data` property a function
that performs an API call.
{{< alert type="warning" >}}
The function must return a `Promise` that resolves to the `response` object.
The implementation relies on the `POLL-INTERVAL` header to keep polling, so it is
important not to alter the status code and headers.
{{< /alert >}}
```vue
<script>
export default {
// ...
data() {
return {
collapsedData: [],
};
},
methods: {
fetchCollapsedData() {
return axios.get('/my/path').then((response) => {
this.collapsedData = response.data;
return response;
});
},
},
};
</script>
<template>
<mr-widget :fetch-collapsed-data="fetchCollapsedData" />
</template>
```
`:fetch-expanded-data` works the same way, but it will be called only when the user expands the widget.
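For example, here is a minimal sketch that loads detailed rows only when the widget is expanded. The endpoint and the `expandedData` field are illustrative assumptions:
```javascript
// Sketch only: the endpoint and data shape are assumptions for this example.
import axios from '~/lib/utils/axios_utils';

export default {
  // ...
  data() {
    return {
      expandedData: [],
    };
  },
  methods: {
    fetchExpandedData() {
      // Called only once the user expands the widget.
      return axios.get('/my/path/details').then((response) => {
        this.expandedData = response.data;
        return response;
      });
    },
  },
};
```
The method is wired up the same way as the collapsed one, for example with `<mr-widget :fetch-expanded-data="fetchExpandedData" />`.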
### Data structure
The `content` and `summary` properties can be used to render the `Widget`. Below is the documentation for both
properties:
```javascript
// content
{
text: '', // Required: Main text for the row
subtext: '', // Optional: Smaller sub-text to be displayed below the main text
supportingText: '', // Optional: Paragraph to be displayed below the subtext
icon: { // Optional: Icon object
name: EXTENSION_ICONS.success, // Required: The icon name for the row
},
badge: { // Optional: Badge displayed after text
text: '', // Required: Text to be displayed inside badge
variant: '', // Optional: GitLab UI badge variant, defaults to info
},
link: { // Optional: Link to a URL displayed after text
text: '', // Required: Text of the link
href: '', // Optional: URL for the link
},
actions: [], // Optional: Action button for row
children: [], // Optional: Child content to render, structure matches the same structure
helpPopover: { // Optional: If provided, an information icon is displayed at the right-most corner of the content row
options: {
title: '' // Required: The title of the popover
},
content: {
text: '', // Optional: Text content of the popover
learnMorePath: '', // Optional: The path to the documentation. A learn more link will be displayed if provided.
}
}
}
// summary
{
title: '', // Required: The main text of the summary part
subtitle: '', // Optional: The subtext of the summary part
}
```
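To tie this together, here is a minimal sketch of computed properties that map fetched data onto the structures above. The `collapsedData` findings and their fields are illustrative assumptions:
```javascript
// Sketch only: `collapsedData` and its fields are assumptions for this example.
import { EXTENSION_ICONS } from '~/vue_merge_request_widget/constants.js';
import { __ } from '~/locale';

export default {
  // ...
  computed: {
    summary() {
      return {
        title: __('Scan results'),
        subtitle: __('Findings detected in this merge request'),
      };
    },
    content() {
      // Map each fetched finding onto the documented row structure.
      return this.collapsedData.map((finding) => ({
        text: finding.title,
        subtext: finding.description,
        icon: { name: EXTENSION_ICONS.warning },
        link: { text: __('View details'), href: finding.path },
      }));
    },
  },
};
```
These computed properties can then be passed to the widget, for example with `:summary="summary"` and `:content="content"`.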
### Errors
If the `:fetch-collapsed-data` or `:fetch-expanded-data` methods throw an error, the widget displays an error message.
To customize the error text, you can use the `:error-text` property:
```vue
<template>
<mr-widget :error-text="__('Failed to load.')" />
</template>
```
## Telemetry
The base implementation of the widget framework includes some telemetry events.
Each widget reports:
- `view`: When it is rendered to the screen.
- `expand`: When it is expanded.
- `full_report_clicked`: When an (optional) input is clicked to view the full report.
- Outcome (`expand_success`, `expand_warning`, or `expand_failed`): One of three
additional events relating to the status of the widget when it was expanded.
### Add new widgets
When adding new widgets, the above events must be marked as `known`, and have metrics
created, to be reportable.
{{< alert type="note" >}}
Events that are only for EE should include `--ee` at the end of both shell commands below.
{{< /alert >}}
To generate these known events for a single widget:
1. Widgets should be named `Widget${CamelName}`.
- For example: a widget for **Test Reports** should be `WidgetTestReports`.
1. Compute the widget name slug by converting `${CamelName}` to lower snake case.
- The previous example would be `test_reports`.
1. Add the new widget name slug to `lib/gitlab/usage_data_counters/merge_request_widget_counter.rb`
in the `WIDGETS` list.
1. Ensure the GDK is running (`gdk start`).
1. Generate known events on the command line with the following command.
Replace `test_reports` with your appropriate name slug:
```shell
bundle exec rails generate gitlab:usage_metric_definition \
counts.i_code_review_merge_request_widget_test_reports_count_view \
counts.i_code_review_merge_request_widget_test_reports_count_full_report_clicked \
counts.i_code_review_merge_request_widget_test_reports_count_expand \
counts.i_code_review_merge_request_widget_test_reports_count_expand_success \
counts.i_code_review_merge_request_widget_test_reports_count_expand_warning \
counts.i_code_review_merge_request_widget_test_reports_count_expand_failed \
--dir=all
```
1. Modify each newly generated file to match the existing files for the merge request widget extension telemetry.
- Find existing examples by doing a glob search, like: `metrics/**/*_i_code_review_merge_request_widget_*`
- Roughly speaking, each file should have these values:
1. `description` = A plain English description of this value. Review existing widget extension telemetry files for examples.
1. `product_section` = `dev`
1. `product_stage` = `create`
1. `product_group` = `code_review`
1. `introduced_by_url` = `'[your MR]'`
1. `options.events` = (the event in the command from above that generated this file, like `i_code_review_merge_request_widget_test_reports_count_view`)
- This value is how the telemetry events are linked to "metrics" so this is probably one of the more important values.
1. `data_source` = `redis`
1. `data_category` = `optional`
1. Generate known HLL events on the command line with the following command.
Replace `test_reports` with your appropriate name slug.
```shell
bundle exec rails generate gitlab:usage_metric_definition:redis_hll code_review \
i_code_review_merge_request_widget_test_reports_view \
i_code_review_merge_request_widget_test_reports_full_report_clicked \
i_code_review_merge_request_widget_test_reports_expand \
i_code_review_merge_request_widget_test_reports_expand_success \
i_code_review_merge_request_widget_test_reports_expand_warning \
i_code_review_merge_request_widget_test_reports_expand_failed \
--class_name=RedisHLLMetric
```
1. Repeat step 6, but change the `data_source` to `redis_hll`.
1. Add each event (those listed in the command in step 7, replacing `test_reports`
with the appropriate name slug) to the aggregate files:
1. `config/metrics/counts_7d/{timestamp}_code_review_category_monthly_active_users.yml`
1. `config/metrics/counts_7d/{timestamp}_code_review_group_monthly_active_users.yml`
1. `config/metrics/counts_28d/{timestamp}_code_review_category_monthly_active_users.yml`
1. `config/metrics/counts_28d/{timestamp}_code_review_group_monthly_active_users.yml`
### Add new events
If you are adding a new event to our known events, include the new event in the
`KNOWN_EVENTS` list in `lib/gitlab/usage_data_counters/merge_request_widget_extension_counter.rb`.
## Icons
Level 1 and all subsequent levels can have their own status icons. To keep with
the design framework, import the `EXTENSION_ICONS` constant
from the `constants.js` file:
```javascript
import { EXTENSION_ICONS } from '~/vue_merge_request_widget/constants.js';
```
This constant has the below icons available for use. Per the design framework,
only some of these icons should be used on level 1:
- `failed`
- `warning`
- `success`
- `neutral`
- `error`
- `notice`
- `severityCritical`
- `severityHigh`
- `severityMedium`
- `severityLow`
- `severityInfo`
- `severityUnknown`
## Action buttons
You can add action buttons to all level 1 and level 2 rows in each widget. These buttons
are meant as a way to provide links or actions for each row:
- Action buttons for level 1 can be set through the `tertiaryButtons` computed property.
This property should return an array of objects for each action button.
- Action buttons for level 2 can be set by adding the `actions` key to the level 2 rows object.
The value for this key must also be an array of objects for each action button.
Links must follow this structure:
```javascript
{
text: 'Click me',
href: this.someLinkHref,
target: '_blank', // Optional
}
```
For internal action buttons, follow this structure:
```javascript
{
text: 'Click me',
onClick() {}
}
```
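Combining both structures, here is a minimal sketch of a level 1 `tertiaryButtons` computed property. The `fullReportPath` property and `dismissWarning` method are illustrative assumptions:
```javascript
// Sketch only: `fullReportPath` and `dismissWarning` are assumed to exist on the component.
import { __ } from '~/locale';

export default {
  // ...
  computed: {
    tertiaryButtons() {
      return [
        {
          // Link-style action button.
          text: __('Full report'),
          href: this.fullReportPath,
          target: '_blank',
        },
        {
          // Internal action button.
          text: __('Dismiss'),
          onClick: () => this.dismissWarning(),
        },
      ];
    },
  },
};
```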
## Demo
Visit [GitLab MR Widgets Demo](https://gitlab.com/gitlab-org/frontend/playground/gitlab-mr-widgets-demo/-/merge_requests/26) to
see an example of all widgets displayed together.
|
---
stage: Create
group: Code Review
info: Any user with at least the Maintainer role can merge updates to this content.
For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
description: Developer documentation for extending the merge request report widget
with additional features.
title: Merge request widgets
breadcrumbs:
- doc
- development
- fe_guide
---
Merge request widgets enable you to add new features that match the design framework.
With these widgets we get a lot of benefits out of the box without much effort required, like:
- A consistent look and feel.
- Tracking when the widget is opened.
- Virtual scrolling for performance.
## Usage
The widgets are regular Vue components that make use of the
`~/vue_merge_request_widget/components/widget/widget.vue` component.
Depending on the complexity of the use case, it is possible to pass down
configuration objects, or extend the component through `slot`s.
For an example that uses slots, refer to the following file:
`ee/app/assets/javascripts/vue_merge_request_widget/widgets/security_reports/mr_widget_security_reports.vue`
For an example that uses data objects, refer to the following file:
`ee/app/assets/javascripts/vue_merge_request_widget/widgets/metrics/index.vue`
Here is a minimal example that renders a Hello World widget:
```vue
<script>
import MrWidget from '~/vue_merge_request_widget/components/widget/widget.vue';
import { __ } from '~/locale';
export default {
name: 'WidgetHelloWorld',
components: {
MrWidget,
},
computed: {
summary() {
return { title: __('Hello World') };
},
},
};
</script>
<template>
<mr-widget :summary="summary" :is-collapsible="false" :widget-name="$options.name" />
</template>
```
### Registering widgets
The example above won't be rendered anywhere in the page. In order to mount it in the Merge Request
Widget section, we have to register the widget in one or both of these two locations:
- `app/assets/javascripts/vue_merge_request_widget/components/widget/app.vue` (for CE widgets)
- `ee/app/assets/javascripts/vue_merge_request_widget/components/widget/app.vue` (for CE and EE widgets)
Defining the component in the components list and adding the name to the `widgets` computed property
will mount the widget:
```vue
<script>
export default {
components: {
MrHelloWorldWidget: () =>
import('ee/vue_merge_request_widget/widgets/hello_world/index.vue'),
},
computed: {
mrHelloWorldWidget() {
return this.mr.shouldRenderHelloWorldWidget ? 'MrHelloWorldWidget' : undefined;
},
widgets() {
return [
this.mrHelloWorldWidget,
].filter((w) => w);
},
},
};
</script>
```
## Data fetching
To fetch data when the widget is mounted, pass the `:fetch-collapsed-data` property a function
that performs an API call.
{{< alert type="warning" >}}
The function must return a `Promise` that resolves to the `response` object.
The implementation relies on the `POLL-INTERVAL` header to keep polling, therefore it is
important not to alter the status code and headers.
{{< /alert >}}
```vue
<script>
export default {
// ...
data() {
return {
collapsedData: [],
};
},
methods: {
fetchCollapsedData() {
return axios.get('/my/path').then((response) => {
this.collapsedData = response.data;
return response;
});
},
},
};
</script>
<template>
<mr-widget :fetch-collapsed-data="fetchCollapsedData" />
</template>
```
`:fetch-expanded-data` works the same way, but it will be called only when the user expands the widget.
### Data structure
The `content` and `summary` properties can be used to render the `Widget`. Below is the documentation for both
properties:
```javascript
// content
{
text: '', // Required: Main text for the row
subtext: '', // Optional: Smaller sub-text to be displayed below the main text
supportingText: '', // Optional: Paragraph to be displayed below the subtext
icon: { // Optional: Icon object
name: EXTENSION_ICONS.success, // Required: The icon name for the row
},
badge: { // Optional: Badge displayed after text
text: '', // Required: Text to be displayed inside badge
variant: '', // Optional: GitLab UI badge variant, defaults to info
},
link: { // Optional: Link to a URL displayed after text
text: '', // Required: Text of the link
href: '', // Optional: URL for the link
},
actions: [], // Optional: Action button for row
children: [], // Optional: Child content to render, structure matches the same structure
helpPopover: { // Optional: If provided, an information icon will be display at the right-most corner of the content row
options: {
title: '' // Required: The title of the popover
},
content: {
text: '', // Optional: Text content of the popover
learnMorePath: '', // Optional: The path to the documentation. A learn more link will be displayed if provided.
}
}
}
// summary
{
title: '', // Required: The main text of the summary part
subtitle: '', // Optional: The subtext of the summary part
}
```
### Errors
If `:fetch-collapsed-data` or `:fetch-expanded-data` methods throw an error.
To customise the error text, you can use the `:error-text` property:
```vue
<template>
<mr-widget :error-text="__('Failed to load.')" />
</template>
```
## Telemetry
The base implementation of the widget framework includes some telemetry events.
Each widget reports:
- `view`: When it is rendered to the screen.
- `expand`: When it is expanded.
- `full_report_clicked`: When an (optional) input is clicked to view the full report.
- Outcome (`expand_success`, `expand_warning`, or `expand_failed`): One of three
additional events relating to the status of the widget when it was expanded.
### Add new widgets
When adding new widgets, the above events must be marked as `known`, and have metrics
created, to be reportable.
{{< alert type="note" >}}
Events that are only for EE should include `--ee` at the end of both shell commands below.
{{< /alert >}}
To generate these known events for a single widget:
1. Widgets should be named `Widget${CamelName}`.
- For example: a widget for **Test Reports** should be `WidgetTestReports`.
1. Compute the widget name slug by converting the `${CamelName}` to lower-, snake-case.
- The previous example would be `test_reports`.
1. Add the new widget name slug to `lib/gitlab/usage_data_counters/merge_request_widget_counter.rb`
in the `WIDGETS` list.
1. Ensure the GDK is running (`gdk start`).
1. Generate known events on the command line with the following command.
Replace `test_reports` with your appropriate name slug:
```shell
bundle exec rails generate gitlab:usage_metric_definition \
counts.i_code_review_merge_request_widget_test_reports_count_view \
counts.i_code_review_merge_request_widget_test_reports_count_full_report_clicked \
counts.i_code_review_merge_request_widget_test_reports_count_expand \
counts.i_code_review_merge_request_widget_test_reports_count_expand_success \
counts.i_code_review_merge_request_widget_test_reports_count_expand_warning \
counts.i_code_review_merge_request_widget_test_reports_count_expand_failed \
--dir=all
```
1. Modify each newly generated file to match the existing files for the merge request widget extension telemetry.
- Find existing examples by doing a glob search, like: `metrics/**/*_i_code_review_merge_request_widget_*`
- Roughly speaking, each file should have these values:
1. `description` = A plain English description of this value. Review existing widget extension telemetry files for examples.
1. `product_section` = `dev`
1. `product_stage` = `create`
1. `product_group` = `code_review`
1. `introduced_by_url` = `'[your MR]'`
1. `options.events` = (the event in the command from above that generated this file, like `i_code_review_merge_request_widget_test_reports_count_view`)
- This value is how the telemetry events are linked to "metrics" so this is probably one of the more important values.
1. `data_source` = `redis`
1. `data_category` = `optional`
1. Generate known HLL events on the command line with the following command.
Replace `test_reports` with your appropriate name slug.
```shell
bundle exec rails generate gitlab:usage_metric_definition:redis_hll code_review \
i_code_review_merge_request_widget_test_reports_view \
i_code_review_merge_request_widget_test_reports_full_report_clicked \
i_code_review_merge_request_widget_test_reports_expand \
i_code_review_merge_request_widget_test_reports_expand_success \
i_code_review_merge_request_widget_test_reports_expand_warning \
i_code_review_merge_request_widget_test_reports_expand_failed \
--class_name=RedisHLLMetric
```
1. Repeat step 6, but change the `data_source` to `redis_hll`.
1. Add each event (those listed in the command in step 7, replacing `test_reports`
with the appropriate name slug) to the aggregate files:
1. `config/metrics/counts_7d/{timestamp}_code_review_category_monthly_active_users.yml`
1. `config/metrics/counts_7d/{timestamp}_code_review_group_monthly_active_users.yml`
1. `config/metrics/counts_28d/{timestamp}_code_review_category_monthly_active_users.yml`
1. `config/metrics/counts_28d/{timestamp}_code_review_group_monthly_active_users.yml`
### Add new events
If you are adding a new event to our known events, include the new event in the
`KNOWN_EVENTS` list in `lib/gitlab/usage_data_counters/merge_request_widget_extension_counter.rb`.
## Icons
Level 1 and all subsequent levels can have their own status icons. To keep with
the design framework, import the `EXTENSION_ICONS` constant
from the `constants.js` file:
```javascript
import { EXTENSION_ICONS } from '~/vue_merge_request_widget/constants.js';
```
This constant provides the following icons. Per the design framework, only some of these icons should be used on level 1:
- `failed`
- `warning`
- `success`
- `neutral`
- `error`
- `notice`
- `severityCritical`
- `severityHigh`
- `severityMedium`
- `severityLow`
- `severityInfo`
- `severityUnknown`
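For example, a widget might derive its level 1 status icon from report data. This is a minimal sketch; the `hasWarnings` field is hypothetical:
```javascript
import { EXTENSION_ICONS } from '~/vue_merge_request_widget/constants.js';

export default {
  computed: {
    // Illustrative: pick the level 1 status icon based on hypothetical report data.
    statusIcon() {
      return this.hasWarnings ? EXTENSION_ICONS.warning : EXTENSION_ICONS.success;
    },
  },
};
```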
## Action buttons
You can add action buttons at levels 1 and 2 of each extension. These buttons
are meant as a way to provide links or actions for each row:
- Action buttons for level 1 can be set through the `tertiaryButtons` computed property.
This property should return an array of objects for each action button.
- Action buttons for level 2 can be set by adding the `actions` key to the level 2 rows object.
The value for this key must also be an array of objects for each action button.
Links must follow this structure:
```javascript
{
text: 'Click me',
href: this.someLinkHref,
target: '_blank', // Optional
}
```
For internal action buttons, follow this structure:
```javascript
{
text: 'Click me',
onClick() {}
}
```
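For instance, a sketch of a level 1 `tertiaryButtons` computed property that combines a link and an internal action could look like this; the `fullReportPath` data property and `retryCheck` method are hypothetical:
```javascript
import { __ } from '~/locale';

export default {
  computed: {
    tertiaryButtons() {
      return [
        {
          text: __('Full report'),
          href: this.fullReportPath, // hypothetical data property
          target: '_blank',
        },
        {
          text: __('Retry'),
          onClick: () => this.retryCheck(), // hypothetical method
        },
      ];
    },
  },
};
```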
## Demo
Visit [GitLab MR Widgets Demo](https://gitlab.com/gitlab-org/frontend/playground/gitlab-mr-widgets-demo/-/merge_requests/26) to
see an example of all widgets displayed together.
---
stage: none
group: unassigned
info: Any user with at least the Maintainer role can merge updates to this content. For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Tips and tricks
---
## Code deletion checklist
When your merge request deletes code, it's important to also delete all
related code that is no longer used.
When deleting Haml and Vue code, check whether it contains the following types of unused code:
- CSS.
For example, we've deleted a Vue component that contained the `.mr-card` class, which is now unused.
The `.mr-card` CSS rule set should then be deleted from `merge_requests.scss`.
- Ruby variables.
Deleting unused Ruby variables is important so we don't continue instantiating them with
potentially expensive code.
For example, we've deleted a Haml template that used the `@total_count` Ruby variable.
The `@total_count` variable was no longer used in the remaining templates for the page.
The instantiation of `@total_count` in `issues_controller.rb` should then be deleted so that we
don't make unnecessary database calls to calculate the count of issues.
- Ruby methods.
---
stage: none
group: unassigned
info: Any user with at least the Maintainer role can merge updates to this content. For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Sentry monitoring in the frontend development of GitLab
---
The GitLab Frontend team uses Sentry as an observability tool to monitor how the UI performs for
users on `gitlab.com`.
GitLab.com is configured to report to our Sentry instance at **Admin > Metrics and profiling > Sentry**.
We monitor two kinds of data: **Errors** and **Performance**.
{{< alert type="note" >}}
The [Frontend Observability Working Group](https://handbook.gitlab.com/handbook/company/working-groups/frontend-observability/) is looking to improve how we use Sentry. GitLab team members can provide feedback at
[issue #427402](https://gitlab.com/gitlab-org/gitlab/-/issues/427402).
{{< /alert >}}
## Start using Sentry
Our Sentry instance is located at [https://new-sentry.gitlab.net/](https://new-sentry.gitlab.net/).
Only GitLab team members can access Sentry.
After your first sign-in, you can join the `#gitlab` team by selecting **Join a team**. Confirm that
`#gitlab` appears under `YOUR TEAMS` in the [teams page](https://new-sentry.gitlab.net/settings/gitlab/teams/).
## Error reporting
Errors, also known as "events" in the Sentry UI, are instances of abnormal or unexpected runtime
behavior that users experience in their browser.
GitLab uses the [Sentry Browser SDK](https://docs.sentry.io/platforms/javascript/) to report errors
to our Sentry instance under the project
[`gitlabcom-clientside`](https://new-sentry.gitlab.net/organizations/gitlab/projects/gitlabcom-clientside/?project=4).
### Reporting known errors
The most common way to report errors to Sentry is to call `captureException(error)`, for example:
```javascript
import * as Sentry from '~/sentry/sentry_browser_wrapper';
try {
// Code that may fail in runtime
} catch (error) {
Sentry.captureException(error)
}
```
**When should you report an error?** We want to avoid reporting errors that we either don't care
about, or have no control over. For example, we shouldn't report validation errors when a user fills
out a form incorrectly. However, if that form submission fails because of a server error,
this is an error we want Sentry to know about.
By default your local development instance does not have Sentry configured. Calls to Sentry are
stubbed and shown in the console with a `[Sentry stub]` prefix for debugging.
### Unhandled/unknown errors
Additionally, we capture unhandled errors automatically in all of our pages.
## Error Monitoring
Once errors are captured, they appear in Sentry. For example, you can see the
[errors reported in the last 24 hours in canary and production](https://new-sentry.gitlab.net/organizations/gitlab/issues/?environment=gprd-cny&environment=gprd&project=4&query=&referrer=issue-list&sort=freq&statsPeriod=24h).
In the list, select any error to see more details... and ideally propose a solution for it!
{{< alert type="note" >}}
We suggest filtering errors by the environments `gprd` and `gprd-cny`, as there is some spam in our
environment data.
{{< /alert >}}
### Exploring error data
Team members can use Sentry's [Discover page](https://new-sentry.gitlab.net/organizations/gitlab/discover/homepage/?environment=gprd-cny&environment=gprd&field=title&field=event.type&field=project&field=user.display&field=timestamp&field=replayId&name=All+Events&project=4&query=&sort=-timestamp&statsPeriod=14d&yAxis=count%28%29) to find unexpected issues.
Additionally, we have created [a dashboard](https://new-sentry.gitlab.net/organizations/gitlab/dashboard/3/?environment=gprd&environment=gprd-cny&project=4&statsPeriod=24h) to report which feature categories and pages produce
most errors, among other data.
Engineering team members are encouraged to explore error data and find ways to reduce errors on our
user interface. Sentry also provides alerts for folks interested in getting notified when errors occur.
### Filtering errors
We receive several thousand reports per day, so team members can filter errors based on their
work area.
We mark errors with two additional custom `tags` to help identify their source:
- `feature_category`: The feature area of the page. (For example, `code_review_workflow` or `continuous_integration`.) **Source**: `gon.feature_category`
- `page`: Identifier of method called in the controller to render the page. (For example, `projects:merge_requests:index` or `projects:pipelines:index`.) **Source**: [`body_data_page`](https://gitlab.com/gitlab-org/gitlab/blob/b2ea95b8b1f15228a2fd5fa3fbd316857d5676b8/app/helpers/application_helper.rb#L144).
Frontend engineering team members can filter errors relevant to their group and/or page.
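For example, in the Sentry issue search you can typically filter on these tags directly, such as `feature_category:code_review_workflow` or `page:"projects:merge_requests:show"` (the values here are illustrative).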
## Performance Monitoring
We use [BrowserTracing](https://docs.sentry.io/platforms/javascript/performance/) to report performance metrics to Sentry.
You can visit [our performance data of the last 24 hours](https://new-sentry.gitlab.net/organizations/gitlab/performance/?environment=gprd-cny&environment=gprd&project=4&statsPeriod=24h) and use the filters to drill down and learn more.
## Sentry instance infrastructure
The GitLab infrastructure team manages the Sentry instance. You can find more details about its architecture and data management in its [runbook documentation](https://gitlab.com/gitlab-com/runbooks/-/blob/master/docs/sentry/sentry.md).
---
stage: none
group: unassigned
info: Any user with at least the Maintainer role can merge updates to this content. For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Emojis
---
GitLab supports native Emojis through the [`tanuki_emoji`](https://gitlab.com/gitlab-org/ruby/gems/tanuki_emoji) gem.
## How to update Emojis
Because our emoji support is implemented on both the backend and the frontend, we need to update support over three milestones.
### First milestone (backend)
1. Update the [`tanuki_emoji`](https://gitlab.com/gitlab-org/ruby/gems/tanuki_emoji) gem as needed.
1. Update the `Gemfile` to use the latest `tanuki_emoji` gem.
1. Update the `Gemfile` to use the latest [`unicode-emoji`](https://github.com/janlelis/unicode-emoji) that supports the version of Unicode you're upgrading to.
1. Update `EMOJI_VERSION` in `lib/gitlab/emoji.rb`
1. `bundle exec rake tanuki_emoji:import` - imports all fallback images into the versioned `public/-/emojis` directory.
Ensure you see new individual images copied into there.
1. When testing, you should be able to use the shortcodes of any new emojis and have them display.
1. See example MRs [one](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/171446) and
[two](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/170289) for the backend.
### Second milestone (frontend)
1. Update `EMOJI_VERSION` in `app/assets/javascripts/emoji/index.js`
1. Use the [`tanuki_emoji`](https://gitlab.com/gitlab-org/ruby/gems/tanuki_emoji) gem's [Rake tasks](../rake_tasks.md) to update aliases, digests, and sprites. Run in the following order:
1. `bundle exec rake tanuki_emoji:aliases` - updates `fixtures/emojis/aliases.json`
1. `bundle exec rake tanuki_emoji:digests` - updates `public/-/emojis/VERSION/emojis.json` and `fixtures/emojis/digests.json`
1. `bundle exec rake tanuki_emoji:sprite` - creates new sprite sheets
If new emoji are added, the sprite sheet may change size. To compensate for
such changes, first generate the `app/assets/images/emoji.png` sprite sheet with the above Rake
task, then check the dimensions of the new sprite sheet and update the
`SPRITESHEET_WIDTH` and `SPRITESHEET_HEIGHT` constants in `lib/tasks/tanuki_emoji.rake` accordingly.
Then re-run the task.
- Use [ImageOptim](https://imageoptim.com) or similar program to optimize the images for size
1. Ensure new sprite sheets were generated for 1x and 2x
- `app/assets/images/emoji.png`
- `app/assets/images/emoji@2x.png`
1. Update `fixtures/emojis/intents.json` with any new emoji that we would like to highlight as having positive or negative intent.
- Positive intent should be set to `0.5`.
- Neutral intent can be set to `1`. This is applied to all emoji automatically so there is no need to set this explicitly.
- Negative intent should be set to `1.5`.
1. You might need to add new emoji Unicode support checks and rules for platforms
that do not support a certain emoji, where we need to fall back to an image.
See `app/assets/javascripts/emoji/support/is_emoji_unicode_supported.js`
and `app/assets/javascripts/emoji/support/unicode_support_map.js`
1. Ensure you use the version of [emoji-regex](https://github.com/mathiasbynens/emoji-regex) that corresponds
to the version of Unicode that is being supported. This should be updated in `package.json`. Used for
filtering emojis in `app/assets/javascripts/emoji/index.js`.
1. If any category names have changed, update `app/assets/javascripts/emoji/constants.js`.
1. When testing:
1. Ensure you can see the new emojis and their aliases in the GitLab Flavored Markdown (GLFM) Autocomplete
1. Ensure you can see the new emojis and their aliases in the emoji reactions menu
### Third milestone (cleanup)
Remove any old emoji versions from the `public/-/emojis` directory. This is not strictly necessary -
everything continues to work if you don't do this. However, it's good to clean it up.
---
stage: Create
group: Source Code
info: Any user with at least the Maintainer role can merge updates to this content. For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Source Editor
---
Source Editor provides the editing experience at GitLab. This thin wrapper around
[the Monaco editor](https://microsoft.github.io/monaco-editor/) provides necessary
helpers and abstractions, and extends Monaco [using extensions](#extensions). Multiple
GitLab features use it, including:
- [Web IDE](../../user/project/web_ide/_index.md)
- [CI Linter](../../ci/yaml/lint.md)
- [Snippets](../../user/snippets.md)
- [Web Editor](../../user/project/repository/web_editor.md)
- [Security Policies](../../user/application_security/policies/_index.md)
## When to use Source Editor
Use Source Editor only when users need to edit the file content.
If you only need to display source code, consider using the [`BlobContent`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/assets/javascripts/blob/components/blob_content.vue) component.
If the page you're working on is already loading the Source Editor,
displaying read-only content in the Source Editor is still a valid option.
## How to use Source Editor
Source Editor is framework-independent and can be used in any application, including both
Rails and Vue. To help with integration, we have the dedicated `<source-editor>`
Vue component, but the integration of Source Editor is generally straightforward:
1. Import Source Editor:
```javascript
import SourceEditor from '~/editor/source_editor';
```
1. Initialize global editor for the view:
```javascript
const editor = new SourceEditor({
// Editor Options.
// The list of all accepted options can be found at
// https://microsoft.github.io/monaco-editor/docs.html
});
```
1. Create an editor's instance:
```javascript
editor.createInstance({
// Source Editor configuration options.
})
```
An instance of Source Editor accepts the following configuration options:
| Option | Required? | Description |
| -------------- | ------- | ---- |
| `el` | `true` | `HTML Node`: The element on which to render the editor. |
| `blobPath` | `false` | `String`: The name of a file to render in the editor, used to identify the correct syntax highlighter to use with that file, or another file type. Can accept wildcards like `*.js` when the actual filename isn't known or doesn't play any role. |
| `blobContent` | `false` | `String`: The initial content to render in the editor. |
| `extensions` | `false` | `Array`: Extensions to use in this instance. |
| `blobGlobalId` | `false` | `String`: An auto-generated property.|
| Editor Options | `false` | `Object(s)`: Any property outside of the list above is treated as an Editor Option for this particular instance. Use this field to override global Editor Options on the instance level. A full [index of Editor Options](https://microsoft.github.io/monaco-editor/docs.html#enums/editor.EditorOption.html) is available. |
{{< alert type="note" >}}
The `blobGlobalId` property may be removed in a future release. Use the standard blob properties
instead unless you have a specific use case that requires `blobGlobalId`.
{{< /alert >}}
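Putting these options together, a minimal sketch of creating an instance could look like this; the element ID, file name, and content are illustrative:
```javascript
import SourceEditor from '~/editor/source_editor';

const editor = new SourceEditor();

const instance = editor.createInstance({
  el: document.getElementById('editor'), // illustrative element ID
  blobPath: 'README.md',
  blobContent: '# Hello, Source Editor!',
});
```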
## API
The editor uses the same public API as
[provided by Monaco editor](https://microsoft.github.io/monaco-editor/docs.html)
with additional functions on the instance level:
| Function | Arguments | Description |
| --------------------- | ----- | ----- |
| `updateModelLanguage` | `path`: String | Updates the instance's syntax highlighting to follow the extension of the passed `path`. Available only on the instance level. |
| `use` | Array of objects | Array of extensions to apply to the instance. Accepts only an array of objects. The extensions' ES6 modules must be fetched and resolved in your views or components before they're passed to `use`. Available on the instance and global editor (all instances) levels. |
| Monaco Editor options | See [documentation](https://microsoft.github.io/monaco-editor/docs.html#interfaces/editor.IStandaloneCodeEditor.html) | Default Monaco editor options. |
## Tips
1. Editor's loading state.
The loading state is built into Source Editor, making spinners and loaders
rarely needed in HTML. To benefit from the built-in loading state, set the `data-editor-loading`
property on the HTML element that should contain the editor. When bootstrapping,
Source Editor shows the loader automatically.
1. Update syntax highlighting if the filename changes.
```javascript
// fileNameEl here is the HTML input element that contains the filename
fileNameEl.addEventListener('change', () => {
this.editor.updateModelLanguage(fileNameEl.value);
});
```
1. Get the editor's content.
We could set up listeners on the editor for every change, but that can rapidly become
an expensive operation. Instead, get the editor's content when it's needed.
For example, on a form's submission:
```javascript
form.addEventListener('submit', () => {
my_content_variable = this.editor.getValue();
});
```
1. Performance
Even though Source Editor itself is extremely slim, it still depends on Monaco editor,
which adds weight. Every time you add Source Editor to a view, the JavaScript bundle's
size significantly increases, affecting your view's loading performance. You should
import the editor on demand if either:
- You're uncertain if the view needs the editor.
- The editor is a secondary element of the view.
Loading Source Editor on demand is handled like loading any other module:
```javascript
someActionFunction() {
import(/* webpackChunkName: 'SourceEditor' */ '~/editor/source_editor').
then(({ default: SourceEditor }) => {
const editor = new SourceEditor();
...
});
...
}
```
## Extensions
Source Editor provides a universal, extensible editing tool to the whole product,
and doesn't depend on any particular group. Even though the Source Editor's core is owned by
[Create::Editor FE Team](https://handbook.gitlab.com/handbook/engineering/development/dev/create/editor-extensions/),
any group can own the extensions (the main functional elements). The goal of
Source Editor extensions is to keep the editor's core slim and stable. Any
needed features can be added as extensions to this core. Any group can
build and own new editing features without worrying about changes to Source Editor
breaking or overriding them.
You can depend on other modules in your extensions. This organization helps keep
the size of Source Editor's core at bay by importing dependencies only when needed.
Structurally, the complete implementation of Source Editor can be presented as this diagram:
```mermaid
%%{init: { "fontFamily": "GitLab Sans" }}%%
graph TD;
B[Extension 1]---A[Source Editor]
C[Extension 2]---A[Source Editor]
D[Extension 3]---A[Source Editor]
E[...]---A[Source Editor]
F[Extension N]---A[Source Editor]
A[Source Editor]---Z[Monaco]
```
An extension is an ES6 module that exports a JavaScript object:
```javascript
import { Position } from 'monaco-editor';
export default {
navigateFileStart() {
this.setPosition(new Position(1, 1));
},
};
```
In the extension's functions, `this` refers to the current Source Editor instance.
Using `this`, you get access to the complete instance's API, such as the
`setPosition()` method in this particular case.
### Using an existing extension
Adding an extension to Source Editor's instance requires the following steps:
```javascript
import SourceEditor from '~/editor/source_editor';
import MyExtension from '~/my_extension';
const editor = new SourceEditor().createInstance({
...
});
editor.use(MyExtension);
```
### Creating an extension
Let's create our first Source Editor extension. Extensions are
[ES6 modules](https://hacks.mozilla.org/2015/08/es6-in-depth-modules/) exporting a
basic `Object`, used to extend Source Editor's features. As a test, let's
create an extension that extends Source Editor with a new function that, when called,
outputs the editor's content in `alert`.
`~/my_folder/my_fancy_extension.js:`
```javascript
export default {
throwContentAtMe() {
alert(this.getValue());
},
};
```
In the code example, `this` refers to the instance. By referring to the instance,
we can access the complete underlying
[Monaco editor API](https://microsoft.github.io/monaco-editor/docs.html),
which includes functions like `getValue()`.
Now let's use our extension:
`~/my_folder/component_bundle.js`:
```javascript
import SourceEditor from '~/editor/source_editor';
import MyFancyExtension from './my_fancy_extension';
const editor = new SourceEditor().createInstance({
...
});
editor.use(MyFancyExtension);
...
someButton.addEventListener('click', () => {
editor.throwContentAtMe();
});
```
First of all, we import Source Editor and our new extension. Then we create the
editor and its instance. By default Source Editor has no `throwContentAtMe` method.
But the `editor.use(MyFancyExtension)` line brings that method to our instance.
After that, we can use it any time we need it. In this case, we call it when some
theoretical button has been clicked.
This script would result in an alert containing the editor's content when `someButton` is clicked.
### Tips
1. Performance
Just like Source Editor itself, any extension can be loaded on demand to not harm
loading performance of the views:
```javascript
const EditorPromise = import(
/* webpackChunkName: 'SourceEditor' */ '~/editor/source_editor'
);
const MarkdownExtensionPromise = import('~/editor/source_editor_markdown_ext');
Promise.all([EditorPromise, MarkdownExtensionPromise])
.then(([{ default: SourceEditor }, { default: MarkdownExtension }]) => {
const editor = new SourceEditor().createInstance({
...
});
editor.use(MarkdownExtension);
});
```
1. Using multiple extensions
Just pass the array of extensions to your `use` method:
```javascript
editor.use([FileTemplateExtension, MyFancyExtension]);
```
---
stage: none
group: unassigned
info: Any user with at least the Maintainer role can merge updates to this content. For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Vuex
---
## DEPRECATED
**[Vuex](https://vuex.vuejs.org) is deprecated at GitLab** and no new Vuex stores should be created.
You can still maintain existing Vuex stores but we strongly recommend [migrating away from Vuex entirely](migrating_from_vuex.md).
The rest of the information included on this page is explained in more detail in the
official [Vuex documentation](https://vuex.vuejs.org).
## Separation of concerns
Vuex is composed of State, Getters, Mutations, Actions, and Modules.
When a user selects an action, we need to `dispatch` it. This action `commits` a mutation that changes the state. The action itself does not update the state; only a mutation should update the state.
## File structure
When using Vuex at GitLab, separate these concerns into different files to improve readability:
```plaintext
└── store
├── index.js # where we assemble modules and export the store
├── actions.js # actions
├── mutations.js # mutations
├── getters.js # getters
├── state.js # state
└── mutation_types.js # mutation types
```
The following example shows an application that lists and adds users to the
state. (For a more complex example implementation, review the security
applications stored in this [repository](https://gitlab.com/gitlab-org/gitlab/-/tree/master/ee/app/assets/javascripts/vue_shared/security_reports/store)).
### `index.js`
This is the entry point for our store. You can use the following as a guide:
```javascript
// eslint-disable-next-line no-restricted-imports
import Vuex from 'vuex';
import * as actions from './actions';
import * as getters from './getters';
import mutations from './mutations';
import state from './state';
export const createStore = () =>
new Vuex.Store({
actions,
getters,
mutations,
state,
});
```
### `state.js`
The first thing you should do before writing any code is to design the state.
Often we need to provide data from HAML to our Vue application. Let's store it in the state for better access.
```javascript
export default () => ({
endpoint: null,
isLoading: false,
error: null,
isAddingUser: false,
errorAddingUser: false,
users: [],
});
```
#### Access `state` properties
You can use `mapState` to access state properties in the components.
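For example, a minimal sketch of mapping the loading flag and the list of users into a component:
```javascript
// eslint-disable-next-line no-restricted-imports
import { mapState } from 'vuex';

export default {
  computed: {
    // Exposes this.isLoading and this.users from the store state.
    ...mapState(['isLoading', 'users']),
  },
};
```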
### `actions.js`
An action is a payload of information to send data from our application to our store.
An action is usually composed of a `type` and a `payload`, which describe what happened. Unlike [mutations](#mutationsjs), actions can contain asynchronous operations, which is why we always handle asynchronous logic in actions.
In this file, we write the actions that call mutations for handling a list of users:
```javascript
import * as types from './mutation_types';
import axios from '~/lib/utils/axios_utils';
import { createAlert } from '~/alert';
export const fetchUsers = ({ state, commit }) => {
commit(types.REQUEST_USERS);
axios.get(state.endpoint)
.then(({ data }) => commit(types.RECEIVE_USERS_SUCCESS, data))
.catch((error) => {
commit(types.RECEIVE_USERS_ERROR, error)
createAlert({ message: 'There was an error' })
});
}
export const addUser = ({ state, commit }, user) => {
commit(types.REQUEST_ADD_USER);
axios.post(state.endpoint, user)
.then(({ data }) => commit(types.RECEIVE_ADD_USER_SUCCESS, data))
.catch((error) => commit(types.RECEIVE_ADD_USER_ERROR, error));
}
```
#### Dispatching actions
To dispatch an action from a component, use the `mapActions` helper:
```javascript
import { mapActions } from 'vuex';
{
methods: {
...mapActions([
'addUser',
]),
onClickUser(user) {
this.addUser(user);
},
},
};
```
### `mutations.js`
The mutations specify how the application state changes in response to actions sent to the store.
The only way to change state in a Vuex store is by committing a mutation.
Most mutations are committed from an action using `commit`. If you don't have any
asynchronous operations, you can call mutations from a component using the `mapMutations` helper.
See the Vuex documentation for examples of [committing mutations from components](https://vuex.vuejs.org/guide/mutations.html#committing-mutations-in-components).
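For completeness, a sketch of committing a mutation directly from a component with `mapMutations`, which is only appropriate when no asynchronous logic is involved (the import path is illustrative):
```javascript
// eslint-disable-next-line no-restricted-imports
import { mapMutations } from 'vuex';
import * as types from './store/mutation_types'; // illustrative path

export default {
  methods: {
    // Exposes this.markAsClosed(itemId), which commits the mutation directly.
    ...mapMutations({
      markAsClosed: types.MARK_AS_CLOSED,
    }),
  },
};
```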
#### Naming Pattern: `REQUEST` and `RECEIVE` namespaces
When a request is made, we often want to show a loading state to the user.
Instead of creating a single mutation to toggle the loading state, we should create:
1. A mutation with type `REQUEST_SOMETHING`, to toggle the loading state
1. A mutation with type `RECEIVE_SOMETHING_SUCCESS`, to handle the success callback
1. A mutation with type `RECEIVE_SOMETHING_ERROR`, to handle the error callback
1. An action `fetchSomething` to make the request and commit mutations on mentioned cases
1. In case your application does more than a `GET` request you can use these as examples:
- `POST`: `createSomething`
- `PUT`: `updateSomething`
- `DELETE`: `deleteSomething`
As a result, we can dispatch the `fetchNamespace` action from the component, and it is responsible for committing the `REQUEST_NAMESPACE`, `RECEIVE_NAMESPACE_SUCCESS`, and `RECEIVE_NAMESPACE_ERROR` mutations.
> Previously, we were dispatching actions from the `fetchNamespace` action instead of committing mutations, so don't be confused if you find a different pattern in older parts of the codebase. However, we encourage leveraging this pattern whenever you work on existing Vuex stores.
By following this pattern we guarantee:
1. All applications follow the same pattern, making it easier for anyone to maintain the code.
1. All data in the application follows the same lifecycle pattern.
1. Unit tests are easier.
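For illustration, a `mutations.js` for the users example above, following this naming pattern, might look like this sketch:
```javascript
import * as types from './mutation_types';

export default {
  [types.REQUEST_USERS](state) {
    state.isLoading = true;
  },
  [types.RECEIVE_USERS_SUCCESS](state, users) {
    state.isLoading = false;
    state.users = users;
  },
  [types.RECEIVE_USERS_ERROR](state, error) {
    state.isLoading = false;
    state.error = error;
  },
};
```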
#### Updating complex state
Sometimes, especially when the state is complex, it is really hard to traverse the state to precisely update what the mutation needs to update.
Ideally, a Vuex state should be as normalized and decoupled as possible, but this is not always the case.
It's important to remember that the code is much easier to read and maintain when the portion of the state being mutated is selected and mutated in the mutation itself.
Given this state:
```javascript
export default () => ({
items: [
{
id: 1,
name: 'my_issue',
closed: false,
},
{
id: 2,
name: 'another_issue',
closed: false,
}
]
});
```
It may be tempting to write a mutation like so:
```javascript
// Bad
export default {
[types.MARK_AS_CLOSED](state, item) {
Object.assign(item, {closed: true})
}
}
```
While this approach works, it has several dependencies:
- The correct `item` must be selected in the component or action.
- The `closed` property must already be declared in the initial state; a new property, such as `confidential`, would not be reactive.
- It relies on `item` still being referenced by `items`.
A mutation written like this is harder to maintain and more error-prone. Instead, we should write a mutation like this:
```javascript
// Good
export default {
[types.MARK_AS_CLOSED](state, itemId) {
const item = state.items.find(x => x.id === itemId);
if (!item) {
return;
}
Vue.set(item, 'closed', true);
},
};
```
This approach is better because:
- It selects and updates the state in the mutation, which is more maintainable.
- It has no external dependencies: if the correct `itemId` is passed, the state is correctly updated.
- It does not have reactivity caveats, because `Vue.set` makes the updated property reactive even if it was not declared up front.
A mutation written like this is easier to maintain. In addition, we avoid errors due to the limitations of the reactivity system.
### `getters.js`
Sometimes we may need to get derived state based on store state, like filtering for a specific prop.
Using a getter also caches the result based on dependencies due to [how computed props work](https://v2.vuejs.org/v2/guide/computed.html#Computed-Caching-vs-Methods)
This can be done through the `getters`:
```javascript
// get all the users with pets
export const getUsersWithPets = (state, getters) => {
return state.users.filter(user => user.pet !== undefined);
};
```
To access a getter from a component, use the `mapGetters` helper:
```javascript
import { mapGetters } from 'vuex';
{
computed: {
...mapGetters([
'getUsersWithPets',
]),
},
};
```
### `mutation_types.js`
From [Vuex mutations documentation](https://vuex.vuejs.org/guide/mutations.html):
> It is a commonly seen pattern to use constants for mutation types in various Flux implementations.
> This allows the code to take advantage of tooling like linters, and putting all constants in a
> single file allows your collaborators to get an at-a-glance view of what mutations are possible
> in the entire application.
```javascript
export const ADD_USER = 'ADD_USER';
```
### Initializing a store's state
It's common for a Vuex store to need some initial state before its `action`s can
be used. Often this includes data like API endpoints, documentation URLs, or
IDs.
To set this initial state, pass it as a parameter to your store's creation
function when mounting your Vue component:
```javascript
// in the Vue app's initialization script (for example, mount_show.js)
import Vue from 'vue';
// eslint-disable-next-line no-restricted-imports
import Vuex from 'vuex';
import { createStore } from './stores';
import AwesomeVueApp from './components/awesome_vue_app.vue'
Vue.use(Vuex);
export default () => {
const el = document.getElementById('js-awesome-vue-app');
return new Vue({
el,
name: 'AwesomeVueRoot',
store: createStore(el.dataset),
render: h => h(AwesomeVueApp)
});
};
```
The store function, in turn, can pass this data along to the state's creation
function:
```javascript
// in store/index.js
import * as actions from './actions';
import mutations from './mutations';
import createState from './state';
export default initialState => ({
actions,
mutations,
state: createState(initialState),
});
```
And the state function can accept this initial data as a parameter and bake it
into the `state` object it returns:
```javascript
// in store/state.js
export default ({
projectId,
documentationPath,
anOptionalProperty = true
}) => ({
projectId,
documentationPath,
anOptionalProperty,
// other state properties here
});
```
#### Why not just ...spread the initial state?
The astute reader sees an opportunity to cut out a few lines of code from
the example above:
```javascript
// Don't do this!
export default initialState => ({
...initialState,
// other state properties here
});
```
We made the conscious decision to avoid this pattern to improve the ability to
discover and search our frontend codebase. The same applies
when [providing data to a Vue app](vue.md#providing-data-from-haml-to-javascript). The reasoning for this is described in
[this discussion](https://gitlab.com/gitlab-org/frontend/rfcs/-/issues/56#note_302514865):
> Consider a `someStateKey` is being used in the store state. You _may_ not be
> able to grep for it directly if it was provided only by `el.dataset`. Instead,
> you'd have to grep for `some_state_key`, because it could have come from a Rails
> template. The reverse is also true: if you're looking at a rails template, you
> might wonder what uses `some_state_key`, but you'd _have_ to grep for
> `someStateKey`.
### Communicating with the Store
```javascript
<script>
// eslint-disable-next-line no-restricted-imports
import { mapActions, mapState, mapGetters } from 'vuex';
export default {
computed: {
...mapGetters([
'getUsersWithPets'
]),
...mapState([
'isLoading',
'users',
'error',
]),
},
methods: {
...mapActions([
'fetchUsers',
'addUser',
]),
onClickAddUser(data) {
this.addUser(data);
}
},
created() {
this.fetchUsers()
}
}
</script>
<template>
<ul>
<li v-if="isLoading">
Loading...
</li>
<li v-else-if="error">
{{ error }}
</li>
<template v-else>
<li
v-for="user in users"
:key="user.id"
>
{{ user }}
</li>
</template>
</ul>
</template>
```
### Testing Vuex
#### Testing Vuex concerns
Refer to [Vuex documentation](https://vuex.vuejs.org/guide/testing.html) regarding testing Actions, Getters and Mutations.
#### Testing components that need a store
Smaller components might use `store` properties to access the data. To write unit tests for those
components, we need to include the store and provide the correct state:
```javascript
//component_spec.js
import Vue from 'vue';
// eslint-disable-next-line no-restricted-imports
import Vuex from 'vuex';
import { mount } from '@vue/test-utils';
import { createStore } from './store';
import Component from './component.vue'
Vue.use(Vuex);
describe('component', () => {
let store;
let wrapper;
const createComponent = () => {
store = createStore();
wrapper = mount(Component, {
store,
});
};
beforeEach(() => {
createComponent();
});
it('should show a user', async () => {
const user = {
name: 'Foo',
age: '30',
};
// populate the store
await store.dispatch('addUser', user);
expect(wrapper.text()).toContain(user.name);
});
});
```
Some test files may still use the
[deprecated `createLocalVue` function](https://gitlab.com/gitlab-org/gitlab/-/issues/220482)
from `@vue/test-utils` and `localVue.use(Vuex)`. This is unnecessary, and should be
avoided or removed when possible.
### Two way data binding
When storing form data in Vuex, it is sometimes necessary to update the value stored. The store
should never be mutated directly, and an action should be used instead.
To use `v-model` in our code, we need to create computed properties in this form:
```javascript
export default {
computed: {
someValue: {
get() {
return this.$store.state.someValue;
},
set(value) {
this.$store.dispatch("setSomeValue", value);
}
}
}
};
```
An alternative is to use `mapState` and `mapActions`:
```javascript
export default {
computed: {
...mapState(['someValue']),
localSomeValue: {
get() {
return this.someValue;
},
set(value) {
this.setSomeValue(value)
}
}
},
methods: {
...mapActions(['setSomeValue'])
}
};
```
Adding several of these properties quickly becomes cumbersome: the code gets repetitive and each property needs its own tests. To simplify this, there is a helper in `~/vuex_shared/bindings.js`.
The helper can be used like so:
```javascript
// this store is non-functional and only used to give context to the example
export default {
state: {
baz: '',
bar: '',
foo: ''
},
actions: {
updateBar() {...},
updateAll() {...},
},
getters: {
getFoo() {...},
}
}
```
```javascript
import { mapComputed } from '~/vuex_shared/bindings'
export default {
computed: {
/**
     * @param {(string[]|Object[])} list - list of strings matching state keys, or a list of objects
* @param {string} list[].key - the key matching the key present in the vuex state
* @param {string} list[].getter - the name of the getter, leave it empty to not use a getter
* @param {string} list[].updateFn - the name of the action, leave it empty to use the default action
* @param {string} defaultUpdateFn - the default function to dispatch
     * @param {string|function} root - optional key of the state where to search for the keys described in `list`
* @returns {Object} a dictionary with all the computed properties generated
*/
...mapComputed(
[
'baz',
{ key: 'bar', updateFn: 'updateBar' },
{ key: 'foo', getter: 'getFoo' },
],
'updateAll',
),
}
}
```
`mapComputed` then generates the appropriate computed properties that get the data from the store and dispatch the correct action when updated.
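For reference, the `mapComputed` call above is roughly equivalent to writing the following computed properties by hand. This is a simplified sketch: the exact payload shape dispatched by the real helper in `~/vuex_shared/bindings.js` may differ.

```javascript
// Hand-written equivalent of the `mapComputed` call above (illustrative only)
export default {
  computed: {
    baz: {
      get() {
        return this.$store.state.baz;
      },
      set(value) {
        // No `updateFn` was given, so the default action `updateAll` is dispatched
        this.$store.dispatch('updateAll', { baz: value });
      },
    },
    bar: {
      get() {
        return this.$store.state.bar;
      },
      set(value) {
        // `updateFn: 'updateBar'` overrides the default action
        this.$store.dispatch('updateBar', { bar: value });
      },
    },
    foo: {
      get() {
        // `getter: 'getFoo'` reads the value through the getter instead of the raw state
        return this.$store.getters.getFoo;
      },
      set(value) {
        this.$store.dispatch('updateAll', { foo: value });
      },
    },
  },
};
```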
If the `root` of the keys is more than one level deep, you can pass a function to retrieve the relevant state object.
For instance, with a store like:
```javascript
// this store is non-functional and only used to give context to the example
export default {
state: {
foo: {
qux: {
baz: '',
bar: '',
foo: '',
},
},
},
actions: {
updateBar() {...},
updateAll() {...},
},
getters: {
getFoo() {...},
}
}
```
The `root` could be:
```javascript
import { mapComputed } from '~/vuex_shared/bindings'
export default {
computed: {
...mapComputed(
[
'baz',
{ key: 'bar', updateFn: 'updateBar' },
{ key: 'foo', getter: 'getFoo' },
],
'updateAll',
(state) => state.foo.qux,
),
}
}
```
---
stage: none
group: unassigned
info: Any user with at least the Maintainer role can merge updates to this content.
  For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Axios
---
In older parts of our codebase that use the REST API, we used [Axios](https://github.com/axios/axios) to communicate with the server. Do not use Axios in new applications; instead, rely on `apollo-client` to query the GraphQL API. For more details, see [our GraphQL documentation](graphql.md).
To guarantee all defaults are set you should import Axios from `axios_utils`. Do not use Axios directly.
## CSRF token
All our requests require a CSRF token.
To guarantee this token is set, we are importing [Axios](https://github.com/axios/axios), setting the token, and exporting `axios`.
This exported module should be used instead of directly using Axios to ensure the token is set.
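Conceptually, the exported module looks something like the following. This is a simplified sketch; the real `~/lib/utils/axios_utils` sets additional defaults and instrumentation.

```javascript
// Simplified sketch of an Axios wrapper that attaches the CSRF token (illustrative only)
import axios from 'axios';

// Rails renders the CSRF token into a meta tag; read it from the DOM
const csrfTokenTag = document.querySelector('meta[name="csrf-token"]');

if (csrfTokenTag) {
  axios.defaults.headers.common['X-CSRF-Token'] = csrfTokenTag.getAttribute('content');
}

export default axios;
```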
## Usage
```javascript
import axios from './lib/utils/axios_utils';
axios.get(url)
.then((response) => {
// `data` is the response that was provided by the server
const data = response.data;
// `headers` the headers that the server responded with
// All header names are lower cased
const paginationData = response.headers;
})
.catch(() => {
//handle the error
});
```
## Mock Axios response in tests
To help us mock the responses we are using [axios-mock-adapter](https://github.com/ctimmerm/axios-mock-adapter).
Advantages over [`spyOn()`](https://jasmine.github.io/api/edge/global.html#spyOn):
- no need to create response objects
- does not allow call through (which we want to avoid)
- clear API to test error cases
- provides `replyOnce()` to allow for different responses (see the example below)
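For example, given a `MockAdapter` instance like the one in the example below, `replyOnce()` can return a different response for each consecutive request to the same endpoint:

```javascript
// The first GET /users call succeeds, the second one fails with a server error
mock.onGet('/users').replyOnce(200, { users: [] });
mock.onGet('/users').replyOnce(500);
```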
We have also decided against using [Axios interceptors](https://github.com/axios/axios#interceptors) because they are not suitable for mocking.
### Example
```javascript
import axios from '~/lib/utils/axios_utils';
import MockAdapter from 'axios-mock-adapter';
let mock;
beforeEach(() => {
// This sets the mock adapter on the default instance
mock = new MockAdapter(axios);
// Mock any GET request to /users
// arguments for reply are (status, data, headers)
mock.onGet('/users').reply(200, {
users: [
{ id: 1, name: 'John Smith' }
]
});
});
afterEach(() => {
mock.restore();
});
```
### Mock poll requests in tests with Axios
Because a polling function requires a header object, we need to always include an object as the third argument:
```javascript
mock.onGet('/users').reply(200, { foo: 'bar' }, {});
```
---
stage: Plan
group: Knowledge
info: Any user with at least the Maintainer role can merge updates to this content.
  For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Rich text editor development guidelines
---
The rich text editor is a UI component that provides a WYSIWYG editing
experience for [GitLab Flavored Markdown](../../user/markdown.md) in the GitLab application.
It also serves as the foundation for implementing Markdown-focused editors
that target other engines, like static site generators.
We use [Tiptap 2.0](https://tiptap.dev/) and [ProseMirror](https://prosemirror.net/)
to build the rich text editor. These frameworks provide a level of abstraction on top of
the native
[`contenteditable`](https://developer.mozilla.org/en-US/docs/Web/HTML/Global_attributes/contenteditable) web technology.
## Usage guide
Follow these instructions to include the rich text editor in a feature.
1. [Include the rich text editor component](#include-the-rich-text-editor-component).
1. [Set and get Markdown](#set-and-get-markdown).
1. [Listen for changes](#listen-for-changes).
### Include the rich text editor component
Import the `ContentEditor` Vue component. We recommend using asynchronous named imports to
take advantage of caching, as the ContentEditor is a big dependency.
```html
<script>
export default {
components: {
ContentEditor: () =>
import(
/* webpackChunkName: 'content_editor' */ '~/content_editor/components/content_editor.vue'
),
},
// rest of the component definition
}
</script>
```
The rich text editor requires two properties:
- `renderMarkdown` is an asynchronous function that returns the response (String) of invoking the
[Markdown API](../../api/markdown.md).
- `uploadsPath` is a URL that points to a [GitLab upload service](../uploads/_index.md)
with `multipart/form-data` support.
See the [`WikiForm.vue`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/assets/javascripts/wikis/components/wiki_form.vue#L207)
component for a production example of these two properties.
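As an illustration, a `renderMarkdown` implementation could call the [Markdown API](../../api/markdown.md) through `axios_utils`. This is a hedged sketch; the endpoint, payload, and error handling your feature needs may differ.

```javascript
// Hypothetical helper passed to the rich text editor as `render-markdown` (illustrative only)
import axios from '~/lib/utils/axios_utils';

export async function renderMarkdown(markdown) {
  const { data } = await axios.post('/api/v4/markdown', {
    text: markdown,
    gfm: true,
  });

  // The editor expects the rendered HTML string
  return data.html;
}
```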
### Set and get Markdown
The `ContentEditor` Vue component doesn't implement Vue data binding flow (`v-model`)
because setting and getting Markdown are expensive operations. Data binding would
trigger these operations every time the user interacts with the component.
Instead, you should obtain an instance of the `ContentEditor` class by listening to the
`initialized` event:
```html
<script>
import { createAlert } from '~/alert';
import { __ } from '~/locale';
export default {
methods: {
async loadInitialContent(contentEditor) {
this.contentEditor = contentEditor;
try {
await this.contentEditor.setSerializedContent(this.content);
} catch (e) {
createAlert({ message: __('Could not load initial document') });
}
},
submitChanges() {
const markdown = this.contentEditor.getSerializedContent();
},
},
};
</script>
<template>
<content-editor
:render-markdown="renderMarkdown"
:uploads-path="pageInfo.uploadsPath"
@initialized="loadInitialContent"
/>
</template>
```
### Listen for changes
You can still react to changes in the rich text editor. Reacting to changes helps
you know if the document is empty or dirty. Use the `@change` event handler for
this purpose.
```html
<script>
export default {
data() {
return {
empty: false,
};
},
methods: {
handleContentEditorChange({ empty }) {
this.empty = empty;
}
},
};
</script>
<template>
<div>
<content-editor
:render-markdown="renderMarkdown"
:uploads-path="pageInfo.uploadsPath"
@initialized="loadInitialContent"
@change="handleContentEditorChange"
/>
<gl-button :disabled="empty" @click="submitChanges">
{{ __('Submit changes') }}
</gl-button>
</div>
</template>
```
## Implementation guide
The rich text editor is composed of three main layers:
- **The editing tools UI**, like the toolbar and the table structure editor. They
display the editor's state and mutate it by dispatching commands.
- **The Tiptap Editor object** manages the editor's state,
and exposes business logic as commands executed by the editing tools UI.
- **The Markdown serializer** transforms a Markdown source string into a ProseMirror
document and vice versa.
### Editing tools UI
The editing tools UI is a set of Vue components that display the editor's state and
dispatch [commands](https://tiptap.dev/docs/editor/api/commands) to mutate it.
They are located in the `~/content_editor/components` directory. For example,
the **Bold** toolbar button displays the editor's state by becoming active when
the user selects bold text. This button also dispatches the `toggleBold` command
to format text as bold:
```mermaid
sequenceDiagram
participant A as Editing tools UI
participant B as Tiptap object
A->>B: queries state/dispatches commands
B--)A: notifies state changes
```
#### Node views
We implement [node views](https://tiptap.dev/docs/editor/guide/node-views/vue)
to provide inline editing tools for some content types, like tables and images. Node views
allow separating the presentation of a content type from its
[model](https://prosemirror.net/docs/guide/#doc.data_structures). Using a Vue component in
the presentation layer enables sophisticated editing experiences in the rich text editor.
Node views are located in `~/content_editor/components/wrappers`.
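As a rough illustration, a Tiptap extension delegates rendering to a Vue node view roughly like this, assuming the Vue 2 Tiptap bindings (`@tiptap/vue-2`). The extension and wrapper component names below are hypothetical.

```javascript
// Hypothetical extension that renders its content through a Vue node view (illustrative only)
import { Node } from '@tiptap/core';
import { VueNodeViewRenderer } from '@tiptap/vue-2';
import MyContentTypeWrapper from '~/content_editor/components/wrappers/my_content_type_wrapper.vue';

export default Node.create({
  name: 'myContentType',

  // ...schema definition (attributes, parseHTML, renderHTML) goes here

  addNodeView() {
    // Delegate the presentation of this node to a Vue component
    return VueNodeViewRenderer(MyContentTypeWrapper);
  },
});
```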
#### Dispatch commands
You can inject the Tiptap Editor object into Vue components to dispatch
commands.
{{< alert type="note" >}}
Do not implement logic that changes the editor's
state in Vue components. Encapsulate this logic in commands, and dispatch
the command from the component's methods.
{{< /alert >}}
```html
<script>
export default {
inject: ['tiptapEditor'],
methods: {
execute() {
//Incorrect
const { state, view } = this.tiptapEditor.state;
const { tr, schema } = state;
tr.addMark(state.selection.from, state.selection.to, null, null, schema.mark('bold'));
// Correct
this.tiptapEditor.chain().toggleBold().focus().run();
},
}
};
</script>
```
#### Query editor's state
Use the `EditorStateObserver` renderless component to react to changes in the
editor's state, such as when the document or the selection changes. You can listen to
the following events:
- `docUpdate`
- `selectionUpdate`
- `transaction`
- `focus`
- `blur`
- `error`
Learn more about these events in [the Tiptap event guide](https://tiptap.dev/docs/editor/api/events).
```html
<script>
// Parts of the code have been hidden for brevity
import EditorStateObserver from './editor_state_observer.vue';
export default {
components: {
EditorStateObserver,
},
data() {
return {
error: null,
};
},
methods: {
displayError({ message }) {
this.error = message;
},
dismissError() {
this.error = null;
},
},
};
</script>
<template>
<editor-state-observer @error="displayError">
<gl-alert v-if="error" class="gl-mb-6" variant="danger" @dismiss="dismissError">
{{ error }}
</gl-alert>
</editor-state-observer>
</template>
```
### The Tiptap editor object
The Tiptap [Editor](https://tiptap.dev/docs/editor/api/editor) class manages
the editor's state and encapsulates all the business logic that powers
the rich text editor. The rich text editor constructs a new instance of this class and
provides all the necessary extensions to support
[GitLab Flavored Markdown](../../user/markdown.md).
#### Implement new extensions
Extensions are the building blocks of the rich text editor. You can learn how to implement
new ones by reading [the Tiptap guide](https://tiptap.dev/docs/editor/guide/custom-extensions).
We recommend checking the list of built-in [nodes](https://tiptap.dev/docs/editor/api/nodes) and
[marks](https://tiptap.dev/docs/editor/api/marks) before implementing a new extension
from scratch.
Store the rich text editor extensions in the `~/content_editor/extensions` directory.
When using a Tiptap built-in extension, wrap it in an ES6 module inside this directory:
```javascript
export { Bold as default } from '@tiptap/extension-bold';
```
Use the `extend` method to customize the Extension's behavior:
```javascript
import { HardBreak } from '@tiptap/extension-hard-break';
export default HardBreak.extend({
addKeyboardShortcuts() {
return {
'Shift-Enter': () => this.editor.commands.setHardBreak(),
};
},
});
```
#### Register extensions
Register the new extension in `~/content_editor/services/create_content_editor.js`. Import
the extension module and add it to the `builtInContentEditorExtensions` array:
```javascript
import Emoji from '../extensions/emoji';
const builtInContentEditorExtensions = [
Code,
CodeBlockHighlight,
Document,
Dropcursor,
Emoji,
// Other extensions
]
```
### The Markdown serializer
The Markdown serializer transforms a Markdown string into a
[ProseMirror document](https://prosemirror.net/docs/guide/#doc) and vice versa.
#### Deserialization
Deserialization is the process of converting Markdown to a ProseMirror document.
We take advantage of ProseMirror's
[HTML parsing and serialization capabilities](https://prosemirror.net/docs/guide/#schema.serialization_and_parsing)
by first rendering the Markdown as HTML using the [Markdown API endpoint](../../api/markdown.md):
```mermaid
sequenceDiagram
participant A as rich text editor
participant E as Tiptap object
participant B as Markdown serializer
participant C as Markdown API
participant D as ProseMirror parser
A->>B: deserialize(markdown)
B->>C: render(markdown)
C-->>B: html
B->>D: to document(html)
D-->>A: document
A->>E: setContent(document)
```
Deserializers live in the extension modules. Read Tiptap documentation about
[`parseHTML`](https://tiptap.dev/docs/editor/guide/custom-extensions#parse-html) and
[`addAttributes`](https://tiptap.dev/docs/editor/guide/custom-extensions#attributes) to
learn how to implement them. The Tiptap API is a wrapper around ProseMirror's
[schema spec API](https://prosemirror.net/docs/ref/#model.SchemaSpec).
#### Serialization
Serialization is the process of converting a ProseMirror document to Markdown. The rich text
editor uses [`prosemirror-markdown`](https://github.com/ProseMirror/prosemirror-markdown)
to serialize documents. We recommend reading the
[MarkdownSerializer](https://github.com/ProseMirror/prosemirror-markdown#class-markdownserializer)
and [MarkdownSerializerState](https://github.com/ProseMirror/prosemirror-markdown#class-markdownserializerstate)
classes documentation before implementing a serializer:
```mermaid
sequenceDiagram
participant A as rich text editor
participant B as Markdown serializer
participant C as ProseMirror Markdown
A->>B: serialize(document)
B->>C: serialize(document, serializers)
C-->>A: Markdown string
```
`prosemirror-markdown` requires implementing a serializer function for each content type supported
by the rich text editor. We implement serializers in `~/content_editor/services/markdown_serializer.js`.
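For illustration, serializer entries follow the `prosemirror-markdown` conventions: node serializers are functions, and mark serializers are configuration objects. This is a simplified sketch with illustrative node and attribute names; the real serializers in `markdown_serializer.js` handle many more cases.

```javascript
// Simplified sketch of prosemirror-markdown serializer entries (illustrative only)
const nodes = {
  // Render a code block back to fenced Markdown
  codeBlock(state, node) {
    state.write(`\`\`\`${node.attrs.language || ''}\n`);
    state.text(node.textContent, false);
    state.ensureNewLine();
    state.write('```');
    state.closeBlock(node);
  },
};

const marks = {
  bold: {
    open: '**',
    close: '**',
    mixable: true,
    expelEnclosingWhitespace: true,
  },
};
```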
---
stage: none
group: unassigned
info: Any user with at least the Maintainer role can merge updates to this content.
  For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Migration to Vue 3
---
The migration from Vue 2 to 3 is tracked in epic [&6252](https://gitlab.com/groups/gitlab-org/-/epics/6252).
To ease migration to Vue 3.x, we have added [ESLint rules](https://gitlab.com/gitlab-org/frontend/eslint-plugin/-/merge_requests/50)
that prevent us from using the following deprecated features in the codebase.
## Vue filters
**Why?**
Filters [are removed](https://github.com/vuejs/rfcs/blob/master/active-rfcs/0015-remove-filters.md) from the Vue 3 API completely.
**What to use instead**
Component's computed properties / methods or external helpers.
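For example, a formatting filter can be replaced with a computed property. This is a minimal sketch with illustrative names:

```javascript
// Instead of `{{ bytes | humanSize }}` with a global filter,
// expose the formatted value as a computed property
export default {
  props: {
    bytes: {
      type: Number,
      required: true,
    },
  },
  computed: {
    humanSize() {
      return `${(this.bytes / 1024).toFixed(1)} KiB`;
    },
  },
};
```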
## Event hub
**Why?**
The `$on`, `$once`, and `$off` methods [are removed](https://github.com/vuejs/rfcs/blob/master/active-rfcs/0020-events-api-change.md) from the Vue instance, so in Vue 3 a Vue instance can no longer be used to create an event hub.
**When to use**
If you are in a Vue app that doesn't use any event hub, try to avoid adding a new one unless absolutely necessary. For example, if you need a child component to react to its parent's event, it's preferred to pass a prop down. Then, use the watch property on that prop in the child component to create the desired side effect.
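A minimal sketch of the prop-plus-watcher approach, with illustrative names:

```javascript
// Child component: reacts to a prop change instead of listening on an event hub
export default {
  props: {
    refreshSignal: {
      type: Number,
      required: true,
    },
  },
  watch: {
    refreshSignal() {
      // Side effect that an event hub listener would previously have triggered
      this.fetchData();
    },
  },
  methods: {
    fetchData() {
      // ...
    },
  },
};
```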
If you need cross-component communication (between different Vue apps), then perhaps introducing a hub is the right decision.
**What to use instead**
We have created a factory that you can use to instantiate a new [mitt](https://github.com/developit/mitt)-like event hub.
This makes it easier to migrate existing event hubs to the new recommended approach, or
to create new ones.
```javascript
import createEventHub from '~/helpers/event_hub_factory';
export default createEventHub();
```
Event hubs created with the factory expose the same methods as Vue 2 event hubs (`$on`, `$once`, `$off` and
`$emit`), making them backward compatible with our previous approach.
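Usage stays the same as with a Vue 2 event hub. A minimal sketch, assuming an `event_hub.js` module like the one above; the module path and event name are illustrative:

```javascript
import eventHub from '~/some_feature/event_hub';

// Subscribe in one component...
eventHub.$on('issue-updated', (issue) => {
  // react to the update
});

// ...and emit from another
eventHub.$emit('issue-updated', { id: 1 });
```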
## \<template functional>
**Why?**
In Vue 3, `{ functional: true }` option [is removed](https://github.com/vuejs/rfcs/blob/functional-async-api-change/active-rfcs/0007-functional-async-api-change.md) and `<template functional>` is no longer supported.
**What to use instead**
Functional components must be written as plain functions:
```javascript
import { h } from 'vue'
const FunctionalComp = (props, context) => {
return h('div', `Hello! ${props.name}`)
}
```
It is not recommended to replace stateful components with functional components unless you absolutely need a performance improvement right now. In Vue 3, performance gains for functional components are negligible.
## Old slots syntax with `slot` attribute
**Why?**
In Vue 2.6, the `slot` attribute was already deprecated in favor of the `v-slot` directive. Using the `slot` attribute is still allowed, and sometimes we prefer it because it simplifies unit tests (with the old syntax, slots are rendered on `shallowMount`). However, in Vue 3 the old syntax is no longer supported.
**What to use instead**
The syntax with the `v-slot` directive. To fix rendering of slots in `shallowMount`, we need to explicitly stub the child component that receives the slots.
```html
<!-- MyAwesomeComponent.vue -->
<script>
import SomeChildComponent from './some_child_component.vue'
export default {
components: {
SomeChildComponent
}
}
</script>
<template>
<div>
<h1>Hello GitLab!</h1>
<some-child-component>
<template #header>
Header content
</template>
</some-child-component>
</div>
</template>
```
```javascript
// MyAwesomeComponent.spec.js
import SomeChildComponent from '~/some_child_component.vue'
shallowMount(MyAwesomeComponent, {
stubs: {
SomeChildComponent
}
})
```
## Props default function `this` access
**Why?**
In Vue 3, props default value factory functions no longer have access to `this`
(the component instance).
**What to use instead**
Write a computed prop that resolves the desired value from other props. This
will work in both Vue 2 and 3.
```html
<script>
export default {
props: {
metric: {
type: String,
required: true,
},
title: {
type: String,
required: false,
default: null,
},
},
computed: {
actualTitle() {
return this.title ?? this.metric;
},
},
}
</script>
<template>
<div>{{ actualTitle }}</div>
</template>
```
[In Vue 3](https://v3-migration.vuejs.org/breaking-changes/props-default-this.html),
the props default value factory is passed the raw props as an argument, and can
also access injections.
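In Vue 3-only code, the same default can therefore be expressed directly in the prop definition. A sketch based on the linked migration guide:

```javascript
export default {
  props: {
    metric: {
      type: String,
      required: true,
    },
    title: {
      type: String,
      required: false,
      // In Vue 3, the default factory receives the raw props instead of `this`
      default: (rawProps) => rawProps.metric,
    },
  },
};
```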
## Handling libraries that do not work with `@vue/compat`
**Problem**
Some libraries rely on Vue.js 2 internals. They might not work with `@vue/compat`, so we need a strategy to use an updated version with Vue.js 3 while maintaining compatibility with the current codebase.
**Goals**
- We should make as few changes as possible to existing code to support new libraries. Instead, we should **add** new code, which acts as a **facade**, making the new version compatible with the old one
- Switching between new and old versions should be hidden inside tooling (webpack / jest) and should not be exposed to the code
- All facades specific to migration should live in the same directory to simplify future migration steps
### Step-by-step migration
In this step-by-step guide, we migrate the [VueApollo Demo](https://gitlab.com/gitlab-org/frontend/vue3-migration-vue-apollo/-/tree/main/src/vue3compat) project. It allows us to focus on migration specifics while avoiding the nuances of the complex tooling setup in the GitLab project. The demo project intentionally uses the same tooling as GitLab:
- webpack
- yarn
- Vue.js + VueApollo
#### Initial state
Right after cloning, you can run the [VueApollo Demo](https://gitlab.com/gitlab-org/frontend/vue3-migration-vue-apollo/-/tree/main/src/vue3compat) with Vue.js 2 using `yarn serve`, or with Vue.js 3 (`compat` build) using `yarn serve:vue3`. However, the latter immediately crashes:
```javascript
Uncaught TypeError: Cannot read properties of undefined (reading 'loading')
```
VueApollo v3 (used for Vue.js 2) fails to initialize in the Vue.js `compat` build.
{{< alert type="note" >}}
While stubbing `Vue.version` solves the VueApollo-related issues in the demo project, reactivity is still lost in specific scenarios, so an upgrade is still needed
{{< /alert >}}
#### Step 1. Perform upgrade according to library docs
According to the [VueApollo v4 installation guide](https://v4.apollo.vuejs.org/guide/installation.html), we need to install `@vue/apollo-option` (this package provides VueApollo support for the Options API) and make changes to our application:
```diff
--- a/src/index.js
+++ b/src/index.js
@@ -1,19 +1,17 @@
-import Vue from "vue";
-import VueApollo from "vue-apollo";
+import { createApp, h } from "vue";
+import { createApolloProvider } from "@vue/apollo-option";
import Demo from "./components/Demo.vue";
import createDefaultClient from "./lib/graphql";
-Vue.use(VueApollo);
-
-const apolloProvider = new VueApollo({
+const apolloProvider = createApolloProvider({
defaultClient: createDefaultClient(),
});
-new Vue({
- el: "#app",
- apolloProvider,
- render(h) {
+const app = createApp({
+ render() {
return h(Demo);
},
});
+app.use(apolloProvider);
+app.mount("#app");
```
You can view these changes in the [01-upgrade-vue-apollo](https://gitlab.com/gitlab-org/frontend/vue3-migration-vue-apollo/-/compare/main...01-upgrade-vue-apollo) branch of the demo project.
#### Step 2. Addressing differences in augmenting applications in Vue.js 2 and 3
In Vue.js 2 tooling like `VueApollo` is initialized in a "lazy" fashion:
```javascript
// We are registering VueApollo "handler" to handle some data LATER
Vue.use(VueApollo)
// ...
// apolloProvider is provided at app instantiation,
// previously registered VueApollo will handle that
new Vue({ /* ... */, apolloProvider })
```
In Vue.js 3, both steps are merged into one: we immediately register the handler and pass the configuration:
```javascript
app.use(apolloProvider)
```
In order to backport this behavior, we need the following knowledge:
- We can access extra options provided to the Vue instance via `$options`, so an extra `apolloProvider` is visible as `this.$options.apolloProvider`
- We can access the current `app` (in the Vue.js 3 sense) on the Vue instance via `this.$.appContext.app`
{{< alert type="note" >}}
We're relying on a non-public Vue.js 3 API in this case. However, since `@vue/compat` builds are expected to be available only for the 3.2.x branch, the risk of this API changing is reduced
{{< /alert >}}
With this knowledge, we can move the initialization of our tooling to the earliest possible point in Vue 2: the `beforeCreate()` lifecycle hook:
```diff
--- a/src/index.js
+++ b/src/index.js
@@ -1,4 +1,4 @@
-import { createApp, h } from "vue";
+import Vue from "vue";
import { createApolloProvider } from "@vue/apollo-option";
import Demo from "./components/Demo.vue";
@@ -8,10 +8,13 @@ const apolloProvider = createApolloProvider({
defaultClient: createDefaultClient(),
});
-const app = createApp({
- render() {
+new Vue({
+ el: "#app",
+ apolloProvider,
+ render(h) {
return h(Demo);
},
+ beforeCreate() {
+ this.$.appContext.app.use(this.$options.apolloProvider);
+ },
});
-app.use(apolloProvider);
-app.mount("#app");
```
You can view these changes in the [02-bring-back-new-vue](https://gitlab.com/gitlab-org/frontend/vue3-migration-vue-apollo/-/compare/01-upgrade-vue-apollo...02-bring-back-new-vue) branch of the demo project.
#### Step 3. Recreating `VueApollo` class
Vue.js 3 libraries (and Vue.js itself) prefer factories like `createApp` over classes (previously `new Vue`).
The `VueApollo` class served two purposes:
- constructor for creating `apolloProvider`
- installation of apollo-related logic in components
We can reuse the `Vue.use(VueApollo)` call that already existed in our codebase to hide our mixin there and avoid modifying our app code:
```diff
--- a/src/index.js
+++ b/src/index.js
@@ -4,7 +4,26 @@ import { createApolloProvider } from "@vue/apollo-option";
import Demo from "./components/Demo.vue";
import createDefaultClient from "./lib/graphql";
-const apolloProvider = createApolloProvider({
+class VueApollo {
+ constructor(...args) {
+ return createApolloProvider(...args);
+ }
+
+ // called by Vue.use
+ static install() {
+ Vue.mixin({
+ beforeCreate() {
+ if (this.$options.apolloProvider) {
+ this.$.appContext.app.use(this.$options.apolloProvider);
+ }
+ },
+ });
+ }
+}
+
+Vue.use(VueApollo);
+
+const apolloProvider = new VueApollo({
defaultClient: createDefaultClient(),
});
@@ -14,7 +33,4 @@ new Vue({
render(h) {
return h(Demo);
},
- beforeCreate() {
- this.$.appContext.app.use(this.$options.apolloProvider);
- },
});
```
You can view these changes in the [03-recreate-vue-apollo](https://gitlab.com/gitlab-org/frontend/vue3-migration-vue-apollo/-/compare/02-bring-back-new-vue...03-recreate-vue-apollo) branch of the demo project.
#### Step 4. Moving `VueApollo` class to a separate file and setting up an alias
Now we have almost the same code (excluding the import) as in the Vue.js 2 version.
We will move our facade to a separate file and set up `webpack` to resolve `vue-apollo` imports to it when using Vue.js 3:
```diff
--- a/src/index.js
+++ b/src/index.js
@@ -1,5 +1,5 @@
import Vue from "vue";
-import { createApolloProvider } from "@vue/apollo-option";
+import VueApollo from "vue-apollo";
import Demo from "./components/Demo.vue";
import createDefaultClient from "./lib/graphql";
diff --git a/webpack.config.js b/webpack.config.js
index 6160d3f..b8b955f 100644
--- a/webpack.config.js
+++ b/webpack.config.js
@@ -12,6 +12,7 @@ if (USE_VUE3) {
VUE3_ALIASES = {
vue: "@vue/compat",
+ "vue-apollo": path.resolve("src/vue3compat/vue-apollo"),
};
}
```
(moving the `VueApollo` class from `index.js` to `vue3compat/vue-apollo.js` as a default export is omitted for clarity)
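For reference, a minimal sketch of what that extracted file could look like (the exact import list is an assumption; the class body is the one from step 3):

```javascript
// src/vue3compat/vue-apollo.js
import Vue from "vue";
import { createApolloProvider } from "@vue/apollo-option";

export default class VueApollo {
  constructor(...args) {
    // Keep the Vue.js 2-style `new VueApollo(...)` call working by returning
    // the apolloProvider created by the Vue.js 3 factory
    return createApolloProvider(...args);
  }

  // called by Vue.use(VueApollo)
  static install() {
    Vue.mixin({
      beforeCreate() {
        if (this.$options.apolloProvider) {
          this.$.appContext.app.use(this.$options.apolloProvider);
        }
      },
    });
  }
}
```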
You can view these changes in the [04-add-webpack-alias](https://gitlab.com/gitlab-org/frontend/vue3-migration-vue-apollo/-/compare/03-recreate-vue-apollo...04-add-webpack-alias) branch of the demo project.
#### Step 5. Observe the results
At this point, you should again be able to run **both** the Vue.js 2 version with `yarn serve` and the Vue.js 3 one with `yarn serve:vue3`.
The [final MR](https://gitlab.com/gitlab-org/frontend/vue3-migration-vue-apollo/-/merge_requests/1/diffs) with all changes from the previous steps shows no changes to `index.js` (application code), which was our goal.
### Applying this approach in the GitLab project
In the [commit adding VueApollo v4 support](https://gitlab.com/gitlab-org/gitlab/-/commit/e0af7e6479695a28a4fe85a88f90815aa3ce2814) we can see additional nuances not covered by the step-by-step guide:
- We might need to add additional imports to our facades (our code in GitLab uses the `ApolloMutation` component).
- We need to update aliases not only for webpack but also for Jest, so that our tests can also consume our facade (see the sketch below).
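As a hedged sketch, a Jest alias mirroring the webpack one might use `moduleNameMapper` (the config file and paths below are assumptions, not the actual GitLab configuration):

```javascript
// jest.config.js
module.exports = {
  moduleNameMapper: {
    // When testing against Vue.js 3, resolve `vue-apollo` imports to the facade
    "^vue-apollo$": "<rootDir>/src/vue3compat/vue-apollo",
  },
};
```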
## Unit testing
For more information about implementing unit tests or fixing tests that fail while using Vue 3,
read the [Vue 3 testing guide](../testing_guide/testing_vue3.md).
---
title: Design tokens
---
GitLab uses design tokens to maintain a single source of truth that, through automation, can be formatted for different uses.
- See [Pajamas](https://design.gitlab.com/product-foundations/design-tokens) for an overview of design tokens.
- See [GitLab UI](https://gitlab.com/gitlab-org/gitlab-services/design.gitlab.com/-/blob/main/packages/gitlab-ui/doc/contributing/design_tokens.md) for details about creating and implementing design tokens.
---
title: Diagrams.net integration
---
In [wikis](../../user/project/wiki/markdown.md#diagramsnet-editor) you can use the diagrams.net editor to
create diagrams. The diagrams.net editor runs as a separate web service outside the GitLab
application, and GitLab instance administrators can
[configure the URL](../../administration/integration/diagrams_net.md) that points to this service.
This page describes the key implementation aspects of this integration on the frontend. The diagrams.net
integration implementation is located in the
[`drawio_editor.js`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/assets/javascripts/drawio/drawio_editor.js)
file of the GitLab repository.
## IFrame sandbox
The GitLab application embeds the diagrams.net editor inside an iframe. The iframe creates a
sandboxed environment that prevents the diagrams.net editor from accessing the GitLab
application's browsing context, thus protecting user data and enhancing security.
The diagrams.net editor and the GitLab application communicate using the
[postMessage](https://developer.mozilla.org/en-US/docs/Web/API/Window/postMessage) API.
```mermaid
sequenceDiagram
Diagrams.net->>+GitLab application: message('configure')
GitLab application-->>Diagrams.net: action('configure', config)
```
The GitLab application receives messages from the Diagrams.net editor that
contain a serialized JavaScript object. This object has the following shape:
```typescript
type Message = {
event: string;
value?: string;
data?: string;
}
```
The `event` property tells the GitLab application how it should
react to the message. The diagrams.net editor sends the following events:
- `configure`: When the GitLab application receives this message, it sends back
a `configure` action to set the color theme of the diagrams.net editor.
- `init`: When the GitLab application receives this message,
it can upload an existing diagram using the `load` action.
- `exit`: The GitLab application closes and disposes the
diagrams.net editor.
- `prompt`: This event has a `value` attribute with the
  diagram's filename. If the `value` property is empty,
  the GitLab application should send a `prompt` requesting the user to enter a filename.
- `export`: This event has a `data` attribute that contains
  the diagram created by the user in SVG format.
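The following is a simplified, illustrative sketch of how the GitLab side could dispatch on these events; it is not the actual `drawio_editor.js` implementation:

```javascript
// Illustrative only — the real integration lives in drawio_editor.js
window.addEventListener('message', (event) => {
  // The real implementation must also verify the message origin.
  const message = JSON.parse(event.data);

  switch (message.event) {
    case 'configure':
      // Send back the 'configure' action with the color theme.
      break;
    case 'init':
      // Upload an existing diagram with the 'load' action.
      break;
    case 'prompt':
      // message.value holds the filename; if empty, ask the user for one.
      break;
    case 'export':
      // message.data contains the diagram in SVG format.
      break;
    case 'exit':
      // Close and dispose the diagrams.net editor.
      break;
    default:
      break;
  }
});
```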
## Markdown Editor integration
The user can start the diagrams.net editor from the Markdown
Editor or the [Content Editor](content_editor.md). The diagrams.net editor integration doesn't
know implementation details about these editors. Instead, it exposes a protocol or interface that serves
as a façade to decouple the editor implementation details from the diagrams.net integration.
```mermaid
classDiagram
DiagramsnetIntegration ..> EditorFacade
EditorFacade <|-- ContentEditorFacade
EditorFacade <|-- MarkdownEditorFacade
ContentEditorFacade ..> ContentEditor
MarkdownEditorFacade ..> MarkdownEditor
class EditorFacade {
<<Interface>>
+uploadDiagram(filename, diagramSvg)
+insertDiagram(uploadResults)
+updateDiagram(diagramMarkdown, uploadResults)
+getDiagram()
}
```
The diagrams.net integration calls these methods to upload a diagram to the GitLab
application or to get a diagram embedded as an uploaded resource in the Markdown Editor.
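A hedged skeleton of such a facade, using only the method names from the interface above (the factory name and bodies are illustrative assumptions):

```javascript
// Illustrative skeleton — the real facades live next to each editor's code
export const createMarkdownEditorFacade = (editor) => ({
  uploadDiagram(filename, diagramSvg) {
    // Upload the SVG to the GitLab application and resolve with the upload results.
  },
  insertDiagram(uploadResults) {
    // Insert a reference to the uploaded diagram at the current position.
  },
  updateDiagram(diagramMarkdown, uploadResults) {
    // Replace the existing diagram reference with the newly uploaded one.
  },
  getDiagram() {
    // Return the currently selected diagram, if any, so the editor can load it.
  },
});
```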
---
title: Migrating from Vuex
---
[Vuex is deprecated in GitLab](vuex.md#deprecated). If you have an existing Vuex store, you should strongly consider migrating.
## Why?
We have defined the [GraphQL API](../../api/graphql/_index.md) as the primary choice for all user-facing features.
We can safely assume that wherever GraphQL is present, the Apollo Client will be too.
We [do not want to use Vuex with Apollo](graphql.md#using-with-vuex), so the number of VueX stores
will naturally decline over time as we move from the REST API to GraphQL.
This section gives guidelines and methods to translate an existing VueX store to
pure Vue and Apollo, or how to rely less on VueX.
## How?
[Pick your preferred state manager solution](state_management.md) before proceeding with the migration.
- If you plan to use Pinia [follow this guide](pinia.md#migrating-from-vuex).
- If you plan to use Apollo Client for all state management, then [follow the guide below](#migrate-to-vue-managed-state-and-apollo-client).
### Migrate to Vue-managed state and Apollo Client
As a whole, we want to understand how complex our change will be. Sometimes, we only have a few properties that are truly worth being stored in a global state and sometimes they can safely all be extracted to pure `Vue`. `VueX` properties generally fall into one of these categories:
- Static properties
- Reactive mutable properties
- Getters
- API data
Therefore, the first step is to read the current VueX state and determine the category of each property.
At a high level, we could map each category with an equivalent non-VueX code pattern:
- Static properties: Provide/Inject from Vue API.
- Reactive mutable properties: Vue events and props, Apollo Client.
- Getters: utility functions, Apollo `update` hook, computed properties.
- API data: Apollo Client.
Let's go through an example. In each section we refer to this state and slowly go through migrating it fully:
```javascript
// state.js AKA our store
export default ({ blobPath = '', summaryEndpoint = '', suiteEndpoint = '' }) => ({
blobPath,
summaryEndpoint,
suiteEndpoint,
testReports: {},
selectedSuiteIndex: null,
isLoading: false,
errorMessage: null,
limit: 10,
pageInfo: {
page: 1,
perPage: 20,
},
});
```
### How to migrate static values
The easiest type of values to migrate are static values, either:
- Client-side constants: If the static value is a client-side constant, it may have been implemented
in the store for easy access by other state properties or methods. However, it is generally
a better practice to add such values to a `constants.js` file and import it when needed.
- Rails-injected dataset: These are values that we may need to provide to our Vue apps.
They are static, so adding them to the VueX store is not necessary and it could instead
be done easily through the `provide/inject` Vue API, which would be equivalent but without the VueX overhead. This should **only** be injected inside the top-most JS file that mounts our component.
If we take a look at our example above, we can already see that two properties contain `Endpoint` in their name, which probably means that these come from our Rails dataset. To confirm this, we would search the codebase for these properties and see where they are defined; in our example, they do indeed come from the dataset. Additionally, `blobPath` is also a static property, and a little less obvious here is that `pageInfo` is actually a constant! It is never modified and is only used as a default value inside our getter:
```javascript
// state.js AKA our store
export default ({ blobPath = '', summaryEndpoint = '', suiteEndpoint = '' }) => ({
limit: 10, // Static - Constant
blobPath, // Static - Dataset
summaryEndpoint, // Static - Dataset
suiteEndpoint, // Static - Dataset
testReports: {},
selectedSuiteIndex: null,
isLoading: false,
errorMessage: null,
pageInfo: { // Static - Constant
page: 1, // Static - Constant
perPage: 20, // Static - Constant
},
});
```
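As a hedged sketch (file and component names are illustrative), the dataset-backed values can be provided once where the app is mounted:

```javascript
// index.js — the top-most file that mounts the component (illustrative)
import Vue from 'vue';
import TestReports from './components/test_reports.vue';

export default (el) =>
  new Vue({
    el,
    provide: {
      // Read once from the Rails-injected dataset instead of the VueX store
      blobPath: el.dataset.blobPath,
      summaryEndpoint: el.dataset.summaryEndpoint,
      suiteEndpoint: el.dataset.suiteEndpoint,
    },
    render(h) {
      return h(TestReports);
    },
  });
```

Any component that needs these values can then declare `inject: ['blobPath', 'summaryEndpoint', 'suiteEndpoint']` instead of reading them from the store, while `limit` and `pageInfo` can move to a `constants.js` file.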
### How to migrate reactive mutable values
These values are especially useful when used by a lot of different components, so we can first evaluate how many reads and writes each property gets, and how far apart these are from each other. The fewer reads there are and the closer together they live, the easier it will be to remove these properties in favor of native Vue props and events.
#### Simple read/write values
If we go back to our example, `selectedSuiteIndex` is only used by **one component** and also **once inside a getter**. Additionally, this getter is only used once itself! It would be quite easy to translate this logic to Vue because this could become a `data` property on the component instance. For the getter, we can use a computed property instead, or a method on the component that returns the right item because we will have access to the index there as well. This is a perfect example of how the VueX store here complicates the application by adding a lot of abstractions when really everything could live inside the same component.
Luckily, in our example all properties could live inside the same component. However, there are cases where that is not possible. When this happens, we can use Vue events and props to communicate between sibling components. Store the data in question inside a parent component that should know about the state; when a child component wants to update that data, it can `$emit` an event with the new value and let the parent do the update. Then, by cascading props down to all of its children, all instances of the sibling components share the same data.
Sometimes, it can feel that events and props are cumbersome, especially in very deep component trees. However, it is quite important to be aware that this is mostly an inconvenience issue and not an architectural flaw or problem to fix. Passing down props, even deeply nested, is a very acceptable pattern for cross-components communication.
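A hedged sketch of how that could look once moved into the component (the component shape is illustrative, and it assumes `testReports` also becomes local data or an Apollo query result):

```javascript
// test_reports.vue (script section, illustrative)
export default {
  data() {
    return {
      // Previously state.selectedSuiteIndex
      selectedSuiteIndex: null,
      testReports: {},
    };
  },
  computed: {
    // Previously the getSelectedSuite getter
    selectedSuite() {
      return this.testReports?.test_suites?.[this.selectedSuiteIndex] || {};
    },
  },
  methods: {
    // Previously a mutation/action pair
    selectSuite(index) {
      this.selectedSuiteIndex = index;
    },
  },
};
```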
#### Shared read/write values
Let's assume that we have a property in the store that is used by multiple components for reads and writes that are either so numerous or so far apart that Vue props and events seem like a bad solution. Instead, we use Apollo client-side resolvers. This section requires knowledge of [Apollo Client](graphql.md), so feel free to check the Apollo documentation as needed.
First we need to set up our Vue app to use `VueApollo`. Then, when creating our Apollo client, we pass it the `resolvers` and `typeDefs` (defined later):
```javascript
import { resolvers } from "./graphql/settings.js"
import typeDefs from './graphql/typedefs.graphql';
...
const apolloProvider = new VueApollo({
  defaultClient: createDefaultClient(
    resolvers, // To be written soon
    { typeDefs }, // We are going to create this in a sec
  ),
});
```
For our example, let's call our field `app.status`. What we need is to define queries and mutations that use the `@client` directive. Let's create them right now:
```graphql
# get_app_status.query.graphql
query getAppStatus {
app @client {
status
}
}
```
```graphql
# update_app_status.mutation.graphql
mutation updateAppStatus($appStatus: String) {
updateAppStatus(appStatus: $appStatus) @client
}
```
For fields that **do not exist in our schema**, we need to set up `typeDefs`. For example:
```graphql
# typedefs.graphql
type TestReportApp {
status: String!
}
extend type Query {
app: TestReportApp
}
```
Now we can write our resolvers so that we can update the field with our mutation:
```javascript
// settings.js
export const resolvers = {
Mutation: {
// appStatus is the argument to our mutation
updateAppStatus: (_, { appStatus }, { cache }) => {
cache.writeQuery({
query: getAppStatus,
data: {
app: {
__typename: 'TestReportApp',
status: appStatus,
},
},
});
},
}
}
```
For querying, this works without any additional instructions because the cache behaves like any `Object`: querying for `app { status }` is equivalent to `app.status`. However, we need to either write a "default" `writeQuery` (to define the very first value our field will have) or set up the [`typePolicies` for our `cacheConfig`](graphql.md#local-state-with-apollo) to provide this default value.
So now when we want to read from this value, we can use our local query. When we need to update it, we can call the mutation and pass the new value as an argument.
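A hedged sketch of a component reading and writing this client-side field (the import paths are assumptions; the query and mutation documents are the ones defined above):

```javascript
// some_component.vue (script section, illustrative)
import getAppStatus from './graphql/get_app_status.query.graphql';
import updateAppStatus from './graphql/update_app_status.mutation.graphql';

export default {
  apollo: {
    // `data.app` from the @client query is mapped to `this.app`
    app: {
      query: getAppStatus,
    },
  },
  methods: {
    setAppStatus(appStatus) {
      // Runs the client-side resolver, which writes the new value to the cache
      return this.$apollo.mutate({
        mutation: updateAppStatus,
        variables: { appStatus },
      });
    },
  },
};
```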
#### Network-related values
There are values like `isLoading` and `errorMessage` which are tied to the network request state. These are read/write properties, but will easily be replaced later with Apollo Client's own capabilities without us doing any extra work:
```javascript
// state.js AKA our store
export default ({ blobPath = '', summaryEndpoint = '', suiteEndpoint = '' }) => ({
blobPath, // Static - Dataset
summaryEndpoint, // Static - Dataset
suiteEndpoint, // Static - Dataset
testReports: {},
selectedSuiteIndex: null, // Mutable -> data property
isLoading: false, // Mutable -> tied to network
errorMessage: null, // Mutable -> tied to network
pageInfo: { // Static - Constant
page: 1, // Static - Constant
perPage: 20, // Static - Constant
},
});
```
### How to migrate getters
Getters have to be reviewed case by case, but a general guideline is that it is usually possible to write a pure JavaScript util function that takes as arguments the state values previously used inside the getter, and returns whatever value we want. Consider the following getter:
```javascript
// getters.js
export const getSelectedSuite = (state) =>
state.testReports?.test_suites?.[state.selectedSuiteIndex] || {};
```
All that we do here is reference two state values, which can both become arguments to a function:
```javascript
//new_utils.js
export const getSelectedSuite = (testReports, selectedSuiteIndex) =>
testReports?.test_suites?.[selectedSuiteIndex] || {};
```
This new util can then be imported and used as it previously was, but directly inside the component. Also, most of the specs for the getters can be ported to the utils quite easily because the logic is preserved.
### How to migrate API data
Our last property is called `testReports` and it is fetched via an `axios` call to the API. We assume that we are in a pure REST application and that GraphQL data is not yet available:
```javascript
// actions.js
export const fetchSummary = ({ state, commit, dispatch }) => {
dispatch('toggleLoading');
return axios
.get(state.summaryEndpoint)
.then(({ data }) => {
commit(types.SET_SUMMARY, data);
})
.catch(() => {
createAlert({
message: s__('TestReports|There was an error fetching the summary.'),
});
})
.finally(() => {
dispatch('toggleLoading');
});
};
```
We have two options here. If this action is only used once, there is nothing preventing us from just moving all of this code from the `actions.js` file to the component that does the fetching. Then, it would be easy to remove all the state-related code in favor of `data` properties. In that case, `isLoading` and `errorMessage` would both live along with it because they are only used there.
If we are reusing this function multiple times (or plan to), then Apollo Client can be leveraged to do what it does best: network calls and caching. In this section, we assume Apollo Client knowledge and that you know how to set it up, but feel free to read through [the GraphQL documentation](graphql.md).
We can use a local GraphQL query (with an `@client` directive) to structure how we want to receive the data, and then use a client-side resolver to tell Apollo Client how to resolve that query. We can take a look at our REST call in the browser network tab and determine which structure suits the use case. In our example, we could write our query like:
```graphql
query getTestReportSummary($fullPath: ID!, $iid: ID!, $endpoint: String!) {
project(fullPath: $fullPath){
id,
pipeline(iid: $iid){
id,
testReportSummary(endpoint: $endpoint) @client {
testSuites{
nodes{
name
totalTime,
# There are more fields here, but they aren't needed for our example
}
}
}
}
}
}
```
The structure here is arbitrary in the sense that we could write this however we want. It might be tempting to skip the `project.pipeline.testReportSummary` because this is not how the REST call is structured. However, by making the query structure compliant with the `GraphQL` API, we will not need to modify our query if we do decide to transition to `GraphQL` later, and can simply remove the `@client` directive. This also gives us **caching for free** because if we try to fetch the summary again for the same pipeline, Apollo Client knows that we already have the result!
Additionally, we are passing an `endpoint` argument to our field `testReportSummary`. This would not be necessary in pure `GraphQL`, but our resolver is going to need that information to make the `REST` call later.
Now we need to write a client-side resolver. When we mark a field with an `@client` directive, it is **not sent to the server**, and Apollo Client instead expects us to [define our own code to resolve the value](graphql.md#using-client-side-resolvers). We can write a client-side resolver for `testReportSummary` inside the `cacheConfig` object that we pass to Apollo Client. We want this resolver to make the Axios call and return whatever data structure we want. Note that this is also the perfect place to move a getter if it was always used when accessing the API data or massaging the data structure:
```javascript
// graphql_config.js
import axios from '~/lib/utils/axios_utils';

export const resolvers = {
  Query: {
    testReportSummary(_, { summaryEndpoint }) {
      return axios.get(summaryEndpoint).then(({ data }) => {
        // we could format/massage our data here instead of using a getter
        return data;
      });
    },
  },
};
```
Any time we make a call to the `testReportSummary @client` field, this resolver is executed and returns the result of the operation, which is essentially doing the same job as the `VueX` action did.
If we assume that our GraphQL call is stored inside a data property called `testReportSummary`, we can replace `isLoading` with `this.$apollo.queries.testReportSummary.loading` in any component that fires this query. Errors can be handled inside the `error` hook of the query.
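A hedged sketch of what that component query could look like (import paths, prop names, and the `update` mapping are assumptions for illustration):

```javascript
// test_reports.vue (script section, illustrative)
import { createAlert } from '~/alert';
import { s__ } from '~/locale';
import getTestReportSummary from './graphql/get_test_report_summary.query.graphql';

export default {
  inject: ['summaryEndpoint'],
  props: {
    fullPath: { type: String, required: true },
    pipelineIid: { type: String, required: true },
  },
  apollo: {
    testReportSummary: {
      query: getTestReportSummary,
      variables() {
        return {
          fullPath: this.fullPath,
          iid: this.pipelineIid,
          endpoint: this.summaryEndpoint,
        };
      },
      update(data) {
        // Unwrap the nested structure defined in the query above
        return data.project?.pipeline?.testReportSummary ?? {};
      },
      error() {
        createAlert({
          message: s__('TestReports|There was an error fetching the summary.'),
        });
      },
    },
  },
  computed: {
    isLoading() {
      return this.$apollo.queries.testReportSummary.loading;
    },
  },
};
```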
### Migration strategy
Now that we have gone through each type of data, let's review how to plan the transition from a VueX-based store to one without it. We want to avoid VueX and Apollo coexisting, so the less time both stores are available in the same context, the better. To minimize this overlap, we should start our migration by removing from the store everything that does not involve adding an Apollo store. Each of the following points could be its own MR:
1. Migrate away from static values, both the `Rails` dataset and client-side constants, and use `provide/inject` and `constants.js` files instead.
1. Replace simple read/write operations with either:
- `data` properties and `methods` if in a single component.
- `props` and `emits` if shared across a localized group of components.
1. Replace shared read/write operations with Apollo Client `@client` directives.
1. Replace network data with Apollo Client, either with actual GraphQL calls when available or by using client-side resolvers to make REST calls.
If it is impossible to quickly replace shared read/write operations or network data (for example in one or two milestones), consider making a different Vue component behind a feature flag that works exclusively with Apollo Client, and rename the current component that uses VueX with a `legacy-` prefix. The newer component might not implement all functionality right away, but we can add it progressively as we make MRs. This way, our legacy component exclusively uses VueX as a store and the new one only Apollo. After the new component has re-implemented all the logic, we can turn the feature flag on and ensure that it behaves as expected.
|
---
stage: none
group: unassigned
info: Any user with at least the Maintainer role can merge updates to this content.
For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Migrating from Vuex
breadcrumbs:
- doc
- development
- fe_guide
---
[Vuex is deprecated in GitLab](vuex.md#deprecated), if you have an existing Vuex store you should strongly consider migrating.
## Why?
We have defined the [GraphQL API](../../api/graphql/_index.md) as the primary choice for all user-facing features.
We can safely assume that whenever GraphQL is present, so will the Apollo Client.
We [do not want to use Vuex with Apollo](graphql.md#using-with-vuex), so the VueX stores count
will naturally decline over time as we move from the REST API to GraphQL.
This section gives guidelines and methods to translate an existing VueX store to
pure Vue and Apollo, or how to rely less on VueX.
## How?
[Pick your preferred state manager solution](state_management.md) before proceeding with the migration.
- If you plan to use Pinia [follow this guide](pinia.md#migrating-from-vuex).
- If you plan to use Apollo Client for all state management, then [follow the guide below](#migrate-to-vue-managed-state-and-apollo-client).
### Migrate to Vue-managed state and Apollo Client
As a whole, we want to understand how complex our change will be. Sometimes, we only have a few properties that are truly worth being stored in a global state and sometimes they can safely all be extracted to pure `Vue`. `VueX` properties generally fall into one of these categories:
- Static properties
- Reactive mutable properties
- Getters
- API data
Therefore, the first step is to read the current VueX state and determine the category of each property.
At a high level, we could map each category with an equivalent non-VueX code pattern:
- Static properties: Provide/Inject from Vue API.
- Reactive mutable properties: Vue events and props, Apollo Client.
- Getters: utility functions, Apollo `update` hook, computed properties.
- API data: Apollo Client.
Let's go through an example. In each section we refer to this state and slowly go through migrating it fully:
```javascript
// state.js AKA our store
export default ({ blobPath = '', summaryEndpoint = '', suiteEndpoint = '' }) => ({
blobPath,
summaryEndpoint,
suiteEndpoint,
testReports: {},
selectedSuiteIndex: null,
isLoading: false,
errorMessage: null,
limit : 10,
pageInfo: {
page: 1,
perPage: 20,
},
});
```
### How to migrate static values
The easiest type of values to migrate are static values, either:
- Client-side constants: If the static value is a client-side constant, it may have been implemented
in the store for easy access by other state properties or methods. However, it is generally
a better practice to add such values to a `constants.js` file and import it when needed.
- Rails-injected dataset: These are values that we may need to provide to our Vue apps.
They are static, so adding them to the VueX store is not necessary and it could instead
be done easily through the `provide/inject` Vue API, which would be equivalent but without the VueX overhead. This should **only** be injected inside the top-most JS file that mounts our component.
If we take a look at our example above, we can already see that two properties contain `Endpoint` in their name, which probably means that these come from our Rails dataset. To confirm this, we would search the codebase for these properties and see where they are defined, which is the case in our example. Additionally, `blobPath` is also a static property, and a little less obvious here is that `pageInfo` is actually a constant! It is never modified and is only used as a default value that we use inside our getter:
```javascript
// state.js AKA our store
export default ({ blobPath = '', summaryEndpoint = '', suiteEndpoint = '' }) => ({
limit
blobPath, // Static - Dataset
summaryEndpoint, // Static - Dataset
suiteEndpoint, // Static - Dataset
testReports: {},
selectedSuiteIndex: null,
isLoading: false,
errorMessage: null,
pageInfo: { // Static - Constant
page: 1, // Static - Constant
perPage: 20, // Static - Constant
},
});
```
### How to migrate reactive mutable values
These values are especially useful when used by a lot of different components, so we can first evaluate how many reads and writes each property gets, and how far apart these are from each other. The fewer reads there are and the closer together they live, the easier it will be to remove these properties in favor of native Vue props and events.
#### Simple read/write values
If we go back to our example, `selectedSuiteIndex` is only used by **one component** and also **once inside a getter**. Additionally, this getter is only used once itself! It would be quite easy to translate this logic to Vue because this could become a `data` property on the component instance. For the getter, we can use a computed property instead, or a method on the component that returns the right item because we will have access to the index there as well. This is a perfect example of how the VueX store here complicates the application by adding a lot of abstractions when really everything could live inside the same component.
Luckily, in our example all properties could live inside the same component. However, there are cases where it will not be possible. When this happens, we can use Vue events and props to communicate between sibling components. Store the data in question inside a parent component that should know about the state, and when a child component wants to write to the component, it can `$emit` an event with the new value and let the parent update. Then, by cascading props down to all of its children, all instances of the sibling components will share the same data.
Sometimes, it can feel that events and props are cumbersome, especially in very deep component trees. However, it is quite important to be aware that this is mostly an inconvenience issue and not an architectural flaw or problem to fix. Passing down props, even deeply nested, is a very acceptable pattern for cross-components communication.
#### Shared read/write values
Let's assume that we have a property in the store that is used by multiple components for read and writes that are either so numerous or far apart that Vue props and events seem like a bad solution. Instead, we use Apollo client-side resolvers. This section requires knowledge of [Apollo Client](graphql.md), so feel free to check the apollo details as needed.
First we need to set up our Vue app to use `VueApollo`. Then when creating our store, we pass the `resolvers` and `typedefs` (defined later) to the Apollo Client:
```javascript
import { resolvers } from "./graphql/settings.js"
import typeDefs from './graphql/typedefs.graphql';
...
const apolloProvider = new VueApollo({
defaultClient: createDefaultClient({
resolvers, // To be written soon
{ typeDefs }, // We are going to create this in a sec
}),
});
```
For our example, let's call our field `app.status`, and we need is to define queries and mutations that use the `@client` directives. Let's create them right now:
```javascript
// get_app_status.query.graphql
query getAppStatus {
app @client {
status
}
}
```
```javascript
// update_app_status.mutation.graphql
mutation updateAppStatus($appStatus: String) {
updateAppStatus(appStatus: $appStatus) @client
}
```
For fields that **do not exist in our schema**, we need to set up `typeDefs`. For example:
```javascript
// typedefs.graphql
type TestReportApp {
status: String!
}
extend type Query {
app: TestReportApp
}
```
Now we can write our resolvers so that we can update the field with our mutation:
```javascript
// settings.js
export const resolvers = {
Mutation: {
// appStatus is the argument to our mutation
updateAppStatus: (_, { appStatus }, { cache }) => {
cache.writeQuery({
query: getAppStatus,
data: {
app: {
__typename: 'TestReportApp',
status: appStatus,
},
},
});
},
}
}
```
For querying, this works without any additional instructions because it behaves like any `Object`, because querying for `app { status }` is equivalent to `app.status`. However, we need to write either a "default" `writeQuery` (to define the very first value our field will have) or we can set up the [`typePolicies` for our `cacheConfig`](graphql.md#local-state-with-apollo) to provide this default value.
So now when we want to read from this value, we can use our local query. When we need to update it, we can call the mutation and pass the new value as an argument.
#### Network-related values
There are values like `isLoading` and `errorMessage` which are tied to the network request state. These are read/write properties, but will easily be replaced later with Apollo Client's own capabilities without us doing any extra work:
```javascript
// state.js AKA our store
export default ({ blobPath = '', summaryEndpoint = '', suiteEndpoint = '' }) => ({
blobPath, // Static - Dataset
summaryEndpoint, // Static - Dataset
suiteEndpoint, // Static - Dataset
testReports: {},
selectedSuiteIndex: null, // Mutable -> data property
isLoading: false, // Mutable -> tied to network
errorMessage: null, // Mutable -> tied to network
pageInfo: { // Static - Constant
page: 1, // Static - Constant
perPage: 20, // Static - Constant
},
});
```
### How to migrate getters
Getters have to be reviewed case-by-case, but a general guideline is that it is highly possible to write a pure JavaScript util function that takes as an argument the state values we used to use inside the getter, and then return whatever value we want. Consider the following getter:
```javascript
// getters.js
export const getSelectedSuite = (state) =>
state.testReports?.test_suites?.[state.selectedSuiteIndex] || {};
```
All that we do here is reference two state values, which can both become arguments to a function:
```javascript
//new_utils.js
export const getSelectedSuite = (testReports, selectedSuiteIndex) =>
testReports?.test_suites?.[selectedSuiteIndex] || {};
```
This new util can then be imported and used as it previously was, but directly inside the component. Also, most of the specs for the getters can be ported to the utils quite easily because the logic is preserved.
### How to migrate API data
Our last property is called `testReports` and it is fetched via an `axios` call to the API. We assume that we are in a pure REST application and that GraphQL data is not yet available:
```javascript
// actions.js
export const fetchSummary = ({ state, commit, dispatch }) => {
dispatch('toggleLoading');
return axios
.get(state.summaryEndpoint)
.then(({ data }) => {
commit(types.SET_SUMMARY, data);
})
.catch(() => {
createAlert({
message: s__('TestReports|There was an error fetching the summary.'),
});
})
.finally(() => {
dispatch('toggleLoading');
});
};
```
We have two options here. If this action is only used once, there is nothing preventing us from just moving all of this code from the `actions.js` file to the component that does the fetching. Then, it would be easy to remove all the state related code in favor of `data` properties. In that case, `isLoading` and `errorMessages` would both live along with it because it's only used once.
If we are reusing this function multiple time (or plan to), then that Apollo Client can be leveraged to do what it does best: network calls and caching. In this section, we assume Apollo Client knowledge and that you know how to set it up, but feel free to read through [the GraphQL documentation](graphql.md).
We can use a local GraphQL query (with an `@client` directive) to structure how we want to receive the data, and then use a client-side resolver to tell Apollo Client how to resolve that query. We can take a look at our REST call in the browser network tab and determine which structure suits the use case. In our example, we could write our query like:
```graphql
query getTestReportSummary($fullPath: ID!, $iid: ID!, endpoint: String!) {
project(fullPath: $fullPath){
id,
pipeline(iid: $iid){
id,
testReportSummary(endpoint: $endpoint) @client {
testSuites{
nodes{
name
totalTime,
# There are more fields here, but they aren't needed for our example
}
}
}
}
}
}
```
The structure here is arbitrary in the sense that we could write this however we want. It might be tempting to skip the `project.pipeline.testReportSummary` because this is not how the REST call is structured. However, by making the query structure compliant with the `GraphQL` API, we will not need to modify our query if we do decide to transition to `GraphQL` later, and can simply remove the `@client` directive. This also gives us **caching for free** because if we try to fetch the summary again for the same pipeline, Apollo Client knows that we already have the result!
Additionally, we are passing an `endpoint` argument to our field `testReportSummary`. This would not be necessary in pure `GraphQL`, but our resolver is going to need that information to make the `REST` call later.
Now we need to write a client-side resolver. When we mark a field with an `@client` directive, it is **not sent to the server**, and Apollo Client instead expects us to [define our own code to resolve the value](graphql.md#using-client-side-resolvers). We can write a client-side resolver for `testReportSummary` inside the `cacheConfig` object that we pass to Apollo Client. We want this resolver to make the Axios call and return whatever data structure we want. That this is also the perfect place to transfer a getter if it was always used when accessing the API data or massaging the data structure:
```javascript
// graphql_config.js
export const resolvers = {
Query: {
testReportSummary(_, { summaryEndpoint }): {
return axios.get(summaryEndpoint).then(({ data }) => {
return data // we could format/massage our data here instead of using a getter
}
}
}
```
Any time we make a call to the `testReportSummary @client` field, this resolver is executed and returns the result of the operation, which is essentially doing the same job as the `VueX` action did.
If we assume that our GraphQL call is stored inside a data property called `testReportSummary`, we can replace `isLoading` with `this.$apollo.queries.testReportSummary.lodaing` in any component that fires this query. Errors can be handled inside the `error` hook of the Query.
### Migration strategy
Now that we have gone through each type of data, let's review how to plan the transition from a Vuex-based store to one without it. We want to avoid Vuex and Apollo coexisting, so the less time both stores are available in the same context, the better. To minimize this overlap, we should start our migration by moving out of the store everything that does not require adding an Apollo store. Each of the following points could be its own MR:
1. Migrate away from static values, both the Rails dataset and client-side constants, and use `provide`/`inject` and `constants.js` files instead.
1. Replace simple read/write operations with either:
- `data` properties and `methods` if in a single component.
- `props` and `emits` if shared across a localized group of components.
1. Replace shared read/write operations with Apollo Client `@client` directives.
1. Replace network data with Apollo Client, either with actual GraphQL calls when available or by using client-side resolvers to make REST calls.
If it is not possible to replace shared read/write operations or network data quickly (for example, within one or two milestones), consider creating a separate Vue component behind a feature flag that works exclusively with Apollo Client, and rename the current Vuex-based component with a `legacy-` prefix, as sketched below. The newer component might not implement all functionality right away, but we can add it progressively in follow-up MRs. This way, our legacy component uses only Vuex as a store and the new one uses only Apollo. After the new component has re-implemented all the logic, we can turn the feature flag on and ensure that it behaves as expected.
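As a sketch of what that switch might look like at the mount point, assuming a hypothetical `use_apollo_test_reports` feature flag pushed to the frontend and hypothetical component file names:

```javascript
// index.js — a sketch, not a prescribed implementation.
import Vue from 'vue';
import TestReportsApp from './components/test_reports_app.vue';
import LegacyTestReportsApp from './components/legacy_test_reports_app.vue';

export default function initTestReports(el) {
  // Feature flags pushed with `push_frontend_feature_flag` are exposed on `gon.features`.
  const useApollo = Boolean(gon.features?.useApolloTestReports);

  return new Vue({
    el,
    render(h) {
      // The legacy component keeps its Vuex store; the new one only uses Apollo.
      return h(useApollo ? TestReportsApp : LegacyTestReportsApp);
    },
  });
}
```

The store and Apollo provider wiring is omitted here for brevity; each component would receive only the store it needs.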
---
stage: Growth
group: Engagement
info: Any user with at least the Maintainer role can merge updates to this content. For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Implementing keyboard shortcuts
---
We use [Mousetrap](https://craig.is/killing/mice) to implement keyboard
shortcuts in GitLab.
Mousetrap provides an API that allows keyboard shortcut strings (like
`mod+shift+p` or `p b`) to be bound to a JavaScript handler:
```javascript
// Don't do this; see note below
Mousetrap.bind('p b', togglePerformanceBar)
```
However, associating a hard-coded key sequence to a handler (as shown above)
prevents these keyboard shortcuts from being customized or disabled by users.
To allow keyboard shortcuts to be customized, commands are defined in
`~/behaviors/shortcuts/keybindings.js`. The `keysFor` method is responsible for
returning the correct key sequence for the provided command:
```javascript
import { keysFor, TOGGLE_PERFORMANCE_BAR } from '~/behaviors/shortcuts/keybindings'
Mousetrap.bind(keysFor(TOGGLE_PERFORMANCE_BAR), togglePerformanceBar);
```
## Shortcut customization
`keybindings.js` stores keyboard shortcut customizations as a JSON string in
`localStorage`. When `keysFor` is called, it uses the provided command object's
`id` to look up any customizations found in `localStorage` and returns the custom
keybindings, or the default keybindings if the command has not been customized.
There is no UI to edit these customizations.
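Conceptually, the lookup behaves something like the following sketch. This is not the actual implementation, and the `localStorage` key name is made up for illustration:

```javascript
// A conceptual sketch only — see ~/behaviors/shortcuts/keybindings.js for the real code.
const getCustomizations = () => {
  try {
    // The real key name differs; this one is illustrative.
    return JSON.parse(localStorage.getItem('shortcut-customizations') || '{}');
  } catch {
    return {};
  }
};

export const keysFor = (command) => {
  if (command.customizable === false) {
    // Non-customizable commands always use their default keys.
    return command.defaultKeys;
  }

  return getCustomizations()[command.id] ?? command.defaultKeys;
};
```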
## Adding new shortcuts
Because keyboard shortcuts can be customized or disabled by end users,
developers are encouraged to build _lots_ of keyboard shortcuts into GitLab.
Shortcuts that are less likely to be used should be
[disabled](#disabling-shortcuts) by default.
To add a new shortcut, define and export a new command object in
`keybindings.js`:
```javascript
export const MAKE_COFFEE = {
id: 'foodAndBeverage.makeCoffee',
description: s__('KeyboardShortcuts|Make coffee'),
defaultKeys: ['mod+shift+c'],
};
```
Next, add a new command to the appropriate keybinding group object:
```javascript
const COFFEE_GROUP = {
  id: 'foodAndBeverage',
  name: s__('KeyboardShortcuts|Food and Beverage'),
  keybindings: [
    MAKE_ESPRESSO,
    MAKE_LATTE,
    MAKE_COFFEE,
  ],
};
```
Finally, in the application code, import the `keysFor` function and the new
command object and bind the shortcut to the handler using Mousetrap:
```javascript
import { keysFor, MAKE_COFFEE } from '~/behaviors/shortcuts/keybindings'
Mousetrap.bind(keysFor(MAKE_COFFEE), makeCoffee);
```
See the existing command definitions in `keybindings.js` for more examples.
## Disabling shortcuts
A shortcut can be disabled, also known as _unassigned_, by assigning the
shortcut to an empty array `[]`. For example, to introduce a new shortcut that
is disabled by default, a command can be defined like this:
```javascript
export const MAKE_MOCHA = {
id: 'foodAndBeverage.makeMocha',
description: s__('KeyboardShortcuts|Make a mocha'),
defaultKeys: [],
};
```
## Making shortcuts non-customizable
Occasionally, it's important that a keyboard shortcut _not_ be customizable
(although this should be a rare occurrence).
In this case, a shortcut can be defined with `customizable: false`, which
disables customization of the keybinding:
```javascript
export const MAKE_AMERICANO = {
id: 'foodAndBeverage.makeAmericano',
description: s__('KeyboardShortcuts|Make an Americano'),
defaultKeys: ['mod+shift+a'],
// this disables customization of this shortcut
customizable: false
};
```
This shortcut will always be bound to its `defaultKeys`.
## Make cross-platform shortcuts
It's difficult to make shortcuts that work well on all platforms and browsers.
This is one of the reasons that being able to customize and disable shortcuts is
so important.
One important way to make keyboard shortcuts more portable is to use the `mod`
shortcut string, which resolves to `command` on Mac and `ctrl` otherwise.
See [Mousetrap's documentation](https://craig.is/killing/mice#api.bind.combo)
for more information.
---
stage: none
group: unassigned
info: Any user with at least the Maintainer role can merge updates to this content. For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Client-side logging for frontend development
---
This guide contains the best practices for client-side logging for GitLab
frontend development.
## When to log to the browser console
We do not want to log unnecessarily to the browser console, as excessively
noisy console logs are not easy to read, parse, or process. We **do** want to
give visibility to unintended events in the system. If a possible but unexpected
exception occurs during runtime, we want to log the details of this exception.
These logs can give helpful context to end users creating issues, or to contributors diagnosing problems.
Whenever a `catch(e)` exists, and `e` is something unexpected, log the details.
### What makes an error unexpected?
Sometimes a caught exception can be part of standard operations. For instance, third-party
libraries might throw an exception based on certain inputs. If we can gracefully
handle these exceptions, then they are expected. Don't log them noisily.
For example:
```javascript
try {
// Here, we call a method based on some user input.
// `doAThing` will throw an exception if the input is invalid.
const userInput = getUserInput();
doAThing(userInput);
} catch (e) {
if (e instanceof FooSyntaxError) {
// To handle a `FooSyntaxError`, we just need to instruct the user to change their input.
// This isn't unexpected, and is part of standard operations.
setUserMessage(`Try writing better code. ${e.message}`);
} else {
// We're not sure what `e` is, so something unexpected and bad happened...
logError(e);
setUserMessage('Something unexpected happened...');
}
}
```
## How to log an error
We have a helpful `~/lib/logger` module which encapsulates how we can
consistently log runtime errors in GitLab. Import `logError` from this
module, and use it as you typically would `console.error`. Pass the actual `Error`
object, so the stack trace and other details can be captured in the log:
```javascript
// 1. Import the logger module.
import { logError } from '~/lib/logger';
export const doThing = () => {
return foo()
.then(() => {
// ...
})
.catch(e => {
// 2. Use `logError` like you would `console.error`.
logError('An unexpected error occurred while doing the thing', e);
// We may or may not want to present that something bad happened to the end user.
showThingFailed();
});
};
```
## Relation to frontend observability
Client-side logging is strongly related to
[Frontend observability](https://handbook.gitlab.com/handbook/company/working-groups/frontend-observability/).
We want unexpected errors to be observed by our monitoring systems, so
we can quickly react to user-facing issues. For a number of reasons, it is
infeasible to send every log to the monitoring system. Don't shy away from using
`~/lib/logger`, but consider controlling which of the messages passed to `~/lib/logger`
are actually sent to the monitoring systems, as in the sketch below.
A cohesive logging module helps us control these side effects consistently
across the various entry points.
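To illustrate that kind of control, a logging module could gate what is forwarded to monitoring. This is a sketch only; the Sentry import path and the gating rule are assumptions, not the actual `~/lib/logger` implementation:

```javascript
// A sketch of a logger that reports selectively to monitoring — not the actual module.
import * as Sentry from '~/sentry/sentry_browser_wrapper'; // assumed wrapper path

export const logError = (message, error) => {
  // Always keep the details visible locally for debugging.
  // eslint-disable-next-line no-console
  console.error(message, error);

  // Only forward genuine Error objects to the monitoring system.
  if (error instanceof Error) {
    Sentry.captureException(error);
  }
};
```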
---
stage: Monitor
group: Platform Insights
info: Any user with at least the Maintainer role can merge updates to this content. For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Analytics dashboards
---
{{< history >}}
- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/98610) in GitLab 15.5 as an [experiment](../../policy/development_stages_support.md#experiment).
- Inline visualizations configuration [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/509111) in GitLab 17.9.
{{< /history >}}
Analytics dashboards provide a configuration-based [dashboard](https://design.gitlab.com/patterns/dashboards)
structure, which is used to render and modify dashboard configurations created by GitLab or users.
{{< alert type="note" >}}
Analytics dashboards are intended for Premium and Ultimate subscriptions.
{{< /alert >}}
## Overview
An analytics dashboard can be broken down into the following logical components:
- Dashboard: The container that organizes and displays all visualizations
- Panels: Individual sections that host visualizations
- Visualizations: Data display templates (charts, tables, etc.)
- Data sources: Connections to the underlying data
### Dashboard
A dashboard combines a collection of data sources, panels and visualizations into a single page to visually represent data.
Each panel in the dashboard queries the relevant data source and displays the resulting data as the specified visualization. Visualizations serve as templates for how to display data and can be reused across different panels.
A typical dashboard structure looks like this:
```plaintext
dashboard
├── panelA
│   └── visualizationX
│       └── datasource1
├── panelB
│   └── visualizationY
│       └── datasource2
└── panelC
    └── visualizationY
        └── datasource1
```
#### Dashboard filters
Dashboards support the following filters:
- **Date range**: Date selector to filter data by date.
- **Anonymous users**: Toggle to include or exclude anonymous users from the dataset.
- **Project**: Dropdown list to filter data by project.
- **Filtered search**: Filter bar to filter data by selected attributes.
#### Dashboard status
Dashboards with a `status` badge indicate their [development stage](../../policy/development_stages_support.md) and functionality. Dashboards without a `status` badge are fully developed and production-ready.
The supported options are:
- `experiment`
- `beta`
### Panel
Panels form the foundation of a dashboard and act as containers for your visualizations. Each panel is built using the GitLab standardized UI component called [GlDashboardPanel](https://gitlab-org.gitlab.io/gitlab-ui/?path=/docs/dashboards-dashboards-panel--docs).
### Visualization
A visualization transforms your data into a graphical format like a chart or table. You can use the following standard visualization types:
- LineChart
- ColumnChart
- DataTable
- SingleStats
For a list of all supported visualization types, see `AnalyticsVisualization.type` enum in [`analytics_visualization`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/ee/app/validators/json_schemas/analytics_visualization.json).
You're not limited to these options, though. You can create new visualization types as needed.
### Data source
A data source is a connection to a database, an endpoint or a collection of data which can be used by your dashboard to query, retrieve, filter, and visualize results.
While there's a core set of supported data sources (see `Data.type` enum in [`analytics_visualizations`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/ee/app/validators/json_schemas/analytics_visualization.json)), you can add new ones to meet your needs.
To support all visualization types, ensure your data source returns data as a single aggregated value and a time series with a value for each point in time.
Note that each panel fetches data from the data source separately and independently of other panels.
## Create a built-in dashboard
GitLab provides predefined dashboards that are labeled **By GitLab**. Users cannot edit them, but they can clone or use them as the basis for creating similar custom dashboards.
To create a built-in analytics dashboard:
1. Create a folder for the new dashboard under `ee/lib/gitlab/analytics`, for example:
```plaintext
ee/lib/gitlab/analytics/cool_dashboard
```
1. Create a dashboard configuration file (for example `dashboard.yaml`) in the new folder. The configuration must conform to the JSON schema defined in [`ee/app/validators/json_schemas/analytics_dashboard.json`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/ee/app/validators/json_schemas/analytics_dashboard.json). Example:
```yaml
# cool_dashboard/dashboard.yaml
---
title: My dashboard
description: My cool dashboard
panels: []
```
1. Optional. Enable dashboard filters by setting the filter's `enabled` option to `true` in the `.yaml` configuration file:
```yaml
# cool_dashboard/dashboard.yaml
---
title: My dashboard
filters:
excludeAnonymousUsers:
enabled: true
dateRange:
enabled: true
projects:
enabled: true
filteredSearch:
enabled: true
# Use `options` to define an array of tokens to override the default ones
options:
- token: assignee
unique: false
- token: label
maxSuggestions: 10
```
Refer to the `DashboardFilters` type in the [`ee/app/validators/json_schemas/analytics_dashboard.json`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/ee/app/validators/json_schemas/analytics_dashboard.json) for a list of supported filters.
1. Optional. Set the appropriate status of the dashboard if it is not production ready:
```yaml
# cool_dashboard/dashboard.yaml
---
title: My dashboard
status: experiment
```
1. Optional. Create visualization templates by creating a folder for your templates (for example `visualizations/`) in your dashboard directory and
add configuration files for each template.
Visualization templates might be used when a visualization will be used by multiple dashboards. Use a template to
prevent duplicating the same YAML block multiple times. For built-in dashboards, the dashboard
will automatically update when the visualization template is changed. For user-defined dashboards, the visualization
template is copied rather than referenced. Visualization templates copied to dashboards are not updated when the
visualization template is updated.
Each file must conform to the JSON schema defined in [`ee/app/validators/json_schemas/analytics_visualization.json`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/ee/app/validators/json_schemas/analytics_visualization.json).
Example:
```yaml
# cool_dashboard/visualizations/cool_viz.yaml
---
version: 1
type: LineChart # The render type of the visualization.
data:
type: my_datasource # The name of the datasource
query: {}
options: {}
```
Both `query` and `options` objects will be passed to the data source and used to build the proper query.
Refer to [Data source](#data-source) for a list of supported data sources, and [Visualization](#visualization) for a list of supported visualization render types.
1. To add panels to your dashboard that reference your visualizations, use either:
- Recommended. Use an inline visualization within the dashboard configuration file:
```yaml
# cool_dashboard/dashboard.yaml
---
title: My dashboard
description: My cool dashboard
panels:
- title: "My cool panel"
visualization:
version: 1
slug: 'cool_viz' # Recommended to define a slug when a visualization is inline
type: LineChart # The render type of the visualization.
data:
type: my_datasource # The name of the datasource
query: {}
options: {}
gridAttributes:
yPos: 0
xPos: 0
width: 3
height: 1
```
Both `query` and `options` objects will be passed to the data source and used to build the proper query.
Refer to [Data source](#data-source) for a list of supported data sources, and [Visualization](#visualization) for a list of supported visualization render types.
- Use a visualization template:
```yaml
# cool_dashboard/dashboard.yaml
---
title: My dashboard
description: My cool dashboard
panels:
- title: "My cool panel"
visualization: cool_viz # Must match the visualization config filename
gridAttributes:
yPos: 0
xPos: 0
width: 3
height: 1
```
The `gridAttributes` position the panel within a 12x12 dashboard grid, powered by [gridstack](https://github.com/gridstack/gridstack.js/tree/master/doc#item-options).
1. Register the dashboard by adding it to `builtin_dashboards` in [ee/app/models/analytics/dashboard.rb](https://gitlab.com/gitlab-org/gitlab/-/blob/master/ee/app/models/analytics/dashboard.rb).
Here you can make your dashboard available at the project level or group level (or both), and restrict access based on feature flags, license, or user role.
1. Optional. Register visualization templates by adding them to `get_path_for_visualization` in [ee/app/models/analytics/visualization.rb](https://gitlab.com/gitlab-org/gitlab/-/blob/master/ee/app/models/analytics/visualization.rb).
For a complete example, refer to the AI Impact [dashboard config](https://gitlab.com/gitlab-org/gitlab/-/blob/master/ee/lib/gitlab/analytics/ai_impact_dashboard/dashboard.yaml).
### Adding a new data source
To add a new data source:
1. Create a new JavaScript module that exports a `fetch` method. See [analytics_dashboards/data_sources/index.js](https://gitlab.com/gitlab-org/gitlab/-/blob/master/ee/app/assets/javascripts/analytics/analytics_dashboards/data_sources/index.js) for the full documentation of the `fetch` API. You can also take a look at [`cube_analytics.js`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/ee/app/assets/javascripts/analytics/analytics_dashboards/data_sources/cube_analytics.js) as an example. A rough sketch of such a module appears after the note below.
1. Add your module to the list of exports in [`data_sources/index.js`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/ee/app/assets/javascripts/analytics/analytics_dashboards/data_sources/index.js).
1. Add your data source to the schema's list of `Data` types in [`analytics_visualizations.json`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/ee/app/validators/json_schemas/analytics_visualization.json).
{{< alert type="note" >}}
Your data source must respect the filters so that all panels show the same filtered data.
{{< /alert >}}
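Below is a rough sketch of what such a module could look like. The endpoint, parameter names, and exact `fetch` signature are illustrative; the documented contract lives in `data_sources/index.js`.

```javascript
// ee/app/assets/javascripts/analytics/analytics_dashboards/data_sources/my_datasource.js
// Illustrative sketch only — follow the documented `fetch` contract in data_sources/index.js.
import axios from '~/lib/utils/axios_utils';

export default function fetch({ query = {}, queryOverrides = {}, filters = {} } = {}) {
  // Respect the dashboard filters so that every panel shows the same filtered data.
  const params = {
    ...query,
    ...queryOverrides,
    start_date: filters.startDate,
    end_date: filters.endDate,
  };

  return axios
    .get('/api/v4/my_datasource_endpoint', { params }) // hypothetical endpoint
    .then(({ data }) => data);
}
```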
### Adding a new visualization render type
To add a new visualization render type:
1. Create a new Vue component that accepts `data` and `options` properties (a minimal sketch follows this list).
See [`line_chart.vue`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/ee/app/assets/javascripts/analytics/analytics_dashboards/components/visualizations/line_chart.vue) as an example.
1. Add relevant Storybook stories for the different states of the visualization.
See [`line_chart.stories.js`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/ee/app/assets/javascripts/analytics/analytics_dashboards/components/visualizations/line_chart.stories.js) as an example.
1. Add your component to the list of conditional components imports in [`analytics_dashboard_panel.vue`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/ee/app/assets/javascripts/analytics/analytics_dashboards/components/analytics_dashboard_panel.vue).
1. Add your component to the schema's list of `AnalyticsVisualization` enum type in [`analytics_visualization.json`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/ee/app/validators/json_schemas/analytics_visualization.json).
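A minimal sketch of such a component's script section is shown below. The file name is hypothetical, and real visualizations like `line_chart.vue` also render a chart in their template:

```javascript
// my_chart.vue (script section) — illustrative sketch only.
export default {
  name: 'MyChart',
  props: {
    // The result returned by the panel's data source.
    data: {
      type: [Object, Array],
      required: true,
    },
    // The `options` object from the visualization configuration.
    options: {
      type: Object,
      required: false,
      default: () => ({}),
    },
  },
};
```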
#### Migrating existing components to visualizations
You can migrate existing components to dashboard visualizations. To do this,
wrap your existing component in a new visualization that provides the component with the
required context and data. See [`dora_performers_score.vue`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/ee/app/assets/javascripts/analytics/analytics_dashboards/components/visualizations/dora_performers_score.vue) as an example.
As an upgrade path, your component may fetch its own data internally at first, but you should plan how to migrate your visualization to use the shared analytics data sources method.
See [`value_stream.js`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/ee/app/assets/javascripts/analytics/analytics_dashboards/data_sources/value_stream.js) as an example.
#### Introducing visualizations behind a feature flag
While developing new visualizations, we can use [feature flags](../feature_flags/_index.md#create-a-new-feature-flag) to mitigate the risk of disruptions or incorrect data for users.
The [`from_data`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/ee/app/models/analytics/panel.rb) method builds the panel objects for a dashboard. Using the `filter_map` method, we can add a condition to skip rendering panels that include the visualization we are developing.
For example, here we have added the `enable_usage_overview_visualization` feature flag and can check its current state to determine whether panels using the `usage_overview` visualization should be rendered:
```ruby
panel_yaml.filter_map do |panel|
# Skip processing the usage_overview panel if the feature flag is disabled
next if panel['visualization'] == 'usage_overview' && Feature.disabled?(:enable_usage_overview_visualization)
new(
title: panel['title'],
project: project,
grid_attributes: panel['gridAttributes'],
query_overrides: panel['queryOverrides'],
visualization: panel['visualization']
)
end
```
---
stage: none
group: unassigned
info: Any user with at least the Maintainer role can merge updates to this content. For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Storybook
---
The Storybook for the `gitlab-org/gitlab` project is available on our [GitLab Pages site](https://gitlab-org.gitlab.io/gitlab/storybook/).
## Storybook in local development
Storybook dependencies and configuration are located under the `storybook/` directory.
To build and launch Storybook locally, in the root directory of the `gitlab` project:
1. Install Storybook dependencies:
```shell
yarn storybook:install
```
1. Build the Storybook site:
```shell
yarn storybook:start
```
## Adding components to Storybook
Stories can be added for any Vue component in the `gitlab` repository.
To add a story:
1. Create a new `.stories.js` file in the same directory as the Vue component.
The filename should have the same prefix as the Vue component.
```txt
vue_shared/
├─ components/
│ ├─ sidebar
│ | ├─ todo_toggle
│ | | ├─ todo_button.vue
│ │ | ├─ todo_button.stories.js
```
1. Stories should demonstrate each significantly different UI state related to the component's exposed props and events.
For instructions on how to write stories, refer to the [official Storybook instructions](https://storybook.js.org/docs/writing-stories/). A minimal story sketch follows the note below.
{{< alert type="note" >}}
Specify the `title` field of the story as the component's file path from the `javascripts/` directory, without the `/components` part.
For example, if the component is located at `app/assets/javascripts/vue_shared/components/sidebar/todo_toggle/todo_button.vue`,
specify the story `title` as `vue_shared/sidebar/todo_toggle/todo_button`.
If the component is located in the `ee/` directory, make sure to prefix the story's title with `ee/` as well.
This will ensure the Storybook navigation maps closely to our internal directory structure.
{{< /alert >}}
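For example, a minimal story for the `todo_button.vue` component above could look like the following sketch (the component's actual props are omitted):

```javascript
// vue_shared/components/sidebar/todo_toggle/todo_button.stories.js — illustrative sketch.
import TodoButton from './todo_button.vue';

export default {
  component: TodoButton,
  title: 'vue_shared/sidebar/todo_toggle/todo_button',
};

const Template = (args, { argTypes }) => ({
  components: { TodoButton },
  props: Object.keys(argTypes),
  template: '<todo-button v-bind="$props" />',
});

export const Default = Template.bind({});
Default.args = {};
```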
## Using GitLab REST and GraphQL APIs
You can write stories for components that use either the GitLab [REST](../../api/rest/_index.md) or
[GraphQL](../../api/graphql/_index.md) APIs.
### Set up API access token and GitLab instance URL
To add a story with API access:
1. Create a [personal access token](../../user/profile/personal_access_tokens.md) in your GitLab instance.
{{< alert type="note" >}}
If you test against `gitlab.com`, make sure to use a token with the `read_api` scope if possible, and make the token short-lived.
{{< /alert >}}
1. Create an `.env` file in the `storybook` directory. Use the `storybook/.env.template` file as
a starting point.
1. Set the `API_ACCESS_TOKEN` variable to the access token that you created.
1. Set the `GITLAB_URL` variable to the GitLab instance's domain URL, for example: `http://gdk.test:3000`.
1. Start or restart your Storybook.
You can also use the GitLab API Access panel in the Storybook UI to set the GitLab instance URL and access token.
### Set up API access in your stories
You should apply the `withGitLabAPIAccess` decorator to the stories that will consume GitLab APIs. This decorator
will display a badge indicating that the story won't work without providing the API access parameters:
```javascript
import { withGitLabAPIAccess } from 'storybook_addons/gitlab_api_access';
import Api from '~/api';
import { ContentEditor } from './index';
export default {
component: ContentEditor,
title: 'ce/content_editor/content_editor',
decorators: [withGitLabAPIAccess],
};
```
#### Using REST API
The Storybook sets up `~/lib/utils/axios_utils` in `storybook/config/preview.js`. Components that use the REST API
should work out of the box as long as you provide a valid GitLab instance URL and access token.
#### Using GraphQL
To write a story for a component that uses the GraphQL API, use the `createVueApollo` method provided in
the Story context.
```javascript
import Vue from 'vue';
import VueApollo from 'vue-apollo';
import { withGitLabAPIAccess } from 'storybook_addons/gitlab_api_access';
import WorkspacesList from './list.vue';
Vue.use(VueApollo);
const Template = (_, { argTypes, createVueApollo }) => {
return {
components: { WorkspacesList },
apolloProvider: createVueApollo(),
provide: {
emptyStateSvgPath: '',
},
props: Object.keys(argTypes),
template: '<workspaces-list />',
};
};
export default {
component: WorkspacesList,
title: 'ee/workspaces/workspaces_list',
decorators: [withGitLabAPIAccess],
};
export const Default = Template.bind({});
Default.args = {};
```
## Using a Vuex store
To write a story for a component that requires access to a Vuex store, use the `createVuexStore` method provided in
the Story context.
```javascript
import { withVuexStore } from 'storybook_addons/vuex_store';
import DurationChart from './duration-chart.vue';
const Template = (_, { argTypes, createVuexStore }) => {
return {
components: { DurationChart },
store: createVuexStore({
state: {},
getters: {},
modules: {},
}),
props: Object.keys(argTypes),
template: '<duration-chart />',
};
};
export default {
component: DurationChart,
title: 'ee/analytics/cycle_analytics/components/duration_chart',
decorators: [withVuexStore],
};
export const Default = Template.bind({});
Default.args = {};
```
---
stage: none
group: unassigned
info: Any user with at least the Maintainer role can merge updates to this content. For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Frontend Development Guidelines
---
This document describes various guidelines to ensure consistency and quality
across the GitLab frontend team.
## Introduction
GitLab is built on top of [Ruby on Rails](https://rubyonrails.org). It uses [Haml](https://haml.info/) and a JavaScript-based frontend with [Vue.js](https://vuejs.org). If you are not sure when to use Vue on top of a Haml page, read [this explanation](vue.md#when-to-add-vue-application).
<!-- vale gitlab_base.Spelling = NO -->
For more information, see [Hamlit](https://github.com/k0kubun/hamlit/blob/master/REFERENCE.md).
<!-- vale gitlab_base.Spelling = YES -->
When it comes to CSS, we use a utils-based CSS approach. For more information and to find where CSS utilities are defined, refer to the [SCSS style section](style/scss.md#where-are-css-utility-classes-defined) of this guide.
We also use [SCSS](https://sass-lang.com) and plain JavaScript with
modern ECMAScript standards supported through [Babel](https://babeljs.io/) and ES module support through [webpack](https://webpack.js.org/).
When making API calls, we use [GraphQL](graphql.md) as the first choice.
There are still instances where the GitLab REST API is used, such as when creating new simple Haml pages, or in legacy parts of the codebase, but we should always default to GraphQL when possible.
For [client-side state management](state_management.md) in Vue, depending on the specific needs of the feature,
we use:
- [Apollo](https://www.apollographql.com/) (default choice for applications relying on [GraphQL](graphql.md))
- [Pinia](pinia.md)
- Stateful components.
[Vuex is deprecated](vuex.md) and you should [migrate away from it](migrating_from_vuex.md) whenever possible.
Learn: [How do I know which state manager to use?](state_management.md)
For copy strings and translations, we have frontend utilities available. See the JavaScript section of [Preparing a page for translation](../i18n/externalization.md#javascript-files) for more information.
Working with our frontend assets requires Node (v12.22.1 or greater) and Yarn
(v1.10.0 or greater). You can find information on how to install these on our
[installation guide](../../install/self_compiled/_index.md#5-node).
### High-level overview
GitLab core frontend code is located under [`app/assets/javascripts`](https://gitlab.com/gitlab-org/gitlab/-/tree/4ce851345054dbf09956dabcc9b958ae8aab77bb/app/assets/javascripts).
Since GitLab uses the [Ruby on Rails](https://rubyonrails.org) framework, we inject our Vue applications into the views using [Haml](https://haml.info/). For example, to build a Vue app in a Rails view, we set up a view like [`app/views/projects/pipeline_schedules/index.html.haml`](https://gitlab.com/gitlab-org/gitlab/-/blob/4ce851345054dbf09956dabcc9b958ae8aab77bb/app/views/projects/pipeline_schedules/index.html.haml). Inside this view, we add an element with an `id` like `#pipeline-schedules-app`. This element serves as the mounting point for our frontend code.
The application structure typically follows the pattern: `app/assets/javascripts/<feature-name>`. For example, the directory for a specific feature might look like [`app/assets/javascripts/ci/pipeline_schedules`](https://gitlab.com/gitlab-org/gitlab/-/tree/4ce851345054dbf09956dabcc9b958ae8aab77bb/app/assets/javascripts/ci/pipeline_schedules). Within these types of directories, we organize our code into subfolders like `components` or `graphql`, which house the code that makes up a feature. A typical structure might look like this:
- `feature_name/`
- `components/` (vue components that make up a feature)
- `graphql/` (queries/mutations)
- `utils/` (helper functions)
- `router/` (optional: only for Vue Router powered apps)
- `constants.js` (shared variables)
- `index.js` (file that injects the Vue app)
There is always a top-level Vue component that acts as the "main" component and imports lower-level components to build a feature. In all cases, there is an accompanying file (often named `index.js` or `app.js`, though the name varies) that looks for the injection point on a Haml view (for example, `#pipeline-schedules-app`) and mounts the Vue app to the page.
We achieve this by importing a JavaScript file like [`app/assets/javascripts/ci/pipeline_schedules/mount_pipeline_schedules_app.js`](https://gitlab.com/gitlab-org/gitlab/-/blob/4ce851345054dbf09956dabcc9b958ae8aab77bb/app/assets/javascripts/ci/pipeline_schedules/mount_pipeline_schedules_app.js) (which sets up the Vue app) into the related Haml view's corresponding page bundle, such as [`app/assets/javascripts/pages/projects/pipeline_schedules/index/index.js`](https://gitlab.com/gitlab-org/gitlab/-/blob/4ce851345054dbf09956dabcc9b958ae8aab77bb/app/assets/javascripts/pages/projects/pipeline_schedules/index/index.js).
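A minimal sketch of such a mount file might look like the following (the element ID, file, and component names are illustrative rather than the exact production code):

```javascript
// Illustrative sketch of a mount file that injects a Vue app into a Haml view.
import Vue from 'vue';
import PipelineSchedulesApp from './components/pipeline_schedules_app.vue';

export default function initPipelineSchedulesApp() {
  // The Haml view renders an element with this ID as the mounting point.
  const el = document.getElementById('pipeline-schedules-app');

  if (!el) return null;

  return new Vue({
    el,
    render(createElement) {
      return createElement(PipelineSchedulesApp);
    },
  });
}
```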
Often, a feature will have multiple routes, such as `index`, `show`, `edit`, or `new`. For these cases, we typically inject different Vue applications based on the specific route. The folder structure within `app/assets/javascripts/pages` reflects this setup. For example, a subfolder like `app/assets/javascripts/pages/<feature-name>/show` corresponds to the Rails controller `app/controllers/<controller-name>` and its action `def show; end`. Alternatively, we can mount the Vue application on the `index` route and handle routing on the client with [Vue Router](https://router.vuejs.org/).
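For example, a client-side router for such an app could be sketched like this (paths and component names are illustrative):

```javascript
// Illustrative sketch of handling routes on the client with Vue Router.
import Vue from 'vue';
import VueRouter from 'vue-router';
import ScheduleList from './components/schedule_list.vue';
import ScheduleEdit from './components/schedule_edit.vue';

Vue.use(VueRouter);

export default function createRouter(base) {
  return new VueRouter({
    mode: 'history',
    base, // the Rails index route that mounts the single Vue application
    routes: [
      { path: '/', name: 'index', component: ScheduleList },
      { path: '/:id/edit', name: 'edit', component: ScheduleEdit },
    ],
  });
}
```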
## Vision
As Frontend engineers, we strive to give users **delightful experiences**. We should always think of how this applies at GitLab specifically: a great GitLab experience means helping our user base ship **their own projects faster and with more confidence** when shipping their own software. This means that whenever confronted with a choice for the future of our department, we should remember to try to put this first.
### Values
We define three core values: Stability, Speed, and Maintainability (SSM).
#### Stability
Although velocity is extremely important, we believe that GitLab is now an enterprise-grade platform that requires even the smallest MVC to be **stable, tested, and built on a good architecture**. We should not merge code, even as an MVC, that could introduce degradation, poor performance, or confusion, or that generally lowers our users' expectations.
This is an extension of the core value that we want our users to have confidence in their own software, and to do so, they need to have **confidence in GitLab first**. This means that our own confidence in our software should be at the absolute maximum.
#### Speed
Users should be able to navigate through the GitLab application with ease. This implies fast load times, easy-to-find pages, clear UX, and an overall sense that they can accomplish their goal without friction.
Additionally, we want our speed to be felt and appreciated by our developers. This means that we should put a lot of effort and thought into processes, tools, and documentation that help us achieve success faster across our department. This benefits us as engineers, but also our users, who end up receiving quality features at a faster rate.
#### Maintainability
GitLab is now large, enterprise-grade software, and it often requires complex code to give the best possible experience. Although complexity is a necessity, we must remain vigilant to not let it grow more than it should. To minimize this, we want to focus on making our codebase maintainable by **encapsulating complexity**. This is done by:
- Building tools that solve commonly-faced problems and making them easily discoverable.
- Writing better documentation on how we solve our problems.
- Writing loosely coupled components that can be easily added or removed from our codebase.
- Removing older technologies or patterns that we deem no longer acceptable.
By focusing on these aspects, we aim to allow engineers to contain complexity within well-defined boundaries and quickly share those solutions with their peers.
### Goals
Now that our values have been defined, we can base our goals on these values and determine what we would like to achieve at GitLab with this in mind.
- Lowest possible FID, LCP and cross-page navigation times
- Minimal page reloads when interacting with the UI
- [Have as few Vue applications per page as possible](vue.md#avoid-multiple-vue-applications-on-the-page)
- Leverage [Ruby ViewComponents](view_component.md) for simple pages and avoid Vue overhead when possible
- [Migrate away from Vuex](migrating_from_vuex.md), but more urgently **stop using Apollo and Vuex together**
- Remove jQuery from our codebase
- Add a visual testing framework
- Reduce CSS bundle size to a minimum
- Reduce cognitive overhead and improve maintainability of our CSS
- Improve our pipelines speed
- Build a better set of shared components with documentation
We have a detailed description of how we see the GitLab frontend in the future in the [Frontend Goals](frontend_goals.md) section.
### First time contributors
If you're a first-time contributor, see [Contribute to GitLab development](../contributing/_index.md).
When you're ready to create your first merge request, or need to review the GitLab frontend workflow, see [Getting started](getting_started.md).
For a guided introduction to frontend development at GitLab, you can watch the [Frontend onboarding course](onboarding_course/_index.md), which provides a structured six-week curriculum.
### Helpful links
#### Initiatives
You can find current frontend initiatives with a cross-functional impact on epics
with the label [frontend-initiative](https://gitlab.com/groups/gitlab-org/-/epics?state=opened&page=1&sort=UPDATED_AT_DESC&label_name[]=frontend-initiative).
#### Testing
How we write [frontend tests](../testing_guide/frontend_testing.md), run the GitLab test suite, and debug test-related
issues.
#### Pajamas Design System
Reusable components with technical and usage guidelines can be found in our
[Pajamas Design System](https://design.gitlab.com/).
#### Frontend FAQ
Read the [frontend FAQ](frontend_faq.md) for common small pieces of helpful information.
#### Internationalization (i18n) and Translations
Frontend internationalization support is described in [**Translate GitLab to your language**](../i18n/_index.md).
The [externalization part of the guide](../i18n/externalization.md) explains the helpers/methods available.
#### Troubleshooting
Running into a Frontend development problem? Check out [this troubleshooting guide](troubleshooting.md) to help resolve your issue.
#### Browser support
For supported browsers, see our [requirements](../../install/requirements.md#supported-web-browsers).
|
https://docs.gitlab.com/development/view_component
|
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/development/view_component.md
|
2025-08-13
|
doc/development/fe_guide
|
[
"doc",
"development",
"fe_guide"
] |
view_component.md
|
Foundations
|
Design System
|
Any user with at least the Maintainer role can merge updates to this content. For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
|
ViewComponent
| null |
ViewComponent is a framework for creating reusable, testable & encapsulated view
components with Ruby on Rails, without the need for a JavaScript framework like Vue.
They are rendered server-side and can be seamlessly used with template languages like [Haml](haml.md).
For more information, see the [official documentation](https://viewcomponent.org/) or
[this introduction video](https://youtu.be/akRhUbvtnmo).
## Browse components with Lookbook
We have a [Lookbook](https://github.com/allmarkedup/lookbook) at `http://gdk.test:3000/rails/lookbook` (only available in development mode) to browse and interact with ViewComponent previews.
## Pajamas components
Some of the components of our [Pajamas](https://design.gitlab.com) design system are
available as a ViewComponent in `app/components/pajamas`.
{{< alert type="note" >}}
We are still in the process of creating these components, so not every Pajamas component is available as a ViewComponent.
Reach out to the [Design Systems team](https://handbook.gitlab.com/handbook/engineering/development/dev/foundations/design-system/)
if the component you are looking for is not yet available.
{{< /alert >}}
### Available components
Consider this list a best effort. The full list can be found in [`app/components/pajamas`](https://gitlab.com/gitlab-org/gitlab/-/tree/master/app/components/pajamas). Also see our Lookbook (`http://gdk.test:3000/rails/lookbook`) for a more interactive way to browse our components.
#### Alert
The `Pajamas::AlertComponent` follows the [Pajamas Alert](https://design.gitlab.com/components/alert/) specification.
**Examples**:
By default, this creates a dismissible info alert with an icon:
```ruby
= render Pajamas::AlertComponent.new(title: "Almost done!")
```
You can set the variant, hide the icon, and more:
```ruby
= render Pajamas::AlertComponent.new(title: "All done!",
variant: :success,
dismissible: false,
show_icon: false)
```
For the full list of options, see its
[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/components/pajamas/alert_component.rb).
#### Banner
The `Pajamas::BannerComponent` follows the [Pajamas Banner](https://design.gitlab.com/components/banner/) specification.
**Examples**:
In its simplest form the banner component looks like this:
```ruby
= render Pajamas::BannerComponent.new(button_text: 'Learn more', button_link: example_path,
svg_path: 'illustrations/example.svg') do |c|
- c.with_title { 'Hello world!' }
%p Content of your banner goes here...
```
If you need more control, you can also use the `illustration` slot
instead of `svg_path` and the `primary_action` slot instead of `button_text` and `button_link`:
```ruby
= render Pajamas::BannerComponent.new do |c|
- c.with_illustration do
= custom_icon('my_inline_svg')
- c.with_title do
Hello world!
- c.with_primary_action do
= render 'my_button_in_a_partial'
```
For the full list of options, see its
[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/components/pajamas/banner_component.rb).
#### Button
The `Pajamas::ButtonComponent` follows the [Pajamas Button](https://design.gitlab.com/components/button/) specification.
**Examples**:
The button component has a lot of options but all of them have good defaults,
so the simplest button looks like this:
```ruby
= render Pajamas::ButtonComponent.new do |c|
= _('Button text goes here')
```
The following example shows most of the available options:
```ruby
= render Pajamas::ButtonComponent.new(category: :secondary,
variant: :danger,
size: :small,
type: :submit,
disabled: true,
loading: false,
block: true) do |c|
Button text goes here
```
You can also create button-like `<a>` tags, like this:
```ruby
= render Pajamas::ButtonComponent.new(href: root_path) do |c|
Go home
```
For the full list of options, see its
[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/components/pajamas/button_component.rb).
#### Card
The `Pajamas::CardComponent` follows the [Pajamas Card](https://design.gitlab.com/components/card/) specification.
**Examples**:
The card has one mandatory `body` slot and optional `header` and `footer` slots:
```ruby
= render Pajamas::CardComponent.new do |c|
- c.with_header do
I'm the header.
- c.with_body do
%p Multiple line
%p body content.
- c.with_footer do
Footer goes here.
```
If you want to add custom attributes to any of these or the card itself, use the following options:
```ruby
= render Pajamas::CardComponent.new(card_options: {id: "my-id"}, body_options: {data: { count: 1 }})
```
`header_options` and `footer_options` are available, too.
For the full list of options, see its
[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/components/pajamas/card_component.rb).
#### Checkbox tag
The `Pajamas::CheckboxTagComponent` follows the [Pajamas Checkbox](https://design.gitlab.com/components/checkbox/) specification.
The `name` argument and `label` slot are required.
For example:
```ruby
= render Pajamas::CheckboxTagComponent.new(name: 'project[initialize_with_sast]',
checkbox_options: { data: { testid: 'initialize-with-sast-checkbox', track_label: track_label, track_action: 'activate_form_input', track_property: 'init_with_sast' } }) do |c|
- c.with_label do
= s_('ProjectsNew|Enable Static Application Security Testing (SAST)')
- c.with_help_text do
= s_('ProjectsNew|Analyze your source code for known security vulnerabilities.')
= link_to _('Learn more.'), help_page_path('user/application_security/sast/_index.md'), target: '_blank', rel: 'noopener noreferrer', data: { track_action: 'followed' }
```
For the full list of options, see its
[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/components/pajamas/checkbox_tag_component.rb).
#### Checkbox
The `Pajamas::CheckboxComponent` follows the [Pajamas Checkbox](https://design.gitlab.com/components/checkbox/) specification.
{{< alert type="note" >}}
`Pajamas::CheckboxComponent` is used internally by the [GitLab UI form builder](haml.md#use-the-gitlab-ui-form-builder) and requires an instance of [ActionView::Helpers::FormBuilder](https://api.rubyonrails.org/classes/ActionView/Helpers/FormBuilder.html) to be passed as the `form` argument.
It is preferred to use the [`gitlab_ui_checkbox_component`](haml.md#gitlab_ui_checkbox_component) method to render this ViewComponent.
To use a checkbox without an instance of [ActionView::Helpers::FormBuilder](https://api.rubyonrails.org/classes/ActionView/Helpers/FormBuilder.html) use [CheckboxTagComponent](#checkbox-tag).
{{< /alert >}}
For the full list of options, see its
[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/components/pajamas/checkbox_component.rb).
#### Toggle
The `Pajamas::ToggleComponent` follows the [Pajamas Toggle](https://design.gitlab.com/components/toggle/) specification.
```ruby
= render Pajamas::ToggleComponent.new(classes: 'js-force-push-toggle',
label: s_("ProtectedBranch|Toggle allowed to force push"),
is_checked: protected_branch.allow_force_push,
label_position: :hidden) do
Leverage this block to render a rich help text. To render a plain text help text, prefer the `help` parameter.
```
{{< alert type="note" >}}
**The toggle ViewComponent is special as it depends on the Vue.js component.**
To actually initialize this component, make sure to call the `initToggle` helper from `~/toggles`.
{{< /alert >}}
For the full list of options, see its
[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/components/pajamas/toggle_component.rb).
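A minimal sketch of that initialization might look like this (assuming `initToggle` accepts the rendered toggle element and returns a Vue instance that emits `change`):

```javascript
// Illustrative sketch of initializing the toggle rendered by Pajamas::ToggleComponent.
import { initToggle } from '~/toggles';

const el = document.querySelector('.js-force-push-toggle');
const toggle = el ? initToggle(el) : null;

if (toggle) {
  toggle.$on('change', (isEnabled) => {
    // React to the user flipping the toggle, for example by updating a hidden form field.
  });
}
```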
## Layouts
Layout components can be used to create common layout patterns used in GitLab.
### Available components
#### Page heading
A standard page header with a page title and optional actions.
**Example**:
```ruby
= render ::Layouts::PageHeadingComponent.new(_('Page title')) do |c|
- c.with_actions do
= buttons
```
For the full list of options, see its
[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/components/layouts/page_heading_component.rb).
#### CRUD component
A list container used to host a table or list with user actions such as create, read, update, and delete.
**Example**:
```ruby
= render ::Layouts::CrudComponent.new(_('CRUD title'), icon: 'ICONNAME', count: COUNT) do |c|
- c.with_description do
= description
- c.with_actions do
= buttons
- c.with_form do
= add item form
- c.with_body do
= body
- c.with_pagination do
= pagination component
- c.with_footer do
= optional footer
```
For the full list of options, see its
[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/components/layouts/crud_component.rb).
#### Horizontal section
Many of the settings pages use a layout where the title and description are on the left and the settings fields are on the right. The `Layouts::HorizontalSectionComponent` can be used to create this layout.
**Example**:
```ruby
= render ::Layouts::HorizontalSectionComponent.new(options: { class: 'gl-mb-6' }) do |c|
- c.with_title { _('Naming, visibility') }
- c.with_description do
= _('Update your group name, description, avatar, and visibility.')
= link_to _('Learn more about groups.'), help_page_path('user/group/_index.md')
- c.with_body do
.form-group.gl-form-group
= f.label :name, _('New group name')
= f.text_field :name
```
For the full list of options, see its
[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/components/layouts/horizontal_section_component.rb).
#### Settings block
A settings block (accordion) to group related settings.
**Example**:
```ruby
= render ::Layouts::SettingsBlock.new(_('Settings block heading')) do |c|
- c.with_description do
= description
- c.with_body do
= body
```
For the full list of options, see its
[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/components/layouts/settings_block_component.rb).
#### Settings section
Similar to SettingsBlock (see above), this component is used to group related settings together. Unlike SettingsBlock, it doesn't provide accordion functionality, and it uses a sticky header.
**Example**:
```ruby
= render ::Layouts::SettingsSection.new(_('Settings section heading')) do |c|
- c.with_description do
= description
- c.with_body do
= body
```
For the full list of options, see its
[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/components/layouts/settings_section_component.rb).
## Best practices
- If you are about to create a new view in Haml, prefer the available components
over creating plain Haml tags with CSS classes.
- If you are making changes to an existing Haml view and see, for example, a
button that is still implemented with plain Haml, consider migrating it to use a ViewComponent.
- If you decide to create a new component, consider creating [previews](https://viewcomponent.org/guide/previews.html) for it, too.
This helps others discover your component with Lookbook, and it also makes it much easier to test its different states.
### Preview layouts
If you need a custom layout for your ViewComponent preview, consider using these paths for the layout code:
- `app/views/layouts/lookbook` for your layout HAML file
- `app/assets/javascripts/entrypoints/lookbook` for your custom JavaScript code
- `app/assets/stylesheets/lookbook` for your custom SASS code
JavaScript and SASS code have to be manually included in the layout.
|
https://docs.gitlab.com/development/date_and_time
|
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/development/date_and_time.md
|
2025-08-13
|
doc/development/fe_guide
|
[
"doc",
"development",
"fe_guide"
] |
date_and_time.md
|
none
|
unassigned
|
Any user with at least the Maintainer role can merge updates to this content. For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
|
Date and time
| null |
## Formatting
Our design guidelines, [Pajamas](https://design.gitlab.com/content/date-and-time), state:
> We can either display a localized time and date format based on the user's location or use a non-localized format following the ISO 8601 standard.
When formatting dates for the UI, use the `localeDateFormat` singleton as this localizes dates based on the user's locale preferences.
The logic for getting the locale is in the `getPreferredLocales` function in `app/assets/javascripts/locale/index.js`.
Avoid using the `formatDate` and `dateFormat` date utility functions as they do not format dates in a localized way.
```javascript
// good
const formattedDate = localeDateFormat.asDate.format(date);
// bad
const formattedDate = formatDate(date);
const formattedDate = dateFormat(date);
```
## Gotchas
When working with dates, you might encounter unexpected behavior.
### Date-only bug
There is a bug when passing a string of the format `yyyy-mm-dd` to the `Date` constructor.
From the [MDN Date page](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Date):
> When the time zone offset is absent, **date-only forms are interpreted as a UTC time and date-time forms are interpreted as local time**.
This is due to a historical spec error that was not consistent with ISO 8601 but could not be changed due to web compatibility.
When doing `new Date('2020-02-02')`, you might expect this to create a date like `Sun Feb 02 2020 00:00:00` in your local time.
However, due to this date-only bug, `new Date('2020-02-02')` is interpreted as UTC.
For example, if your time zone is UTC-8, this creates the date object at UTC (`Sun Feb 02 2020 00:00:00 UTC`) instead of in the local UTC-8 time zone, and it is then converted to the local UTC-8 time zone (`Sat Feb 01 2020 16:00:00 GMT-0800 (Pacific Standard Time)`).
When in a time zone behind UTC, this causes the parsed date to become a day behind, resulting in unexpected bugs.
There are a few ways to convert a date-only string to keep the same date:
- Use the `newDate` function, created specifically to avoid this bug, which is a wrapper around the `Date` constructor.
- Include a time component in the string.
- Use the `(year, month, day)` constructor.
Ideally, use the `newDate` function when creating a `Date` object so you don't have to worry about this bug.
```javascript
// good
// use the newDate function
import { newDate } from '~/lib/utils/datetime_utility';
newDate('2020-02-02') // Sun Feb 02 2020 00:00:00 GMT-0800 (Pacific Standard Time)
// add a time component
new Date('2020-02-02T00:00') // Sun Feb 02 2020 00:00:00 GMT-0800 (Pacific Standard Time)
// use the (year, month, day) constructor - month is 0-indexed (another source of possible bugs, yay!)
new Date(2020, 1, 2) // Sun Feb 02 2020 00:00:00 GMT-0800 (Pacific Standard Time)
// bad
// date-only string
new Date('2020-02-02') // Sat Feb 01 2020 16:00:00 GMT-0800 (Pacific Standard Time)
// using the static parse method with a date-only string
new Date(Date.parse('2020-02-02')) // Sat Feb 01 2020 16:00:00 GMT-0800 (Pacific Standard Time)
// using the static UTC method
new Date(Date.UTC(2020, 1, 2)) // Sat Feb 01 2020 16:00:00 GMT-0800 (Pacific Standard Time)
```
### Date picker
The `GlDatepicker` component returns a `Date` object at midnight local time.
This can cause issues in time zones ahead of UTC, for example with GraphQL mutations.
For example, in UTC+8:
1. You select `2020-02-02` in the date picker.
1. The `Date` object returned is `Sun Feb 02 2020 00:00:00 GMT+0800 (China Standard Time)` local time.
1. When sent to GraphQL, it's converted to the UTC string `2020-02-01T16:00:00.000Z`, which is a day behind.
To preserve the date, use `toISODateFormat` to convert the `Date` object to a date-only string:
```javascript
const dateString = toISODateFormat(dateObject); // "2020-02-02"
```
## Testing
### Manual testing
When performing manual testing of dates, such as when reviewing merge requests, test with time zones behind and ahead of UTC, such as UTC-8, UTC, and UTC+8, to spot potential bugs.
To change the time zone on macOS:
1. Go to **System Settings > General > Date & Time**.
1. Clear the **Set time zone automatically using your current location** checkbox.
1. Change **Closest city** to a city in another time zone, such as Sacramento, London, or Beijing.
### Jest
Our Jest tests are run with a mocked date of 2020-07-06 for determinism, which can be overridden using the `useFakeDate` function.
The logic for this is in `spec/frontend/__helpers__/fake_date/fake_date.js`.
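For instance, a spec that needs a different "today" might override it like this (a sketch, assuming the helper is imported through the `helpers/fake_date` alias and takes the same year, month-index, and day arguments as the `Date` constructor):

```javascript
// Illustrative sketch of overriding the mocked date in a Jest suite.
import { useFakeDate } from 'helpers/fake_date';

describe('due date badge', () => {
  // Every test in this block now runs as if "today" were 2021-01-15.
  useFakeDate(2021, 0, 15);

  it('treats earlier dates as being in the past', () => {
    expect(new Date() > new Date('2021-01-01')).toBe(true);
  });
});
```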
|
https://docs.gitlab.com/development/performance
|
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/development/performance.md
|
2025-08-13
|
doc/development/fe_guide
|
[
"doc",
"development",
"fe_guide"
] |
performance.md
|
none
|
unassigned
|
Any user with at least the Maintainer role can merge updates to this content. For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
|
Performance
| null |
Performance is essential, and it is one of the main areas of concern for any modern application.
## Monitoring
We have a performance dashboard available in one of our [Grafana instances](https://dashboards.gitlab.net/d/000000043/sitespeed-page-summary?orgId=1). This dashboard automatically aggregates metric data from [sitespeed.io](https://www.sitespeed.io/) every 4 hours. These changes are displayed after a set number of pages are aggregated.
These pages are listed in text files in the [`gitlab` directory](https://gitlab.com/gitlab-org/frontend/sitespeed-measurement-setup/-/tree/master/gitlab) of the [`sitespeed-measurement-setup` repository](https://gitlab.com/gitlab-org/frontend/sitespeed-measurement-setup).
Any frontend engineer can contribute to this dashboard by adding or removing page URLs in the text files. The changes are pushed live on the next scheduled run after the changes are merged into `main`.
There are three recommended high-impact metrics (Core Web Vitals) to review on each page:
- [Largest Contentful Paint](https://web.dev/articles/lcp)
- [First Input Delay](https://web.dev/articles/fid/)
- [Cumulative Layout Shift](https://web.dev/articles/cls)
For these metrics, lower numbers are better, as they mean that the website is more performant.
## User Timing API
[User Timing API](https://developer.mozilla.org/en-US/docs/Web/API/Performance_API/User_timing) is a web API
[available in all modern browsers](https://caniuse.com/?search=User%20timing). It allows measuring
custom times and durations in your applications by placing special marks in your
code. You can use the User Timing API in GitLab to measure any timing, regardless of the framework,
including Rails, Vue, or vanilla JavaScript environments. For consistency and
convenience of adoption, GitLab offers several ways to enable custom user timing metrics in
your code.
User Timing API introduces two important paradigms: `mark` and `measure`.
**Mark** is the timestamp on the performance timeline. For example,
`performance.mark('my-component-start');` makes a browser note the time this code
is met. Then, you can obtain information about this mark by querying the global
performance object again. For example, in your DevTools console:
```javascript
performance.getEntriesByName('my-component-start')
```
**Measure** is the duration between either:
- Two marks
- The start of navigation and a mark
- The start of navigation and the moment the measurement is taken
It takes several arguments of which the measurement's name is the only one required. Examples:
- Duration between the start and end marks:
```javascript
performance.measure('My component', 'my-component-start', 'my-component-end')
```
- Duration between a mark and the moment the measurement is taken. The end mark is omitted in
this case.
```javascript
performance.measure('My component', 'my-component-start')
```
- Duration between [the navigation start](https://developer.mozilla.org/en-US/docs/Web/API/Performance/timeOrigin)
and the moment the actual measurement is taken.
```javascript
performance.measure('My component')
```
- Duration between [the navigation start](https://developer.mozilla.org/en-US/docs/Web/API/Performance/timeOrigin)
and a mark. You cannot omit the start mark in this case but you can set it to `undefined`.
```javascript
performance.measure('My component', undefined, 'my-component-end')
```
To query a particular `measure`, you can use the same API as for a `mark`:
```javascript
performance.getEntriesByName('My component')
```
You can also query for all captured marks and measurements:
```javascript
performance.getEntriesByType('mark');
performance.getEntriesByType('measure');
```
Using `getEntriesByName()` or `getEntriesByType()` returns an Array of
[the PerformanceMeasure objects](https://developer.mozilla.org/en-US/docs/Web/API/PerformanceMeasure)
which contain information about the measurement's start time and duration.
### User Timing API utility
You can use the `performanceMarkAndMeasure` utility anywhere in GitLab, as it's not tied to any
particular environment.
`performanceMarkAndMeasure` takes an object as an argument, where:
| Attribute | Type | Required | Description |
|:------------|:---------|:---------|:----------------------|
| `mark` | `String` | no | The name for the mark to set. Used for retrieving the mark later. If not specified, the mark is not set. |
| `measures` | `Array` | no | The list of the measurements to take at this point. |
Each entry in the `measures` array is an object with the following API:
| Attribute | Type | Required | Description |
|:------------|:---------|:---------|:----------------------|
| `name` | `String` | yes | The name for the measurement. Used for retrieving the mark later. Must be specified for every measure object, otherwise JavaScript fails. |
| `start` | `String` | no | The name of a mark **from** which the measurement should be taken. |
| `end` | `String` | no | The name of a mark **to** which the measurement should be taken. |
Example:
```javascript
import { performanceMarkAndMeasure } from '~/performance/utils';
...
performanceMarkAndMeasure({
mark: MR_DIFFS_MARK_DIFF_FILES_END,
measures: [
{
name: MR_DIFFS_MEASURE_DIFF_FILES_DONE,
start: MR_DIFFS_MARK_DIFF_FILES_START,
end: MR_DIFFS_MARK_DIFF_FILES_END,
},
],
});
```
### Vue performance plugin
The plugin captures and measures the performance of the specified Vue components automatically
leveraging the Vue lifecycle and the User Timing API.
To use the Vue performance plugin:
1. Import the plugin:
```javascript
import PerformancePlugin from '~/performance/vue_performance_plugin';
```
1. Use it before initializing your Vue application:
```javascript
Vue.use(PerformancePlugin, {
components: [
'IdeTreeList',
'FileTree',
'RepoEditor',
]
});
```
The plugin accepts a list of components whose performance should be measured. Specify the
components by their `name` option.
You might need to explicitly set this option on the needed components, as
most components in the codebase don't have this option set:
```javascript
export default {
  name: 'IdeTreeList',
  components: {
    // ...
  },
  // ...
};
```
The plugin captures and stores the following:
- The start **mark** for when the component has been initialized (in `beforeCreate()` hook)
- The end **mark** of the component when it has been rendered (next animation frame after `nextTick`
in `mounted()` hook). In most cases, this event does not wait for all sub-components to be
bootstrapped. To measure the sub-components, you should include those into the
plugin options.
- **Measure** duration between the two marks above.
### Access stored measurements
To access stored measurements, you can use either:
- **Performance bar**. If you have it enabled (`P` + `B` key-combo), you can see the metrics
output in your DevTools console.
- **"Performance" tab** of the DevTools. You can get the measurements (not the marks, though) in
this tab when profiling performance.
- **DevTools console**. As mentioned above, you can query for the entries:
```javascript
performance.getEntriesByType('mark');
performance.getEntriesByType('measure');
```
### Naming convention
All marks and measures should be instantiated with the constants from
`app/assets/javascripts/performance/constants.js`. When adding a new mark or measurement
label, follow this pattern.
{{< alert type="note" >}}
This pattern is a recommendation and not a hard rule.
{{< /alert >}}
```javascript
app-*-start // for a start 'mark'
app-*-end // for an end 'mark'
app-* // for 'measure'
```
For example, `webide-init-editor-start`, `mr-diffs-mark-file-tree-end`, and so on. We do this to
help identify marks and measures coming from the different apps on the same page.
## Best Practices
### Real-time Components
When writing code for real-time features we have to keep a couple of things in mind:
1. Do not overload the server with requests.
1. It should feel real-time.
Thus, we must strike a balance between sending requests and the feeling of real-time.
Use the following rules when creating real-time solutions. A minimal polling sketch follows the list.
<!-- vale gitlab_base.Spelling = NO -->
1. The server tells you how much to poll by sending `Poll-Interval` in the header.
Use that as your polling interval. This enables system administrators to change the
[polling rate](../../administration/polling.md).
A `Poll-Interval: -1` means you should disable polling, and this must be implemented.
1. A response with HTTP status different from 2XX should disable polling as well.
1. Use a common library for polling.
1. Poll on active tabs only. Use [Visibility](https://github.com/ai/visibilityjs).
1. Use regular polling intervals, do not use backoff polling or jitter, as the interval is
controlled by the server.
1. The backend code is likely to be using ETags. You do not and should not check for status
`304 Not Modified`. The browser transforms it for you.
<!-- vale gitlab_base.Spelling = YES -->
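The following is a minimal sketch illustrating these rules, assuming axios and the Visibility library mentioned above. In real code, prefer the existing shared polling utility; the endpoint and `onData` callback are illustrative.
```javascript
import axios from 'axios';
import Visibility from 'visibilityjs';

// Poll an endpoint, honoring the server-provided `Poll-Interval` header.
// Any non-2xx response rejects the promise, so polling simply stops.
const poll = (endpoint, onData) => {
  axios
    .get(endpoint)
    .then(({ headers, data }) => {
      const interval = parseInt(headers['poll-interval'], 10);

      // A missing header or `Poll-Interval: -1` means polling must stop.
      if (Number.isNaN(interval) || interval < 0) return;

      onData(data);

      // Only reschedule while the tab is visible; a real implementation
      // would also resume polling when the tab becomes visible again.
      if (!Visibility.hidden()) {
        setTimeout(() => poll(endpoint, onData), interval);
      }
    })
    .catch(() => {
      // Non-2xx responses disable polling.
    });
};
```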
### Lazy Loading Images
To improve the time to first render we are using lazy loading for images. This works by setting
the actual image source on the `data-src` attribute. After the HTML is rendered and JavaScript is loaded,
the value of `data-src` is moved to `src` automatically if the image is in the current viewport.
- Prepare images in HTML for lazy loading by renaming the `src` attribute to `data-src` and adding the class `lazy`.
- If you are using the Rails `image_tag` helper, all images are lazy-loaded by default unless `lazy: false` is provided.
When asynchronously adding content which contains lazy images, call the function
`gl.lazyLoader.searchLazyImages()` which searches for lazy images and loads them if needed.
In general, it should be handled automatically through a `MutationObserver` in the lazy loading function.
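For example, after injecting markup that contains lazy images (the container and markup here are illustrative):
```javascript
// Inject content that contains lazy images, then ask the lazy loader to pick
// up the new `.lazy` images in case the MutationObserver did not catch them.
const container = document.querySelector('.js-description');

container.insertAdjacentHTML(
  'beforeend',
  '<img class="lazy" data-src="/uploads/screenshot.png" alt="Screenshot" />',
);

if (window.gl?.lazyLoader) {
  window.gl.lazyLoader.searchLazyImages();
}
```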
### Animations
Only animate `opacity` & `transform` properties. Other properties (such as `top`, `left`, `margin`, and `padding`) all cause
Layout to be recalculated, which is much more expensive. For details on this, see
[High Performance Animations](https://web.dev/articles/animations-guide).
If you _do_ need to change layout (for example, a sidebar that pushes main content over), prefer [FLIP](https://aerotwist.com/blog/flip-your-animations/). FLIP allows you to change expensive
properties once, and handle the actual animation with transforms.
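A minimal FLIP sketch (the element and class names are illustrative):
```javascript
// FLIP: First, Last, Invert, Play.
const el = document.querySelector('.js-sidebar');

const first = el.getBoundingClientRect(); // First: measure the initial position
el.classList.add('is-expanded'); // apply the expensive layout change once
const last = el.getBoundingClientRect(); // Last: measure the final position

const dx = first.left - last.left;

// Invert: visually move the element back to where it started, with no transition.
el.style.transition = 'none';
el.style.transform = `translateX(${dx}px)`;

// Force a reflow so the inverted transform is rendered before animating.
el.getBoundingClientRect();

// Play: animate the cheap `transform` property back to its final position.
el.style.transition = 'transform 200ms ease-in-out';
el.style.transform = '';
```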
### Prefetching assets
In addition to prefetching data from the [API](graphql.md#making-initial-queries-early-with-graphql-startup-calls)
we allow prefetching the named JavaScript "chunks" as
[defined in the Webpack configuration](https://gitlab.com/gitlab-org/gitlab/-/blob/master/config/webpack.config.js#L298-359).
We support two types of prefetching for the chunks:
- The [`prefetch` link type](https://developer.mozilla.org/en-US/docs/Web/HTML/Attributes/rel/prefetch)
is used to prefetch a chunk for the future navigation
- The [`preload` link type](https://developer.mozilla.org/en-US/docs/Web/HTML/Attributes/rel/preload)
is used to prefetch a chunk that is crucial for the current navigation but is not
discovered until later in the rendering process
Both `prefetch` and `preload` links improve loading performance. Both are
fetched asynchronously, but unlike [deferring the loading](https://developer.mozilla.org/en-US/docs/Web/HTML/Element/script#attr-defer)
of assets, which is used for other JavaScript resources in the product by default, `prefetch` and
`preload` neither parse nor execute the fetched script unless it is explicitly imported in a JavaScript
module. This allows the fetched resources to be cached without blocking the execution of the
remaining page resources.
To prefetch a JavaScript chunk in a HAML view, use `:prefetch_asset_tags` in combination with
the `webpack_preload_asset_tag` helper:
```haml
- content_for :prefetch_asset_tags do
- webpack_preload_asset_tag('monaco')
```
This snippet will add a new `<link rel="preload">` element into the resulting HTML page:
```HTML
<link rel="preload" href="/assets/webpack/monaco.chunk.js" as="script" type="text/javascript">
```
By default, `webpack_preload_asset_tag` preloads the chunk. You don't need to worry about
the `as` and `type` attributes when preloading JavaScript chunks. However, when a chunk is not
critical for the current navigation, you must explicitly request `prefetch`:
```haml
- content_for :prefetch_asset_tags do
- webpack_preload_asset_tag('monaco', prefetch: true)
```
This snippet will add a new `<link rel="prefetch">` element into the resulting HTML page:
```HTML
<link rel="prefetch" href="/assets/webpack/monaco.chunk.js">
```
## Reducing Asset Footprint
### Universal code
Code that is contained in `main.js` and `commons/index.js` is loaded and
run on _all_ pages. **Do not add** anything to these files unless it is truly
needed _everywhere_. These bundles include ubiquitous libraries like `vue`,
`axios`, and `jQuery`, as well as code for the main navigation and sidebar.
Where possible we should aim to remove modules from these bundles to reduce our
code footprint.
### Page-specific JavaScript
Webpack has been configured to automatically generate entry point bundles based
on the file structure in `app/assets/javascripts/pages/*`. The directories
in the `pages` directory correspond to Rails controllers and actions. These
auto-generated bundles are automatically included on the corresponding
pages.
For example, if you were to visit <https://gitlab.com/gitlab-org/gitlab/-/issues>,
you would be accessing the `app/controllers/projects/issues_controller.rb`
controller with the `index` action. If a corresponding file exists at
`pages/projects/issues/index/index.js`, it is compiled into a webpack
bundle and included on the page.
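For illustration, such an entry point might look like the following minimal sketch (the imported module and selector are hypothetical):
```javascript
// app/assets/javascripts/pages/projects/issues/index/index.js
// Keep the entry point small: import, read the DOM, instantiate.
import initIssuesList from './init_issues_list';

initIssuesList(document.querySelector('.js-issues-list'));
```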
Previously, GitLab encouraged the use of
`content_for :page_specific_javascripts` in HAML files, along with
manually generated webpack bundles. However, under this system you should
never need to manually add an entry point to the `webpack.config.js` file.
{{< alert type="note" >}}
When unsure what controller and action corresponds to a page,
inspect `document.body.dataset.page` in your
browser's developer console from any page in GitLab.
{{< /alert >}}
TROUBLESHOOTING:
If using Vite, keep in mind that support for it is new and you may encounter unexpected effects from time to
time. If the entrypoint is correctly configured but the JavaScript is not loading,
try clearing the Vite cache and restarting the service:
`rm -rf tmp/cache/vite && gdk restart vite`
Alternatively, you can opt to use Webpack instead. Follow these [instructions for disabling Vite and using Webpack](https://gitlab.com/gitlab-org/gitlab-development-kit/-/blob/main/doc/configuration.md#vite-settings).
#### Important Considerations
- **Keep Entry Points Light**:
  Page-specific JavaScript entry points should be as light as possible. These
files are exempt from unit tests, and should be used primarily for
instantiation and dependency injection of classes and methods that live in
modules outside of the entry point script. Just import, read the DOM,
instantiate, and nothing else.
- **`DOMContentLoaded` should not be used**:
All GitLab JavaScript files are added with the `defer` attribute.
According to the [Mozilla documentation](https://developer.mozilla.org/en-US/docs/Web/HTML/Element/script#attr-defer),
this implies that "the script is meant to be executed after the document has
been parsed, but before firing `DOMContentLoaded`". Because the document is already
parsed, `DOMContentLoaded` is not needed to bootstrap applications because all
the DOM nodes are already at our disposal.
- **Supporting Module Placement**:
- If a class or a module is _specific to a particular route_, try to locate
it close to the entry point in which it is used. For instance, if
`my_widget.js` is only imported in `pages/widget/show/index.js`, you
should place the module at `pages/widget/show/my_widget.js` and import it
with a relative path (for example, `import initMyWidget from './my_widget';`).
- If a class or module is _used by multiple routes_, place it in a
shared directory at the closest common parent directory for the entry
points that import it. For example, if `my_widget.js` is imported in
both `pages/widget/show/index.js` and `pages/widget/run/index.js`, then
place the module at `pages/widget/shared/my_widget.js` and import it with
a relative path if possible (for example, `../shared/my_widget`).
- **Enterprise Edition Caveats**:
For GitLab Enterprise Edition, page-specific entry points override their
Community Edition counterparts with the same name, so if
`ee/app/assets/javascripts/pages/foo/bar/index.js` exists, it takes
precedence over `app/assets/javascripts/pages/foo/bar/index.js`. If you want
to minimize duplicate code, you can import one entry point from the other.
This is not done automatically to allow for flexibility in overriding
  functionality. A minimal sketch of this pattern follows the list.
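A minimal sketch of re-using a CE entry point from its EE counterpart (the paths and helper name are hypothetical):
```javascript
// ee/app/assets/javascripts/pages/foo/bar/index.js
// Re-use the CE entry point, then bootstrap the EE-only behavior.
import '~/pages/foo/bar/index';
import initFooBarEe from './init_foo_bar_ee';

initFooBarEe();
```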
### Code Splitting
Code that does not need to be run immediately upon page load (for example,
modals, dropdowns, and other behaviors that can be lazy-loaded) should be split
into asynchronous chunks with dynamic import statements. These
imports return a Promise which is resolved after the script has loaded:
```javascript
import(/* webpackChunkName: 'emoji' */ '~/emoji')
.then(/* do something */)
.catch(/* report error */)
```
Use `webpackChunkName` when generating dynamic imports as
it provides a deterministic filename for the chunk which can then be cached
in the browser across GitLab versions.
More information is available in the [webpack code splitting documentation](https://webpack.js.org/guides/code-splitting/#dynamic-imports) and the [Vue dynamic component documentation](https://v2.vuejs.org/v2/guide/components-dynamic-async.html).
### Minimizing page size
A smaller page size means the page loads faster, especially on mobile
and poor connections. The page is parsed more quickly by the browser, and less
data is used for users with capped data plans.
General tips:
- Don't add new fonts.
- Prefer font formats with better compression, for example, WOFF2 is better than WOFF, which is better than TTF.
- Compress and minify assets wherever possible (For CSS/JS, Sprockets and webpack do this for us).
- If some functionality can reasonably be achieved without adding extra libraries, avoid them.
- Use page-specific JavaScript as described above to load libraries that are only needed on certain pages.
- Use code-splitting dynamic imports wherever possible to lazy-load code that is not needed initially.
- [High Performance Animations](https://web.dev/articles/animations-guide)
---
## Additional Resources
- [WebPage Test](https://www.webpagetest.org) for testing site loading time and size.
- [Google PageSpeed Insights](https://pagespeed.web.dev/) grades web pages and provides feedback to improve the page.
- [Profiling with Chrome DevTools](https://developer.chrome.com/docs/devtools/)
- [Browser Diet](https://github.com/zenorocha/browser-diet) was a community-built guide that cataloged practical tips for improving web page performance.
---
stage: none
group: unassigned
info: Any user with at least the Maintainer role can merge updates to this content.
  For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: State management guidance
breadcrumbs:
- doc
- development
- fe_guide
---
At GitLab we support two solutions for client state management: [Apollo](https://www.apollographql.com/) and [Pinia](https://pinia.vuejs.org/).
It is non-trivial to pick either of these as your primary state manager.
This page should provide you with general guidance on how to make this choice.
You may also see Vuex in the GitLab codebase. [Vuex is deprecated in GitLab](vuex.md#deprecated) and **no new Vuex stores should be created**.
If your app has a Vuex store, [consider migrating](migrating_from_vuex.md).
## Difference between state and data
**Data** is information that the user interacts with.
It usually comes from external requests (GraphQL or REST) or from the page itself.
**State** stores information about user or system interactions.
For example, any flag is considered state: `isLoading`, `isFormVisible`, and so on.
State management can be used to work with both state and data.
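For example (illustrative only):
```javascript
// `issues` is data: content fetched from the API that the user interacts with.
// `isLoading` and `isFormVisible` are state: they describe the interaction itself.
const data = { issues: [] };
const state = { isLoading: false, isFormVisible: false };
```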
## Do I need to have state management?
You should prefer using the standard Vue data flow in your application first:
components define local state and pass it down through props and change it through events.
However, this might not be sufficient for complex cases where state is shared between multiple components
that are not direct descendants of the component that defined this state.
You might consider hoisting that state to the root of your application, but that eventually
bloats the root component because it starts to do too many things at once.
To deal with that complexity, you can use a state management solution.
The following sections help you make this choice.
If you're still uncertain, prefer Apollo over Pinia.
## Apollo
[Apollo](https://www.apollographql.com/), our primary interface to GraphQL API, can also be used as a client-side state manager.
[Learn more about GraphQL and Apollo](graphql.md).
### Strengths
- Great for working with data from GraphQL requests,
provides [data normalization](https://www.apollographql.com/docs/react/caching/overview#data-normalization) out of the box.
- Can cache data from REST API when GraphQL is not available.
- Queries are statically verified against the GraphQL schema.
### Weaknesses
- [More complex and involved than Pinia for client state management](https://www.apollographql.com/docs/react/local-state/managing-state-with-field-policies).
- Apollo DevTools don't work properly on a significant portion of our pages, and Apollo Client errors are hard to track down.
### Pick Apollo when
- You rely on the GraphQL API
- You need specific Apollo features, for example:
- [Parametrized cache, cache invalidation](graphql.md#immutability-and-cache-updates)
- [Polling](graphql.md#polling-and-performance)
- [Stale While Revalidate](https://www.apollographql.com/docs/react/caching/advanced-topics#persisting-the-cache)
- [Real-time updates](graphql.md#subscriptions)
- [Other](https://www.apollographql.com/docs/react/)
## Pinia
[Pinia](https://pinia.vuejs.org/) is the client-side state management tool Vue recommends.
[Learn more about Pinia at GitLab](pinia.md).
### Strengths
- Simple but robust
- Lightweight at ≈1.5kb (as quoted by the Pinia site)
- Vue reactivity under the hood, API similar to Vuex
- Easy to debug
### Weaknesses
- Can't do any advanced request handling out of the box (data normalization, polling, caching, etc.)
- Can lead to the same pitfalls as Vuex without guidance (bloated stores)
### Pick Pinia when you have any of these
- Significant percentage of Vue application state is client-side state
- Migrating from Vuex is a high priority
- Your application does not rely primarily on GraphQL API, and you don't plan the migration to GraphQL API in the near future
## Combining Pinia and Apollo
We recommend you pick either Apollo or Pinia as the only state manager in your app.
Combining them is not recommended because:
- Pinia and Apollo are both global stores, which means sharing responsibilities and having two sources of truth.
- Difference in mental models: Apollo is configuration based, Pinia is not. Switching between these mental models is tedious and error-prone.
- Experiencing the drawbacks of both approaches.
However, there may be cases where it's OK to combine the two to get specific benefits from both solutions:
- If there's a significant percentage of client-side state that would be best managed in Pinia.
- If domain-specific concerns warrant Apollo for cohesive GraphQL requests within a component.
If you have to use both Apollo and Pinia, follow these rules (a minimal sketch follows the list):
- **Never use Apollo Client in Pinia stores**. Apollo Client should only be consumed within a Vue component or a [composable](vue.md#composables).
- Do not sync data between Apollo and Pinia.
- You should have only one source of truth for your requests.
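A minimal sketch of that separation, assuming Vue Apollo's `apollo` option and Pinia's `defineStore`; the query, store, and component names are hypothetical:
```javascript
import { defineStore } from 'pinia';
import widgetQuery from './graphql/widget.query.graphql';

// Client-side UI state lives in Pinia; no Apollo Client calls in the store.
export const useUiStore = defineStore('widgetUi', {
  state: () => ({ isSidebarOpen: false }),
});

// Server data is requested through Apollo from the component, and the two
// sources of truth are never synced with each other.
export default {
  name: 'WidgetPanel',
  apollo: {
    widget: {
      query: widgetQuery,
    },
  },
  computed: {
    uiStore() {
      return useUiStore();
    },
  },
};
```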
### Add Apollo to an existing app with Pinia
You can have Apollo data management in your components alongside existing Pinia state when both of the following are true:
- You need to work with data coming from GraphQL.
- You can't migrate from Pinia to Apollo because of the high migration effort.
Don't try to manage client state (not to be confused with GraphQL or REST data) with Apollo and Pinia at the same time;
instead, consider migrating from Pinia to Apollo if you need this.
Don't use Apollo inside Pinia stores.
### Add Pinia to an existing app with Apollo
Strongly consider [using Apollo for client-side state management](graphql.md#local-state-with-apollo) first. However, if all of the
following are true, Apollo might not be the best tool for managing this client-side state:
- The footprint of the client-side state is significant enough that Apollo's complexities carry a high implementation cost.
- The client-side state can be cleanly decoupled from the Apollo-managed GraphQL API data.
### Vuex used alongside Apollo
[Vuex is deprecated in GitLab](vuex.md#deprecated), use the guidance above to pick either Apollo or Pinia as your primary state manager.
Follow the corresponding migration guide: [Apollo](migrating_from_vuex.md) or [Pinia](pinia.md#migrating-from-vuex).
Do not add new Pinia stores on top of the existing Vuex store, migrate first.
---
stage: none
group: unassigned
info: Any user with at least the Maintainer role can merge updates to this content.
  For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Vue
breadcrumbs:
- doc
- development
- fe_guide
---
To get started with Vue, read through [their documentation](https://v2.vuejs.org/v2/guide/index.html).
## Examples
What is described in the following sections can be found in these examples:
- [Security products](https://gitlab.com/gitlab-org/gitlab/-/tree/master/ee/app/assets/javascripts/vue_shared/security_reports)
- [Registry](https://gitlab.com/gitlab-org/gitlab-foss/tree/master/app/assets/javascripts/registry/stores)
## When to add Vue application
Sometimes, a HAML page is enough to satisfy requirements. This is true primarily for static pages or pages with very little logic. How do we know it's worth adding a Vue application to a page? The answer is "when we need to maintain application state and synchronize the rendered page with it".
To better explain this, let's imagine a page that has one toggle, and toggling it sends an API request. This case does not involve any state we want to maintain: we send the request and switch the toggle. However, if we add one more toggle that should always be the opposite of the first one, we need _state_: one toggle should be "aware" of the state of the other. When written in plain JavaScript, this logic usually involves listening to DOM events and reacting by modifying the DOM. Cases like this are much easier to handle with Vue.js, so we should create a Vue application here.
## How to add a Vue application to a page
1. Create a new folder in `app/assets/javascripts` for your Vue application.
1. Add [page-specific JavaScript](performance.md#page-specific-javascript) to load your application.
1. You can use the [`initSimpleApp` helper](#the-initsimpleapp-helper) to simplify [passing data from HAML to JS](#providing-data-from-haml-to-javascript).
### What are some flags signaling that you might need Vue application?
- when you need to define complex conditionals based on multiple factors and update them on user interaction;
- when you have to maintain any form of application state and share it between tags/elements;
- when you expect complex logic to be added in the future; it's easier to start with a basic Vue application than to rewrite JS/HAML to Vue later.
## Avoid multiple Vue applications on the page
In the past, we added interactivity to the page piece-by-piece, adding multiple small Vue applications to different parts of the rendered HAML page. However, this approach led us to multiple complications:
- in most cases, these applications don't share state and perform API requests independently, which increases the number of requests;
- we have to provide data from Rails to Vue using multiple endpoints;
- we cannot render Vue applications dynamically after page load, so the page structure becomes rigid;
- we cannot fully leverage client-side routing to replace Rails routing;
- multiple applications lead to an unpredictable user experience, increased page complexity, and a harder debugging process;
- the way apps communicate with each other affects Web Vitals numbers.
For these reasons, we want to be cautious about adding new Vue applications to pages where another Vue application is already present (this does not include old or new navigation). Before adding a new app, make sure that it is impossible to extend an existing application to achieve the desired functionality. When in doubt, feel free to ask for architectural advice in the `#frontend` or `#frontend-maintainers` Slack channels.
If you still need to add a new application, make sure it shares local state with existing applications.
Learn: [How do I know which state manager to use?](state_management.md)
## Vue architecture
The main goal we are trying to achieve with Vue architecture is to have only one data flow and only one data entry point.
To achieve this goal, we use [Pinia](pinia.md) or [Apollo Client](graphql.md#libraries).
You can also read about this architecture in Vue documentation about
[state management](https://v2.vuejs.org/v2/guide/state-management.html#Simple-State-Management-from-Scratch)
and about [one way data flow](https://v2.vuejs.org/v2/guide/components-props.html#One-Way-Data-Flow).
### Components and Store
In some features implemented with Vue.js, like the [issue board](https://gitlab.com/gitlab-org/gitlab-foss/tree/master/app/assets/javascripts/boards)
or [environments table](https://gitlab.com/gitlab-org/gitlab-foss/tree/master/app/assets/javascripts/environments)
you can find a clear separation of concerns:
```plaintext
new_feature
├── components
│ └── component.vue
│ └── ...
├── store
│ └── new_feature_store.js
├── index.js
```
_For consistency, we recommend you follow the same structure._
Let's look into each of them:
### An `index.js` file
This file is the index file of your new feature. The root Vue instance
of the new feature should be here.
The Store and the Service should be imported and initialized in this file and
provided as a prop to the main component.
Be sure to read about [page-specific JavaScript](performance.md#page-specific-javascript).
### Bootstrapping Gotchas
#### Providing data from HAML to JavaScript
While mounting a Vue application, you might need to provide data from Rails to JavaScript.
To do that, you can use the `data` attributes in the HTML element and query them while mounting the application.
You should only do this while initializing the application, because the mounted element is replaced
with a Vue-generated DOM.
The `data` attributes are [only able to accept String values](https://developer.mozilla.org/en-US/docs/Learn/HTML/Howto/Use_data_attributes#javascript_access),
so you will need to cast or convert other variable types to String.
The advantage of providing data from the DOM to the Vue instance through `props` or
`provide` in the `render` function, instead of querying the DOM inside the main Vue
component, is that you avoid creating a fixture or an HTML element in the unit test.
##### The `initSimpleApp` helper
`initSimpleApp` is a helper function that streamlines the process of mounting a component in Vue.js. It accepts two arguments: a selector string representing the mount point in the HTML, and a Vue component.
To use `initSimpleApp`:
1. Include an HTML element in the page with an ID or unique class.
1. Add a `data-view-model` attribute containing a JSON object.
1. Import the desired Vue component, and pass it along with a valid CSS selector string
that selects the HTML element to `initSimpleApp`. This string mounts the component
at the specified location.
`initSimpleApp` automatically retrieves the content of the `data-view-model` attribute as a JSON object and passes it as props to the mounted Vue component. This can be used to pre-populate the component with data.
Example:
```vue
//my_component.vue
<template>
<div>
<p>Prop1: {{ prop1 }}</p>
<p>Prop2: {{ prop2 }}</p>
</div>
</template>
<script>
export default {
name: 'MyComponent',
props: {
prop1: {
type: String,
required: true
},
prop2: {
type: Number,
required: true
}
}
}
</script>
```
```html
<div id="js-my-element" data-view-model='{"prop1": "my object", "prop2": 42 }'></div>
```
```javascript
//index.js
import MyComponent from './my_component.vue'
import { initSimpleApp } from '~/helpers/init_simple_app_helper'
initSimpleApp('#js-my-element', MyComponent, { name: 'MyAppRoot' })
```
###### Passing values as `provide`/`inject` instead of props
To use `initSimpleApp` to pass values as `provide`/`inject` instead of props:
1. Include an HTML element in the page with an ID or unique class.
1. Add a `data-provide` attribute containing a JSON object.
1. Import the desired Vue component, and pass it along with a valid CSS selector string
that selects the HTML element to `initSimpleApp`. This string mounts the component
at the specified location.
`initSimpleApp` automatically retrieves the content of the `data-provide` attribute as a JSON object and provides its values to the mounted Vue component through `inject`. This can be used to pre-populate the component with data.
Example:
```vue
//my_component.vue
<template>
<div>
<p>Inject1: {{ inject1 }}</p>
<p>Inject2: {{ inject2 }}</p>
</div>
</template>
<script>
export default {
name: 'MyComponent',
inject: {
inject1: {
default: '',
},
inject2: {
default: 0
}
},
}
</script>
```
```html
<div id="js-my-element" data-provide='{"inject1": "my object", "inject2": 42 }'></div>
```
```javascript
//index.js
import MyComponent from './my_component.vue'
import { initSimpleApp } from '~/helpers/init_simple_app_helper'
initSimpleApp('#js-my-element', MyComponent, { name: 'MyAppRoot' })
```
##### `provide` and `inject`
Vue supports dependency injection through [`provide` and `inject`](https://v2.vuejs.org/v2/api/#provide-inject).
In the component the `inject` configuration accesses the values `provide` passes down.
This example of a Vue app initialization shows how the `provide` configuration passes a value from HAML to the component:
```javascript
#js-vue-app{ data: { endpoint: 'foo' }}
// index.js
const el = document.getElementById('js-vue-app');
if (!el) return false;
const { endpoint } = el.dataset;
return new Vue({
el,
name: 'MyComponentRoot',
render(createElement) {
return createElement('my-component', {
provide: {
endpoint
},
});
},
});
```
The component, or any of its child components, can access the property through `inject` as:
```vue
<script>
export default {
name: 'MyComponent',
inject: ['endpoint'],
...
...
};
</script>
<template>
...
...
</template>
```
Using dependency injection to provide values from HAML is ideal when:
- The injected value doesn't need an explicit validation against its data type or contents.
- The value doesn't need to be reactive.
- Multiple components exist in the hierarchy that need access to this value where
  prop-drilling becomes an inconvenience. Prop-drilling is when the same prop is passed
  through every component in the hierarchy until it reaches the one that genuinely uses it.
Dependency injection can potentially break a child component (either an immediate child or multiple levels deep) if both conditions are true:
- The value declared in the `inject` configuration doesn't have defaults defined.
- The parent component has not provided the value using the `provide` configuration.
A [default value](https://vuejs.org/guide/components/provide-inject.html#injection-default-values) might be useful in contexts where it makes sense.
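For illustration, a minimal sketch of a descendant component that declares a default for a hypothetical `endpoint` injection, so it keeps working even if no ancestor provides the value:

```javascript
// descendant_component.js (a sketch; `endpoint` is a hypothetical injected key)
export default {
  name: 'DescendantComponent',
  inject: {
    endpoint: {
      // The default keeps the component functional when no ancestor calls `provide`.
      default: '',
    },
  },
};
```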
##### props
If the value from HAML doesn't fit the criteria of dependency injection, use `props`.
See the following example.
```javascript
// haml
#js-vue-app{ data: { endpoint: 'foo' }}
// index.js
const el = document.getElementById('js-vue-app');
if (!el) return false;
const { endpoint } = el.dataset;
return new Vue({
el,
name: 'MyComponentRoot',
render(createElement) {
return createElement('my-component', {
props: {
endpoint
},
});
},
});
```
{{< alert type="note" >}}
When adding an `id` attribute to mount a Vue application, make sure this `id` is unique
across the codebase.
{{< /alert >}}
For more information on why we explicitly declare the data being passed into the Vue app,
refer to our [Vue style guide](style/vue.md#basic-rules).
#### Providing Rails form fields to Vue applications
When composing a form with Rails, the `name`, `id`, and `value` attributes of form inputs are generated
to match the backend. It can be helpful to have access to these generated attributes when converting
a Rails form to Vue, or when [integrating components](https://gitlab.com/gitlab-org/gitlab/-/blob/8956ad767d522f37a96e03840595c767de030968/app/assets/javascripts/access_tokens/index.js#L15) (such as a date picker or project selector) into it.
The [`parseRailsFormFields`](https://gitlab.com/gitlab-org/gitlab/-/blob/fe88797f682c7ff0b13f2c2223a3ff45ada751c1/app/assets/javascripts/lib/utils/forms.js#L107) utility function can be used to parse the generated form input attributes so they can be passed to the Vue application.
This enables us to integrate Vue components without changing how the form submits.
```ruby
-# form.html.haml
= form_for user do |form|
.js-user-form
= form.text_field :name, class: 'form-control gl-form-input', data: { js_name: 'name' }
= form.text_field :email, class: 'form-control gl-form-input', data: { js_name: 'email' }
```
The `js_name` data attribute is used as the key in the resulting JavaScript object.
For example, `= form.text_field :email, data: { js_name: 'fooBarBaz' }` would be translated
to `{ fooBarBaz: { name: 'user[email]', id: 'user_email', value: '' } }`.
```javascript
// index.js
import Vue from 'vue';
import { parseRailsFormFields } from '~/lib/utils/forms';
import UserForm from './components/user_form.vue';
export const initUserForm = () => {
const el = document.querySelector('.js-user-form');
if (!el) {
return null;
}
const fields = parseRailsFormFields(el);
return new Vue({
el,
name: 'UserFormRoot',
render(h) {
return h(UserForm, {
props: {
fields,
},
});
},
});
};
```
```vue
<script>
// user_form.vue
import { GlButton, GlFormGroup, GlFormInput } from '@gitlab/ui';
export default {
name: 'UserForm',
components: { GlButton, GlFormGroup, GlFormInput },
props: {
fields: {
type: Object,
required: true,
},
},
};
</script>
<template>
<div>
<gl-form-group :label-for="fields.name.id" :label="__('Name')">
<gl-form-input v-bind="fields.name" width="lg" />
</gl-form-group>
<gl-form-group :label-for="fields.email.id" :label="__('Email')">
<gl-form-input v-bind="fields.email" type="email" width="lg" />
</gl-form-group>
<gl-button type="submit" category="primary" variant="confirm">{{ __('Update') }}</gl-button>
</div>
</template>
```
#### Accessing the `gl` object
We query the `gl` object for data that doesn't change during the application's life
cycle in the same place we query the DOM. By following this practice, we can
avoid mocking the `gl` object, which makes tests easier. It should be done while
initializing our Vue instance, and the data should be provided as `props` to the main component:
```javascript
return new Vue({
el: '.js-vue-app',
name: 'MyComponentRoot',
render(createElement) {
return createElement('my-component', {
props: {
avatarUrl: gl.avatarUrl,
},
});
},
});
```
#### Accessing abilities
After pushing an ability to the [frontend](../permissions/authorizations.md#frontend),
use the [`provide` and `inject`](https://v2.vuejs.org/v2/api/#provide-inject)
mechanisms in Vue to make abilities available to any descendant components
in a Vue application. The `glAbilities` object is already provided in
`commons/vue.js`, so only the mixin is required to use the abilities:
```javascript
// An arbitrary descendant component
import glAbilitiesMixin from '~/vue_shared/mixins/gl_abilities_mixin';
export default {
// ...
mixins: [glAbilitiesMixin()],
// ...
created() {
if (this.glAbilities.someAbility) {
// ...
}
},
}
```
#### Accessing feature flags
After pushing a feature flag to the [frontend](../feature_flags/_index.md#frontend),
use the [`provide` and `inject`](https://v2.vuejs.org/v2/api/#provide-inject)
mechanisms in Vue to make feature flags available to any descendant components
in a Vue application. The `glFeatures` object is already provided in
`commons/vue.js`, so only the mixin is required to use the flags:
```javascript
// An arbitrary descendant component
import glFeatureFlagsMixin from '~/vue_shared/mixins/gl_feature_flags_mixin';
export default {
// ...
mixins: [glFeatureFlagsMixin()],
// ...
created() {
if (this.glFeatures.myFlag) {
// ...
}
},
}
```
This approach has a few benefits:
- Arbitrarily deeply nested components can opt-in and access the flag without
intermediate components being aware of it (c.f. passing the flag down via
props).
- Good testability, because the flag can be supplied to `mount`/`shallowMount` from `vue-test-utils` through the `provide` option.
```javascript
import { shallowMount } from '@vue/test-utils';
shallowMount(component, {
provide: {
glFeatures: { myFlag: true },
},
});
```
- Accessing a global variable is not required, except in the application's
[entry point](#accessing-the-gl-object).
#### Redirecting to page and displaying alerts
If you need to redirect to another page and display alerts, you can use the [`visitUrlWithAlerts`](https://gitlab.com/gitlab-org/gitlab/-/blob/7063dce68b8231442567707024b2f29e48ce2f64/app/assets/javascripts/lib/utils/url_utility.js#L731) utility function.
This can be useful when you're redirecting to a newly created resource and showing a success alert.
By default, the alerts are cleared when the page is reloaded. If you need an alert to persist on a page, you can set the
`persistOnPages` key to an array of Rails controller actions. To find the Rails controller action run `document.body.dataset.page` in your console.
Example:
```javascript
visitUrlWithAlerts('/dashboard/groups', [
{
id: 'resource-building-in-background',
message: 'Resource is being built in the background.',
variant: 'info',
persistOnPages: ['dashboard:groups:index'],
},
])
```
If you need to manually remove a persisted alert, you can use the [`removeGlobalAlertById`](https://gitlab.com/gitlab-org/gitlab/-/blob/7063dce68b8231442567707024b2f29e48ce2f64/app/assets/javascripts/lib/utils/global_alerts.js#L31) utility function.
If you need to programmatically dismiss an alert, you can use the [`dismissGlobalAlertById`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/assets/javascripts/lib/utils/global_alerts.js#L43) utility function.
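A minimal sketch, assuming both helpers accept the alert `id` used in the example above:

```javascript
import {
  removeGlobalAlertById,
  dismissGlobalAlertById,
} from '~/lib/utils/global_alerts';

// Stop persisting the alert that was created with `visitUrlWithAlerts`.
removeGlobalAlertById('resource-building-in-background');

// Dismiss the alert if it is currently displayed.
dismissGlobalAlertById('resource-building-in-background');
```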
### A folder for Components
This folder holds all components that are specific to this new feature.
To use or create a component that is likely to be used somewhere
else, refer to `vue_shared/components`.
A good guideline for deciding when to create a component is to consider whether
it could be reused elsewhere.
For example, tables are used in quite a few places across GitLab, so a table
would be a good fit for a component. On the other hand, a table cell used only
in one table would not be a good use of this pattern.
You can read more about components in Vue.js site, [Component System](https://v2.vuejs.org/v2/guide/#Composing-with-Components).
### Pinia
[Learn more about Pinia in GitLab](pinia.md).
### Vuex
[Vuex is deprecated](vuex.md#deprecated), consider [migrating](migrating_from_vuex.md).
### Vue Router
To add [Vue Router](https://router.vuejs.org/) to a page:
1. Add a catch-all route to the Rails route file using a wildcard named `*vueroute`:
```ruby
# example from ee/config/routes/project.rb
resources :iteration_cadences, path: 'cadences(/*vueroute)', action: :index
```
The above example serves the `index` page from `iteration_cadences` controller to any route
matching the start of the `path`, for example `groupname/projectname/-/cadences/123/456/`.
1. Pass the base route (everything before `*vueroute`) to the frontend to use as the `base` parameter to initialize Vue Router:
```haml
.js-my-app{ data: { base_path: project_iteration_cadences_path(project) } }
```
1. Initialize the router:
```javascript
Vue.use(VueRouter);
export function createRouter(basePath) {
return new VueRouter({
routes: createRoutes(),
mode: 'history',
base: basePath,
});
}
```
1. Add a fallback for unrecognized routes with `path: '*'`. Either:
- Add a redirect to the end of your routes array:
```javascript
const routes = [
{
path: '/',
name: 'list-page',
component: ListPage,
},
{
path: '*',
redirect: '/',
},
];
```
- Add a fallback component to the end of your routes array:
```javascript
const routes = [
{
path: '/',
name: 'list-page',
component: ListPage,
},
{
path: '*',
component: NotFound,
},
];
```
1. Optional. To also allow using the path helper for child routes, add `controller` and `action`
parameters to use the parent controller.
```ruby
resources :iteration_cadences, path: 'cadences(/*vueroute)', action: :index do
resources :iterations, only: [:index, :new, :edit, :show], constraints: { id: /\d+/ }, controller: :iteration_cadences, action: :index
end
```
This means routes like `/cadences/123/iterations/456/edit` can be validated on the backend,
for example to check group or project membership.
It also means we can use the `_path` helper, so we can load the page in feature specs
without manually building the `*vueroute` part of the path. A combined mounting sketch follows this list.
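Putting the steps above together, a minimal mounting sketch (the component and file names are hypothetical):

```javascript
// index.js (a sketch that combines the steps above)
import Vue from 'vue';
import MyApp from './components/my_app.vue'; // hypothetical root component
import { createRouter } from './router';

export const initMyApp = () => {
  const el = document.querySelector('.js-my-app');

  if (!el) {
    return null;
  }

  // `base_path` comes from the HAML data attribute shown earlier.
  const { basePath } = el.dataset;

  return new Vue({
    el,
    name: 'MyAppRoot',
    router: createRouter(basePath),
    render(createElement) {
      return createElement(MyApp);
    },
  });
};
```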
### Mixing Vue and jQuery
- Mixing Vue and jQuery is not recommended.
- To use a specific jQuery plugin in Vue, [create a wrapper around it](https://vuejs.org/v2/examples/select2.html).
- It is acceptable for Vue to listen to existing jQuery events using jQuery event listeners.
- It is not recommended to add new jQuery events for Vue to interact with jQuery.
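As an illustration of listening to an existing jQuery event from Vue, a minimal sketch (the `legacy:updated` event name is hypothetical):

```javascript
// legacy_event_listener.vue <script> section (a sketch)
import $ from 'jquery';

export default {
  name: 'LegacyEventListener',
  mounted() {
    // Listen to a jQuery event that other (legacy) code already fires.
    $(document).on('legacy:updated', this.onLegacyUpdated);
  },
  beforeDestroy() {
    // Always remove the jQuery listener when the component is destroyed.
    $(document).off('legacy:updated', this.onLegacyUpdated);
  },
  methods: {
    onLegacyUpdated() {
      // React to the jQuery event here.
    },
  },
};
```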
### Mixing Vue and JavaScript classes (in the data function)
In the [Vue documentation](https://v2.vuejs.org/v2/api/#Options-Data) the Data function/object is defined as follows:
> The data object for the Vue instance. Vue recursively converts its properties into getter/setters
to make it "reactive". The object must be plain: native objects such as browser API objects and
prototype properties are ignored. A guideline is that data should just be data - it is not
recommended to observe objects with their own stateful behavior.
Based on the Vue guidance:
- **Do not** use or create a JavaScript class in your [data function](https://v2.vuejs.org/v2/api/#data).
- **Do not** add new JavaScript class implementations.
- **Do** encapsulate complex state management with cohesive decoupled components or [a state manager](state_management.md).
- **Do** maintain existing implementations using such approaches.
- **Do** migrate components to a pure object model when there are substantial changes to them.
- **Do** move business logic to separate files, so you can test them separately from your component.
#### Why
Additional reasons why having a JavaScript class presents maintainability issues on a huge codebase:
- After a class is created, it can be extended in a way that interferes with Vue reactivity and violates best practices.
- A class adds a layer of abstraction, which makes the component API and its inner workings less clear.
- It makes it harder to test. Because the class is instantiated by the component data function, it is
harder to 'manage' component and class separately.
- Adding Object Oriented Principles (OOP) to a functional codebase adds another way of writing code, reducing consistency and clarity.
## Style guide
Refer to the Vue section of our [style guide](style/vue.md)
for best practices while writing and testing your Vue components and templates.
## Composition API
With Vue 2.7 it is possible to use [Composition API](https://vuejs.org/guide/introduction.html#api-styles) in Vue components and as standalone composables.
### Prefer `<script>` over `<script setup>`
Composition API allows you to place the logic in the `<script>` section of the component or to have a dedicated `<script setup>` section. We should use `<script>` and add Composition API to components using `setup()` property:
```html
<script>
import { computed } from 'vue';
export default {
name: 'MyComponent',
setup(props) {
  const doubleCount = computed(() => props.count * 2);

  return { doubleCount };
}
}
</script>
```
### `v-bind` limitations
Avoid using `v-bind="$attrs"` unless absolutely necessary. You might need this when
developing a native control wrapper. (This is a good candidate for a `gitlab-ui` component.)
In any other cases, always prefer using `props` and explicit data flow.
Using `v-bind="$attrs"` leads to:
1. A loss in the component's contract. The `props` were designed specifically
to address this problem.
1. High maintenance cost for each component in the tree. `v-bind="$attrs"` is specifically
hard to debug because you must scan the whole hierarchy of components to understand
the data flow.
1. Problems during migration to Vue 3. `$attrs` in Vue 3 include event listeners which
could cause unexpected side-effects after Vue 3 migration is completed.
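For illustration, a minimal sketch contrasting the discouraged pattern with an explicit contract (the component and prop names are hypothetical):

```javascript
// Discouraged: blindly forwarding everything with `v-bind="$attrs"` hides the component's contract.
// Preferred: declare the props the wrapper actually supports.
export default {
  name: 'NativeInputWrapper',
  props: {
    value: {
      type: String,
      required: true,
    },
    placeholder: {
      type: String,
      required: false,
      default: '',
    },
  },
};
```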
### Aim to have one API style per component
When adding `setup()` property to Vue component, consider refactoring it to Composition API entirely. It's not always feasible, especially for large components, but we should aim to have one API style per component for readability and maintainability.
### Composables
With the Composition API, we have a new way of abstracting logic, including reactive state, into _composables_. A composable is a function that can accept parameters and returns reactive properties and methods to be used in a Vue component.
```javascript
// use_count.js
import { ref } from 'vue';
export function useCount(initialValue) {
const count = ref(initialValue)
function incrementCount() {
count.value += 1
}
function decrementCount() {
count.value -= 1
}
return { count, incrementCount, decrementCount }
}
```
```javascript
// MyComponent.vue
import { useCount } from './use_count'
export default {
name: 'MyComponent',
setup() {
const { count, incrementCount, decrementCount } = useCount(5)
return { count, incrementCount, decrementCount }
}
}
```
#### Prefix function and filenames with `use`
The common naming convention in Vue for composables is to prefix them with `use` followed by a brief description of the composable's functionality (`useBreakpoints`, `useGeolocation`, and so on). The same rule applies to the `.js` files containing composables: they should start with `use_`, even if the file contains more than one composable.
#### Avoid lifecycle pitfalls
When building a composable, we should aim to keep it as simple as possible. Lifecycle hooks add complexity to composables and might lead to unexpected side effects. To avoid that we should follow these principles:
- Minimize lifecycle hooks usage whenever possible, prefer accepting/returning callbacks instead.
- If your composable needs lifecycle hooks, make sure it also performs a cleanup. If we add a listener on `onMounted`, we should remove it on `onUnmounted` within the same composable.
- Always set up lifecycle hooks immediately:
```javascript
// bad
const useAsyncLogic = () => {
const action = async () => {
await doSomething();
onMounted(doSomethingElse);
};
return { action };
};
// OK
const useAsyncLogic = () => {
const done = ref(false);
onMounted(() => {
watch(
done,
() => done.value && doSomethingElse(),
{ immediate: true },
);
});
const action = async () => {
await doSomething();
done.value = true;
};
return { action };
};
```
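To illustrate the cleanup principle above, a minimal sketch of a composable that removes on `onUnmounted` the listener it added on `onMounted`:

```javascript
// use_window_width.js (a sketch)
import { ref, onMounted, onUnmounted } from 'vue';

export function useWindowWidth() {
  const width = ref(window.innerWidth);

  const onResize = () => {
    width.value = window.innerWidth;
  };

  onMounted(() => window.addEventListener('resize', onResize));
  // The same composable cleans up the listener it registered.
  onUnmounted(() => window.removeEventListener('resize', onResize));

  return { width };
}
```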
#### Avoid escape hatches
It might be tempting to write a composable that does everything as a black box, using some of the escape hatches that Vue provides. But in most cases this makes composables too complex and hard to maintain. One escape hatch is the `getCurrentInstance` method, which returns the instance of the currently rendering component. Instead of using that method, you should prefer passing the data or methods down to a composable via arguments.
```javascript
const useSomeLogic = () => {
doSomeLogic();
getCurrentInstance().emit('done'); // bad
};
```
```javascript
const done = () => emit('done');
const useSomeLogic = (done) => {
doSomeLogic();
done(); // good, composable doesn't try to be too smart
}
```
#### Testing composables
<!-- TBD -->
## Testing Vue Components
Refer to the [Vue testing style guide](style/vue.md#vue-testing)
for guidelines and best practices for testing your Vue components.
Each Vue component has a unique output. This output is always present in the render function.
Although each method of a Vue component can be tested individually, our goal is to test the output
of the render function, which represents the state at all times.
Visit the [Vue testing guide](https://v2.vuejs.org/v2/guide/testing.html#Unit-Testing) for help.
Here's an example of a well structured unit test for [this Vue component](#appendix---vue-component-subject-under-test):
```javascript
import { GlLoadingIcon } from '@gitlab/ui';
import MockAdapter from 'axios-mock-adapter';
import { shallowMountExtended } from 'helpers/vue_test_utils_helper';
import axios from '~/lib/utils/axios_utils';
import App from '~/todos/app.vue';
const TEST_TODOS = [{ text: 'Lorem ipsum test text' }, { text: 'Lorem ipsum 2' }];
const TEST_NEW_TODO = 'New todo title';
const TEST_TODO_PATH = '/todos';
describe('~/todos/app.vue', () => {
let wrapper;
let mock;
beforeEach(() => {
// IMPORTANT: Use axios-mock-adapter for stubbing axios API requests
mock = new MockAdapter(axios);
mock.onGet(TEST_TODO_PATH).reply(200, TEST_TODOS);
mock.onPost(TEST_TODO_PATH).reply(200);
});
afterEach(() => {
// IMPORTANT: Clean up the axios mock adapter
mock.restore();
});
// It is very helpful to separate setting up the component from
// its collaborators (for example, Vuex and axios).
const createWrapper = (props = {}) => {
wrapper = shallowMountExtended(App, {
propsData: {
path: TEST_TODO_PATH,
...props,
},
});
};
// Helper methods greatly help test maintainability and readability.
const findLoader = () => wrapper.findComponent(GlLoadingIcon);
const findAddButton = () => wrapper.findByTestId('add-button');
const findTextInput = () => wrapper.findByTestId('text-input');
const findTodoData = () =>
wrapper
.findAllByTestId('todo-item')
.wrappers.map((item) => ({ text: item.text() }));
describe('when mounted and loading', () => {
beforeEach(() => {
// Create request which will never resolve
mock.onGet(TEST_TODO_PATH).reply(() => new Promise(() => {}));
createWrapper();
});
it('should render the loading state', () => {
expect(findLoader().exists()).toBe(true);
});
});
describe('when todos are loaded', () => {
beforeEach(() => {
createWrapper();
// IMPORTANT: This component fetches data asynchronously on mount, so let's wait for the Vue template to update
return wrapper.vm.$nextTick();
});
it('should not show loading', () => {
expect(findLoader().exists()).toBe(false);
});
it('should render todos', () => {
expect(findTodoData()).toEqual(TEST_TODOS);
});
it('when todo is added, should post new todo', async () => {
findTextInput().vm.$emit('update', TEST_NEW_TODO);
findAddButton().vm.$emit('click');
await wrapper.vm.$nextTick();
expect(mock.history.post.map((x) => JSON.parse(x.data))).toEqual([{ text: TEST_NEW_TODO }]);
});
});
});
```
### Child components
1. Test any directive that defines if/how child component is rendered (for example, `v-if` and `v-for`).
1. Test any props we are passing to child components (especially if the prop is calculated in the
component under test, with the `computed` property, for example). Remember to use `.props()` and not `.vm.someProp`.
1. Test we react correctly to any events emitted from child components:
```javascript
const checkbox = wrapper.findByTestId('checkboxTestId');
expect(checkbox.attributes('disabled')).not.toBeDefined();
findChildComponent().vm.$emit('primary');
await nextTick();
expect(checkbox.attributes('disabled')).toBeDefined();
```
1. **Do not** test the internal implementation of the child components:
```javascript
// bad
expect(findChildComponent().find('.error-alert').exists()).toBe(false);
// good
expect(findChildComponent().props('withAlertContainer')).toBe(false);
```
### Events
We should test for events emitted in response to an action in our component. This testing
verifies the correct events are being fired with the correct arguments.
For any native DOM events we should use [`trigger`](https://v1.test-utils.vuejs.org/api/wrapper/#trigger)
to fire our event.
```javascript
// Assuming SomeButton renders: <button>Some button</button>
wrapper = mount(SomeButton);
...
it('should fire the click event', () => {
const btn = wrapper.find('button')
btn.trigger('click');
...
})
```
When firing a Vue event, use [`emit`](https://v2.vuejs.org/v2/guide/components-custom-events.html).
```javascript
wrapper = shallowMount(DropdownItem);
...
it('should fire the itemClicked event', () => {
wrapper.vm.$emit('itemClicked');
...
})
```
We should verify an event has been fired by asserting against the result of the
[`emitted()`](https://v1.test-utils.vuejs.org/api/wrapper/#emitted) method.
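For example, a minimal assertion against the `wrapper` from the `DropdownItem` snippet above:

```javascript
expect(wrapper.emitted('itemClicked')).toHaveLength(1);
```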
It is a good practice to prefer to use `vm.$emit` over `trigger` when emitting events from child components.
Using `trigger` on the component means we treat it as a white box: we assume that the root element of child component has a native `click` event. Also, some tests fail in Vue3 mode when using `trigger` on child components.
```javascript
const findButton = () => wrapper.findComponent(GlButton);
// bad
findButton().trigger('click');
// good
findButton().vm.$emit('click');
```
## Vue.js Expert Role
You should only apply to be a Vue.js expert when your own merge requests and your reviews show:
- Deep understanding of Vue reactivity
- Vue and [Pinia](pinia.md) code are structured according to both official and our guidelines
- Full understanding of testing Vue components and Pinia stores
- Knowledge about the existing Vue and Pinia applications and existing reusable components
## Vue 2 -> Vue 3 Migration
{{< history >}}
- This section is added temporarily to support the efforts to migrate the codebase from Vue 2.x to Vue 3.x
{{< /history >}}
We recommend minimizing the addition of certain features to the codebase to prevent increasing
the tech debt for the eventual migration:
- filters
- event buses
- functional templates
- `slot` attributes
You can find more details in [Migration to Vue 3](vue3_migration.md).
## Appendix - Vue component subject under test
This is the template for the example component which is tested in the
[Testing Vue components](#testing-vue-components) section:
```html
<template>
<div class="content">
<gl-loading-icon v-if="isLoading" />
<template v-else>
<div
v-for="todo in todos"
:key="todo.id"
:class="{ 'gl-strike': todo.isDone }"
data-testid="todo-item"
>{{ todo.text }}</div>
<footer class="gl-border-t-1 gl-mt-3 gl-pt-3">
<gl-form-input
type="text"
v-model="todoText"
data-testid="text-input"
/>
<gl-button
variant="confirm"
data-testid="add-button"
@click="addTodo"
>Add</gl-button>
</footer>
</template>
</div>
</template>
```
|
---
stage: none
group: unassigned
info: Any user with at least the Maintainer role can merge updates to this content.
For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Vue
breadcrumbs:
- doc
- development
- fe_guide
---
To get started with Vue, read through [their documentation](https://v2.vuejs.org/v2/guide/index.html).
## Examples
What is described in the following sections can be found in these examples:
- [Security products](https://gitlab.com/gitlab-org/gitlab/-/tree/master/ee/app/assets/javascripts/vue_shared/security_reports)
- [Registry](https://gitlab.com/gitlab-org/gitlab-foss/tree/master/app/assets/javascripts/registry/stores)
## When to add Vue application
Sometimes, HAML page is enough to satisfy requirements. This statement is correct primarily for the static pages or pages that have very little logic. How do we know it's worth adding a Vue application to the page? The answer is "when we need to maintain application state and synchronize the rendered page with it".
To better explain this, let's imagine the page that has one toggle, and toggling it sends an API request. This case does not involve any state we want to maintain, we send the request and switch the toggle. However, if we add one more toggle that should always be the opposite to the first one, we need a _state_: one toggle should be "aware" about the state of another one. When written in plain JavaScript, this logic usually involves listening to DOM event and reacting with modifying DOM. Cases like this are much easier to handle with Vue.js so we should create a Vue application here.
## How to add a Vue application to a page
1. Create a new folder in `app/assets/javascripts` for your Vue application.
1. Add [page-specific JavaScript](performance.md#page-specific-javascript) to load your application.
1. You can use the [`initSimpleApp helper](#the-initsimpleapp-helper) to simplify [passing data from HAML to JS](#providing-data-from-haml-to-javascript).
### What are some flags signaling that you might need Vue application?
- when you need to define complex conditionals based on multiple factors and update them on user interaction;
- when you have to maintain any form of application state and share it between tags/elements;
- when you expect complex logic to be added in the future - it's easier to start with basic Vue application than having to rewrite JS/HAML to Vue on the next step.
## Avoid multiple Vue applications on the page
In the past, we added interactivity to the page piece-by-piece, adding multiple small Vue applications to different parts of the rendered HAML page. However, this approach led us to multiple complications:
- in most cases, these applications don't share state and perform API requests independently which grows a number of requests;
- we have to provide data from Rails to Vue using multiple endpoints;
- we cannot render Vue applications dynamically after page load, so the page structure becomes rigid;
- we cannot fully leverage client-side routing to replace Rails routing;
- multiple applications lead to unpredictable user experience, increased page complexity, harder debugging process;
- the way apps communicate with each other affects Web Vitals numbers.
Because of these reasons, we want to be cautious about adding new Vue applications to the pages where another Vue application is already present (this does not include old or new navigation). Before adding a new app, make sure that it is absolutely impossible to extend an existing application to achieve a desired functionality. When in doubt, feel free to ask for the architectural advise on `#frontend` or `#frontend-maintainers` Slack channel.
If you still need to add a new application, make sure it shares local state with existing applications.
Learn: [How do I know which state manager to use?](state_management.md)
## Vue architecture
The main goal we are trying to achieve with Vue architecture is to have only one data flow, and only one data entry.
To achieve this goal we use [Pinia](pinia.md) or [Apollo Client](graphql.md#libraries)
You can also read about this architecture in Vue documentation about
[state management](https://v2.vuejs.org/v2/guide/state-management.html#Simple-State-Management-from-Scratch)
and about [one way data flow](https://v2.vuejs.org/v2/guide/components-props.html#One-Way-Data-Flow).
### Components and Store
In some features implemented with Vue.js, like the [issue board](https://gitlab.com/gitlab-org/gitlab-foss/tree/master/app/assets/javascripts/boards)
or [environments table](https://gitlab.com/gitlab-org/gitlab-foss/tree/master/app/assets/javascripts/environments)
you can find a clear separation of concerns:
```plaintext
new_feature
├── components
│ └── component.vue
│ └── ...
├── store
│ └── new_feature_store.js
├── index.js
```
_For consistency purposes, we recommend you to follow the same structure._
Let's look into each of them:
### An `index.js` file
This file is the index file of your new feature. The root Vue instance
of the new feature should be here.
The Store and the Service should be imported and initialized in this file and
provided as a prop to the main component.
Be sure to read about [page-specific JavaScript](performance.md#page-specific-javascript).
### Bootstrapping Gotchas
#### Providing data from HAML to JavaScript
While mounting a Vue application, you might need to provide data from Rails to JavaScript.
To do that, you can use the `data` attributes in the HTML element and query them while mounting the application.
You should only do this while initializing the application, because the mounted element is replaced
with a Vue-generated DOM.
The `data` attributes are [only able to accept String values](https://developer.mozilla.org/en-US/docs/Learn/HTML/Howto/Use_data_attributes#javascript_access),
so you will need to cast or convert other variable types to String.
The advantage of providing data from the DOM to the Vue instance through `props` or
`provide` in the `render` function, instead of querying the DOM inside the main Vue
component, is that you avoid creating a fixture or an HTML element in the unit test.
##### The `initSimpleApp` helper
`initSimpleApp` is a helper function that streamlines the process of mounting a component in Vue.js. It accepts two arguments: a selector string representing the mount point in the HTML, and a Vue component.
To use `initSimpleApp`:
1. Include an HTML element in the page with an ID or unique class.
1. Add a data-view-model attribute containing a JSON object.
1. Import the desired Vue component, and pass it along with a valid CSS selector string
that selects the HTML element to `initSimpleApp`. This string mounts the component
at the specified location.
`initSimpleApp` automatically retrieves the content of the data-view-model attribute as a JSON object and passes it as props to the mounted Vue component. This can be used to pre-populate the component with data.
Example:
```vue
//my_component.vue
<template>
<div>
<p>Prop1: {{ prop1 }}</p>
<p>Prop2: {{ prop2 }}</p>
</div>
</template>
<script>
export default {
name: 'MyComponent',
props: {
prop1: {
type: String,
required: true
},
prop2: {
type: Number,
required: true
}
}
}
</script>
```
```html
<div id="js-my-element" data-view-model='{"prop1": "my object", "prop2": 42 }'></div>
```
```javascript
//index.js
import MyComponent from './my_component.vue'
import { initSimpleApp } from '~/helpers/init_simple_app_helper'
initSimpleApp('#js-my-element', MyComponent, { name: 'MyAppRoot' })
```
###### Passing values as `provide`/`inject` instead of props
To use `initSimpleApp` to pass values as `provide`/`inject` instead of props:
1. Include an HTML element in the page with an ID or unique class.
1. Add a `data-provide` attribute containing a JSON object.
1. Import the desired Vue component, and pass it along with a valid CSS selector string
that selects the HTML element to `initSimpleApp`. This string mounts the component
at the specified location.
`initSimpleApp` automatically retrieves the content of the data-provide attribute as a JSON object and passes it as inject to the mounted Vue component. This can be used to pre-populate the component with data.
Example:
```vue
//my_component.vue
<template>
<div>
<p>Inject1: {{ inject1 }}</p>
<p>Inject2: {{ inject2 }}</p>
</div>
</template>
<script>
export default {
name: 'MyComponent',
inject: {
inject1: {
default: '',
},
inject2: {
default: 0
}
},
}
</script>
```
```html
<div id="js-my-element" data-provide='{"inject1": "my object", "inject2": 42 }'></div>
```
```javascript
//index.js
import MyComponent from './my_component.vue'
import { initSimpleApp } from '~/helpers/init_simple_app_helper'
initSimpleApp('#js-my-element', MyComponent, { name: 'MyAppRoot' })
```
##### `provide` and `inject`
Vue supports dependency injection through [`provide` and `inject`](https://v2.vuejs.org/v2/api/#provide-inject).
In the component the `inject` configuration accesses the values `provide` passes down.
This example of a Vue app initialization shows how the `provide` configuration passes a value from HAML to the component:
```javascript
#js-vue-app{ data: { endpoint: 'foo' }}
// index.js
const el = document.getElementById('js-vue-app');
if (!el) return false;
const { endpoint } = el.dataset;
return new Vue({
el,
name: 'MyComponentRoot',
render(createElement) {
return createElement('my-component', {
provide: {
endpoint
},
});
},
});
```
The component, or any of its child components, can access the property through `inject` as:
```vue
<script>
export default {
name: 'MyComponent',
inject: ['endpoint'],
...
...
};
</script>
<template>
...
...
</template>
```
Using dependency injection to provide values from HAML is ideal when:
- The injected value doesn't need an explicit validation against its data type or contents.
- The value doesn't need to be reactive.
- Multiple components exist in the hierarchy that need access to this value where
prop-drilling becomes an inconvenience. Prop-drilling when the same prop is passed
through all components in the hierarchy until the component that is genuinely using it.
Dependency injection can potentially break a child component (either an immediate child or multiple levels deep) if both conditions are true:
- The value declared in the `inject` configuration doesn't have defaults defined.
- The parent component has not provided the value using the `provide` configuration.
A [default value](https://vuejs.org/guide/components/provide-inject.html#injection-default-values) might be useful in contexts where it makes sense.
##### props
If the value from HAML doesn't fit the criteria of dependency injection, use `props`.
See the following example.
```javascript
// haml
#js-vue-app{ data: { endpoint: 'foo' }}
// index.js
const el = document.getElementById('js-vue-app');
if (!el) return false;
const { endpoint } = el.dataset;
return new Vue({
el,
name: 'MyComponentRoot',
render(createElement) {
return createElement('my-component', {
props: {
endpoint
},
});
},
});
```
{{< alert type="note" >}}
When adding an `id` attribute to mount a Vue application, make sure this `id` is unique
across the codebase.
{{< /alert >}}
For more information on why we explicitly declare the data being passed into the Vue app,
refer to our [Vue style guide](style/vue.md#basic-rules).
#### Providing Rails form fields to Vue applications
When composing a form with Rails, the `name`, `id`, and `value` attributes of form inputs are generated
to match the backend. It can be helpful to have access to these generated attributes when converting
a Rails form to Vue, or when [integrating components](https://gitlab.com/gitlab-org/gitlab/-/blob/8956ad767d522f37a96e03840595c767de030968/app/assets/javascripts/access_tokens/index.js#L15) (such as a date picker or project selector) into it.
The [`parseRailsFormFields`](https://gitlab.com/gitlab-org/gitlab/-/blob/fe88797f682c7ff0b13f2c2223a3ff45ada751c1/app/assets/javascripts/lib/utils/forms.js#L107) utility function can be used to parse the generated form input attributes so they can be passed to the Vue application.
This enables us to integrate Vue components without changing how the form submits.
```ruby
-# form.html.haml
= form_for user do |form|
.js-user-form
= form.text_field :name, class: 'form-control gl-form-input', data: { js_name: 'name' }
= form.text_field :email, class: 'form-control gl-form-input', data: { js_name: 'email' }
```
The `js_name` data attribute is used as the key in the resulting JavaScript object.
For example `= form.text_field :email, data: { js_name: 'fooBarBaz' }` would be translated
to `{ fooBarBaz: { name: 'user[email]', id: 'user_email', value: '' } }`
```javascript
// index.js
import Vue from 'vue';
import { parseRailsFormFields } from '~/lib/utils/forms';
import UserForm from './components/user_form.vue';
export const initUserForm = () => {
const el = document.querySelector('.js-user-form');
if (!el) {
return null;
}
const fields = parseRailsFormFields(el);
return new Vue({
el,
name: 'UserFormRoot',
render(h) {
return h(UserForm, {
props: {
fields,
},
});
},
});
};
```
```vue
<script>
// user_form.vue
import { GlButton, GlFormGroup, GlFormInput } from '@gitlab/ui';
export default {
name: 'UserForm',
components: { GlButton, GlFormGroup, GlFormInput },
props: {
fields: {
type: Object,
required: true,
},
},
};
</script>
<template>
<div>
<gl-form-group :label-for="fields.name.id" :label="__('Name')">
<gl-form-input v-bind="fields.name" width="lg" />
</gl-form-group>
<gl-form-group :label-for="fields.email.id" :label="__('Email')">
<gl-form-input v-bind="fields.email" type="email" width="lg" />
</gl-form-group>
<gl-button type="submit" category="primary" variant="confirm">{{ __('Update') }}</gl-button>
</div>
</template>
```
#### Accessing the `gl` object
We query the `gl` object for data that doesn't change during the application's life
cycle in the same place we query the DOM. By following this practice, we can
avoid mocking the `gl` object, which makes tests easier. It should be done while
initializing our Vue instance, and the data should be provided as `props` to the main component:
```javascript
return new Vue({
el: '.js-vue-app',
name: 'MyComponentRoot',
render(createElement) {
return createElement('my-component', {
props: {
avatarUrl: gl.avatarUrl,
},
});
},
});
```
#### Accessing abilities
After pushing an ability to the [frontend](../permissions/authorizations.md#frontend),
use the [`provide` and `inject`](https://v2.vuejs.org/v2/api/#provide-inject)
mechanisms in Vue to make abilities available to any descendant components
in a Vue application. The `glAbilties` object is already provided in
`commons/vue.js`, so only the mixin is required to use the flags:
```javascript
// An arbitrary descendant component
import glAbilitiesMixin from '~/vue_shared/mixins/gl_abilities_mixin';
export default {
// ...
mixins: [glAbilitiesMixin()],
// ...
created() {
if (this.glAbilities.someAbility) {
// ...
}
},
}
```
#### Accessing feature flags
After pushing a feature flag to the [frontend](../feature_flags/_index.md#frontend),
use the [`provide` and `inject`](https://v2.vuejs.org/v2/api/#provide-inject)
mechanisms in Vue to make feature flags available to any descendant components
in a Vue application. The `glFeatures` object is already provided in
`commons/vue.js`, so only the mixin is required to use the flags:
```javascript
// An arbitrary descendant component
import glFeatureFlagsMixin from '~/vue_shared/mixins/gl_feature_flags_mixin';
export default {
// ...
mixins: [glFeatureFlagsMixin()],
// ...
created() {
if (this.glFeatures.myFlag) {
// ...
}
},
}
```
This approach has a few benefits:
- Arbitrarily deeply nested components can opt-in and access the flag without
intermediate components being aware of it (c.f. passing the flag down via
props).
- Good testability, because the flag can be provided to `mount`/`shallowMount`
from `vue-test-utils` as a prop.
```javascript
import { shallowMount } from '@vue/test-utils';
shallowMount(component, {
provide: {
glFeatures: { myFlag: true },
},
});
```
- Accessing a global variable is not required, except in the application's
[entry point](#accessing-the-gl-object).
#### Redirecting to page and displaying alerts
If you need to redirect to another page and display alerts, you can use the [`visitUrlWithAlerts`](https://gitlab.com/gitlab-org/gitlab/-/blob/7063dce68b8231442567707024b2f29e48ce2f64/app/assets/javascripts/lib/utils/url_utility.js#L731) utility function.
This can be useful when you're redirecting to a newly created resource and showing a success alert.
By default the alerts will be cleared when the page is reloaded. If you need an alert to be persisted on a page you can set the
`persistOnPages` key to an array of Rails controller actions. To find the Rails controller action run `document.body.dataset.page` in your console.
Example:
```javascript
visitUrlWithAlerts('/dashboard/groups', [
{
id: 'resource-building-in-background',
message: 'Resource is being built in the background.',
variant: 'info',
persistOnPages: ['dashboard:groups:index'],
},
])
```
If you need to manually remove a persisted alert, you can use the [`removeGlobalAlertById`](https://gitlab.com/gitlab-org/gitlab/-/blob/7063dce68b8231442567707024b2f29e48ce2f64/app/assets/javascripts/lib/utils/global_alerts.js#L31) utility function.
If you need to programmatically dismiss an alert, you can use the [`dismissGlobalAlertById`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/assets/javascripts/lib/utils/global_alerts.js#L43) utility function.
### A folder for Components
This folder holds all components that are specific to this new feature.
To use or create a component that is likely to be used somewhere
else, refer to `vue_shared/components`.
A good guideline to know when you should create a component is to think if
it could be reusable elsewhere.
For example, tables are used in a quite amount of places across GitLab, a table
would be a good fit for a component. On the other hand, a table cell used only
in one table would not be a good use of this pattern.
You can read more about components in Vue.js site, [Component System](https://v2.vuejs.org/v2/guide/#Composing-with-Components).
### Pinia
[Learn more about Pinia in GitLab](pinia.md).
### Vuex
[Vuex is deprecated](vuex.md#deprecated), consider [migrating](migrating_from_vuex.md).
### Vue Router
To add [Vue Router](https://router.vuejs.org/) to a page:
1. Add a catch-all route to the Rails route file using a wildcard named `*vueroute`:
```ruby
# example from ee/config/routes/project.rb
resources :iteration_cadences, path: 'cadences(/*vueroute)', action: :index
```
The above example serves the `index` page from `iteration_cadences` controller to any route
matching the start of the `path`, for example `groupname/projectname/-/cadences/123/456/`.
1. Pass the base route (everything before `*vueroute`) to the frontend to use as the `base` parameter to initialize Vue Router:
```haml
.js-my-app{ data: { base_path: project_iteration_cadences_path(project) } }
```
1. Initialize the router:
```javascript
Vue.use(VueRouter);
export function createRouter(basePath) {
return new VueRouter({
routes: createRoutes(),
mode: 'history',
base: basePath,
});
}
```
1. Add a fallback for unrecognised routes with `path: '*'`. Either:
- Add a redirect to the end of your routes array:
```javascript
const routes = [
{
path: '/',
name: 'list-page',
component: ListPage,
},
{
path: '*',
redirect: '/',
},
];
```
- Add a fallback component to the end of your routes array:
```javascript
const routes = [
{
path: '/',
name: 'list-page',
component: ListPage,
},
{
path: '*',
component: NotFound,
},
];
```
1. Optional. To also allow using the path helper for child routes, add `controller` and `action`
parameters to use the parent controller.
```ruby
resources :iteration_cadences, path: 'cadences(/*vueroute)', action: :index do
resources :iterations, only: [:index, :new, :edit, :show], constraints: { id: /\d+/ }, controller: :iteration_cadences, action: :index
end
```
This means routes like `/cadences/123/iterations/456/edit` can be validated on the backend,
for example to check group or project membership.
It also means we can use the `_path` helper, which means we can load the page in feature specs
without manually building the `*vueroute` part of the path..
### Mixing Vue and jQuery
- Mixing Vue and jQuery is not recommended.
- To use a specific jQuery plugin in Vue, [create a wrapper around it](https://vuejs.org/v2/examples/select2.html).
- It is acceptable for Vue to listen to existing jQuery events using jQuery event listeners.
- It is not recommended to add new jQuery events for Vue to interact with jQuery.
### Mixing Vue and JavaScript classes (in the data function)
In the [Vue documentation](https://v2.vuejs.org/v2/api/#Options-Data) the Data function/object is defined as follows:
> The data object for the Vue instance. Vue recursively converts its properties into getter/setters
to make it "reactive". The object must be plain: native objects such as browser API objects and
prototype properties are ignored. A guideline is that data should just be data - it is not
recommended to observe objects with their own stateful behavior.
Based on the Vue guidance:
- **Do not** use or create a JavaScript class in your [data function](https://v2.vuejs.org/v2/api/#data).
- **Do not** add new JavaScript class implementations.
- **Do** encapsulate complex state management with cohesive decoupled components or [a state manager](state_management.md).
- **Do** maintain existing implementations using such approaches.
- **Do** Migrate components to a pure object model when there are substantial changes to it.
- **Do** move business logic to separate files, so you can test them separately from your component.
#### Why
Additional reasons why having a JavaScript class presents maintainability issues on a huge codebase:
- After a class is created, it can be extended in a way that can infringe Vue reactivity and best practices.
- A class adds a layer of abstraction, which makes the component API and its inner workings less clear.
- It makes it harder to test. Because the class is instantiated by the component data function, it is
harder to 'manage' component and class separately.
- Adding Object Oriented Principles (OOP) to a functional codebase adds another way of writing code, reducing consistency and clarity.
## Style guide
Refer to the Vue section of our [style guide](style/vue.md)
for best practices while writing and testing your Vue components and templates.
## Composition API
With Vue 2.7 it is possible to use [Composition API](https://vuejs.org/guide/introduction.html#api-styles) in Vue components and as standalone composables.
### Prefer `<script>` over `<script setup>`
Composition API allows you to place the logic in the `<script>` section of the component or to have a dedicated `<script setup>` section. We should use `<script>` and add Composition API to components using `setup()` property:
```html
<script>
import { computed } from 'vue';
export default {
name: 'MyComponent',
setup(props) {
const doubleCount = computed(() => props.count*2)
}
}
</script>
```
### `v-bind` limitations
Avoid using `v-bind="$attrs"` unless absolutely necessary. You might need this when
developing a native control wrapper. (This is a good candidate for a `gitlab-ui` component.)
In any other cases, always prefer using `props` and explicit data flow.
Using `v-bind="$attrs"` leads to:
1. A loss in component's contract. The `props` were designed specifically
to address this problem.
1. High maintenance cost for each component in the tree. `v-bind="$attrs"` is specifically
hard to debug because you must scan the whole hierarchy of components to understand
the data flow.
1. Problems during migration to Vue 3. `$attrs` in Vue 3 include event listeners which
could cause unexpected side-effects after Vue 3 migration is completed.
### Aim to have one API style per component
When adding `setup()` property to Vue component, consider refactoring it to Composition API entirely. It's not always feasible, especially for large components, but we should aim to have one API style per component for readability and maintainability.
### Composables
With Composition API, we have a new way of abstracting logic including reactive state to _composables_. Composable is the function that can accept parameters and return reactive properties and methods to be used in Vue component.
```javascript
// useCount.js
import { ref } from 'vue';
export function useCount(initialValue) {
const count = ref(initialValue)
function incrementCount() {
count.value += 1
}
function decrementCount() {
count.value -= 1
}
return { count, incrementCount, decrementCount }
}
```
```javascript
// MyComponent.vue
import { useCount } from 'useCount'
export default {
name: 'MyComponent',
setup() {
const { count, incrementCount, decrementCount } = useCount(5)
return { count, incrementCount, decrementCount }
}
}
```
#### Prefix function and filenames with `use`
Common naming convention in Vue for composables is to prefix them with `use` and then refer to composable functionality briefly (`useBreakpoints`, `useGeolocation` etc). The same rule applies to the `.js` files containing composables - they should start with `use_` even if the file contains more than one composable.
#### Avoid lifecycle pitfalls
When building a composable, we should aim to keep it as simple as possible. Lifecycle hooks add complexity to composables and might lead to unexpected side effects. To avoid that we should follow these principles:
- Minimize lifecycle hooks usage whenever possible, prefer accepting/returning callbacks instead.
- If your composable needs lifecycle hooks, make sure it also performs a cleanup. If we add a listener on `onMounted`, we should remove it on `onUnmounted` within the same composable.
- Always set up lifecycle hooks immediately:
```javascript
// bad
const useAsyncLogic = () => {
const action = async () => {
await doSomething();
onMounted(doSomethingElse);
};
return { action };
};
// OK
const useAsyncLogic = () => {
const done = ref(false);
onMounted(() => {
watch(
done,
() => done.value && doSomethingElse(),
{ immediate: true },
);
});
const action = async () => {
await doSomething();
done.value = true;
};
return { action };
};
```
#### Avoid escape hatches
It might be tempting to write a composable that does everything as a black box, using some of the escape hatches that Vue provides. But for most of the cases this makes them too complex and hard to maintain. One escape hatch is the `getCurrentInstance` method. This method returns an instance of a current rendering component. Instead of using that method, you should prefer passing down the data or methods to a composable via arguments.
```javascript
const useSomeLogic = () => {
doSomeLogic();
getCurrentInstance().emit('done'); // bad
};
```
```javascript
const done = () => emit('done');
const useSomeLogic = (done) => {
doSomeLogic();
done(); // good, composable doesn't try to be too smart
}
```
#### Testing composables
<!-- TBD -->
## Testing Vue Components
Refer to the [Vue testing style guide](style/vue.md#vue-testing)
for guidelines and best practices for testing your Vue components.
Each Vue component has a unique output. This output is always present in the render function.
Although each method of a Vue component can be tested individually, our goal is to test the output
of the render function, which represents the state at all times.
Visit the [Vue testing guide](https://v2.vuejs.org/v2/guide/testing.html#Unit-Testing) for help.
Here's an example of a well structured unit test for [this Vue component](#appendix---vue-component-subject-under-test):
```javascript
import { GlLoadingIcon } from '@gitlab/ui';
import MockAdapter from 'axios-mock-adapter';
import { shallowMountExtended } from 'helpers/vue_test_utils_helper';
import axios from '~/lib/utils/axios_utils';
import App from '~/todos/app.vue';
const TEST_TODOS = [{ text: 'Lorem ipsum test text' }, { text: 'Lorem ipsum 2' }];
const TEST_NEW_TODO = 'New todo title';
const TEST_TODO_PATH = '/todos';
describe('~/todos/app.vue', () => {
let wrapper;
let mock;
beforeEach(() => {
// IMPORTANT: Use axios-mock-adapter for stubbing axios API requests
mock = new MockAdapter(axios);
mock.onGet(TEST_TODO_PATH).reply(200, TEST_TODOS);
mock.onPost(TEST_TODO_PATH).reply(200);
});
afterEach(() => {
// IMPORTANT: Clean up the axios mock adapter
mock.restore();
});
// It is very helpful to separate setting up the component from
// its collaborators (for example, Vuex and axios).
const createWrapper = (props = {}) => {
wrapper = shallowMountExtended(App, {
propsData: {
path: TEST_TODO_PATH,
...props,
},
});
};
// Helper methods greatly help test maintainability and readability.
const findLoader = () => wrapper.findComponent(GlLoadingIcon);
const findAddButton = () => wrapper.findByTestId('add-button');
const findTextInput = () => wrapper.findByTestId('text-input');
const findTodoData = () =>
wrapper
.findAllByTestId('todo-item')
.wrappers.map((item) => ({ text: item.text() }));
describe('when mounted and loading', () => {
beforeEach(() => {
// Create request which will never resolve
mock.onGet(TEST_TODO_PATH).reply(() => new Promise(() => {}));
createWrapper();
});
it('should render the loading state', () => {
expect(findLoader().exists()).toBe(true);
});
});
describe('when todos are loaded', () => {
beforeEach(() => {
createWrapper();
// IMPORTANT: This component fetches data asynchronously on mount, so let's wait for the Vue template to update
return wrapper.vm.$nextTick();
});
it('should not show loading', () => {
expect(findLoader().exists()).toBe(false);
});
it('should render todos', () => {
expect(findTodoData()).toEqual(TEST_TODOS);
});
it('when todo is added, should post new todo', async () => {
findTextInput().vm.$emit('update', TEST_NEW_TODO);
findAddButton().vm.$emit('click');
await wrapper.vm.$nextTick();
expect(mock.history.post.map((x) => JSON.parse(x.data))).toEqual([{ text: TEST_NEW_TODO }]);
});
});
});
```
### Child components
1. Test any directive that defines if/how child component is rendered (for example, `v-if` and `v-for`).
1. Test any props we are passing to child components (especially if the prop is calculated in the
component under test, with the `computed` property, for example). Remember to use `.props()` and not `.vm.someProp`.
1. Test we react correctly to any events emitted from child components:
```javascript
const checkbox = wrapper.findByTestId('checkboxTestId');
expect(checkbox.attributes('disabled')).not.toBeDefined();
findChildComponent().vm.$emit('primary');
await nextTick();
expect(checkbox.attributes('disabled')).toBeDefined();
```
1. **Do not** test the internal implementation of the child components:
```javascript
// bad
expect(findChildComponent().find('.error-alert').exists()).toBe(false);
// good
expect(findChildComponent().props('withAlertContainer')).toBe(false);
```
### Events
We should test for events emitted in response to an action in our component. This testing
verifies the correct events are being fired with the correct arguments.
For any native DOM events we should use [`trigger`](https://v1.test-utils.vuejs.org/api/wrapper/#trigger)
to fire out event.
```javascript
// Assuming SomeButton renders: <button>Some button</button>
wrapper = mount(SomeButton);
...
it('should fire the click event', () => {
const btn = wrapper.find('button')
btn.trigger('click');
...
})
```
When firing a Vue event, use [`emit`](https://v2.vuejs.org/v2/guide/components-custom-events.html).
```javascript
wrapper = shallowMount(DropdownItem);
...
it('should fire the itemClicked event', () => {
DropdownItem.vm.$emit('itemClicked');
...
})
```
We should verify an event has been fired by asserting against the result of the
[`emitted()`](https://v1.test-utils.vuejs.org/api/wrapper/#emitted) method.
It is a good practice to prefer to use `vm.$emit` over `trigger` when emitting events from child components.
Using `trigger` on the component means we treat it as a white box: we assume that the root element of child component has a native `click` event. Also, some tests fail in Vue3 mode when using `trigger` on child components.
```javascript
const findButton = () => wrapper.findComponent(GlButton);
// bad
findButton().trigger('click');
// good
findButton().vm.$emit('click');
```
## Vue.js Expert Role
You should only apply to be a Vue.js expert when your own merge requests and your reviews show:
- Deep understanding of Vue reactivity
- Vue and [Pinia](pinia.md) code are structured according to both official and our guidelines
- Full understanding of testing Vue components and Pinia stores
- Knowledge about the existing Vue and Pinia applications and existing reusable components
## Vue 2 -> Vue 3 Migration
{{< history >}}
- This section is added temporarily to support the efforts to migrate the codebase from Vue 2.x to Vue 3.x
{{< /history >}}
We recommend to minimize adding certain features to the codebase to prevent increasing
the tech debt for the eventual migration:
- filters;
- event buses;
- functional templated
- `slot` attributes
You can find more details on [Migration to Vue 3](vue3_migration.md)
## Appendix - Vue component subject under test
This is the template for the example component which is tested in the
[Testing Vue components](#testing-vue-components) section:
```html
<template>
<div class="content">
<gl-loading-icon v-if="isLoading" />
<template v-else>
<div
v-for="todo in todos"
:key="todo.id"
:class="{ 'gl-strike': todo.isDone }"
data-testid="todo-item"
>{{ todo.text }}</div>
<footer class="gl-border-t-1 gl-mt-3 gl-pt-3">
<gl-form-input
type="text"
v-model="todoText"
data-testid="text-input"
/>
<gl-button
variant="confirm"
data-testid="add-button"
@click="addTodo"
>Add</gl-button>
</footer>
</template>
</div>
</template>
```
---
stage: none
group: unassigned
info: Any user with at least the Maintainer role can merge updates to this content. For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Dark mode
---
This page is about developing dark mode for GitLab. For more information on how to enable dark mode, see [how to change the UI appearance](../../user/profile/preferences.md#change-the-mode).
## How dark mode works
### Current approach
1. GitLab UI includes light and dark mode [design tokens](https://gitlab.com/gitlab-org/gitlab-services/design.gitlab.com/-/blob/main/packages/gitlab-ui/doc/contributing/design_tokens.md) CSS custom properties for colors and components. See the [design tokens technical implementation](https://design.gitlab.com/product-foundations/design-tokens-technical-implementation).
1. [Semantic design tokens](https://design.gitlab.com/product-foundations/design-tokens#semantic-design-tokens) provide values for light and dark mode in general usage, for example: background, text, and border colors.
### Deprecated approach
1. SCSS variables for the [color palette](https://design.gitlab.com/product-foundations/color) are reversed using [design tokens](https://gitlab.com/gitlab-org/gitlab-services/design.gitlab.com/-/blob/main/packages/gitlab-ui/doc/contributing/design_tokens.md) to provide darker colors for smaller scales.
1. `app/assets/stylesheets/color_modes/_dark.scss` imports dark mode [design tokens](https://gitlab.com/gitlab-org/gitlab-services/design.gitlab.com/-/blob/main/packages/gitlab-ui/doc/contributing/design_tokens.md) SCSS variables for colors.
1. Bootstrap variables overridden in `app/assets/stylesheets/framework/variables_overrides.scss` are given dark mode values in `_dark.scss`.
1. `_dark.scss` is loaded before `application.scss` to generate separate `application_dark.css` stylesheet for dark mode users only.
## Utility classes
Design tokens for dark mode can be applied with Tailwind classes (`gl-text-subtle`) or with the `@apply` rule (`@apply gl-text-subtle`).
## CSS custom properties vs SCSS variables
Design tokens generate both CSS custom properties and SCSS variables which are imported into the dark mode stylesheet.
- **CSS custom properties**: are preferred to update color modes without loading a color mode specific stylesheet, and are required for any colors within the `app/assets/stylesheets/page_bundles` directory.
- **SCSS variables**: override existing color usage for dark mode and are compiled into a color mode specific stylesheet.
### Adding CSS custom properties
Create bespoke CSS custom properties when design tokens cannot be used with either Tailwind utilities or existing CSS custom properties. See [guidance for manually adding CSS custom properties](https://design.gitlab.com/product-foundations/design-tokens-technical-implementation#bespoke-dark-mode-solutions) in projects.
### Page bundles
To support dark mode, CSS custom properties should be used in `page_bundle` styles as we do not generate separate
`*_dark.css` variants of each `page_bundle` file.
---
stage: none
group: unassigned
info: Any user with at least the Maintainer role can merge updates to this content. For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Frontend Goals
---
This section defines the _desired state_ of the GitLab frontend and how we see it over the next few years. It is a living document and will adapt as technologies and team dynamics evolve.
## Technologies
### Vue@latest
Keeping up with the latest version of Vue ensures that the GitLab frontend leverages the most efficient, secure, and feature-rich framework capabilities. The latest Vue (3) offers improved performance and a more intuitive API, which collectively enhance the developer experience and application performance.
**Current Status**
- **As of December 2023**: GitLab is currently using Vue 2.x.
- **Progress**: (Brief description of progress)
**Responsible Team**
- **Working Group**: [Vue.js 3 Migration Working Group](https://handbook.gitlab.com/handbook/company/working-groups/vuejs-3-migration/)
- **Facilitator**: Sam Beckham, Engineering Manager, Manage:Foundations
**Milestones and Timelines**
- (Key milestones, expected completions)
**Challenges and Dependencies**
- (Any major challenges)
**Success Metrics**
- Using @vue/compat in Monolith
### State Management
When global state management is needed, it should happen in Apollo instead of Vuex or other state management libraries. See [Migrating from Vuex](migrating_from_vuex.md) for more details regarding why and how we plan on migrating.
**Current Status**
- **As of December 2023**: (Status)
- **Progress**: (Brief description of progress)
**Responsible Team**
- **Task Group**:
- **Facilitator**:
**Milestones and Timelines**
- (Key milestones, expected completions)
**Challenges and Dependencies**
- (Any major challenges)
**Success Metrics**
- (Potential metrics)
### HAML by default
We'll continue using HAML over Vue when appropriate. See [when to add Vue application](vue.md#when-to-add-vue-application) on how to decide when Vue should be chosen.
**Current Status**
- **As of December 2023**: (Status)
- **Progress**: (Brief description of progress)
**Responsible Team**
- **Task Group**:
- **Facilitator**:
**Milestones and Timelines**
- (Key milestones, expected completions)
**Challenges and Dependencies**
- (Any major challenges)
**Success Metrics**
- (Potential metrics)
### Complete removal of jQuery
In 2019 we committed to no longer using jQuery; however, we have not prioritized full removal. Our goal here is to no longer have any references to it in the primary GitLab codebase.
**Current Status**
- **As of December 2023**: (Status)
- **Progress**: (Brief description of progress)
**Responsible Team**
- **Task Group**:
- **Facilitator**:
**Milestones and Timelines**
- (Key milestones, expected completions)
**Challenges and Dependencies**
- (Any major challenges)
**Success Metrics**
- (Potential metrics)
### Dependencies management
Similar to keeping on the latest major version of Vue, we should try to stay as close as possible to the latest versions of our dependencies, unless the drawbacks of upgrading outweigh its benefits. At a minimum, we will audit the dependencies annually to evaluate whether or not they should be upgraded.
**Current Status**
- **As of December 2023**: (Status)
- **Progress**: (Brief description of progress)
**Responsible Team**
- **Task Group**:
- **Facilitator**:
**Milestones and Timelines**
- (Key milestones, expected completions)
**Challenges and Dependencies**
- (Any major challenges)
**Success Metrics**
- (Potential metrics)
## Best Practices
## Scalability and Performance
### Cluster SPAs
Currently, GitLab mostly follows the Rails architecture and Rails routing, which means that every time we change the route, we have a full page reload. This results in long loading times because we are:
- rendering the HAML page;
- mounting Vue applications, if we have any;
- fetching data for these applications.
Ideally, we should reduce the number of times a user needs to go through this long process. This would be possible by converting GitLab into a single-page application, but that would require significant refactoring and is not an achievable short/mid-term goal.
The realistic goal is to move to _multiple SPAs_ experience where we define the _clusters_ of pages that form the user flow, and move this cluster from Rails routing to a single-page application with client-side routing. This way, we can load all the relevant context from HAML only once, and fetch all the additional data from the API depending on the route. An example of a cluster could be the following pages:
- **Issues** page
- **Issue boards** page
- **Issue details** page
- **New issue** page
- editing an issue
All of them share the same context (project path, current user, and so on), so we could easily fetch more data with an issue-specific parameter (the issue `iid`) and store the results on the client (so that opening the same issue won't require more API calls). This leads to a smooth user experience for navigating through issues.
For navigation between clusters, we can still rely on Rails routing. These cases should be relatively rare compared to navigation within clusters.
**Current Status**
- **As of December 2023**: (Status)
- **Progress**: (Brief description of progress)
**Responsible Team**
- **Task Group**:
- **Facilitator**:
**Milestones and Timelines**
- (Key milestones, expected completions)
**Challenges and Dependencies**
- (Any major challenges)
**Success Metrics**
- (Potential metrics)
### Reusable components
Currently, we keep generically reusable components in two main places:
- GitLab UI
- `vue_shared` folder
While GitLab UI is well-documented and its components are abstract enough to be reused anywhere in Vue applications, our `vue_shared` components are somewhat chaotic: they often can be used only in a certain context (for example, they can be bound to an existing Vuex store) and have duplicates (we have multiple components for notes).
We should perform an audit of `vue_shared`, find out what can and cannot be moved to GitLab UI, and refactor existing components to remove duplicates and increase reusability. The ideal outcome would be to move application-specific components to application folders and keep reusable "smart" components in the shared folder/library, ensuring that every single piece of reusable functionality has _only one implementation_.
This is currently under development. Follow the [GitLab Modular Monolith for FE](https://gitlab.com/gitlab-org/gitlab/-/issues/422903) for updates on how we will enforce encapsulation on top-level folders like `vue_shared`.
**Current Status**
- **As of December 2023**: (Status)
- **Progress**: (Brief description of progress)
**Responsible Team**
- **Task Group**:
- **Facilitator**:
**Milestones and Timelines**
- (Key milestones, expected completions)
**Challenges and Dependencies**
- (Any major challenges)
**Success Metrics**
- (Potential metrics)
### Migrate to PostCSS
SASS compilation takes almost half of the total frontend compilation time. This makes our pipelines run longer than they should. Migrating to PostCSS should [significantly improve compilation times](https://github.com/postcss/benchmark#preprocessors).
**Current Status**
- **As of December 2023**: (Status)
- **Progress**: (Brief description of progress)
**Responsible Team**
- **Task Group**:
- **Facilitator**:
**Milestones and Timelines**
- (Key milestones, expected completions)
**Challenges and Dependencies**
- (Any major challenges)
**Success Metrics**
- (Potential metrics)
## Collaboration and Tooling
### Visual Testing
We're early in the process of adding visual testing, but we should have a framework established. Once implementation is determined, we'll update this doc to include the specifics.
**Current Status**
- **As of December 2023**: (Status)
- **Progress**: (Brief description of progress)
**Responsible Team**
- **Task Group**:
- **Facilitator**:
**Milestones and Timelines**
- (Key milestones, expected completions)
**Challenges and Dependencies**
- (Any major challenges)
**Success Metrics**
- (Potential metrics)
### Accessibility testing
In 2023 we determined the tooling for accessibility testing. We opted for the axe-core gem, used in feature tests to test whole views rather than components in isolation. [See documentation on Automated accessibility testing](accessibility/automated_testing.md) to learn when and how to include it. You can check out our progress with the [Accessibility scanner](https://gitlab-org.gitlab.io/frontend/playground/accessibility-scanner/), which uses Semgrep to find out if tests are present.
**Current Status**
- **As of December 2023**: (Status)
- **Progress**: (Brief description of progress)
**Responsible Team**
- **Working Group**: [Product Accessibility Group](https://handbook.gitlab.com/handbook/company/working-groups/product-accessibility/)
- **Facilitator**: Paulina Sędłak-Jakubowska
**Milestones and Timelines**
- (Key milestones, expected completions)
**Challenges and Dependencies**
- (Any major challenges)
**Success Metrics**
- (Potential metrics)
---
stage: none
group: unassigned
info: Any user with at least the Maintainer role can merge updates to this content. For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Tech Stack
---
For an exhaustive list of all the technology that we use, check our [latest `package.json` file](https://gitlab.com/gitlab-org/gitlab/-/blob/master/package.json?ref_type=heads).
Each navigation item in this section is a guide for that specific technology.
---
stage: none
group: unassigned
info: Any user with at least the Maintainer role can merge updates to this content. For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Widgets
---
Frontend widgets are standalone Vue applications or Vue component trees that can be added to a page
to handle part of its functionality.
Good examples of widgets are [sidebar assignees](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/assets/javascripts/sidebar/components/assignees/sidebar_assignees_widget.vue) and [sidebar confidentiality](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/assets/javascripts/sidebar/components/confidential/sidebar_confidentiality_widget.vue).
When building a widget, we should follow a few principles described below.
## Vue Apollo is required
All widgets should use the same stack (Vue + Apollo Client).
To make it happen, we must add Vue Apollo to the application root (if we use a widget
as a component) or provide it directly to a widget. For sidebar widgets, use the
[issuable Apollo Client and Apollo Provider](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/assets/javascripts/graphql_shared/issuable_client.js):
```javascript
import SidebarConfidentialityWidget from '~/sidebar/components/confidential/sidebar_confidentiality_widget.vue';
import { apolloProvider } from '~/graphql_shared/issuable_client';
function mountConfidentialComponent() {
new Vue({
apolloProvider,
components: {
SidebarConfidentialityWidget,
},
/* ... */
});
}
```
## Required injections
All editable sidebar widgets should use [`SidebarEditableItem`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/assets/javascripts/sidebar/components/sidebar_editable_item.vue) to handle collapsed/expanded state. This component requires the `canUpdate` property provided in the application root.
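A minimal sketch of providing `canUpdate` from the application root; the element ID and data attribute name are illustrative:
```javascript
import Vue from 'vue';
import SidebarConfidentialityWidget from '~/sidebar/components/confidential/sidebar_confidentiality_widget.vue';
import { apolloProvider } from '~/graphql_shared/issuable_client';

export function mountConfidentialityWidget() {
  const el = document.getElementById('js-confidentiality-widget'); // illustrative ID

  if (!el) return null;

  return new Vue({
    el,
    apolloProvider,
    provide: {
      // `SidebarEditableItem` injects `canUpdate` from the application root.
      canUpdate: el.dataset.canUpdate === 'true',
    },
    render(h) {
      return h(SidebarConfidentialityWidget);
    },
  });
}
```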
## No global state mappings
We aim to make widgets as reusable as possible. That's why we should avoid adding any external state
bindings to widgets or to their child components. This includes Vuex mappings and mediator stores.
## Widget responsibility
A widget is responsible for fetching and updating the entity it's designed for (assignees, iterations, and so on).
This means a widget should **always** fetch data (if it's not in the Apollo cache already).
Even if we provide an initial value to the widget, it should perform a GraphQL query in the background
so that the result is stored in the Apollo cache.
Eventually, when we have an Apollo Client cache as a global application state, we won't need to pass
initial data to the sidebar widget. Then it will be capable of retrieving the data from the cache.
## Using GraphQL queries and mutations
We need widgets to be flexible to work with different entities (epics, issues, merge requests, and so on).
Because we need different GraphQL queries and mutations for different sidebars, we create
[_mappings_](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/assets/javascripts/sidebar/constants.js#L9):
```javascript
export const assigneesQueries = {
[TYPE_ISSUE]: {
query: getIssueParticipants,
mutation: updateAssigneesMutation,
},
[TYPE_MERGE_REQUEST]: {
query: getMergeRequestParticipants,
mutation: updateMergeRequestParticipantsMutation,
},
};
```
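As an illustration, a widget can then pick the right query based on its `issuableType`. A minimal sketch, assuming the widget receives `issuableType`, `fullPath`, and `iid` as props:
```javascript
import { assigneesQueries } from '~/sidebar/constants';

export default {
  props: ['issuableType', 'fullPath', 'iid'],
  apollo: {
    issuable: {
      query() {
        // Resolve the query from the mapping at runtime.
        return assigneesQueries[this.issuableType].query;
      },
      variables() {
        return { fullPath: this.fullPath, iid: this.iid };
      },
      update(data) {
        // Thanks to the aliases described below, the result shape is the
        // same for every issuable type.
        return data.workspace?.issuable;
      },
    },
  },
};
```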
To handle the same logic for query updates, we **alias** query fields. For example:
- `group` or `project` become `workspace`
- `issue`, `epic`, or `mergeRequest` become `issuable`
Unfortunately, Apollo assigns aliased fields a `__typename` of `undefined`, so we need to fetch `__typename` explicitly:
```plaintext
query issueConfidential($fullPath: ID!, $iid: String) {
workspace: project(fullPath: $fullPath) {
__typename
issuable: issue(iid: $iid) {
__typename
id
confidential
}
}
}
```
## Communication with other Vue applications
If we need to communicate the changes of the widget state (for example, after successful mutation)
to the parent application, we should emit an event:
```javascript
updateAssignees(assigneeUsernames) {
return this.$apollo
.mutate({
mutation: this.$options.assigneesQueries[this.issuableType].mutation,
variables: {...},
})
.then(({ data }) => {
const assignees = data.issueSetAssignees?.issue?.assignees?.nodes || [];
this.$emit('assignees-updated', assignees);
})
}
```
Sometimes, we want to listen to changes in a different Vue application, like `NotesApp`.
In this case, we can use a renderless component that imports a client and listens to a certain query:
```javascript
import { fetchPolicies } from '~/lib/graphql';
import { confidentialityQueries } from '~/sidebar/constants';
import { defaultClient as gqlClient } from '~/graphql_shared/issuable_client';
created() {
if (this.issuableType !== IssuableType.Issue) {
return;
}
gqlClient
.watchQuery({
query: confidentialityQueries[this.issuableType].query,
variables: {...},
fetchPolicy: fetchPolicies.CACHE_ONLY,
})
.subscribe((res) => {
this.setConfidentiality(res.data?.workspace?.issuable?.confidential);
});
},
methods: {
...mapActions(['setConfidentiality']),
},
```
[View an example of such a component.](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/assets/javascripts/notes/components/sidebar_subscription.vue)
## Merge request widgets
Refer to the documentation specific to the [merge request widget framework](merge_request_widgets.md).
---
stage: Package
group: unassigned
info: Any user with at least the Maintainer role can merge updates to this content. For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Registry architecture
---
GitLab has several registry applications. Given that they all leverage similar UI, UX, and business
logic, they are all built with the same architecture. In addition, a set of shared components
already exists to unify the user and developer experiences.
Existing registries:
- Package registry
- Container registry
- Terraform Module Registry
- Dependency Proxy
## Frontend architecture
### Component classification
All the registries follow an architecture pattern that includes four component types:
- Pages: represent an entire app, or for the registries using [vue-router](https://v3.router.vuejs.org/) they represent one router
route.
- Containers: represent a single piece of functionality. They contain complex logic and may
connect to the API.
- Presentationals: represent a portion of the UI. They receive all their data with `props` or through
`inject`, and do not connect to the API.
- Shared components: presentational components that accept a wide array of configurations and are
shared across all of the registries.
### Communicating with the API
The complexity and communication with the API should be concentrated in the pages components, and
in the container components when needed. This makes it easier to:
- Handle concurrent requests, loading states, and user messages.
- Maintain the code, especially to estimate work. If it touches a page or functional component,
expect it to be more complex.
- Write fast and consistent unit tests.
### Best practices
- Use [`provide` or `inject`](https://v2.vuejs.org/v2/api/?redirect=true#provide-inject)
to pass static, non-reactive values coming from the app initialization (see the sketch after this list).
- When passing data, prefer `props` over nested queries or Vuex bindings. Only pages and
container components should be aware of the state and API communication.
- Don't repeat yourself. If one registry receives functionality, the likelihood of the rest needing
it in the future is high. If something seems reusable and isn't bound to the state, create a
shared component.
- Try to express functionality and logic with dedicated components. It's much easier to deal with
events and properties than callbacks and asynchronous code (see
[`delete_package.vue`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/assets/javascripts/packages_and_registries/package_registry/components/functional/delete_package.vue)).
- Leverage [startup for GraphQL calls](graphql.md#making-initial-queries-early-with-graphql-startup-calls).
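As an illustration of the `provide`/`inject` recommendation above, here is a minimal sketch of a registry app entry point; the element ID, data attribute, and component name are hypothetical:
```javascript
import Vue from 'vue';
import RegistryApp from './components/registry_app.vue'; // hypothetical root component

export function initRegistryApp() {
  const el = document.getElementById('js-registry-app'); // illustrative element ID

  if (!el) return null;

  return new Vue({
    el,
    // Static, non-reactive values coming from the HAML data attributes are
    // provided once at the root and injected where needed.
    provide: {
      registryUrl: el.dataset.registryUrl,
    },
    render(h) {
      return h(RegistryApp);
    },
  });
}
```
Any presentational component below the root can then declare `inject: ['registryUrl']` instead of having the value threaded through every intermediate component.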
## Shared components library
Inside `vue_shared/components/registry` and `packages_and_registries/shared`, there's a set of
shared components that you can use to implement registry functionality. These components build the
main pieces of the desired UI and UX of a registry page. The most important components are:
- `code-instruction`: represents a copyable box containing code. Supports multiline and single line
code boxes. Snowplow tracks the code copy event.
- `details-row`: represents a row of details. Used to add additional information in the details area of
the `list-item` component.
- `history-item`: represents a history list item used to build a timeline.
- `list-item`: represents a list element in the registry. It supports: left action, left primary and
secondary content, right primary and secondary content, right action, and details slots.
- `metadata-item`: represents one piece of metadata, with an icon or a link. Used primarily in the
title area.
- `persisted-dropdown-selection`: represents a menu that stores the user selection in the
`localStorage`.
- `registry-search`: implements `gl-filtered-search` with a sorting section on the right.
- `title-area`: implements the top title area of the registry. Includes: a main title, an avatar, a
subtitle, a metadata row, and a right actions slot.
## Adding a new registry page
When adding a new registry:
- Leverage the shared components that already exist. It's good to look at how the components are
structured and used in the more mature registries (for example, the package registry).
- If it's in line with the backend requirements, we suggest using GraphQL for the API. This helps in
dealing with the innate performance issue of registries.
- If possible, we recommend using [Vue Router](https://v3.router.vuejs.org/)
and frontend routing. Coupled with Apollo, the caching layer helps with the perceived page
performance.
---
stage: none
group: unassigned
info: Any user with at least the Maintainer role can merge updates to this content. For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Frontend FAQ
---
## Rules of Frontend FAQ
1. **You talk about Frontend FAQ.**
Share links to it whenever applicable, so more eyes catch when content
gets outdated.
1. **Keep it short and simple.**
Whenever an answer needs more than two sentences it does not belong here.
1. **Provide background when possible.**
Linking to relevant source code, issue / epic, or other documentation helps
to understand the answer.
1. **If you see something, do something.**
Remove or update any content that is outdated as soon as you see it.
## FAQ
### 1. How does one find the Rails route for a page?
#### Check the 'page' data attribute
The easiest way is to type the following in the browser while on the page in
question:
```javascript
document.body.dataset.page
```
Find here the [source code setting the attribute](https://gitlab.com/gitlab-org/gitlab/-/blob/cc5095edfce2b4d4083a4fb1cdc7c0a1898b9921/app/views/layouts/application.html.haml#L4).
#### Rails routes
The `rails routes` command can be used to list all the routes available in the application. Piping the output into `grep`, we can perform a search through the list of available routes.
The output includes the request types available, route parameters and the relevant controller.
```shell
bundle exec rails routes | grep "issues"
```
### 2. `modal_copy_button` vs `clipboard_button`
The `clipboard_button` uses the `copy_to_clipboard.js` behavior, which is
initialized on page load. Vue clipboard buttons that
don't exist at page load (such as ones in a `GlModal`) do not have
click handlers associated with the clipboard package.
`modal_copy_button` manages an instance of the
[`clipboard` plugin](https://www.npmjs.com/package/clipboard) specific to
the instance of that component. This means that clipboard events are
bound on mounting and destroyed when the button is, mitigating the above
issue. It also has bindings to a particular container or modal ID
available, to work with the focus trap created by our GlModal.
### 3. A `gitlab-ui` component not conforming to Pajamas Design System
Some [Pajamas Design System](https://design.gitlab.com/) components implemented in
`gitlab-ui` do not conform with the design system specs. This is because they lack some
planned features or are not correctly styled yet. In the Pajamas website, a
banner on top of the component examples indicates that:
> This component does not yet conform to the correct styling defined in our Design
> System. Refer to the Design System documentation when referencing visuals for this
> component.
For example, at the time of writing, this type of warning can be observed for
all form components, such as the [checkbox](https://design.gitlab.com/components/checkbox). It, however,
doesn't imply that the component should not be used.
GitLab always asks to use `<gl-*>` components whenever a suitable component exists.
It keeps the codebase unified and easier to maintain and refactor in the future.
Ensure a [Product Designer](https://about.gitlab.com/company/team/?department=ux-department)
reviews the use of the non-conforming component as part of the MR review. Make a
follow up issue and attach it to the component implementation epic found in the
[Components of Pajamas Design System epic](https://gitlab.com/groups/gitlab-org/-/epics/973).
### 4. My submit form button becomes disabled after submitting
A Submit button inside of a form attaches an `onSubmit` event listener on the form element. [This code](https://gitlab.com/gitlab-org/gitlab/-/blob/794c247a910e2759ce9b401356432a38a4535d49/app/assets/javascripts/main.js#L225) adds a `disabled` class selector to the submit button when the form is submitted. To avoid this behavior, add the class `js-no-auto-disable` to the button.
### 5. Should one use a full URL or a full path when referencing backend endpoints?
It's preferred to use a **full path** like `gon.relative_url_root` over a **full URL** (like `gon.gitlab_url`). This is because the URL uses the hostname configured with
GitLab which may not match the request. This causes [cross-origin resource sharing issues like this Web IDE example](https://gitlab.com/gitlab-org/gitlab/-/issues/36810).
Example:
```javascript
// bad :(
// If gitlab is configured with hostname `0.0.0.0`
// This will cause CORS issues if I request from `localhost`
axios.get(joinPaths(gon.gitlab_url, '-', 'foo'))
// good :)
axios.get(joinPaths(gon.relative_url_root, '-', 'foo'))
```
Also, try not to hardcode paths in the Frontend, but instead receive them from the Backend (see next section).
When referencing Backend rails paths, avoid using `*_url`, and use `*_path` instead.
Example:
```ruby
-# Bad :(
#js-foo{ data: { foo_url: some_rails_foo_url } }
-# Good :)
#js-foo{ data: { foo_path: some_rails_foo_path } }
```
### 6. How should the Frontend reference Backend paths?
We prefer not to add extra coupling by hard-coding paths. If possible,
add these paths as data attributes to the DOM element being referenced in the JavaScript.
Example:
```javascript
// Bad :(
// Here's a Vuex action that hardcodes a path :(
export const fetchFoos = ({ state }) => {
return axios.get(joinPaths(gon.relative_url_root, '-', 'foo'));
};
// Good :)
function initFoo() {
const el = document.getElementById('js-foo');
// Path comes from our root element's data which is used to initialize the store :)
const store = createStore({
fooPath: el.dataset.fooPath
});
Vue.extend({
store,
el,
render(h) {
return h(Component);
},
});
}
// Vuex action can now reference the path from its state :)
export const fetchFoos = ({ state }) => {
return axios.get(state.settings.fooPath);
};
```
### 7. How can one test the production build locally?
Sometimes it's necessary to test locally what the frontend production build would produce. To do so, follow these steps:
1. Stop webpack: `gdk stop webpack`.
1. Open `gitlab.yaml` located in `gitlab/config` folder, scroll down to the `webpack` section, and change `dev_server` to `enabled: false`.
1. Run `yarn webpack-prod && gdk restart rails-web`.
The production build takes a few minutes to complete. Any code changes at this point are
displayed only after running step 3 above again.
To return to the standard development mode:
1. Open `gitlab.yaml` located in your `gitlab` installation folder, scroll down to the `webpack` section and change back `dev_server` to `enabled: true`.
1. Run `yarn clean` to remove the production assets and free some space (optional).
1. Start webpack again: `gdk start webpack`.
1. Restart GDK: `gdk restart rails-web`.
### 8. Babel polyfills
GitLab has enabled the Babel `preset-env` option
[`useBuiltIns: 'usage'`](https://babeljs.io/docs/babel-preset-env#usebuiltins-usage).
This adds the appropriate `core-js` polyfills once for each JavaScript feature
we're using that our target browsers don't support. You don't need to add `core-js`
polyfills manually.
GitLab adds non-`core-js` polyfills for extending browser features (such as
the GitLab SVG polyfill), which allow us to reference SVGs by using `<use xlink:href>`.
Be sure to add these polyfills to `app/assets/javascripts/commons/polyfills.js`.
To see what polyfills are being used:
1. Go to your merge request.
1. In the secondary menu below the title of the merge request, select **Pipelines**, then
select the pipeline you want to view, to display the jobs in that pipeline.
1. Select the [`compile-production-assets`](https://gitlab.com/gitlab-org/gitlab/-/jobs/641770154) job.
1. In the right-hand sidebar, scroll to **Job Artifacts**, and select **Browse**.
1. Select the **webpack-report** folder to open it, and select **index.html**.
1. In the upper-left corner of the page, select the right arrow ({{< icon name="chevron-lg-right" >}})
to display the explorer.
1. In the **Search modules** field, enter `gitlab/node_modules/core-js` to see
which polyfills are being loaded and where:

### 9. Why is my page broken in dark mode?
See [dark mode docs](dark_mode.md)
### 10. How to render GitLab-flavored Markdown?
If you need to render [GitLab-flavored Markdown](../gitlab_flavored_markdown/_index.md), you require two things, as shown in the sketch after this list:
- Pass the GLFM content with the `v-safe-html` directive to a `div` HTML element inside your Vue component
- Add the `md` class to the root div, which will apply the appropriate CSS styling
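A minimal sketch, assuming the `v-safe-html` directive is registered from `~/vue_shared/directives/safe_html` and that the rendered GLFM arrives as a prop (the prop name is illustrative):
```javascript
import SafeHtml from '~/vue_shared/directives/safe_html';

export default {
  directives: {
    SafeHtml,
  },
  props: {
    // HTML produced from the GitLab-flavored Markdown source.
    descriptionHtml: {
      type: String,
      required: true,
    },
  },
  // In the codebase this would usually live in the `<template>` block of a
  // `.vue` single-file component.
  template: `
    <div v-safe-html="descriptionHtml" class="md"></div>
  `,
};
```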
|
https://docs.gitlab.com/development/haml
|
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/development/haml.md
|
2025-08-13
|
doc/development/fe_guide
|
[
"doc",
"development",
"fe_guide"
] |
haml.md
|
none
|
unassigned
|
Any user with at least the Maintainer role can merge updates to this content. For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
|
HAML
| null |
[HAML](https://haml.info/) is the [Ruby on Rails](https://rubyonrails.org/) template language that GitLab uses.
## HAML and our Pajamas Design System
[GitLab UI](https://gitlab-org.gitlab.io/gitlab-ui/) is a Vue component library that conforms
to the [Pajamas design system](https://design.gitlab.com/). Many of these components
rely on JavaScript and therefore can only be used in Vue.
However, some of the simpler components (such as buttons, checkboxes, or form inputs) can be
used in HAML:
- Some of the Pajamas components are available as a [ViewComponent](view_component.md#pajamas-components). Use these when possible.
- If no ViewComponent exists, consider creating one. Talk to the [Design System](https://handbook.gitlab.com/handbook/engineering/development/dev/foundations/design-system/) team if you need help.
- As a fallback, you can apply the correct CSS classes to the elements directly.
- A custom [Ruby on Rails form builder](https://gitlab.com/gitlab-org/gitlab/-/blob/7c108df101e86d8a27d69df2b5b1ff1fc24133c5/lib/gitlab/form_builders/gitlab_ui_form_builder.rb)
exists to help use GitLab UI components in HAML forms.
### Use the GitLab UI form builder
To use the GitLab UI form builder:
1. Change `form_for` to `gitlab_ui_form_for`.
1. Change `f.check_box` to `f.gitlab_ui_checkbox_component`.
1. Remove `f.label` and instead pass the label as the second argument in `f.gitlab_ui_checkbox_component`.
For example:
- Before:
```haml
= form_for @group do |f|
.form-group.gl-mb-3
.gl-form-checkbox.custom-control.custom-checkbox
= f.check_box :prevent_sharing_groups_outside_hierarchy, disabled: !can_change_prevent_sharing_groups_outside_hierarchy?(@group), class: 'custom-control-input'
= f.label :prevent_sharing_groups_outside_hierarchy, class: 'custom-control-label' do
%span
= safe_format(s_('GroupSettings|Prevent members from sending invitations to groups outside of %{group} and its subgroups.'), group: link_to_group(@group))
%p.help-text= prevent_sharing_groups_outside_hierarchy_help_text(@group)
.form-group.gl-mb-3
.gl-form-checkbox.custom-control.custom-checkbox
= f.check_box :lfs_enabled, checked: @group.lfs_enabled?, class: 'custom-control-input'
= f.label :lfs_enabled, class: 'custom-control-label' do
%span
= _('Allow projects within this group to use Git LFS')
= link_to sprite_icon('question-o'), help_page_path('topics/git/lfs/_index.md')
%p.help-text= _('This setting can be overridden in each project.')
```
- After:
```haml
= gitlab_ui_form_for @group do |f|
.form-group.gl-mb-3
= f.gitlab_ui_checkbox_component :prevent_sharing_groups_outside_hierarchy,
safe_format(s_('GroupSettings|Prevent members from sending invitations to groups outside of %{group} and its subgroups.'), group: link_to_group(@group)),
help_text: prevent_sharing_groups_outside_hierarchy_help_text(@group),
checkbox_options: { disabled: !can_change_prevent_sharing_groups_outside_hierarchy?(@group) }
.form-group.gl-mb-3
= f.gitlab_ui_checkbox_component :lfs_enabled, checkbox_options: { checked: @group.lfs_enabled? } do |c|
- c.with_label do
= _('Allow projects within this group to use Git LFS')
= link_to sprite_icon('question-o'), help_page_path('topics/git/lfs/_index.md')
- c.with_help_text do
= _('This setting can be overridden in each project.')
```
### Available components
When using the GitLab UI form builder, the following components are available for use in HAML.
{{< alert type="note" >}}
Currently, only the listed components are available, but more components are planned.
{{< /alert >}}
#### `gitlab_ui_checkbox_component`
[GitLab UI Docs](https://gitlab-org.gitlab.io/gitlab-ui/?path=/story/base-form-form-checkbox--default)
##### Arguments
| Argument | Type | Required (default value) | Description |
|--------------------|----------|--------------------------|-------------|
| `method` | `Symbol` | `true` | Attribute on the object passed to `gitlab_ui_form_for`. |
| `label` | `String` | `false` (`nil`) | Checkbox label. `label` slot can be used instead of this argument if HTML is needed. |
| `help_text` | `String` | `false` (`nil`) | Help text displayed below the checkbox. `help_text` slot can be used instead of this argument if HTML is needed. |
| `checkbox_options` | `Hash` | `false` (`{}`) | Options that are passed to [Rails `check_box` method](https://api.rubyonrails.org/classes/ActionView/Helpers/FormBuilder.html#method-i-check_box). |
| `checked_value` | `String` | `false` (`'1'`) | Value when checkbox is checked. |
| `unchecked_value` | `String` | `false` (`'0'`) | Value when checkbox is unchecked. |
| `label_options` | `Hash` | `false` (`{}`) | Options that are passed to [Rails `label` method](https://api.rubyonrails.org/classes/ActionView/Helpers/FormBuilder.html#method-i-label). |
##### Slots
This component supports [ViewComponent slots](https://viewcomponent.org/guide/slots.html).
| Slot | Description |
|-------------|-------------|
| `label` | Checkbox label content. This slot can be used instead of the `label` argument. |
| `help_text` | Help text content displayed below the checkbox. This slot can be used instead of the `help_text` argument. |
#### `gitlab_ui_radio_component`
[GitLab UI Docs](https://gitlab-org.gitlab.io/gitlab-ui/?path=/story/base-form-form-radio--default)
##### Arguments
| Argument | Type | Required (default value) | Description |
|-----------------|----------|--------------------------|-------------|
| `method` | `Symbol` | `true` | Attribute on the object passed to `gitlab_ui_form_for`. |
| `value` | `Symbol` | `true` | The value of the radio tag. |
| `label` | `String` | `false` (`nil`) | Radio label. `label` slot can be used instead of this argument if HTML content is needed inside the label. |
| `help_text` | `String` | `false` (`nil`) | Help text displayed below the radio button. `help_text` slot can be used instead of this argument if HTML content is needed inside the help text. |
| `radio_options` | `Hash` | `false` (`{}`) | Options that are passed to [Rails `radio_button` method](https://api.rubyonrails.org/classes/ActionView/Helpers/FormBuilder.html#method-i-radio_button). |
| `label_options` | `Hash` | `false` (`{}`) | Options that are passed to [Rails `label` method](https://api.rubyonrails.org/classes/ActionView/Helpers/FormBuilder.html#method-i-label). |
##### Slots
This component supports [ViewComponent slots](https://viewcomponent.org/guide/slots.html).
| Slot | Description |
|-------------|-------------|
| `label` | Radio button label content. This slot can be used instead of the `label` argument. |
| `help_text` | Help text content displayed below the radio button. This slot can be used instead of the `help_text` argument. |
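For reference, a minimal sketch of `gitlab_ui_radio_component` usage. The object, attribute, and strings below are illustrative and not taken from the codebase:

```haml
= gitlab_ui_form_for @project do |f|
  .form-group
    = f.gitlab_ui_radio_component :visibility_level, :private,
      _('Private'),
      help_text: _('Only project members can see the project.')
    = f.gitlab_ui_radio_component :visibility_level, :public,
      _('Public'),
      help_text: _('The project can be seen by anyone.')
```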
|
https://docs.gitlab.com/development/dependencies
|
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/development/dependencies.md
|
2025-08-13
|
doc/development/fe_guide
|
[
"doc",
"development",
"fe_guide"
] |
dependencies.md
|
none
|
unassigned
|
Any user with at least the Maintainer role can merge updates to this content. For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
|
Frontend dependencies
| null |
We use [yarn@1](https://classic.yarnpkg.com/lang/en/) to manage frontend dependencies.
There are a few exceptions in the GitLab repository, stored in `vendor/assets/`.
## What are production and development dependencies?
These dependencies are defined in two groups within `package.json`, `dependencies` and `devDependencies`.
For our purposes, we consider anything that is required to compile our production assets a "production" dependency.
That is, anything required to run the `webpack` script with `NODE_ENV=production`.
Tools like `eslint`, `jest`, and various plugins and tools used in development are considered `devDependencies`.
This distinction is used by Omnibus GitLab to determine which dependencies it requires when building GitLab.
Exceptions are made for some tools that we require in the
`compile-production-assets` CI job such as `webpack-bundle-analyzer` to analyze our
production assets post-compile.
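As an illustration, the split between the two groups looks roughly like this. Package names and versions are examples only, not the actual contents of our `package.json`:

```javascript
// package.json (illustrative excerpt)
{
  "dependencies": {
    // required to compile production assets
    "vue": "^2.7.0",
    "@gitlab/ui": "^60.0.0"
  },
  "devDependencies": {
    // only needed while developing and testing
    "eslint": "^8.0.0",
    "jest": "^28.0.0"
  }
}
```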
## Updating dependencies
See the main [Dependencies](../dependencies.md) page for general information about dependency updates.
|
https://docs.gitlab.com/development/blob_syntax_highlighting
|
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/development/blob_syntax_highlighting.md
|
2025-08-13
|
doc/development/fe_guide
|
[
"doc",
"development",
"fe_guide"
] |
blob_syntax_highlighting.md
|
Create
|
Source Code
|
Any user with at least the Maintainer role can merge updates to this content. For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
|
Syntax highlighting development guidelines (repository blob viewer)
| null |
This guide outlines best practices and implementation details for syntax highlighting in the repository source code viewer. GitLab uses two syntax highlighting libraries:
- [Highlight.js](https://highlightjs.org/) for client-side highlighting in the source viewer
- See the [full list of supported languages](https://github.com/highlightjs/highlight.js/blob/main/SUPPORTED_LANGUAGES.md)
- [Rouge](https://rubygems.org/gems/rouge) as a server-side fallback
- See the [full list of supported languages](https://github.com/rouge-ruby/rouge/wiki/list-of-supported-languages-and-lexers)
The source code viewer uses this dual approach to ensure broad language support and optimal performance when viewing files in the repository.
## Components Overview
The syntax highlighting implementation consists of several key components:
- `blob_content_viewer.vue`: Main component for displaying file content
- `source_viewer.vue`: Handles the rendering of source code
- `highlight_mixin.js`: Manages the highlighting process and WebWorker communication
- `highlight_utils.js`: Provides utilities for content chunking and processing
## Performance Principles
### Display content as quickly as possible
We optimize the display of content through a staged rendering approach:
1. Immediately render the first 70 lines in plaintext (without highlighting)
1. Request the WebWorker to highlight the first 70 lines
1. Request the WebWorker to highlight the entire file
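The message flow looks roughly like the following sketch. This is a simplification; the worker path and helper names are hypothetical and not the actual `highlight_mixin.js` implementation:

```javascript
// Simplified sketch of offloading highlighting to a WebWorker.
const worker = new Worker('/assets/webworkers/highlight_worker.js'); // hypothetical worker path

// First pass: ask the worker to highlight only the first 70 lines.
worker.postMessage({ content: firstSeventyLines, language });

worker.onmessage = ({ data }) => {
  // Hypothetical helper that swaps plaintext chunks for highlighted ones.
  renderHighlightedChunks(data.chunks);
};

// Then request highlighting for the entire file.
worker.postMessage({ content: fullContent, language });
```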
### Maintain optimal browser performance
To maintain optimal browser performance:
- Use a WebWorker for the highlighting task so that it doesn't block the main thread
- Break highlighted content into chunks and render them as the user scrolls using the [IntersectionObserver API](https://developer.mozilla.org/en-US/docs/Web/API/Intersection_Observer_API)
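A simplified sketch of the chunk-on-scroll idea. The selector and helper below are hypothetical; the real logic lives in `source_viewer.vue` and `highlight_utils.js`:

```javascript
// Render a highlighted chunk only when it scrolls into view.
const observer = new IntersectionObserver((entries) => {
  entries.forEach((entry) => {
    if (!entry.isIntersecting) return;
    renderChunk(entry.target); // hypothetical helper that inserts the highlighted HTML
    observer.unobserve(entry.target); // each chunk only needs to be rendered once
  });
});

document.querySelectorAll('.js-code-chunk').forEach((chunk) => observer.observe(chunk));
```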
## Adding Syntax Highlighting Support
You can add syntax highlighting support for new languages by:
1. Using existing third-party language definitions.
1. Creating custom language definitions in our codebase.
The method you choose depends on whether the language already has a Highlight.js compatible definition available.
### For Languages with Third-Party Definitions
We can add third-party dependencies to our [`package.json`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/package.json) and import the dependency in [`highlight_js_language_loader`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/assets/javascripts/content_editor/services/highlight_js_language_loader.js#L260).
Example:
- Add the dependency to `package.json`:
```javascript
// package.json
//...
"dependencies": {
"@gleam-lang/highlight.js-gleam": "^1.5.0",
//...
```
- Import the language in `highlight_js_language_loader.js`:
```javascript
// highlight_js_language_loader.js
//...
gleam: () => import(/* webpackChunkName: 'hl-gleam' */ '@gleam-lang/highlight.js-gleam'),
//...
```
If the language is still displayed as plaintext, you might need to add language detection based on the file extension in [`highlight_mixin.js`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/assets/javascripts/repository/mixins/highlight_mixin.js):
```javascript
if (name.endsWith('.gleam')) {
language = 'gleam';
}
```
### For Languages Without Existing Definitions
New language definitions can be added to our codebase under [`~/vue_shared/components/source_viewer/languages/`](https://gitlab.com/gitlab-org/gitlab/-/tree/master/app/assets/javascripts/vue_shared/components/source_viewer/languages/).
To add support for a new language:
1. Create a new language definition file following the [Highlight.js syntax](https://highlightjs.readthedocs.io/en/latest/language-contribution.html).
1. Register the language in `highlight_js_language_loader.js`.
1. Add file extension mapping in `highlight_mixin.js` if needed.
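A hypothetical skeleton of such a definition file. The names and rules are illustrative; see the Highlight.js contribution guide linked above for the full API:

```javascript
// Hypothetical: ~/vue_shared/components/source_viewer/languages/mylang.js
export default (hljs) => ({
  name: 'MyLang',
  case_insensitive: true,
  keywords: 'if else return let',
  contains: [
    hljs.COMMENT('#', '$'), // line comments starting with `#`
    hljs.QUOTE_STRING_MODE, // double-quoted strings
    hljs.NUMBER_MODE, // numeric literals
  ],
});
```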
Here are two examples of custom language implementations:
1. [Svelte](https://gitlab.com/gitlab-org/gitlab/-/commit/0680b3a27b3973287ae6a973703faf9472535c47)
1. [CODEOWNERS](https://gitlab.com/gitlab-org/gitlab/-/commit/825fd1e97df582b9f2654fc248c15e073d78d82b)
|
https://docs.gitlab.com/development/icons
|
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/development/icons.md
|
2025-08-13
|
doc/development/fe_guide
|
[
"doc",
"development",
"fe_guide"
] |
icons.md
|
none
|
unassigned
|
Any user with at least the Maintainer role can merge updates to this content. For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
|
Icons and SVG Illustrations
| null |
We manage our own icon and illustration library in the [`gitlab-svgs`](https://gitlab.com/gitlab-org/gitlab-svgs)
repository. This repository is published on [npm](https://www.npmjs.com/package/@gitlab/svgs),
and is managed as a dependency with yarn. You can browse all available
[icons and illustrations](https://gitlab-org.gitlab.io/gitlab-svgs). To upgrade
to a new version run `yarn upgrade @gitlab/svgs`.
## Icons
We use SVG icons in GitLab with an SVG sprite.
This means the icons are only loaded once, and are referenced through an ID.
The sprite SVG is located under `/assets/icons.svg`.
### Usage in HAML/Rails
To use a sprite Icon in HAML or Rails we use a specific helper function:
```ruby
sprite_icon(icon_name, size: nil, css_class: '')
```
- **`icon_name`**: Use the `icon_name` for the SVG sprite in the list of
([GitLab SVGs](https://gitlab-org.gitlab.io/gitlab-svgs)).
- **`size` (optional)**: Use one of the following sizes: 16, 24, 32, 48, 72. The size is
  translated into an `sXX` class (for example, `s16`).
- **`css_class` (optional)**: If you want to add additional CSS classes.
**Example**
```ruby
= sprite_icon('issues', size: 72, css_class: 'icon-danger')
```
**Output from example above**
```html
<svg class="s72 icon-danger">
<use xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="/assets/icons.svg#issues"></use>
</svg>
```
### Usage in Vue
[GitLab UI](https://gitlab-org.gitlab.io/gitlab-ui/), our components library, provides a component to display sprite icons.
Sample usage:
```html
<script>
import { GlIcon } from "@gitlab/ui";
export default {
components: {
GlIcon,
},
};
</script>
<template>
<gl-icon
name="issues"
:size="24"
class="class-name"
/>
</template>
```
- **name**: Name of the icon of the SVG sprite, as listed in the
([GitLab SVG Previewer](https://gitlab-org.gitlab.io/gitlab-svgs)).
- **size (optional)**: Number value for the size, which is then mapped to a specific CSS class.
  (Available sizes 8, 12, 16, 18, 24, 32, 48, 72 are mapped to `sXX` CSS classes.)
- **class (optional)**: Additional CSS classes to add to the SVG tag.
### Usage in HTML/JS
Use the following function inside JS to render an icon:
`gl.utils.spriteIcon(iconName)`
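For example (the container selector below is hypothetical):

```javascript
// Returns an SVG string referencing the sprite, which can then be inserted into the DOM.
const iconHtml = gl.utils.spriteIcon('issues');
document.querySelector('.js-icon-container').innerHTML = iconHtml;
```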
## Loading icon
### Usage in HAML/Rails
To insert a loading spinner in HAML or Rails use the `gl_loading_icon` helper:
```ruby
= gl_loading_icon
```
You can include one or more of the following properties with the `gl_loading_icon` helper, as demonstrated
by the examples that follow:
- `inline` (optional): renders an inline element if `true`; otherwise, a block element (default) with the spinner centered.
- `color` (optional): either `dark` (default) or `light`.
- `size` (optional): either `sm` (default), `md`, `lg`, or `xl`.
- `css_class` (optional): defaults to nothing, but can be used for utility classes to fine-tune alignment or spacing.
**Example 1**:
The following HAML expression generates a loading icon's markup and
centers the icon.
```ruby
= gl_loading_icon
```
**Example 2**:
The following HAML expression generates an inline loading icon's markup
with a custom size. It also appends a margin utility class.
```ruby
= gl_loading_icon(inline: true, size: 'lg', css_class: 'gl-mr-2')
```
### Usage in Vue
The [GitLab UI](https://gitlab-org.gitlab.io/gitlab-ui/) components library provides a
`GlLoadingIcon` component. See the component's
[storybook](https://gitlab-org.gitlab.io/gitlab-ui/?path=/story/base-loading-icon--default)
for more information about its usage.
**Example**:
The following code snippet demonstrates how to use `GlLoadingIcon` in
a Vue component.
```html
<script>
import { GlLoadingIcon } from "@gitlab/ui";
export default {
components: {
GlLoadingIcon,
},
};
</script>
<template>
<gl-loading-icon inline />
</template>
```
## SVG Illustrations
Use `img` tags to display any SVG-based illustrations with either the `image_tag` or `image_path` helpers.
Using the class `svg-content` around it ensures nice rendering.
### Usage in HAML/Rails
**Example**
```ruby
.svg-content
= image_tag 'illustrations/merge_requests.svg'
```
### Usage in Vue
It is discouraged to pass down SVG paths from Rails. Instead of `Rails => Haml => Vue` we can import SVG illustrations directly in `Vue`.
To use an SVG illustration in a template import the SVG from `@gitlab/svgs`. You can find the available SVG paths via the [GitLab SVG Previewer](https://gitlab-org.gitlab.io/gitlab-svgs/illustrations).
Below is an example of how to import an SVG illustration and use it with the `GlEmptyState` component.
Component:
```html
<script>
import { GlEmptyState } from '@gitlab/ui';
// The ?url query string ensures the SVG is imported as a URL instead of an inline SVG
// This is useful for bundle size and optimized loading
import mergeTrainsSvgPath from '@gitlab/svgs/dist/illustrations/train-sm.svg?url';
export default {
components: {
GlEmptyState
},
mergeTrainsSvgPath,
};
</script>
<template>
<gl-empty-state
title="This state is empty"
description="Empty state description"
:svg-path="$options.mergeTrainsSvgPath"
/>
</template>
```
### Minimize SVGs
When you develop or export a new SVG illustration, minimize it with an [SVGO](https://github.com/svg/svgo) powered tool, like
[SVGOMG](https://jakearchibald.github.io/svgomg/), to save space. Illustrations
added to [GitLab SVG](https://gitlab.com/gitlab-org/gitlab-svgs) are automatically
minimized, so no manual action is needed.
|
https://docs.gitlab.com/development/getting_started
|
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/development/getting_started.md
|
2025-08-13
|
doc/development/fe_guide
|
[
"doc",
"development",
"fe_guide"
] |
getting_started.md
|
none
|
unassigned
|
Any user with at least the Maintainer role can merge updates to this content. For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
|
Getting started
| null |
This page will guide you through the Frontend development process and show you what a normal merge request cycle looks like. You can find more about the organization of the frontend team in the [handbook](https://handbook.gitlab.com/handbook/engineering/frontend/).
There are a lot of things to consider for a first merge request and it can feel overwhelming. The [Frontend onboarding course](onboarding_course/_index.md) provides a 6-week structured curriculum to learn how to contribute to the GitLab frontend.
## Development lifecycle
### Step 1: Preparing the issue
Before tackling any work, read through the issue that has been assigned to you and make sure that all [required departments](https://handbook.gitlab.com/handbook/engineering/#engineering-teams) have been involved as they should be. Read through the comments as needed and, if anything is unclear, post a comment in the issue summarizing **what you think the work is** and ping your Engineering or Product Manager to confirm. Then, once everything is clarified, apply the correct workflow labels to the issue and create a merge request branch. If created directly from the issue, the issue and the merge request are linked by default.
### Step 2: Plan your implementation
Before writing code, make sure to ask yourself the following questions and have clear answers before you start developing:
- What API data is required? Is it already available in our API or should I ask a Backend counterpart?
- If this is GraphQL, write a query proposal and ask your BE counterpart to confirm they are in agreement.
- Can I use [GitLab UI components](https://gitlab-org.gitlab.io/gitlab-ui/?path=/docs/base-accordion--docs)? Which components are appropriate and do they have all of the functionality that I need?
- Are there existing components or utilities in the GitLab project that I could use?
- [Should this change live behind a Feature Flag](https://handbook.gitlab.com/handbook/product-development-flow/feature-flag-lifecycle/#when-to-use-feature-flags)?
- In which directory should this code live?
- Should I build part of this feature as reusable? If so, where should it live in the codebase and how do I make it discoverable?
- Note: This is still being evaluated, but for now the `vue_shared` folder is the preferred directory for GitLab-wide components.
- What kinds of tests will it require? Consider unit tests **and** [Feature Tests](../testing_guide/frontend_testing.md#get-started-with-feature-tests). Should I reach out to a [SET](https://handbook.gitlab.com/job-families/engineering/software-engineer-in-test/) for guidance, or am I comfortable implementing the tests?
- How big will this change be? Try to keep diffs to **roughly 500 changed lines (+/-) at most**.
If all of these questions have an answer, then you can safely move on to writing code.
### Step 3: Writing code
Make sure to communicate with your team as you progress or if you are unable to work on a planned issue for a long period of time.
If you require assistance, make sure to push your branch and share your merge request either directly with a teammate or in the Slack channel `#frontend` to get advice on how to move forward. You can [mark your merge request as a draft](../../user/project/merge_requests/drafts.md), which clearly communicates that it is not ready for a full review. Always remember to have a [low level of shame](https://handbook.gitlab.com/handbook/values/#low-level-of-shame) and **ask for help when you need it**.
As you write code, make sure to test your change thoroughly. It is the author's responsibility to test their code, ensure that it works as expected, and ensure that it does not break existing behaviors. Reviewers may help in that regard, but **do not expect it**. Make sure to check different browsers, mobile viewports, and unexpected user flows.
### Step 4: Review
When it's time to send your code to review, it can be quite stressful. It is recommended to read through [the code review guidelines](../code_review.md) to get a better sense of what to expect. One of the most valuable pieces of advice that is **essential** is simply:
> ... to avoid unnecessary back-and-forth with reviewers, ... perform a self-review of your own merge request, and follow the Code Review guidelines.
This is key to having a great merge request experience because you will catch small mistakes and leave comments in areas where your reviewer might be uncertain and have questions. This speeds up the process tremendously.
### Step 5: Verifying
After your code has merged (congratulations!), make sure to verify that it works on the production environment and does not cause any errors.
|
https://docs.gitlab.com/development/guides
|
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/development/guides.md
|
2025-08-13
|
doc/development/fe_guide
|
[
"doc",
"development",
"fe_guide"
] |
guides.md
|
none
|
unassigned
|
Any user with at least the Maintainer role can merge updates to this content. For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
|
Guides
| null |
This section contains guides to help our developers.
For example, you can find information about how to accomplish a specific task,
or how to get proficient with a tool.
Guidelines related to one specific technology, like Vue, should not be added to this section. Instead, add them to the `Tech Stack` section.
|
https://docs.gitlab.com/development/architecture
|
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/development/architecture.md
|
2025-08-13
|
doc/development/fe_guide
|
[
"doc",
"development",
"fe_guide"
] |
architecture.md
|
none
|
unassigned
|
Any user with at least the Maintainer role can merge updates to this content. For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
|
Architecture
| null |
At GitLab, there are no dedicated "software architects". Everyone is encouraged to make their own decisions and document them appropriately. To know how or where to document these decisions, read on.
## Documenting decisions
When building new features, consider the scope and scale of what you are about to build. Depending on the answer, there are several tools or processes that could support your endeavor. We aim to keep the process of building features as efficient as possible. As a general rule, use the simplest process possible unless you need the additional support and structure of more time consuming solutions.
### Merge requests
When a change's impact is limited to a single group or the change has a single contributor, the smallest possible documentation of architecture decisions is a commit and, by extension, a merge request (MR). MRs or commits can still be referenced even after they are merged, so it is vital to leave a good description, comments, and commit messages that explain certain decisions in case they need to be referenced later. Even an MR that is intended to be reviewed within a group should contain all relevant decision-making explicitly.
### Architectural Issue
When a unit of work starts to get big enough that it might impact an entire group's direction, it may be a good idea to create an architecture issue to discuss the technical direction. This process is informal and has no official requirements. Create an issue within the GitLab project where you can propose a plan for the work to be done and invite collaborators to refine the proposal with you.
This structure allows the group to think through a proposed change, gather feedback and iterate. It also allows them to use the issue as a source of truth rather than a comments thread on the feature issue or the MRs themselves. Consider adding some kind of visual support (like a schema) to facilitate the discussion. For example, you can reference this [architectural issue of the CI/CD Catalog](https://gitlab.com/gitlab-org/gitlab/-/issues/393225).
### Design Documents
When the work ahead may affect more than a single group, stage or potentially an entire department (for example, all of the Frontend team) then it is likely that there is need for a [Design Document](https://handbook.gitlab.com/handbook/engineering/architecture/workflow/).
This is well documented in the handbook, but to touch on it shortly, it is **the best way** to propose large changes and gather the required feedback and support to move forward. These documents are version controlled, keep evolving with time and are a great way to share a complex understanding across the entire organization. They also require a coach, which is a great way to involve someone with a lot of experience with larger changes. This process is shared across all engineering departments and is owned by the CTO.
To see all Design Documents, you can check the [Architecture at GitLab page](https://handbook.gitlab.com/handbook/engineering/architecture/design-documents/)
### Frontend RFCs (deprecated)
In the past, we had a [Frontend RFC project](https://gitlab.com/gitlab-org/frontend/rfcs) whose goal was to propose larger changes and get opinions from the entire department. This project is no longer used for several reasons:
1. Issues created in this project had a very low participation rate (less than 20%)
1. Controversial issues would stall with no clear way to resolve them
1. Issues that were completed often did not need an RFC in the first place (small issues)
1. Changes were often proposed "naively" without clear time and resource allocation
In most instances where we would have created an RFC, a Design Document can be used instead, as it will have its own RFC attached to it. This centers the conversation on the technical design, and RFCs become just a way to further the completion of the design.
### Entry in the Frontend documentation
Adding an architecture section to the docs is a way to tell frontend engineers how to use or build upon an existing architecture. Use it to help "onboard" engineers to a part of the application that may not be self-evident. Try to avoid documenting your group's architecture here if it has no impact on other teams.
### Which to choose?
As a general rule, the wider the scope of your change, the more likely it is that you and your team would benefit from a Design Document. Also consider whether your change is a true two-way door decision: changes that can easily be reverted require less thinking ahead than those that cannot.
Work that can be achieved within a single milestone probably only needs merge requests. Work that may take several milestones to complete, but where you are the only DRI, is probably also easier done through MRs.
When multiple DRIs are involved, ask yourself if the work ahead is clear for all of you. If the work is complex and each part affects the others, consider gathering technical feedback from your team in an Architectural issue before you start working. Write a clear proposal, involve all stakeholders early and keep yourselves accountable to the decisions made on the issue.
Very small changes may have a very broad impact. For example, a change to any ESLint rule will impact all of engineering, but might not require a Design Document. Consider sending your proposal through Slack to gauge interest ("Should we enable X rule?") and then simply create a MR. Finally, share widely to the appropriate channels to gather feedback.
For recommending certain code patterns in our documentation, you can write the MR that applies your proposed change, share it broadly with the department and, if no strong objections are raised, merge your change. This is more efficient than RFCs because of the bias for action, while also gathering all the feedback necessary for everyone to feel included.
If you'd like to propose a major change to the technological stack (Vue to React, JavaScript to TypeScript, etc.), start by reaching out on Slack to gauge interest. Always ask yourself whether or not the problems that you see can be fixed from our current tech stack, as we should always try to fix our problems with the tools we already have. Other departments, such as Backend and QA, do not have a clear process to propose technological changes either. That is because these changes would require huge investments from the company and probably cannot be decided without involving high-ranking executives from engineering.
Instead, consider starting a Design Document that explains the problem and try to solve it with our current tools. Invite contribution from the department and research this thoroughly as there can only be two outcomes. Either the problem **can** be solved with our current tools or it cannot. If it can, this is a huge win for our teams since we've fixed an issue without the need to completely change our stack, and if it cannot, then the Design Document can be the start of the larger conversation around the technological change.
## Widget Architecture
The [Plan stage](https://handbook.gitlab.com/handbook/engineering/development/dev/plan-project-management/)
is refactoring the right sidebar to consist of **widgets**. They have a specific architecture to be
reusable and to expose an interface that can be used by external Vue applications on the page.
Learn more about the [widget architecture](widgets.md).
|
---
stage: none
group: unassigned
info: Any user with at least the Maintainer role can merge updates to this content.
For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Architecture
breadcrumbs:
- doc
- development
- fe_guide
---
At GitLab, there are no dedicated "software architects". Everyone is encouraged to make their own decisions and document them appropriately. To know how or where to document these decisions, read on.
## Documenting decisions
When building new features, consider the scope and scale of what you are about to build. Depending on the answer, there are several tools or processes that could support your endeavor. We aim to keep the process of building features as efficient as possible. As a general rule, use the simplest process possible unless you need the additional support and structure of more time consuming solutions.
### Merge requests
When a change's impact is limited to a single group or the change has a single contributor, the smallest possible documentation of architecture decisions is a commit and, by extension, a merge request (MR). MRs or commits can still be referenced even after they are merged, so it is vital to leave a good description, comments, and commit messages that explain certain decisions in case they need to be referenced later. Even an MR that is intended to be reviewed within a group should contain all relevant decision-making explicitly.
### Architectural Issue
When a unit of work starts to get big enough that it might impact an entire group's direction, it may be a good idea to create an architecture issue to discuss the technical direction. This process is informal and has no official requirements. Create an issue within the GitLab project where you can propose a plan for the work to be done and invite collaborators to refine the proposal with you.
This structure allows the group to think through a proposed change, gather feedback and iterate. It also allows them to use the issue as a source of truth rather than a comments thread on the feature issue or the MRs themselves. Consider adding some kind of visual support (like a schema) to facilitate the discussion. For example, you can reference this [architectural issue of the CI/CD Catalog](https://gitlab.com/gitlab-org/gitlab/-/issues/393225).
### Design Documents
When the work ahead may affect more than a single group, stage or potentially an entire department (for example, all of the Frontend team) then it is likely that there is need for a [Design Document](https://handbook.gitlab.com/handbook/engineering/architecture/workflow/).
This is well documented in the handbook, but to touch on it shortly, it is **the best way** to propose large changes and gather the required feedback and support to move forward. These documents are version controlled, keep evolving with time and are a great way to share a complex understanding across the entire organization. They also require a coach, which is a great way to involve someone with a lot of experience with larger changes. This process is shared across all engineering departments and is owned by the CTO.
To see all Design Documents, you can check the [Architecture at GitLab page](https://handbook.gitlab.com/handbook/engineering/architecture/design-documents/)
### Frontend RFCs (deprecated)
In the past, we had a [Frontend RFC project](https://gitlab.com/gitlab-org/frontend/rfcs) whose goal was to propose larger changes and get opinions from the entire department. This project is no longer used for several reasons:
1. Issues created in this project had a very low participation rate (less than 20%)
1. Controversial issues would stall with no clear way to resolve them
1. Issues that were completed often did not need an RFC in the first place (small issues)
1. Changes were often proposed "naively" without clear time and resource allocation
In most instances where we would have created an RFC, a Design Document can be used instead, as it will have its own RFC attached to it. This centers the conversation on the technical design, and RFCs become just a way to further the completion of the design.
### Entry in the Frontend documentation
Adding an architecture section to the docs is a way to tell frontend engineers how to use or build upon an existing architecture. Use it to help "onboard" engineers to a part of the application that may not be self-evident. Try to avoid documenting your group's architecture here if it has no impact on other teams.
### Which to choose?
As a general rule, the wider the scope of your change, the more likely it is that you and your team would benefit from a Design Document. Also consider whether your change is a true two-way door decision: changes that can easily be reverted require less thinking ahead than those that cannot.
Work that can be achieved within a single milestone probably only needs merge requests. Work that may take several milestones to complete, but where you are the only DRI, is probably also easier done through MRs.
When multiple DRIs are involved, ask yourself if the work ahead is clear for all of you. If the work is complex and each part affects the others, consider gathering technical feedback from your team in an Architectural issue before you start working. Write a clear proposal, involve all stakeholders early and keep yourselves accountable to the decisions made on the issue.
Very small changes may have a very broad impact. For example, a change to any ESLint rule will impact all of engineering, but might not require a Design Document. Consider sending your proposal through Slack to gauge interest ("Should we enable X rule?") and then simply create a MR. Finally, share widely to the appropriate channels to gather feedback.
For recommending certain code patterns in our documentation, you can write the MR that applies your proposed change, share it broadly with the department and, if no strong objections are raised, merge your change. This is more efficient than RFCs because of the bias for action, while also gathering all the feedback necessary for everyone to feel included.
If you'd like to propose a major change to the technological stack (Vue to React, JavaScript to TypeScript, etc.), start by reaching out on Slack to gauge interest. Always ask yourself whether or not the problems that you see can be fixed from our current tech stack, as we should always try to fix our problems with the tools we already have. Other departments, such as Backend and QA, do not have a clear process to propose technological changes either. That is because these changes would require huge investments from the company and probably cannot be decided without involving high-ranking executives from engineering.
Instead, consider starting a Design Document that explains the problem and try to solve it with our current tools. Invite contribution from the department and research this thoroughly as there can only be two outcomes. Either the problem **can** be solved with our current tools or it cannot. If it can, this is a huge win for our teams since we've fixed an issue without the need to completely change our stack, and if it cannot, then the Design Document can be the start of the larger conversation around the technological change.
## Widget Architecture
The [Plan stage](https://handbook.gitlab.com/handbook/engineering/development/dev/plan-project-management/)
is refactoring the right sidebar to consist of **widgets**. They have a specific architecture to be
reusable and to expose an interface that can be used by external Vue applications on the page.
Learn more about the [widget architecture](widgets.md).
|
https://docs.gitlab.com/development/troubleshooting
|
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/development/troubleshooting.md
|
2025-08-13
|
doc/development/fe_guide
|
[
"doc",
"development",
"fe_guide"
] |
troubleshooting.md
|
none
|
unassigned
|
Any user with at least the Maintainer role can merge updates to this content. For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
|
Troubleshooting frontend development issues
| null |
Running into a problem? Maybe this will help ¯\_(ツ)_/¯.
## Troubleshooting issues
### This guide doesn't contain the issue you ran into
If you run into a Frontend development issue that is not in this guide, consider updating this guide with your issue and possible remedies. This way future adventurers can face these dragons with more success, being armed with your experience and knowledge.
## Testing issues
### ``Property or method `nodeType` is not defined`` but you're not using `nodeType` anywhere
This issue can happen in Vue component tests, when an expectation fails, but there is an error thrown when
Jest tries to pretty print the diff in the console. It's been noted that using `toEqual` with an array as a
property might also be a contributing factor.
See [this video](https://youtu.be/-BkEhghP-kM) for an in-depth overview and investigation.
**Remedy - Try cloning the object that has Vue watchers**
```diff
- expect(wrapper.findComponent(ChildComponent).props()).toEqual(...);
+ expect(cloneDeep(wrapper.findComponent(ChildComponent).props())).toEqual(...)
```
**Remedy - Try using `toMatchObject` instead of `toEqual`**
```diff
- expect(wrapper.findComponent(ChildComponent).props()).toEqual(...);
+ expect(wrapper.findComponent(ChildComponent).props()).toMatchObject(...);
```
`toMatchObject` actually changes the nature of the assertion and won't fail if some items are **missing** from the expectation.
## Script issues
### `core-js` errors when running scripts within the GitLab repository
The following example assumes you've set up the GitLab repository in the
`~/workspace/gdk` directory. When running scripts within the GitLab repository,
such as code transformations, you might run into `core-js` errors like this:
```shell
~/workspace/gdk/gitlab/node_modules/core-js/modules/es.global-this.js:7
$({
^
TypeError: $ is not a function
at Object.<anonymous> (~/workspace/gdk/gitlab/node_modules/core-js/modules/es.global-this.js:6:1)
at Module._compile (internal/modules/cjs/loader.js:1063:30)
at Module._compile (~/workspace/gdk/gitlab/node_modules/pirates/lib/index.js:99:24)
at Module._extensions..js (internal/modules/cjs/loader.js:1092:10)
at Object.newLoader [as .js] (~/workspace/gdk/gitlab/node_modules/pirates/lib/index.js:104:7)
at Module.load (internal/modules/cjs/loader.js:928:32)
at Function.Module._load (internal/modules/cjs/loader.js:769:14)
at Module.require (internal/modules/cjs/loader.js:952:19)
at require (internal/modules/cjs/helpers.js:88:18)
at Object.<anonymous> (~/workspace/gdk/gitlab/node_modules/core-js/modules/esnext.global-this.js:2:1)
```
**Remedy - Try moving the script into a separate repository and pointing it to files in the GitLab repository**
## Using Vue component issues
### When rendering a component that uses GlFilteredSearch and the component or its parent uses Vue Apollo
When trying to render our component GlFilteredSearch, you might get an error in the component's `provide` function:
`cannot read suggestionsListClass of undefined`
Currently, `vue-apollo` tries to [manually call a component's `provide()` in the `beforeCreate` part](https://github.com/vuejs/vue-apollo/blob/35e27ec398d844869e1bbbde73c6068b8aabe78a/packages/vue-apollo/src/mixin.js#L149) of the component lifecycle. This means that when a `provide()` references props, which aren't actually set up until after `created`, it will blow up.
See this [closed MR](https://gitlab.com/gitlab-org/gitlab-ui/-/merge_requests/2019#note_514671251) for more context.
**Remedy - try providing `apolloProvider` to the top-level Vue instance options**
VueApollo will skip manually running `provide()` if it sees that an `apolloProvider` is provided in the `$options`.
```diff
new Vue(
el,
+ apolloProvider: {},
render(h) {
return h(App);
},
);
```
## Troubleshooting Apollo Client issues
### console errors when writing to cache
If you see errors like `Missing field 'descriptionHtml' while writing result`, it means we are not adhering to the GraphQL response structure while writing to the Apollo Client cache. The error stack trace provides clues about the specific parts of the Apollo Client code where the problem occurs.
**The Core Issue**:
The error "Missing field 'description'" indicates that your GraphQL query expects a field named `description` in the response, but the data you're receiving from your backend (or the way it's processed by Apollo Client) is missing that field. This causes the Apollo Client cache write to fail because the data is incomplete.
To debug this, follow these steps:
1. Open the error stack trace in the developer console:
```shell
Missing field 'description' while writing result {
"type": "DESCRIPTION",
"lastEditedAt": null,
"lastEditedBy": null,
"taskCompletionStatus": null,
"__typename": "WorkItemWidgetDescription"
}
```
1. Double-check your GraphQL query to ensure it's requesting the "description" field. If it's not included, Apollo Client won't be able to find it in the response.
1. The backend might not be returning the "description" field in the response for the "WorkItemWidgetDescription" type. Verify that your backend API is correctly sending the data as expected.
1. Use the `cache.readQuery` method to inspect the contents of the Apollo Client cache. Verify that the "description" field is present in the cached data for the relevant query
1. Examine the error stack trace; it suggests the issue might be related to how Apollo Client is writing data to its cache. It's possible that the cache is not being updated correctly, leading to missing fields
1. Add console logs within your Apollo Client code (for example, before and after writing to the cache) to track the data being processed and identify where the "description" field might be missing.
**Solution**
Ensure that you're using the correct `writeQuery` or `writeFragment` methods in your Apollo Client code to update the cache with the complete data, including the `description` field.
You should be able to see the method where this originates in the stack trace. Make sure you include the `description` field when writing to the cache.
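For illustration, here is a minimal sketch of a complete cache write. The `workItemQuery` document, the widget shape, and the helper name are hypothetical; the important part is that every field selected by the query, including `description`, is present in the data you write:

```javascript
// Hypothetical helper: rewrites the description widget in the cached work item.
// `workItemQuery` is assumed to select `description` on WorkItemWidgetDescription.
import workItemQuery from './work_item.query.graphql';

export function writeWorkItemDescription(cache, { workItemId, description, descriptionHtml }) {
  const variables = { id: workItemId };
  const sourceData = cache.readQuery({ query: workItemQuery, variables });

  cache.writeQuery({
    query: workItemQuery,
    variables,
    data: {
      ...sourceData,
      workItem: {
        ...sourceData.workItem,
        widgets: sourceData.workItem.widgets.map((widget) =>
          widget.__typename === 'WorkItemWidgetDescription'
            ? { ...widget, description, descriptionHtml } // keep every selected field
            : widget,
        ),
      },
    },
  });
}
```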
### Queries not being cached with the same variables
Apollo GraphQL queries may not be cached in several scenarios:
1. Cache Misses or Partial Caches/Query Invalidation or Changes:
If the query only returns partial data or there's a cache miss (when part of the requested data isn't in the cache), Apollo might not be able to cache the result effectively.
If data related to a query has been invalidated or updated, the cache might not have valid information. For example:
When using mutations, the cache might not automatically update unless you configure `refetchQueries` or use a manual cache update after the mutation.
For example, the subsequent query requests a couple of fields that were not requested in the first query:
```graphql
query workItemTreeQuery($id: WorkItemID!, $pageSize: Int = 100, $endCursor: String) {
workItem(id: $id) {
namespace {
id
}
userPermissions {
deleteWorkItem
updateWorkItem
}
}
}
```
```diff
query workItemTreeQuery($id: WorkItemID!, $pageSize: Int = 100, $endCursor: String) {
workItem(id: $id) {
namespace {
id
+ fullPath
}
userPermissions {
deleteWorkItem
updateWorkItem
+ adminParentLink
+ setWorkItemMetadata
+ createNote
+ adminWorkItemLink
}
}
}
```
1. `fetchPolicy` Settings:
Apollo Client uses a `fetchPolicy` to control how queries interact with the cache. Depending on the policy, the query may bypass caching entirely: if the `fetchPolicy` is `no-cache`, no part of the query is written to the cache. Each query directly fetches data from the server and doesn't store any results in the cache, so multiple network requests are made.
1. When the same query is fired from different Apollo Client instances. Each client instance maintains its own cache, so results fetched by one client are not reused by another.
1. Missing `id` or `__typename`:
Apollo Client uses `id` and `__typename` to uniquely identify entities and cache them. If these fields are missing from your query response, Apollo may not be able to cache the result properly.
1. Complex or Nested Queries:
Some queries might be too complex or involve nested queries that Apollo Client might struggle to cache correctly. This can happen if the structure of the data returned doesn't map cleanly to the cache schema, requiring manual cache management.
1. Pagination Queries:
For queries involving pagination, like those using fetchMore, Apollo might not cache results properly unless the cache is explicitly updated.
In all of these cases, you may need to configure Apollo's cache policies or manually update the cache to handle query caching effectively.
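For the pagination case, a minimal sketch of a cache field policy that merges `fetchMore` pages into a single cached list could look like this (the `Project.issues` field and its shape are illustrative only):

```javascript
import { InMemoryCache } from '@apollo/client/core';

// Merge incoming pages into the existing cached list instead of replacing it.
// `keyArgs` keeps separate cache entries per filter, but not per cursor.
const cache = new InMemoryCache({
  typePolicies: {
    Project: {
      fields: {
        issues: {
          keyArgs: ['state'],
          merge(existing = { nodes: [] }, incoming) {
            return {
              ...incoming,
              nodes: [...existing.nodes, ...incoming.nodes],
            };
          },
        },
      },
    },
  },
});
```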
|
---
stage: none
group: unassigned
info: Any user with at least the Maintainer role can merge updates to this content.
For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Troubleshooting frontend development issues
breadcrumbs:
- doc
- development
- fe_guide
---
Running into a problem? Maybe this will help ¯\_(ツ)_/¯.
## Troubleshooting issues
### This guide doesn't contain the issue you ran into
If you run into a Frontend development issue that is not in this guide, consider updating this guide with your issue and possible remedies. This way future adventurers can face these dragons with more success, being armed with your experience and knowledge.
## Testing issues
### ``Property or method `nodeType` is not defined`` but you're not using `nodeType` anywhere
This issue can happen in Vue component tests, when an expectation fails, but there is an error thrown when
Jest tries to pretty print the diff in the console. It's been noted that using `toEqual` with an array as a
property might also be a contributing factor.
See [this video](https://youtu.be/-BkEhghP-kM) for an in-depth overview and investigation.
**Remedy - Try cloning the object that has Vue watchers**
```diff
- expect(wrapper.findComponent(ChildComponent).props()).toEqual(...);
+ expect(cloneDeep(wrapper.findComponent(ChildComponent).props())).toEqual(...)
```
**Remedy - Try using `toMatchObject` instead of `toEqual`**
```diff
- expect(wrapper.findComponent(ChildComponent).props()).toEqual(...);
+ expect(wrapper.findComponent(ChildComponent).props()).toMatchObject(...);
```
`toMatchObject` actually changes the nature of the assertion and won't fail if some items are **missing** from the expectation.
## Script issues
### `core-js` errors when running scripts within the GitLab repository
The following example assumes you've set up the GitLab repository in the
`~/workspace/gdk` directory. When running scripts within the GitLab repository,
such as code transformations, you might run into `core-js` errors like this:
```shell
~/workspace/gdk/gitlab/node_modules/core-js/modules/es.global-this.js:7
$({
^
TypeError: $ is not a function
at Object.<anonymous> (~/workspace/gdk/gitlab/node_modules/core-js/modules/es.global-this.js:6:1)
at Module._compile (internal/modules/cjs/loader.js:1063:30)
at Module._compile (~/workspace/gdk/gitlab/node_modules/pirates/lib/index.js:99:24)
at Module._extensions..js (internal/modules/cjs/loader.js:1092:10)
at Object.newLoader [as .js] (~/workspace/gdk/gitlab/node_modules/pirates/lib/index.js:104:7)
at Module.load (internal/modules/cjs/loader.js:928:32)
at Function.Module._load (internal/modules/cjs/loader.js:769:14)
at Module.require (internal/modules/cjs/loader.js:952:19)
at require (internal/modules/cjs/helpers.js:88:18)
at Object.<anonymous> (~/workspace/gdk/gitlab/node_modules/core-js/modules/esnext.global-this.js:2:1)
```
**Remedy - Try moving the script into a separate repository and pointing it to files in the GitLab repository**
## Using Vue component issues
### When rendering a component that uses GlFilteredSearch and the component or its parent uses Vue Apollo
When trying to render our component GlFilteredSearch, you might get an error in the component's `provide` function:
`cannot read suggestionsListClass of undefined`
Currently, `vue-apollo` tries to [manually call a component's `provide()` in the `beforeCreate` part](https://github.com/vuejs/vue-apollo/blob/35e27ec398d844869e1bbbde73c6068b8aabe78a/packages/vue-apollo/src/mixin.js#L149) of the component lifecycle. This means that when a `provide()` references props, which aren't actually set up until after `created`, it will blow up.
See this [closed MR](https://gitlab.com/gitlab-org/gitlab-ui/-/merge_requests/2019#note_514671251) for more context.
**Remedy - try providing `apolloProvider` to the top-level Vue instance options**
VueApollo will skip manually running `provide()` if it sees that an `apolloProvider` is provided in the `$options`.
```diff
new Vue(
el,
+ apolloProvider: {},
render(h) {
return h(App);
},
);
```
## Troubleshooting Apollo Client issues
### console errors when writing to cache
If you see errors like `Missing field 'descriptionHtml' while writing result`, it means we are not adhering to the GraphQL response structure while writing to the Apollo Client cache. The error stack trace provides clues about the specific parts of the Apollo Client code where the problem occurs.
**The Core Issue**:
The error "Missing field 'description'" indicates that your GraphQL query expects a field named `description` in the response, but the data you're receiving from your backend (or the way it's processed by Apollo Client) is missing that field. This causes the Apollo Client cache write to fail because the data is incomplete.
To debug this, follow these steps:
1. Open the error stack trace in the developer console:
```shell
Missing field 'description' while writing result {
"type": "DESCRIPTION",
"lastEditedAt": null,
"lastEditedBy": null,
"taskCompletionStatus": null,
"__typename": "WorkItemWidgetDescription"
}
```
1. Double-check your GraphQL query to ensure it's requesting the "description" field. If it's not included, Apollo Client won't be able to find it in the response.
1. The backend might not be returning the "description" field in the response for the "WorkItemWidgetDescription" type. Verify that your backend API is correctly sending the data as expected.
1. Use the `cache.readQuery` method to inspect the contents of the Apollo Client cache. Verify that the "description" field is present in the cached data for the relevant query
1. Examine the error stack trace; it suggests the issue might be related to how Apollo Client is writing data to its cache. It's possible that the cache is not being updated correctly, leading to missing fields
1. Add console logs within your Apollo Client code (for example, before and after writing to the cache) to track the data being processed and identify where the "description" field might be missing.
**Solution**
Ensure that you're using the correct `writeQuery` or `writeFragment` methods in your Apollo Client code to update the cache with the complete data, including the `description` field.
You should be able to see the method where this originates in the stack trace. Make sure you include the `description` field when writing to the cache.
### Queries not being cached with the same variables
Apollo GraphQL queries may not be cached in several scenarios:
1. Cache Misses or Partial Caches/Query Invalidation or Changes:
If the query only returns partial data or there's a cache miss (when part of the requested data isn't in the cache), Apollo might not be able to cache the result effectively.
If data related to a query has been invalidated or updated, the cache might not have valid information. For example:
When using mutations, the cache might not automatically update unless you configure `refetchQueries` or use a manual cache update after the mutation.
For example, the subsequent query requests a couple of fields that were not requested in the first query:
```graphql
query workItemTreeQuery($id: WorkItemID!, $pageSize: Int = 100, $endCursor: String) {
workItem(id: $id) {
namespace {
id
}
userPermissions {
deleteWorkItem
updateWorkItem
}
}
}
```
```diff
query workItemTreeQuery($id: WorkItemID!, $pageSize: Int = 100, $endCursor: String) {
workItem(id: $id) {
namespace {
id
+ fullPath
}
userPermissions {
deleteWorkItem
updateWorkItem
+ adminParentLink
+ setWorkItemMetadata
+ createNote
+ adminWorkItemLink
}
}
}
```
1. `fetchPolicy` Settings:
Apollo Client uses a `fetchPolicy` to control how queries interact with the cache. Depending on the policy, the query may bypass caching entirely: if the `fetchPolicy` is `no-cache`, no part of the query is written to the cache. Each query directly fetches data from the server and doesn't store any results in the cache, so multiple network requests are made.
1. When the same query is fired from different Apollo Client instances. Each client instance maintains its own cache, so results fetched by one client are not reused by another.
1. Missing `id` or `__typename`:
Apollo Client uses `id` and `__typename` to uniquely identify entities and cache them. If these fields are missing from your query response, Apollo may not be able to cache the result properly.
1. Complex or Nested Queries:
Some queries might be too complex or involve nested queries that Apollo Client might struggle to cache correctly. This can happen if the structure of the data returned doesn't map cleanly to the cache schema, requiring manual cache management.
1. Pagination Queries:
For queries involving pagination, like those using fetchMore, Apollo might not cache results properly unless the cache is explicitly updated.
In all of these cases, you may need to configure Apollo's cache policies or manually update the cache to handle query caching effectively.
|
https://docs.gitlab.com/development/pinia
|
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/development/pinia.md
|
2025-08-13
|
doc/development/fe_guide
|
[
"doc",
"development",
"fe_guide"
] |
pinia.md
|
none
|
unassigned
|
Any user with at least the Maintainer role can merge updates to this content. For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
|
Pinia
| null |
[Pinia](https://pinia.vuejs.org/) is a tool for [managing client-side state](state_management.md) for Vue applications.
Refer to the [official documentation](https://pinia.vuejs.org/core-concepts/) on how to use Pinia.
## Best practices
### Pinia instance
You should always prefer using the shared Pinia instance from `~/pinia/instance`.
This allows you to easily add more stores to components without worrying about multiple Pinia instances.
```javascript
import { pinia } from '~/pinia/instance';
new Vue({ pinia, render(h) { return h(MyComponent); } });
```
### Small stores
Prefer creating small stores that focus on a single task only.
This is contrary to the Vuex approach which encourages you to create bigger stores.
Treat Pinia stores like cohesive components rather than giant state façades (Vuex modules).
#### Vuex design ❌
```mermaid
flowchart TD
A[Store]
A --> B[State]
A --> C[Actions]
A --> D[Mutations]
A --> E[Getters]
B --> F[items]
B --> G[isLoadingItems]
B --> H[itemWithActiveForm]
B --> I[isSubmittingForm]
```
#### Pinia's design ✅
```mermaid
flowchart TD
A[Items Store]
A --> B[State]
A --> C[Actions]
A --> D[Getters]
B --> E[items]
B --> F[isLoading]
H[Form Store]
H --> I[State]
H --> J[Actions]
H --> K[Getters]
I --> L[activeItem]
I --> M[isSubmitting]
```
### Single file stores
Place state, actions, and getters in a single file.
Do not create 'barrel' store index files which import everything from `actions.js`, `state.js` and `getters.js`.
If your store file gets too big it's time to consider splitting that store into multiple stores.
### Use Option Stores
Pinia offers two types of store definitions: [option](https://pinia.vuejs.org/core-concepts/#Option-Stores) and [setup](https://pinia.vuejs.org/core-concepts/#Setup-Stores).
Prefer the option type when creating new stores. This promotes consistency and will simplify the migration path from Vuex.
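As an example, a minimal option store (which also illustrates the single-file layout described above) could look like this; the store name and fields are illustrative:

```javascript
import { defineStore } from 'pinia';

// State, getters, and actions live together in one file, defined as options.
export const useItemsStore = defineStore('items', {
  state: () => ({
    items: [],
    isLoading: false,
  }),
  getters: {
    hasItems: (state) => state.items.length > 0,
  },
  actions: {
    setItems(items) {
      this.items = items;
      this.isLoading = false;
    },
  },
});
```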
### Global stores
Prefer using global Pinia stores for global reactive state.
```javascript
// bad ❌
import { isNarrowScreenMediaQuery } from '~/lib/utils/css_utils';
new Vue({
data() {
return {
isNarrow: false,
};
},
mounted() {
const query = isNarrowScreenMediaQuery();
this.isNarrow = query.matches;
query.addEventListener('change', (event) => {
this.isNarrow = event.matches;
});
},
render() {
if (this.isNarrow) return null;
//
},
});
```
```javascript
// good ✅
import { pinia } from '~/pinia/instance';
import { useViewport } from '~/pinia/global_stores/viewport';
new Vue({
pinia,
...mapState(useViewport, ['isNarrowScreen']),
render() {
if (this.isNarrowScreen) return null;
//
},
});
```
### Hot Module Replacement
[Pinia offers an HMR option](https://pinia.vuejs.org/cookbook/hot-module-replacement.html#HMR-Hot-Module-Replacement-) that you have to manually attach in your code.
The experience Pinia offers with this method is [subpar](https://github.com/vuejs/pinia/issues/898) and this should be avoided.
## Testing Pinia
### Unit testing the store
[Follow the official testing documentation](https://pinia.vuejs.org/cookbook/testing.html#Unit-testing-a-store).
The official documentation suggests using `setActivePinia(createPinia())` to test Pinia stores.
Our recommendation is to leverage `createTestingPinia` with unstubbed actions.
It acts the same as `setActivePinia(createPinia())` but also allows us to spy on any action by default.
**Always** use `createTestingPinia` with `stubActions: false` when unit testing the store.
A basic test could look like this:
```javascript
import { createTestingPinia } from '@pinia/testing';
import { useMyStore } from '~/my_store.js';
describe('MyStore', () => {
beforeEach(() => {
createTestingPinia({ stubActions: false });
});
it('does something', () => {
useMyStore().someAction();
expect(useMyStore().someState).toBe(true);
});
});
```
Any given test should only check for any of these three things:
1. A change in the store state
1. A call to another action
1. A call to a side effect (for example, an Axios request)
Never try to use the same Pinia instance in more than one test case.
Always create a fresh Pinia instance because it is what actually holds your state.
### Unit testing components with the store
[Follow the official testing documentation](https://pinia.vuejs.org/cookbook/testing.html#Unit-testing-components).
Pinia requires special handling to support Vue 3 compat mode:
1. It must register `PiniaVuePlugin` on the Vue instance
1. Pinia instance must be explicitly provided to the `shallowMount`/`mount` from Vue Test Utils
1. Stores have to be created prior to rendering the component, otherwise Vue will try to use Pinia for Vue 3
A full setup looks like this:
```javascript
import Vue from 'vue';
import { createTestingPinia } from '@pinia/testing';
import { PiniaVuePlugin } from 'pinia';
import { shallowMount } from '@vue/test-utils';
import { useMyStore } from '~/my_store.js';
import MyComponent from '~/my_component.vue';
Vue.use(PiniaVuePlugin);
describe('MyComponent', () => {
let pinia;
let wrapper;
const createComponent = () => {
wrapper = shallowMount(MyComponent, { pinia });
}
beforeEach(() => {
pinia = createTestingPinia();
// store is created before component is rendered
useMyStore();
});
it('does something', () => {
createComponent();
// all actions are stubbed by default
expect(useMyStore().someAction).toHaveBeenCalledWith({ arg: 'foo' });
expect(useMyStore().someAction).toHaveBeenCalledTimes(1);
});
});
```
In most cases you won't need to set `stubActions: false` when testing components.
Instead, the store itself should be properly tested and the component tests should check that the actions were called with correct arguments.
#### Setting up initial state
Pinia doesn't allow unstubbing actions once they've been stubbed.
That means you cannot use them to set the initial state if you didn't set `stubActions: false`.
In that case, you can set the state directly:
```javascript
describe('MyComponent', () => {
let pinia;
let wrapper;
const createComponent = () => {
wrapper = shallowMount(MyComponent, { pinia });
}
beforeEach(() => {
// all the actions are stubbed, we can't use them to change the state anymore
pinia = createTestingPinia();
// store is created before component is rendered
useMyStore();
});
it('does something', () => {
// state is set directly instead of using an action
useMyStore().someState = { value: 1 };
createComponent();
// ...
});
});
```
## Migrating from Vuex
GitLab is actively migrating from Vuex, you can contribute and follow this progress [here](https://gitlab.com/groups/gitlab-org/-/epics/18476).
Before migrating, decide what your primary [state manager](state_management.md) should be.
Proceed with this guide if Pinia is your choice.
Migration to Pinia can be completed in two ways: a single-step migration or a multi-step one.
Follow the single-step migration if your store meets these criteria:
1. Store contains only one module
1. Actions, getters and mutations cumulatively do not exceed 1000 lines
In any other case prefer the multi-step migration.
### Single step migration
[Follow the official Vuex migration guide](https://pinia.vuejs.org/cookbook/migration-vuex.html).
1. Migrate store to Pinia [using codemods](#automated-migration-using-codemods)
1. Fix store tests following [our guide](#migrating-store-tests) and [best practices](#unit-testing-the-store)
1. Update components to use the migrated Pinia store
1. Replace `mapActions`, `mapState` with Pinia counterparts (see the sketch after this list)
1. Replace `mapMutations` with Pinia's `mapActions`
1. Replace `mapGetters` with Pinia's `mapState`
1. Fix components tests following [our guide](#migrating-component-tests) and [best practices](#unit-testing-components-with-the-store)
If your diff starts to exceed a reviewable size, prefer the multi-step migration.
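A sketch of the component-side replacements from the steps above, assuming a store exposed as `useMyStore` (the store and member names are illustrative; Vuex `mapGetters` becomes Pinia's `mapState`, and mutations become plain actions):

```javascript
import { mapActions, mapState } from 'pinia';
import { useMyStore } from '~/my_store';

export default {
  computed: {
    // previously ...mapGetters(['visibleItems']) from Vuex
    ...mapState(useMyStore, ['visibleItems']),
  },
  methods: {
    // previously ...mapActions(['fetchItems']) and ...mapMutations(['SET_LOADING'])
    ...mapActions(useMyStore, ['fetchItems', 'setLoading']),
  },
};
```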
### Multi-step migration
[Learn about the official Vuex migration guide](https://pinia.vuejs.org/cookbook/migration-vuex.html).
A walkthrough is available in the two part video series:
1. [Migrating the store (part 1)](https://youtu.be/aWVYvhktYfM)
1. [Migrating the components (part 2)](https://youtu.be/9G7h4YmoHRw)
Follow these steps to iterate over the migration process and split the work onto smaller merge requests:
1. Identify the store you are going to migrate.
Start with the file that defines your store via `new Vuex.Store()` and go from there.
Include all the modules that are used inside this store.
1. Create a migration issue, assign a migration DRI(s) and list all the store modules you're going to migrate.
Track your migration progress in that issue. If necessary, split the migration into multiple issues.
1. Create a new CODEOWNERS (`.gitlab/CODEOWNERS`) rule for the store files you're migrating, include all the Vuex module dependencies and store specs.
If you are migrating only a single store module then you would need to include only `state.js` (or your `index.js`),
`actions.js`, `mutations.js` and `getters.js` and their respective spec files.
Assign at least two individuals responsible for reviewing changes made to the Vuex store.
Always sync your changes from Vuex store to Pinia. This is very important so you don't introduce regressions with the Pinia store.
1. Copy existing store as-is to a new location (you can call it `stores/legacy_store` for example). Preserve the file structure.
Do this for every store module you're going to migrate. Split this into multiple merge requests if necessary.
1. Create an index file (`index.js`) with a store definition (`defineStore`) and define your state in there.
Copy the state definition from `state.js`. Do not import actions, mutations and getters yet.
1. Use [code mods](#automated-migration-using-codemods) to migrate the store files.
Import migrated modules in your new store's definition (`index.js`).
1. If you have circular dependencies in your stores consider [using `tryStore` plugin](#avoiding-circular-dependencies).
1. [Migrate the store specs manually](#migrating-store-tests).
1. [Sync your Vuex store with Pinia stores](#syncing-with-vuex).
1. Refactor components to use the new store. Split this into as many merge requests as necessary.
Always [update the specs](#migrating-component-tests) with the components.
1. Remove the Vuex store.
1. Remove CODEOWNERS rule.
1. Close the migration issue.
#### Example migration breakdown
You can use the [merge requests migration](https://gitlab.com/groups/gitlab-org/-/epics/16505) breakdown as a reference:
1. Diffs store
1. [Copy store to a new location and introduce CODEOWNERS rules](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/163826)
1. [Automated store migration](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/163827)
1. Also creates MrNotes store
1. Specs migration ([actions](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/165733), [getters](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/167176), [mutations](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/167434))
1. Notes store
1. [Copy store to a new location](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/167450)
1. [Automated store migration](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/167946)
1. Specs migration ([actions](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/169681), [getters](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/170547), [mutations](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/170549))
1. Batch comments store
1. [Copy store to a new location](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/176485)
1. [Automated store migration](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/176486)
1. Specs migration ([actions](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/176487), [getters](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/176490), [mutations](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/176488))
1. [Sync Vuex stores with Pinia stores](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/178302)
1. Diffs store components migration
1. [Diffs app](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/186121)
1. [Non diffs components](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/186365)
1. [File browser](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/186370)
1. [Diffs components](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/186381)
1. [Diff file components](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/186382)
1. [Rest of diffs components](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/186962)
1. [Batch comments components migration](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/180129)
1. [MrNotes components migration](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/178291)
1. Notes store components migration
1. [Diffs components](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/188273)
1. [Simple notes components](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/193248)
1. [More notes components](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/195975)
1. [Rest of notes components](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/196142)
1. [Notes app](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/197331)
1. [Remove Vuex from merge requests](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/196307)
1. Also removes the CODEOWNERS rules
### Post migration steps
Once your store is migrated consider refactoring it to follow our best practices. Split big stores into smaller ones.
[Refactor `tryStore` uses](#refactoring-trystore).
### Automated migration using codemods
You can use [ast-grep](https://ast-grep.github.io/) codemods to simplify migration from Vuex to Pinia.
1. [Install ast-grep](https://ast-grep.github.io/guide/quick-start.html#installation) on your system before proceeding.
1. Run `scripts/frontend/codemods/vuex-to-pinia/migrate.sh path/to/your/store`
The codemods will migrate `actions.js`, `mutations.js` and `getters.js` located in your store folder.
Manually scan these files after running the codemods to ensure they are properly migrated.
Vuex specs cannot be automatically migrated; migrate them by hand.
Vuex module calls are replaced using Pinia conventions:
| Vuex | Pinia |
|-------------------------------------------------------------|--------------------------------------|
| `dispatch('anotherModule/action', ...args, { root: true })` | `useAnotherModule().action(...args)` |
| `dispatch('action', ...args, { root: true })` | `useRootStore().action(...args)` |
| `rootGetters['anotherModule/getter']` | `useAnotherModule().getter` |
| `rootGetters.getter` | `useRootStore().getter` |
| `rootState.anotherModule.state` | `useAnotherModule().state` |
If you have not yet migrated a dependent module (`useAnotherModule` and `useRootStore` in the examples above), you can create a temporary dummy store, as shown below.
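A temporary dummy store could be as minimal as this sketch (the name is illustrative); replace it with the real store once the dependent module is migrated:

```javascript
import { defineStore } from 'pinia';

// Placeholder for a module that hasn't been migrated yet.
// Fill in state, getters, and actions as you migrate the Vuex module.
export const useRootStore = defineStore('legacyRoot', {
  state: () => ({}),
});
```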
Use the guidance below to migrate Vuex modules.
### Migrating stores with nested modules
It is not trivial to iteratively migrate stores with nested modules that have dependencies between them.
In such cases prefer migrating nested modules first:
1. Create a Pinia store counterpart of the nested Vuex store module.
1. Create a placeholder Pinia 'root' store for root module dependencies if applicable.
1. Copy and adapt existing tests for the migrated module.
1. **Do not use migrated modules yet.**
1. Once all the nested modules are migrated you can migrate the root module and replace the placeholder store with the real one.
1. Replace Vuex store with Pinia stores in components.
### Avoiding circular dependencies
It is imperative that you don't create circular dependencies in your Pinia stores.
Unfortunately, Vuex's design allows creating interdependent modules, which we have to refactor later.
An example circular dependency in store design:
```mermaid
graph TD
A[Store Alpha] --> Foo(Action Foo)
B[Store Beta] --> Bar(Action Bar)
A -- calls --> Bar
B -- calls --> Foo
```
To mitigate this issue consider using `tryStore` plugin for Pinia during migration from Vuex:
#### Before
```javascript
// store_alpha/actions.js
function callOtherStore() {
// bad ❌, circular dependency created
useBetaStore().bar();
}
```
```javascript
// store_beta/actions.js
function callOtherStore() {
// bad ❌, circular dependency created
useAlphaStore().bar();
}
```
#### After
```javascript
// store_alpha/actions.js
function callOtherStore() {
// OK ✅, circular dependency avoided
this.tryStore('betaStore').bar();
}
```
```javascript
// store_beta/actions.js
function callOtherStore() {
// OK ✅, circular dependency avoided
this.tryStore('alphaStore').bar();
}
```
This looks up the store by its name using the Pinia instance and prevents the circular dependency issue.
The store name is defined when calling `defineStore('storeName', ...)`.
You **must** initialize both stores prior to component mounting when using `tryStore`:
```javascript
// stores are created in advance
useAlphaStore();
useBetaStore();
new Vue({ pinia, render(h) { return h(MyComponent); } });
```
The `tryStore` helper function can only be used during migration. Never use this in proper Pinia stores.
#### Refactoring `tryStore`
After you finish the migration, it is very important to redesign the stores so there are no more circular dependencies.
The easiest way to solve this is to create a top-level store that orchestrates the other stores, as in the sketch after the diagrams below.
##### Before
```mermaid
graph TD
A[Store Alpha] --> Foo(Action Foo)
A -- calls --> Bar
B[Store Beta] --> Bar(Action Bar)
B -- calls --> Foo
```
##### After
```mermaid
graph TD
C[Store Gamma]
A[Store Alpha] --- Bar(Action Bar)
B[Store Beta] --- Foo(Action Foo)
C -- calls --> Bar
C -- calls --> Foo
```
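A minimal sketch of what the orchestrating store might look like (store and action names are illustrative):

```javascript
import { defineStore } from 'pinia';
import { useAlphaStore } from './alpha';
import { useBetaStore } from './beta';

// Gamma orchestrates Alpha and Beta, so they no longer call each other directly.
export const useGammaStore = defineStore('gamma', {
  actions: {
    submit() {
      useAlphaStore().bar();
      useBetaStore().foo();
    },
  },
});
```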
### Syncing with Vuex
The `syncWithVuex` plugin syncs your state from Vuex to Pinia and vice versa.
This allows you to iteratively migrate components by having both stores in your app during migration.
Usage example:
```javascript
// Vuex store @ ./store.js
import Vuex from 'vuex';
import createOldStore from './stores/old_store';
export default new Vuex.Store({
modules: {
oldStore: createOldStore(),
},
});
```
```javascript
// Pinia store
import { defineStore } from 'pinia';
import oldVuexStore from './store'
export const useMigratedStore = defineStore('migratedStore', {
syncWith: {
store: oldVuexStore,
name: 'oldStore', // use legacy store name if it is defined inside Vuex `modules`
namespaced: true, // set to 'true' if Vuex module is namespaced
},
// the state here gets synced with Vuex; any changes to migratedStore also propagate to the Vuex store
state() {
// ...
},
// ...
});
```
#### Override
A Vuex store definition can be shared in multiple Vuex store instances.
In that case, we cannot rely on the store config alone to sync our Pinia store with the Vuex store.
We need to point our Pinia store to the actual Vuex store instance using the `syncWith` helper function.
```javascript
// this overrides the existing `syncWith` config
useMigratedStore().syncWith({ store: anotherOldStore });
// `useMigratedStore` state now is synced only with `anotherOldStore`
new Vue({ pinia, render(h) { return h(MyComponent) } });
```
### Migrating store tests
#### `testAction`
Some Vuex tests might use the `testAction` helper to test that certain actions or mutations have been called.
We can migrate these specs using the `createTestPiniaAction` helper from `helpers/pinia_helpers` in Jest.
##### Before
```javascript
describe('SomeStore', () => {
it('runs actions', () => {
return testAction(
store.actionToBeCalled, // action to be called immediately
{ someArg: 1 }, // action call arguments
{ someState: 1 }, // initial store state
[{ type: 'MUTATION_NAME', payload: '123' }], // mutation calls to expect
[{ type: 'actionName' }], // action calls to expect
);
});
});
```
##### After
```javascript
import { createTestPiniaAction } from 'helpers/pinia_helpers';
describe('SomeStore', () => {
let store;
let testAction;
beforeEach(() => {
store = useMyStore();
testAction = createTestPiniaAction(store);
});
it('runs actions', () => {
return testAction(
store.actionToBeCalled,
{ someArg: 1 },
{ someState: 1 },
[{ type: store.MUTATION_NAME, payload: '123' }], // explicit reference to migrated mutation
[{ type: store.actionName }], // explicit reference to migrated action
);
});
});
```
Avoid using `testAction` in your proper Pinia tests: this should only be used during migration.
Always prefer testing each action call explicitly.
#### Custom getters
Pinia allows defining custom getters in Vue 3. Since we're using Vue 2, this is not possible.
To work around this, you can use the `createCustomGetters` helper from `helpers/pinia_helpers`.
##### Before
```javascript
describe('SomeStore', () => {
it('runs actions', () => {
const dispatch = jest.fn();
const getters = { someGetter: 1 };
someAction({ dispatch, getters });
expect(dispatch).toHaveBeenCalledWith('anotherAction', 1);
});
});
```
##### After
```javascript
import { createCustomGetters } from 'helpers/pinia_helpers';
describe('SomeStore', () => {
let store;
let getters;
beforeEach(() => {
getters = {};
createTestingPinia({
stubActions: false,
plugins: [
createCustomGetters(() => ({
myStore: getters, // each store used in tests should be also declared here
})),
],
});
store = useMyStore();
});
it('runs actions', () => {
getters.someGetter = 1;
store.someAction();
expect(store.anotherAction).toHaveBeenCalledWith(1);
});
});
```
Avoid mocking getters in proper Pinia tests: this should only be used for migration.
Instead, provide a valid state so a getter can return the correct value.
### Migrating component tests
Pinia does not return promises from actions by default.
Because of that, pay special attention when using `createTestingPinia`.
Since it stubs all the actions, it does not guarantee that an action returns a promise.
If your component's code expects an action to return a promise, stub it accordingly.
```javascript
describe('MyComponent', () => {
let pinia;
beforeEach(() => {
pinia = createTestingPinia();
useMyStore().someAsyncAction.mockResolvedValue(); // this now returns a promise
});
});
```
|
---
stage: none
group: unassigned
info: Any user with at least the Maintainer role can merge updates to this content.
For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Pinia
breadcrumbs:
- doc
- development
- fe_guide
---
[Pinia](https://pinia.vuejs.org/) is a tool for [managing client-side state](state_management.md) for Vue applications.
Refer to the [official documentation](https://pinia.vuejs.org/core-concepts/) on how to use Pinia.
## Best practices
### Pinia instance
You should always prefer using the shared Pinia instance from `~/pinia/instance`.
This allows you to easily add more stores to components without worrying about multiple Pinia instances.
```javascript
import { pinia } from '~/pinia/instance';
new Vue({ pinia, render(h) { return h(MyComponent); } });
```
### Small stores
Prefer creating small stores that focus on a single task only.
This is contrary to the Vuex approach which encourages you to create bigger stores.
Treat Pinia stores like cohesive components rather than giant state façades (Vuex modules).
#### Vuex design ❌
```mermaid
flowchart TD
A[Store]
A --> B[State]
A --> C[Actions]
A --> D[Mutations]
A --> E[Getters]
B --> F[items]
B --> G[isLoadingItems]
B --> H[itemWithActiveForm]
B --> I[isSubmittingForm]
```
#### Pinia's design ✅
```mermaid
flowchart TD
A[Items Store]
A --> B[State]
A --> C[Actions]
A --> D[Getters]
B --> E[items]
B --> F[isLoading]
H[Form Store]
H --> I[State]
H --> J[Actions]
H --> K[Getters]
I --> L[activeItem]
I --> M[isSubmitting]
```
### Single file stores
Place state, actions, and getters in a single file.
Do not create 'barrel' store index files which import everything from `actions.js`, `state.js` and `getters.js`.
If your store file gets too big it's time to consider splitting that store into multiple stores.
### Use Option Stores
Pinia offers two types of store definitions: [option](https://pinia.vuejs.org/core-concepts/#Option-Stores) and [setup](https://pinia.vuejs.org/core-concepts/#Setup-Stores).
Prefer the option type when creating new stores. This promotes consistency and will simplify the migration path from Vuex.
### Global stores
Prefer using global Pinia stores for global reactive state.
```javascript
// bad ❌
import { isNarrowScreenMediaQuery } from '~/lib/utils/css_utils';
new Vue({
data() {
return {
isNarrow: false,
};
},
mounted() {
const query = isNarrowScreenMediaQuery();
this.isNarrow = query.matches;
query.addEventListener('change', (event) => {
this.isNarrow = event.matches;
});
},
render() {
if (this.isNarrow) return null;
//
},
});
```
```javascript
// good ✅
import { pinia } from '~/pinia/instance';
import { useViewport } from '~/pinia/global_stores/viewport';
new Vue({
pinia,
...mapState(useViewport, ['isNarrowScreen']),
render() {
if (this.isNarrowScreen) return null;
//
},
});
```
### Hot Module Replacement
[Pinia offers an HMR option](https://pinia.vuejs.org/cookbook/hot-module-replacement.html#HMR-Hot-Module-Replacement-) that you have to manually attach in your code.
The experience Pinia offers with this method is [subpar](https://github.com/vuejs/pinia/issues/898) and this should be avoided.
## Testing Pinia
### Unit testing the store
[Follow the official testing documentation](https://pinia.vuejs.org/cookbook/testing.html#Unit-testing-a-store).
Official documentation suggests using `setActivePinia(createPinia())` to test the Pinia.
Our recommendation is to leverage `createTestingPinia` with unstubbed actions.
It acts the same as `setActivePinia(createPinia())` but also allows us to spy on any action by default.
**Always** use `createTestingPinia` with `stubActions: false` when unit testing the store.
A basic test could look like this:
```javascript
import { createTestingPinia } from '@pinia/testing';
import { useMyStore } from '~/my_store.js';
describe('MyStore', () => {
beforeEach(() => {
createTestingPinia({ stubActions: false });
});
it('does something', () => {
useMyStore().someAction();
expect(useMyStore().someState).toBe(true);
});
});
```
Any given test should only check for any of these three things:
1. A change in the store state
1. A call to another action
1. A call to side effect (for example an Axios request)
Never try to use the same Pinia instance in more than one test case.
Always create a fresh Pinia instance because it is what actually holds your state.
### Unit testing components with the store
[Follow the official testing documentation](https://pinia.vuejs.org/cookbook/testing.html#Unit-testing-components).
Pinia requires special handling to support Vue 3 compat mode:
1. It must register `PiniaVuePlugin` on the Vue instance
1. Pinia instance must be explicitly provided to the `shallowMount`/`mount` from Vue Test Utils
1. Stores have to be created prior to rendering the component, otherwise Vue will try to use Pinia for Vue 3
A full setup looks like this:
```javascript
import Vue from 'vue';
import { createTestingPinia } from '@pinia/testing';
import { PiniaVuePlugin } from 'pinia';
import { shallowMount } from '@vue/test-utils';
import { useMyStore } from '~/my_store.js';
import MyComponent from '~/my_component.vue';
Vue.use(PiniaVuePlugin);
describe('MyComponent', () => {
let pinia;
let wrapper;
const createComponent = () => {
wrapper = shallowMount(MyComponent, { pinia });
}
beforeEach(() => {
pinia = createTestingPinia();
// store is created before component is rendered
useMyStore();
});
it('does something', () => {
createComponent();
// all actions are stubbed by default
expect(useMyStore().someAction).toHaveBeenCalledWith({ arg: 'foo' });
expect(useMyStore().someAction).toHaveBeenCalledTimes(1);
});
});
```
In most cases you won't need to set `stubActions: false` when testing components.
Instead, the store itself should be properly tested and the component tests should check that the actions were called with correct arguments.
#### Setting up initial state
Pinia doesn't allow to unstub actions once they've been stubbed.
That means you can not use them to set the initial state if you didn't set `stubActions: false`.
In that case it is allowed to set the state directly:
```javascript
describe('MyComponent', () => {
let pinia;
let wrapper;
const createComponent = () => {
wrapper = shallowMount(MyComponent, { pinia });
}
beforeEach(() => {
// all the actions are stubbed, we can't use them to change the state anymore
pinia = createTestingPinia();
// store is created before component is rendered
useMyStore();
});
it('does something', () => {
// state is set directly instead of using an action
useMyStore().someState = { value: 1 };
createComponent();
// ...
});
});
```
## Migrating from Vuex
GitLab is actively migrating from Vuex, you can contribute and follow this progress [here](https://gitlab.com/groups/gitlab-org/-/epics/18476).
Before migrating decide what your primary [state manager](state_management.md) should be first.
Proceed with this guide if Pinia was your choice.
Migration to Pinia could be completed in two ways: a single step migration and a multi-step one.
Follow single step migration if your store meets these criteria:
1. Store contains only one module
1. Actions, getters and mutations cumulatively do not exceed 1000 lines
In any other case prefer the multi-step migration.
### Single step migration
[Follow the official Vuex migration guide](https://pinia.vuejs.org/cookbook/migration-vuex.html).
1. Migrate store to Pinia [using codemods](#automated-migration-using-codemods)
1. Fix store tests following [our guide](#migrating-store-tests) and [best practices](#unit-testing-the-store)
1. Update components to use the migrated Pinia store
1. Replace `mapActions`, `mapState` with Pinia counterparts
1. Replace `mapMutations` with Pinia's `mapActions`
1. Replace `mapGetters` with Pinia's `mapState`
1. Fix components tests following [our guide](#migrating-component-tests) and [best practices](#unit-testing-components-with-the-store)
If your diff starts to exceed reviewable size prefer the multi-step migration.
### Multi-step migration
[Learn about the official Vuex migration guide](https://pinia.vuejs.org/cookbook/migration-vuex.html).
A walkthrough is available in the two part video series:
1. [Migrating the store (part 1)](https://youtu.be/aWVYvhktYfM)
1. [Migrating the components (part 2)](https://youtu.be/9G7h4YmoHRw)
Follow these steps to iterate over the migration process and split the work onto smaller merge requests:
1. Identify the store you are going to migrate.
Start with the file that defines your store via `new Vuex.Store()` and go from there.
Include all the modules that are used inside this store.
1. Create a migration issue, assign a migration DRI(s) and list all the store modules you're going to migrate.
Track your migration progress in that issue. If necessary, split the migration into multiple issues.
1. Create a new CODEOWNERS (`.gitlab/CODEOWNERS`) rule for the store files you're migrating, include all the Vuex module dependencies and store specs.
If you are migrating only a single store module then you would need to include only `state.js` (or your `index.js`),
`actions.js`, `mutations.js` and `getters.js` and their respective spec files.
Assign at least two individuals responsible for reviewing changes made to the Vuex store.
Always sync your changes from Vuex store to Pinia. This is very important so you don't introduce regressions with the Pinia store.
1. Copy existing store as-is to a new location (you can call it `stores/legacy_store` for example). Preserve the file structure.
Do this for every store module you're going to migrate. Split this into multiple merge requests if necessary.
1. Create an index file (`index.js`) with a store definition (`defineStore`) and define your state in there.
Copy the state definition from `state.js`. Do not import actions, mutations and getters yet.
1. Use [code mods](#automated-migration-using-codemods) to migrate the store files.
Import migrated modules in your new store's definition (`index.js`).
1. If you have circular dependencies in your stores consider [using `tryStore` plugin](#avoiding-circular-dependencies).
1. [Migrate the store specs manually](#migrating-store-tests).
1. [Sync your Vuex store with Pinia stores](#syncing-with-vuex).
1. Refactor components to use the new store. Split this into as many merge requests as necessary.
Always [update the specs](#migrating-component-tests) with the components.
1. Remove the Vuex store.
1. Remove CODEOWNERS rule.
1. Close the migration issue.
#### Example migration breakdown
You can use the [merge requests migration](https://gitlab.com/groups/gitlab-org/-/epics/16505) breakdown as a reference:
1. Diffs store
1. [Copy store to a new location and introduce CODEOWNERS rules](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/163826)
1. [Automated store migration](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/163827)
1. Also creates MrNotes store
1. Specs migration ([actions](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/165733), [getters](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/167176), [mutations](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/167434))
1. Notes store
1. [Copy store to a new location](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/167450)
1. [Automated store migration](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/167946)
1. Specs migration ([actions](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/169681), [getters](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/170547), [mutations](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/170549))
1. Batch comments store
1. [Copy store to a new location](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/176485)
1. [Automated store migration](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/176486)
1. Specs migration ([actions](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/176487), [getters](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/176490), [mutations](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/176488))
1. [Sync Vuex stores with Pinia stores](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/178302)
1. Diffs store components migration
1. [Diffs app](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/186121)
1. [Non diffs components](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/186365)
1. [File browser](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/186370)
1. [Diffs components](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/186381)
1. [Diff file components](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/186382)
1. [Rest of diffs components](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/186962)
1. [Batch comments components migration](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/180129)
1. [MrNotes components migration](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/178291)
1. Notes store components migration
1. [Diffs components](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/188273)
1. [Simple notes components](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/193248)
1. [More notes components](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/195975)
1. [Rest of notes components](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/196142)
1. [Notes app](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/197331)
1. [Remove Vuex from merge requests](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/196307)
1. Also removes the CODEOWNERS rules
### Post migration steps
Once your store is migrated consider refactoring it to follow our best practices. Split big stores into smaller ones.
[Refactor `tryStore` uses](#refactoring-trystore).
### Automated migration using codemods
You can use [ast-grep](https://ast-grep.github.io/) codemods to simplify migration from Vuex to Pinia.
1. [Install ast-grep](https://ast-grep.github.io/guide/quick-start.html#installation) on your system before proceeding.
1. Run `scripts/frontend/codemods/vuex-to-pinia/migrate.sh path/to/your/store`
The codemods will migrate `actions.js`, `mutations.js` and `getters.js` located in your store folder.
Manually scan these files after running the codemods to ensure they are properly migrated.
Vuex specs can not be automatically migrated, migrate them by hand.
Vuex module calls are replaced using Pinia conventions:
| Vuex | Pinia |
|-------------------------------------------------------------|--------------------------------------|
| `dispatch('anotherModule/action', ...args, { root: true })` | `useAnotherModule().action(...args)` |
| `dispatch('action', ...args, { root: true })` | `useRootStore().action(...args)` |
| `rootGetters['anotherModule/getter']` | `useAnotherModule().getter` |
| `rootGetters.getter` | `useRootStore().getter` |
| `rootState.anotherModule.state` | `useAnotherModule().state` |
If you have not yet migrated a dependent module (`useAnotherModule` and `useRootStore` in the examples above) you can create a temporary dummy store.
Use the guidance below to migrate Vuex modules.
### Migrating stores with nested modules
It is not trivial to iteratively migrate stores with nested modules that have dependencies between them.
In such cases prefer migrating nested modules first:
1. Create a Pinia store counterpart of the nested Vuex store module.
1. Create a placeholder Pinia 'root' store for root module dependencies if applicable.
1. Copy and adapt existing tests for the migrated module.
1. **Do not use migrated modules yet.**
1. Once all the nested modules are migrated you can migrate the root module and replace the placeholder store with the real one.
1. Replace Vuex store with Pinia stores in components.
### Avoiding circular dependencies
It is imperative that you don't create circular dependencies in your Pinia stores.
Unfortunately Vuex design allows to create interdependent modules that we have to refactor later.
An example circular dependency in store design:
```mermaid
graph TD
A[Store Alpha] --> Foo(Action Foo)
B[Store Beta] --> Bar(Action Bar)
A -- calls --> Bar
B -- calls --> Foo
```
To mitigate this issue consider using `tryStore` plugin for Pinia during migration from Vuex:
#### Before
```javascript
// store_alpha/actions.js
function callOtherStore() {
// bad ❌, circular dependency created
useBetaStore().bar();
}
```
```javascript
// store_beta/actions.js
function callOtherStore() {
// bad ❌, circular dependency created
useAlphaStore().bar();
}
```
#### After
```javascript
// store_alpha/actions.js
function callOtherStore() {
// OK ✅, circular dependency avoided
this.tryStore('betaStore').bar();
}
```
```javascript
// store_beta/actions.js
function callOtherStore() {
// OK ✅, circular dependency avoided
this.tryStore('alphaStore').bar();
}
```
This will look up the store by its name using Pinia instance and prevent the circular dependency issue.
Store name is defined when calling `defineStore('storeName', ...)`.
You **must** initialize both stores prior to component mounting when using `tryStore`:
```javascript
// stores are created in advance
useAlphaStore();
useBetaStore();
new Vue({ pinia, render(h) { return h(MyComponent); } });
```
The `tryStore` helper function can only be used during migration. Never use this in proper Pinia stores.
#### Refactoring `tryStore`
After you finished the migration it is very important to redesign the stores so there are no more circular dependencies.
The easiest way to solve this would be to create a top level store that would orchestrate other stores.
##### Before
```mermaid
graph TD
A[Store Alpha] --> Foo(Action Foo)
A -- calls --> Bar
B[Store Beta] --> Bar(Action Bar)
B -- calls --> Foo
```
##### After
```mermaid
graph TD
C[Store Gamma]
A[Store Alpha] --- Bar(Action Bar)
B[Store Beta] --- Foo(Action Foo)
C -- calls --> Bar
C -- calls --> Foo
```
### Syncing with Vuex
This `syncWithVuex` plugin syncs your state from Vuex to Pinia and vice versa.
This allows you to iteratively migrate components by having both stores in your app during migration.
Usage example:
```javascript
// Vuex store @ ./store.js
import Vuex from 'vuex';
import createOldStore from './stores/old_store';
export default new Vuex.Store({
modules: {
oldStore: createOldStore(),
},
});
```
```javascript
// Pinia store
import { defineStore } from 'pinia';
import oldVuexStore from './store'
export const useMigratedStore = defineStore('migratedStore', {
syncWith: {
store: oldVuexStore,
name: 'oldStore', // use legacy store name if it is defined inside Vuex `modules`
namespaced: true, // set to 'true' if Vuex module is namespaced
},
// the state here gets sync with Vuex, any changes to migratedStore also propagate to the Vuex store
state() {
// ...
},
// ...
});
```
#### Override
A Vuex store definition can be shared in multiple Vuex store instances.
In that case we can not rely on the store config alone to sync our Pinia store with the Vuex store.
We need to point our Pinia store to the actual Vuex store instance using `syncWith` helper function.
```javascript
// this overrides the existing `syncWith` config
useMigratedStore().syncWith({ store: anotherOldStore });
// `useMigratedStore` state now is synced only with `anotherOldStore`
new Vue({ pinia, render(h) { return h(MyComponent) } });
```
### Migrating store tests
#### `testAction`
Some Vuex tests might use `testAction` helper to test that certain actions or mutations have been called.
We can migrate these specs using `createTestPiniaAction` helper from `helpers/pinia_helpers` in Jest.
##### Before
```javascript
describe('SomeStore', () => {
it('runs actions', () => {
return testAction(
store.actionToBeCalled, // action to be called immediately
{ someArg: 1 }, // action call arguments
{ someState: 1 }, // initial store state
[{ type: 'MUTATION_NAME', payload: '123' }], // mutation calls to expect
[{ type: 'actionName' }], // action calls to expect
);
});
});
```
##### After
```javascript
import { createTestPiniaAction } from 'helpers/pinia_helpers';
describe('SomeStore', () => {
let store;
let testAction;
beforeEach(() => {
store = useMyStore();
testAction = createTestPiniaAction(store);
});
it('runs actions', () => {
return testAction(
store.actionToBeCalled,
{ someArg: 1 },
{ someState: 1 },
[{ type: store.MUTATION_NAME, payload: '123' }], // explicit reference to migrated mutation
[{ type: store.actionName }], // explicit reference to migrated action
);
});
});
```
Avoid using `testAction` in your proper Pinia tests: this should only be used during migration.
Always prefer testing each action call explicitly.
#### Custom getters
Pinia allows to define custom getters in Vue 3. Since we're using Vue 2 this is not possible.
To work around this you can use `createCustomGetters` helper from `helpers/pinia_helpers`.
##### Before
```javascript
describe('SomeStore', () => {
it('runs actions', () => {
const dispatch = jest.fn();
const getters = { someGetter: 1 };
someAction({ dispatch, getters });
expect(dispatch).toHaveBeenCalledWith('anotherAction', 1);
});
});
```
##### After
```javascript
import { createCustomGetters } from 'helpers/pinia_helpers';
describe('SomeStore', () => {
let store;
let getters;
beforeEach(() => {
getters = {};
createTestingPinia({
stubActions: false,
plugins: [
createCustomGetters(() => ({
myStore: getters, // each store used in tests should be also declared here
})),
],
});
store = useMyStore();
});
it('runs actions', () => {
getters.someGetter = 1;
store.someAction();
expect(store.anotherAction).toHaveBeenCalledWith(1);
});
});
```
Avoid mocking getters in proper Pinia tests: this should only be used for migration.
Instead, provide a valid state so a getter can return correct value.
### Migrating component tests
Pinia does not return promises in actions by default.
Because of that pay a special attention when using `createTestingPinia`.
Since it stubs all the actions it does not guarantee that an action would return a promise.
If your component's code is expecting an action to return a promise stub it accordingly.
```javascript
describe('MyComponent', () => {
let pinia;
beforeEach(() => {
pinia = createTestingPinia();
useMyStore().someAsyncAction.mockResolvedValue(); // this now returns a promise
});
});
```
---
title: Security
---
## Resources
[Mozilla's HTTP Observatory CLI](https://github.com/mozilla/http-observatory-cli) and
[Qualys SSL Labs Server Test](https://www.ssllabs.com/ssltest/analyze.html) are good resources for finding
potential problems and ensuring compliance with security best practices.
## Including external resources
External fonts, CSS, and JavaScript should never be used, with the exception of
Google Analytics and Matomo, and only when the instance has enabled them. Assets
should always be hosted and served locally from the GitLab instance. Embedded
resources via `iframes` should never be used except in certain circumstances,
such as with reCAPTCHA, which cannot be used without an `iframe`.
## Avoiding inline scripts and styles
To protect users from [XSS vulnerabilities](https://en.wikipedia.org/wiki/Cross-site_scripting), we intend to disable
inline scripts in the future using Content Security Policy.
While inline scripts can make something easier, they're also a security concern. If
user-supplied content is unintentionally left unsanitized, malicious users can
inject scripts into the web app.
Inline styles should be avoided in almost all cases; use them only
when no alternative can be found. Avoiding them improves reusability of styles as well as
readability.
### Sanitize HTML output
If you need to output raw HTML, you should sanitize it.
If you are using Vue, you can use the [`v-safe-html` directive](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/assets/javascripts/vue_shared/directives/safe_html.js).
For other use cases, use our preconfigured version of [`dompurify`](https://www.npmjs.com/package/dompurify),
which also allows the icons to be rendered:
```javascript
import { sanitize } from '~/lib/dompurify';
const unsafeHtml = '<some unsafe content ... >';
// ...
element.appendChild(sanitize(unsafeHtml));
```
This `sanitize` function takes the same configuration as the
original.
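Because the wrapper accepts the same configuration as DOMPurify, rules can be tightened for a specific use case. A small sketch, assuming a hypothetical snippet of user-provided HTML where only links and bold text should survive:

```javascript
import { sanitize } from '~/lib/dompurify';

const userHtml = '<a href="https://example.com" onclick="steal()">docs</a><script>steal()</script>';

// standard DOMPurify options are passed through by the wrapper
const safeHtml = sanitize(userHtml, {
  ALLOWED_TAGS: ['a', 'b'],
  ALLOWED_ATTR: ['href'],
});
```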
### Fixing Security Issues
When refactoring old code, it's important that we don't accidentally remove specs written to catch security issues that might still be relevant.
We should mark specs with `#security` in either the `describe` or `it` blocks to communicate to the engineer reading the code that removing these specs could have severe consequences down the road, because it removes coverage that could catch a reintroduction of a security issue.
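As an illustration only, a hypothetical spec with the `#security` marker in its description could look like this:

```ruby
# frozen_string_literal: true

require 'spec_helper'

RSpec.describe 'Rendering user-provided descriptions' do
  # the `#security` marker warns future refactorers that this spec guards a past vulnerability
  it 'escapes script tags in the description #security' do
    # assertions that would catch a reintroduced XSS issue
  end
end
```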
---
title: Accessibility Storybook tests
---
## Storybook component tests
We use [Storybook test-runner](https://storybook.js.org/docs/7/writing-tests/test-runner) with [axe-playwright](https://storybook.js.org/docs/7/writing-tests/accessibility-testing#automate-accessibility-tests-with-test-runner) to automatically test Vue components for accessibility violations.
This approach allows us to test components in isolation and catch accessibility issues early in the development process.
### Prerequisites
Before running Storybook accessibility tests, ensure you have:
1. All dependencies installed (`yarn install`)
1. A built Storybook instance running
### Running Storybook accessibility tests
To run automated accessibility tests for Vue components:
#### Step 1: Start Storybook
First, start the Storybook development server. You have two options:
```shell
# Option 1: Start Storybook with fresh fixtures
yarn storybook:start
# Option 2: Start Storybook without updating fixtures (faster for subsequent runs)
yarn storybook:start:skip-fixtures-update
```
**Important:** Keep the Storybook server running throughout your testing session. The Storybook needs to be built and accessible for the tests to run properly.
#### Step 2: Run the accessibility tests
In a separate terminal, from the root directory of the GitLab project, run:
```shell
yarn storybook:dev:test
```
This command will:
1. Launch the test runner against your running Storybook instance.
1. Navigate through all stories.
1. Run axe-playwright accessibility checks on each story.
1. Report any accessibility violations found.
### Understanding test results
The test runner will output:
- **Passing tests**: Components that meet accessibility standards and have no runtime errors.
- **Failing tests**:
- Components with runtime errors.
- Components with accessibility violations, including:
- Specific accessibility rules that failed
- Elements that caused violations
- Severity levels (critical, serious, moderate, minor)
- Suggested fixes when available
The complete output of the test run can be found in the `storybook/tmp/storybook-results.json` file.
### Best practices for Storybook accessibility testing
1. **Test all story variants**: Ensure your component's stories cover its different states and configurations (see the sketch after this list)
1. **Include interactive states**: Create stories that show hover, focus, active, and disabled states
1. **Test with different data**: Use realistic data that reflects actual usage scenarios
1. **Address violations immediately**: Fix accessibility issues as soon as they're identified
1. **Document component accessibility**: Include accessibility considerations in your component's story documentation
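A minimal sketch of a story file with more than one variant, assuming a hypothetical `MyComponent` with a `disabled` prop; each named export becomes a story that the test runner scans with axe:

```javascript
import MyComponent from './my_component.vue';

export default {
  component: MyComponent,
  title: 'vue_shared/my_component',
};

const Template = (args, { argTypes }) => ({
  components: { MyComponent },
  props: Object.keys(argTypes),
  template: '<my-component v-bind="$props" />',
});

// default state
export const Default = Template.bind({});
Default.args = { disabled: false };

// the disabled state is a separate story so it is checked for violations too
export const Disabled = Template.bind({});
Disabled.args = { disabled: true };
```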
### Integration with development workflow
Consider integrating Storybook accessibility testing into your development process:
1. **During component development**: Run tests frequently to catch issues early
1. **Before merge requests**: Ensure all new or modified components pass accessibility tests
### Troubleshooting
If tests fail to run:
1. **Check Storybook is running**: Ensure your Storybook server is accessible at the expected URL
1. **Verify dependencies**: Run `yarn install` to ensure all packages are installed
1. **Check for build errors**: Look for any errors in the Storybook build output
1. **Clear cache**: Try restarting Storybook if you encounter unexpected issues
## Getting help
- For accessibility testing questions, refer to our [Frontend testing guide](../../testing_guide/frontend_testing.md)
- For accessibility best practices, see our [accessibility best practices guide](best_practices.md)
- For component-specific accessibility guidance, check [Pajamas components documentation](https://design.gitlab.com/components/overview)
- If you discover accessibility issues that require global changes, create a follow-up issue with the `accessibility` label and an accessibility severity label, for example `accessibility:critical`.
The test output specifies the severity for you.
---
title: Accessibility feature tests
---
## When to add accessibility tests
When adding a new view to the application, make sure to include the accessibility check in your feature test.
We aim to have full coverage for all the views.
One of the advantages of testing in feature tests is that we can check different states, not only
single components in isolation.
You can find some examples on how to approach accessibility checks below.
### Empty state
Some views have an empty state that results in a page structure that's different from the default view.
They may also offer some actions, for example to create a first issue or to enable a feature.
In this case, add assertions for both an empty state and a default view.
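A minimal sketch, assuming hypothetical `project_without_issues` and `project_with_issues` records for a list view:

```ruby
it 'passes axe automated accessibility testing for the empty state' do
  visit project_issues_path(project_without_issues)
  wait_for_requests

  expect(page).to be_axe_clean.within('#content-body')
end

it 'passes axe automated accessibility testing for the default view' do
  visit project_issues_path(project_with_issues)
  wait_for_requests

  expect(page).to be_axe_clean.within('#content-body')
end
```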
### Ensure compliance before user interactions
Often we test against a number of steps we expect our users to perform.
In this case, make sure to include the check early on, before any of them has been simulated.
This way we ensure there are no barriers to what we expect of users.
### Ensure compliance after changed page structure
User interactions may result in significant changes in page structure. For example, a modal is shown, or a new section is rendered.
In that case, add an assertion after any such change.
We want to make sure that users are able to interact with all available components.
### Separate file for extensive test suites
For some views, feature tests span multiple files.
Take a look at our [feature tests for a merge request](https://gitlab.com/gitlab-org/gitlab/-/tree/master/spec/features/merge_request).
The number of user interactions that need to be covered is too large to fit into one test file.
As a result, multiple feature tests cover one view, with different user privileges or data sets.
If we were to include accessibility checks in all of them, there is a chance we would cover the same states of a view multiple times and significantly increase the run time.
It would also make it harder to determine the accessibility coverage if assertions were scattered across many files.
In that case, consider creating one test file dedicated to accessibility.
Place it in the same directory and name it `accessibility_spec.rb`, for example `spec/features/merge_request/accessibility_spec.rb`.
Make it explicit that a feature test has accessibility coverage in a separate file, and
doesn't need additional assertions. Include this comment below the opening of the
top-level block:
```ruby
# spec/features/merge_request/user_approves_spec.rb
# frozen_string_literal: true
require 'spec_helper'
RSpec.describe 'Merge request > User approves', :js, feature_category: :code_review_workflow do
# covered by ./accessibility_spec.rb
```
### Shared examples
Often feature tests include shared examples for a number of scenarios.
If they differ only by provided data, but are based on the same user interaction, you can check for accessibility compliance outside the shared examples.
This way we only run the check once and save resources.
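A minimal sketch with hypothetical shared examples that only vary by provided data; the axe check runs once, outside of them:

```ruby
it_behaves_like 'issue list', filter: 'opened'
it_behaves_like 'issue list', filter: 'closed'

# the page structure is the same for both data sets, so one check is enough
it 'passes axe automated accessibility testing' do
  expect(page).to be_axe_clean.within('#content-body')
end
```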
## How to add accessibility tests
Axe provides the custom matcher `be_axe_clean`, which can be used as follows:
```ruby
# spec/features/settings_spec.rb
it 'passes axe automated accessibility testing', :js do
visit_settings_page
wait_for_requests # ensures page is fully loaded
expect(page).to be_axe_clean
end
```
If needed, you can scope testing to a specific area of the page by using `within`.
Axe also provides specific [clauses](https://github.com/dequelabs/axe-core-gems/blob/develop/packages/axe-core-rspec/README.md#clauses),
for example:
```ruby
expect(page).to be_axe_clean.within '[data-testid="element"]'
# run only WCAG 2.1 Level AA rules
expect(page).to be_axe_clean.according_to :wcag21aa
# specifies which rule to skip
expect(page).to be_axe_clean.skipping :'link-in-text-block'
# clauses can be chained
expect(page).to be_axe_clean.within('[data-testid="element"]')
.according_to(:wcag21aa)
```
Axe does not test hidden regions, such as inactive menus or modal windows. To test
hidden regions for accessibility, write tests that activate or render the regions visible
and run the matcher again.
You can run accessibility tests locally in the same way as you [run any feature tests](../../testing_guide/frontend_testing.md#how-to-run-a-feature-test).
After adding accessibility tests, make sure to fix all possible errors.
For help on how to do it, refer to [this guide](best_practices.md#quick-checklist).
You can also check accessibility sections in [Pajamas components' documentation](https://design.gitlab.com/components/overview).
If any of the errors require global changes, create a follow-up issue and assign these labels: `accessibility`, `WG::product accessibility`.
### Good practices
Adding accessibility checks in feature tests is easier if you have domain knowledge from the product area in question.
However, there are a few things that can help you contribute to accessibility tests.
#### Find a page from a test
When you don't have the page URL, you can start by running a feature spec in preview mode. To do this, add `WEBDRIVER_HEADLESS=0` to the beginning of the command that runs the tests. You can also pair it with `live_debug` to stop the browser right inside any test case with a `:js` tag (see the documentation on [testing best practices](../../testing_guide/best_practices.md#run-js-spec-in-a-visible-browser)).
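For example, to open a single feature spec in a visible browser window (the spec path is only an illustration):

```shell
WEBDRIVER_HEADLESS=0 bin/rspec spec/features/merge_request/accessibility_spec.rb
```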
#### What parts of a page to add accessibility tests for
In most cases you do not want to test accessibility of a whole page. There are a couple of reasons:
1. We have elements that appear on every application view, such as breadcrumbs or main navigation. Including them in every feature spec takes up quite a lot of resources and multiplies something that can be done just once. These elements have their own feature specs and that's where we want to test them.
1. If a feature spec covers a whole view, the best practice would be to scope it to the `<main id="content-body">` element. Here's an example of such a test case:
```ruby
it "passes axe automated accessibility testing" do
expect(page).to be_axe_clean.within('#content-body')
end
```
1. If a feature test covers only a part of a page, like a section that includes some components, keep the test scoped to that section. If possible, use the same selector that the feature spec uses for its test cases. Here's an example of such a test case:
```ruby
it 'passes axe automated accessibility testing for todo' do
expect(page).to be_axe_clean.within(todo_selector)
end
```
#### Test output not specific enough
When an axe test case fails, it outputs the violation found and the element it concerns. Because we often use Pajamas components,
it may happen that the element is a `<div>` without any annotation that could help you identify it. However, we can take
advantage of the fact that the same axe-core rules are used both by the Ruby tests and by the Deque browser extension, axe DevTools. They both
provide the same output.
1. Make sure you have axe DevTools extension installed in a browser of your choice. See [axe DevTools official website for more information](https://www.deque.com/axe/browser-extensions/).
1. Navigate to the view you're testing with a feature test.
1. Open axe DevTools extension and run a scan of the page.
1. Expand found issues and use Highlight option to see the elements on the page for each violation.
### Known accessibility violations
This section documents violations where a recommendation differs from the [design system](https://design.gitlab.com/):
- `link-in-text-block`: For now, use the `skipping` clause to skip the `:'link-in-text-block'`
rule to fix the violation. After this is fixed as part of [issue 1444](https://gitlab.com/gitlab-org/gitlab-services/design.gitlab.com/-/issues/1444)
and an underline is added to the `GlLink` component, this clause can be removed.
---
title: Accessibility
---
Accessibility is important for users who use screen readers or rely on keyboard-only functionality
to ensure they have an equivalent experience to sighted mouse users.
## Linting for accessibility defects
You can enable linting for accessibility defects with a free VS Code plugin, [axe Accessibility Linter](https://marketplace.visualstudio.com/items?itemName=deque-systems.vscode-axe-linter).
We strongly recommend it to everyone contributing to GitLab who uses VS Code.
1. Open VS Code editor
1. Go to Extensions
1. Search for axe Accessibility Linter and install the plugin
Axe Accessibility Linter works in HTML, Markdown, and Vue files. At the moment, there is no support for HAML files. You get immediate feedback while writing your code.
The GitLab repository contains an `axe-linter.yml` file that adds additional configuration to the plugin.
It enables the linter to analyze some of the Pajamas components by mapping them and their attributes to native HTML elements.
## Automated accessibility testing
Uncover accessibility problems and ensure that your features stay accessible over time by
[implementing automated A11Y tests](automated_testing.md).
- [Accessibility Storybook tests](storybook_tests.md)
- [Accessibility feature tests](feature_tests.md)
## Accessibility best practices
Follow these [best practices](best_practices.md) to implement accessible web applications. These are
some of the topics covered in that guide:
- [Quick checklist](best_practices.md#quick-checklist)
- [Accessible names for screen readers](best_practices.md#provide-accessible-names-for-screen-readers)
- [Icons](best_practices.md#icons)
- [When to use ARIA](best_practices.md#when-to-use-aria)
## Other resources
Use these tools and learning resources to improve your web accessibility workflow and skills.
### Viewing the browser accessibility tree
- [Firefox DevTools guide](https://firefox-source-docs.mozilla.org/devtools-user/accessibility_inspector/index.html#accessing-the-accessibility-inspector)
- [Chrome DevTools guide](https://developer.chrome.com/docs/devtools/accessibility/reference/#pane)
### Browser extensions
We have two options for Web accessibility testing:
- axe for [Firefox](https://www.deque.com/axe/devtools/firefox-browser-extension/)
- axe for [Chrome](https://www.deque.com/axe/devtools/chrome-browser-extension/)
### Other links
- [The A11Y Project](https://www.a11yproject.com/) is a good resource for accessibility
- [Awesome Accessibility](https://github.com/brunopulis/awesome-a11y)
is a compilation of accessibility-related material
---
title: Accessibility best practices
---
## Quick summary
Since [no ARIA is better than bad ARIA](https://w3c.github.io/aria-practices/#no_aria_better_bad_aria),
review the following recommendations before using `aria-*`, `role`, and `tabindex`.
Use semantic HTML, which has accessibility semantics baked in, and ideally test with
[relevant combinations of screen readers and browsers](https://www.accessibility-developer-guide.com/knowledge/screen-readers/relevant-combinations/).
[WebAIM's accessibility analysis of the top million home pages](https://webaim.org/projects/million/#aria)
found that "ARIA correlated to higher detectable errors".
Misuse of ARIA is likely a major cause of these errors,
so when in doubt, don't use `aria-*`, `role`, or `tabindex`, and stick with semantic HTML.
## Enable keyboard navigation on macOS
By default, macOS limits the <kbd>tab</kbd> key to **Text boxes and lists only**. To enable full keyboard navigation:
1. Open **System Preferences**.
1. Select **Keyboard**.
1. Open the **Shortcuts** tab.
1. Enable the setting **Use keyboard navigation to move focus between controls**.
You can read more about enabling browser-specific keyboard navigation on [a11yproject](https://www.a11yproject.com/posts/macos-browser-keyboard-navigation/).
## Quick checklist
- [Text](https://design.gitlab.com/components/text-input#accessibility),
[textarea](https://design.gitlab.com/components/textarea#accessibility),
[select](https://design.gitlab.com/components/select#accessibility),
[checkbox](https://design.gitlab.com/components/checkbox#accessibility),
[radio](https://design.gitlab.com/components/radio-button#accessibility),
[file](#form-inputs-with-accessible-names),
and [toggle](https://design.gitlab.com/components/toggle#accessibility) inputs have accessible names.
- [Buttons](#buttons-and-links-with-descriptive-accessible-names),
[links](#buttons-and-links-with-descriptive-accessible-names),
and [images](#images-with-accessible-names) have descriptive accessible names.
- Icons
- [Non-decorative icons](#icons-that-convey-information) have an `aria-label`.
- [Clickable icons](#icons-that-are-clickable) are buttons, that is, `<gl-button icon="close" />` is used and not `<gl-icon />`.
- Icon-only buttons have an `aria-label`.
- Interactive elements can be [accessed with the Tab key](#support-keyboard-only-use) and have a visible focus state.
- Elements with [tooltips](https://design.gitlab.com/components/tooltip#accessibility) are focusable using the Tab key.
- Are any `role`, `tabindex` or `aria-*` attributes unnecessary?
- Can any `div` or `span` elements be replaced with a more semantic [HTML element](https://developer.mozilla.org/en-US/docs/Web/HTML/Element) like `p`, `button`, or `time`?
## Provide a good document outline
[Headings are the primary mechanism used by screen reader users to navigate content](https://webaim.org/projects/screenreadersurvey8/#finding).
Therefore, the structure of headings on a page should make sense, like a good table of contents.
We should ensure that:
- There is only one `h1` element on the page.
- Heading levels are not skipped.
- Heading levels are nested correctly.
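For example, a well-structured outline might look like the following sketch (the heading text is hypothetical):
```html
<!-- good: one h1, levels increase one at a time and nest logically -->
<h1>Pipelines</h1>
<h2>Pipeline status</h2>
<h3>Failed jobs</h3>
<h2>Pipeline schedules</h2>

<!-- bad: multiple h1 elements and a skipped level (h2 to h4) -->
<h1>Pipelines</h1>
<h1>Pipeline status</h1>
<h2>Failed jobs</h2>
<h4>Job trace</h4>
```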
## Provide accessible names for screen readers
To provide markup with accessible names, ensure every:
- input has an [associated `label`](#examples-of-providing-accessible-names).
- button and link have [visible text](#buttons-and-links-with-descriptive-accessible-names), or `aria-label` when there is no visible text, such as for an icon button with no content.
- image has an [`alt` attribute](#images-with-accessible-names).
- [chart has a long and short description](https://www.w3.org/WAI/tutorials/images/complex/).
- `fieldset` has `legend` as its first child.
- `figure` has `figcaption` as its first child.
- `table` has `caption` as its first child.
Remember that an [`alt` attribute](#images-with-accessible-names) should not be longer than approximately 150 characters. While there are no official guidelines on the length, some screen readers will not read longer strings inside the `alt` attribute.
An accessible name can be provided in multiple ways and is determined by [accessible name calculation](https://www.w3.org/WAI/ARIA/apg/practices/names-and-descriptions/#name_calculation). In simplified terms, these techniques take precedence in the following order (see the example after the list):
1. `aria-labelledby`
1. `aria-label`
1. `alt`, `legend`, `figcaption`, or `caption`
1. `title`
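For example, if several of these techniques are present on the same element, the one highest in the list provides the accessible name (an illustrative sketch; the attributes are combined only to demonstrate precedence):
```html
<!-- The accessible name is "Close the sidebar": aria-labelledby wins over aria-label and title -->
<span id="close-sidebar-label" class="gl-sr-only">Close the sidebar</span>
<button type="button" aria-labelledby="close-sidebar-label" aria-label="Close" title="Close">×</button>
```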
### Examples of providing accessible names
The following subsections contain examples of markup that render HTML elements with accessible names.
Note that [when using `GlFormGroup`](https://bootstrap-vue.org/docs/components/form-group#accessibility):
- Passing only a `label` prop renders a `fieldset` with a `legend` containing the `label` value.
- Passing both a `label` and a `label-for` prop renders a `label` that points to the form input with the same `label-for` ID.
#### Form inputs with accessible names
Groups of checkboxes and radio inputs should be grouped together in a `fieldset` with a `legend`.
`legend` gives the group of checkboxes and radio inputs a label.
If the `label`, child text, or child element is not visually desired,
use the class name `gl-sr-only` to hide the element from everything but screen readers.
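For example, a checkbox group could be marked up as follows (a minimal sketch with hypothetical labels):
```html
<!-- The legend provides the accessible name for the whole group -->
<fieldset>
  <legend>{{ __('Notification events') }}</legend>
  <label><input type="checkbox" /> {{ __('New issue') }}</label>
  <label><input type="checkbox" /> {{ __('Failed pipeline') }}</label>
</fieldset>
```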
File input examples:
```html
<!-- File input with a label -->
<label for="attach-file">{{ __('Attach a file') }}</label>
<input id="attach-file" type="file" />
<!-- File input with a hidden label -->
<label for="attach-file" class="gl-sr-only">{{ __('Attach a file') }}</label>
<input id="attach-file" type="file" />
```
#### Images with accessible names
Image examples:
```html
<img :src="imagePath" :alt="__('A description of the image')" />
<!-- SVGs implicitly have a graphics role so if it is semantically an image we should apply `role="img"` -->
<svg role="img" :alt="__('A description of the image')" />
<!-- A decorative image, hidden from screen readers -->
<img :src="imagePath" :alt="" />
```
#### Buttons and links with descriptive accessible names
Buttons and links should have accessible names that are descriptive enough to be understood in isolation.
```html
<!-- bad -->
<gl-button @click="handleClick">{{ __('Submit') }}</gl-button>
<gl-link :href="url">{{ __('page') }}</gl-link>
<!-- good -->
<gl-button @click="handleClick">{{ __('Submit review') }}</gl-button>
<gl-link :href="url">{{ __("GitLab's accessibility page") }}</gl-link>
```
## Role
In general, avoid using `role`.
Use semantic HTML elements that implicitly have a `role` instead.
| Bad | Good |
| --- | --- |
| `<div role="button">` | `<button>` |
| `<div role="img">` | `<img>` |
| `<div role="link">` | `<a>` |
| `<div role="header">` | `<h1>` to `<h6>` |
| `<div role="textbox">` | `<input>` or `<textarea>` |
| `<div role="article">` | `<article>` |
| `<div role="list">` | `<ol>` or `<ul>` |
| `<div role="listitem">` | `<li>` |
| `<div role="table">` | `<table>` |
| `<div role="rowgroup">` | `<thead>`, `<tbody>`, or `<tfoot>` |
| `<div role="row">` | `<tr>` |
| `<div role="columnheader">` | `<th>` |
| `<div role="cell">` | `<td>` |
## Support keyboard-only use
Keyboard users rely on focus outlines to understand where they are on the page. Therefore, if an
element is interactive you must ensure:
- It can receive keyboard focus.
- It has a visible focus state.
Use semantic HTML, such as `a` (`GlLink`) and `button` (`GlButton`), which provide these behaviors by default.
Keep in mind that:
- <kbd>Tab</kbd> and <kbd>Shift-Tab</kbd> should only move between interactive elements, not static content.
- When you add `:hover` styles, in most cases you should add `:focus` styles too so that the styling is applied for both mouse **and** keyboard users.
- If you remove an interactive element's `outline`, make sure you maintain visual focus state in another way such as with `box-shadow`.
See the [Pajamas Keyboard-only page](https://design.gitlab.com/accessibility/keyboard-only) for more detail.
## `tabindex`
Prefer **no** `tabindex` to using `tabindex`, since:
- Using semantic HTML such as `button` (`GlButton`) implicitly provides `tabindex="0"`.
- Tabbing order should match the visual reading order, and positive `tabindex` values interfere with this.
### Avoid using `tabindex="0"` to make an element interactive
Use interactive elements instead of `div` and `span` tags.
For example:
- If the element should be clickable, use a `button` (`GlButton`).
- If the element should be text editable, use an [`input`](https://design.gitlab.com/components/text-input#accessibility) or [`textarea`](https://design.gitlab.com/components/textarea#accessibility).
Once the markup is semantically complete, use CSS to update it to its desired visual state.
```html
<!-- bad -->
<div role="button" tabindex="0" @click="expand">Expand</div>
<!-- good -->
<gl-button class="gl-p-0!" category="tertiary" @click="expand">Expand</gl-button>
```
### Do not use `tabindex="0"` on interactive elements
Interactive elements are already tab accessible so adding `tabindex` is redundant.
```html
<!-- bad -->
<gl-link href="help" tabindex="0">Help</gl-link>
<gl-button tabindex="0">Submit</gl-button>
<!-- good -->
<gl-link href="help">Help</gl-link>
<gl-button>Submit</gl-button>
```
### Do not use `tabindex="0"` on elements for screen readers to read
Screen readers can read text that is not tab accessible.
The use of `tabindex="0"` is unnecessary and can cause problems,
as screen reader users then expect to be able to interact with it.
```html
<!-- bad -->
<p tabindex="0" :aria-label="message">{{ message }}</p>
<!-- good -->
<p>{{ message }}</p>
```
### Do not use a positive `tabindex`
[Always avoid using `tabindex="1"`](https://webaim.org/techniques/keyboard/tabindex#overview)
or greater.
## Icons
Icons can be split into three different types:
- Icons that are decorative
- Icons that convey meaning
- Icons that are clickable
### Icons that are decorative
Icons are decorative when there's no loss of information to the user when they are removed from the UI.
As the majority of icons within GitLab are decorative, `GlIcon` automatically hides its rendered icons from screen readers.
Therefore, you do not need to add `aria-hidden="true"` to `GlIcon`, as this is redundant.
```html
<!-- unnecessary: gl-icon hides icons from screen readers by default -->
<gl-icon name="rocket" aria-hidden="true" />
<!-- good -->
<gl-icon name="rocket" />
```
### Icons that convey information
Icons convey information if there is loss of information to the user when they are removed from the UI.
An example is a confidential icon that conveys the issue is confidential, and does not have the text "Confidential" next to it.
Icons that convey information must have an accessible name so that the information is conveyed to screen reader users too.
```html
<!-- bad -->
<gl-icon name="eye-slash" />
<!-- good -->
<gl-icon name="eye-slash" :aria-label="__('Confidential issue')" />
```
### Icons that are clickable
Icons that are clickable are semantically buttons, so they should be rendered as buttons, with an accessible name.
```html
<!-- bad -->
<gl-icon name="close" :aria-label="__('Close')" @click="handleClick" />
<!-- good -->
<gl-button icon="close" category="tertiary" :aria-label="__('Close')" @click="handleClick" />
```
## Hiding elements
Use the following table to hide elements from users, when appropriate.
| Hide from sighted users | Hide from screen readers | Hide from both sighted and screen reader users |
| --- | --- | --- |
| `.gl-sr-only` | `aria-hidden="true"` | `display: none`, `visibility: hidden`, or `hidden` attribute |
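For example (an illustrative sketch showing each technique):
```html
<!-- Read by screen readers, not rendered visually -->
<span class="gl-sr-only">{{ __('Status:') }}</span>
<!-- Rendered visually, skipped by screen readers -->
<span aria-hidden="true">•</span>
<!-- Hidden from both sighted and screen reader users -->
<div hidden>{{ __('Draft content') }}</div>
```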
### Hide decorative images from screen readers
To reduce noise for screen reader users, hide decorative images using `alt=""`.
If the image is not an `img` element, such as an inline SVG, you can hide it by adding both `role="img"` and `alt=""`.
`gl-icon` components automatically hide their icons from screen readers so `aria-hidden="true"` is
unnecessary when using `gl-icon`.
```html
<!-- good - decorative images hidden from screen readers -->
<img src="decorative.jpg" alt="">
<svg role="img" alt="" />
<gl-icon name="epic" />
```
## When to use ARIA
No ARIA is required when using semantic HTML, because it already incorporates accessibility.
However, there are some UI patterns that do not have semantic HTML equivalents.
General examples of these are dialogs (modals) and tabs.
GitLab-specific examples are assignee and label dropdowns.
Building such widgets requires ARIA to make them understandable to screen readers.
Proper research and testing should be done to ensure compliance with [WCAG](https://www.w3.org/WAI/standards-guidelines/wcag/).
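For example, a dialog is one such pattern. A minimal, illustrative sketch (not an existing GitLab component, and omitting the focus management a real dialog also needs) could look like:
```html
<!-- The dialog role, aria-modal, and aria-labelledby make the purpose of the widget explicit -->
<div role="dialog" aria-modal="true" aria-labelledby="confirm-delete-title">
  <h2 id="confirm-delete-title">{{ __('Delete branch?') }}</h2>
  <p>{{ __('This action cannot be undone.') }}</p>
  <button type="button">{{ __('Cancel') }}</button>
  <button type="button">{{ __('Delete') }}</button>
</div>
```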
---
title: Automated accessibility testing
---
GitLab is committed to ensuring our platform is accessible to all users. We use automated accessibility testing as part of our comprehensive approach to identify and prevent accessibility barriers.
[We aim to conform to level AA of the World Wide Web Consortium (W3C) Web Content Accessibility Guidelines 2.1](https://design.gitlab.com/accessibility/a11y).
## Our testing approach
GitLab uses multiple approaches for automated accessibility testing to provide comprehensive coverage:
1. **[Feature tests](feature_tests.md)** - End-to-end accessibility testing using axe-core in feature tests to validate complete user flows and page interactions
1. **[Storybook component tests](storybook_tests.md)** - Isolated component testing using Storybook test-runner with axe-playwright to ensure individual Vue components meet accessibility standards
These complementary approaches ensure that both individual components and complete user experiences are accessible.
---
title: HTML style guide
---
See also our [accessibility best practices](../accessibility/best_practices.md).
## Semantic elements
[Semantic elements](https://developer.mozilla.org/en-US/docs/Glossary/Semantics) are HTML tags that
give semantic (rather than presentational) meaning to the data they contain. For example:
- [`<article>`](https://developer.mozilla.org/en-US/docs/Web/HTML/Element/article)
- [`<nav>`](https://developer.mozilla.org/en-US/docs/Web/HTML/Element/nav)
- [`<strong>`](https://developer.mozilla.org/en-US/docs/Web/HTML/Element/strong)
Prefer using semantic tags, but only if the intention is truly accurate with the semantic meaning
of the tag itself. View the [MDN documentation](https://developer.mozilla.org/en-US/docs/Web/HTML/Element)
for a description on what each tag semantically means.
```html
<!-- bad - could use semantic tags instead of divs. -->
<div class="...">
<p>
<!-- bad - this isn't what "strong" is meant for. -->
Simply visit your <strong>Settings</strong> to say hello to the world.
</p>
<div class="...">...</div>
</div>
<!-- good - prefer semantic classes used accurately -->
<section class="...">
<p>
Simply visit your <span class="gl-font-bold">Settings</span> to say hello to the world.
</p>
<footer class="...">...</footer>
</section>
```
## Buttons
### Button type
Button tags require a `type` attribute according to the [W3C HTML specification](https://www.w3.org/TR/2011/WD-html5-20110525/the-button-element.html#dom-button-type).
```html
<!-- bad -->
<button></button>
<!-- good -->
<button type="button"></button>
```
## Links
### Blank target
Arbitrarily opening links in a new tab is not recommended, so refer to the [Pajamas guidelines on links](https://design.gitlab.com/components/link) when considering adding `target="_blank"` to links.
When using `target="_blank"` with `a` tags, you must also add the `rel="noopener noreferrer"` attribute. This prevents a security vulnerability [documented by JitBit](https://www.jitbit.com/alexblog/256-targetblank---the-most-underestimated-vulnerability-ever/).
When using `gl-link`, using `target="_blank"` is sufficient as it automatically adds `rel="noopener noreferrer"` to the link.
```html
<!-- bad -->
<a href="url" target="_blank"></a>
<!-- good -->
<a href="url" target="_blank" rel="noopener noreferrer"></a>
<!-- good -->
<gl-link href="url" target="_blank"></gl-link>
```
### Fake links
**Do not use fake links.** If a link only invokes JavaScript click event handlers, use a button tag instead, because a button is more semantic.
```html
<!-- bad -->
<a class="js-do-something" href="#"></a>
<!-- good -->
<button class="js-do-something" type="button"></button>
```
---
title: Vue.js style guide
---
## Linting
We default to [eslint-plugin-vue](https://github.com/vuejs/eslint-plugin-vue), with the `plugin:vue/recommended` ruleset.
Check the [rules](https://github.com/vuejs/eslint-plugin-vue#bulb-rules) for more documentation.
## Basic Rules
1. Use `.vue` for Vue templates. Do not use `%template` in HAML.
1. Explicitly define data being passed into the Vue app
```javascript
// bad
return new Vue({
el: '#element',
name: 'ComponentNameRoot',
components: {
componentName
},
provide: {
...someDataset
},
props: {
...anotherDataset
},
render: createElement => createElement('component-name'),
}));
// good
const { foobar, barfoo } = someDataset;
const { foo, bar } = anotherDataset;
return new Vue({
el: '#element',
name: 'ComponentNameRoot',
components: {
componentName
},
provide: {
foobar,
barfoo
},
props: {
foo,
bar
},
render: createElement => createElement('component-name'),
}));
```
We discourage the use of the spread operator in this specific case in
order to keep our codebase explicit, discoverable, and searchable.
This applies in any place where we would benefit from the above, such as
when [initializing Vuex state](../vuex.md#why-not-just-spread-the-initial-state).
The pattern above also enables us to easily parse non-scalar values during instantiation.
```javascript
return new Vue({
el: '#element',
name: 'ComponentNameRoot',
components: {
componentName
},
props: {
foo,
bar: parseBoolean(bar)
},
render: createElement => createElement('component-name'),
}));
```
## Component usage within templates
1. Prefer a component's kebab-cased name over other styles when using it in a template
```html
<!-- bad -->
<MyComponent />
<!-- good -->
<my-component />
```
## `<style>` tags
We don't use `<style>` tags in Vue components for a few reasons:
1. You cannot use SCSS variables and mixins, or the [Tailwind CSS](scss.md#tailwind-css) `@apply` directive.
1. These styles get inserted at runtime.
1. We already have a few other ways to define CSS.
Instead of using a `<style>` tag you should use [Tailwind CSS utility classes](scss.md#tailwind-css) or [page specific CSS](https://gitlab.com/groups/gitlab-org/-/epics/3694).
## Vue testing
Over time, a number of programming patterns and style preferences have emerged in our efforts to
effectively test Vue components. The following guide describes some of these.
**These are not strict guidelines**, but rather a collection of suggestions and good practices that
aim to provide insight into how we write Vue tests at GitLab.
### Mounting a component
Typically, when testing a Vue component, the component should be "re-mounted" in every test block.
To achieve this:
1. Create a mutable `wrapper` variable inside the top-level `describe` block.
1. Mount the component using [`mount`](https://v1.test-utils.vuejs.org/api/#mount) or [`shallowMount`](https://v1.test-utils.vuejs.org/api/#shallowMount).
1. Reassign the resulting [`Wrapper`](https://v1.test-utils.vuejs.org/api/wrapper/#wrapper) instance to our `wrapper` variable.
Creating a global, mutable wrapper provides a number of advantages, including the ability to:
- Define common functions for finding components/DOM elements:
```javascript
import MyComponent from '~/path/to/my_component.vue';
describe('MyComponent', () => {
let wrapper;
// this can now be reused across tests
const findMyComponent = () => wrapper.findComponent(MyComponent);
// ...
})
```
- Use a `beforeEach` block to mount the component (see
[the `createComponent` factory](#the-createcomponent-factory) for more information).
- Automatically destroy the component after the test is run with [`enableAutoDestroy`](https://v1.test-utils.vuejs.org/api/#enableautodestroy-hook)
set in [`shared_test_setup.js`](https://gitlab.com/gitlab-org/gitlab/-/blob/d0bdc8370ef17891fd718a4578e41fef97cf065d/spec/frontend/__helpers__/shared_test_setup.js#L20).
#### Async child components
`shallowMount` will not create component stubs for [async child components](https://v2.vuejs.org/v2/guide/components-dynamic-async#Async-Components). In order to properly stub async child components, use the [`stubs`](https://v1.test-utils.vuejs.org/api/options.html#stubs) option. Make sure the async child component has a [`name`](https://v2.vuejs.org/v2/api/#name) option defined, otherwise your `wrapper`'s `findComponent` method may not work correctly.
#### The `createComponent` factory
To avoid duplicating our mounting logic, it's useful to define a `createComponent` factory function
that we can reuse in each test block. This is a closure which should reassign our `wrapper` variable
to the result of [`mount`](https://v1.test-utils.vuejs.org/api/#mount) or
[`shallowMount`](https://v1.test-utils.vuejs.org/api/#shallowMount):
```javascript
import MyComponent from '~/path/to/my_component.vue';
import { shallowMount } from '@vue/test-utils';
describe('MyComponent', () => {
// Initiate the "global" wrapper variable. This will be used throughout our test:
let wrapper;
// Define our `createComponent` factory:
function createComponent() {
// Mount component and reassign `wrapper`:
wrapper = shallowMount(MyComponent);
}
it('mounts', () => {
createComponent();
expect(wrapper.exists()).toBe(true);
});
it('`isLoading` prop defaults to `false`', () => {
createComponent();
expect(wrapper.props('isLoading')).toBe(false);
});
})
```
Similarly, we could further de-duplicate our test by calling `createComponent` in a `beforeEach` block:
```javascript
import MyComponent from '~/path/to/my_component.vue';
import { shallowMount } from '@vue/test-utils';
describe('MyComponent', () => {
// Initiate the "global" wrapper variable. This will be used throughout our test
let wrapper;
// define our `createComponent` factory
function createComponent() {
// mount component and reassign `wrapper`
wrapper = shallowMount(MyComponent);
}
beforeEach(() => {
createComponent();
});
it('mounts', () => {
expect(wrapper.exists()).toBe(true);
});
it('`isLoading` prop defaults to `false`', () => {
expect(wrapper.props('isLoading')).toBe(false);
});
})
```
#### `createComponent` best practices
1. Consider using a single (or a limited number of) object arguments over many arguments.
Defining single parameters for common data like `props` is okay,
but keep in mind our [JavaScript style guide](javascript.md#limit-number-of-parameters) and
stay within the parameter number limit:
```javascript
// bad
function createComponent(props, stubs, mountFn, foo) { }
// good
function createComponent({ props, stubs, mountFn, foo } = {}) { }
// good
function createComponent(props = {}, { stubs, mountFn, foo } = {}) { }
```
1. If you require both `mount` _and_ `shallowMount` within the same set of tests, it
can be useful to define a `mountFn` parameter for the `createComponent` factory that accepts
the mounting function (`mount` or `shallowMount`) to be used to mount the component:
```javascript
import { shallowMount } from '@vue/test-utils';
function createComponent({ mountFn = shallowMount } = {}) { }
```
1. Use the `mountExtended` and `shallowMountExtended` helpers to expose `wrapper.findByTestId()`:
```javascript
import { shallowMountExtended } from 'helpers/vue_test_utils_helper';
import { SomeComponent } from 'components/some_component.vue';
let wrapper;
const createWrapper = () => { wrapper = shallowMountExtended(SomeComponent); };
const someButton = () => wrapper.findByTestId('someButtonTestId');
```
1. Avoid using `data`, `methods`, or any other mounting option that extends component internals.
```javascript
import { shallowMountExtended } from 'helpers/vue_test_utils_helper';
import { SomeComponent } from 'components/some_component.vue';
let wrapper;
// bad :( - This circumvents the actual user interaction and couples the test to component internals.
const createWrapper = ({ data }) => {
wrapper = shallowMountExtended(SomeComponent, {
data
});
};
// good :) - Helpers like `clickShowButton` interact with the actual I/O of the component.
const createWrapper = () => {
wrapper = shallowMountExtended(SomeComponent);
};
const clickShowButton = () => {
wrapper.findByTestId('show').trigger('click');
}
```
### Setting component state
1. Avoid using [`setProps`](https://v1.test-utils.vuejs.org/api/wrapper/#setprops) to set
component state wherever possible. Instead, set the component's
[`propsData`](https://v1.test-utils.vuejs.org/api/options.html#propsdata) when mounting the component:
```javascript
// bad
wrapper = shallowMount(MyComponent);
wrapper.setProps({
myProp: 'my cool prop'
});
// good
wrapper = shallowMount(MyComponent, { propsData: { myProp: 'my cool prop' } });
```
The exception here is when you wish to test component reactivity in some way.
For example, you may want to test the output of a component after a particular watcher has executed. Using `setProps` to test such behavior is okay.
1. Avoid using [`setData`](https://v1.test-utils.vuejs.org/api/wrapper/#setdata) which sets the
component's internal state and circumvents testing the actual I/O of the component.
Instead, trigger events on the component's children or other side-effects to force state changes, as shown in the sketch below.
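A hedged sketch of this approach (the components, event, and test ID are hypothetical):
```javascript
import { shallowMount } from '@vue/test-utils';
import MyComponent from '~/path/to/my_component.vue';
import ToggleButton from '~/path/to/toggle_button.vue';

it('shows the details section when the toggle is clicked', async () => {
  const wrapper = shallowMount(MyComponent);

  // Instead of wrapper.setData({ isExpanded: true }), emit the event that a real
  // user interaction would produce on the child component.
  wrapper.findComponent(ToggleButton).vm.$emit('click');
  await wrapper.vm.$nextTick();

  expect(wrapper.find('[data-testid="details"]').exists()).toBe(true);
});
```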
### Accessing component state
1. When accessing props or attributes, prefer the `wrapper.props('myProp')` syntax over
`wrapper.props().myProp` or `wrapper.vm.myProp`:
```javascript
// good
expect(wrapper.props().myProp).toBe(true);
expect(wrapper.attributes().myAttr).toBe(true);
// better
expect(wrapper.props('myProp')).toBe(true);
expect(wrapper.attributes('myAttr')).toBe(true);
```
1. When asserting multiple props, check the deep equality of the `props()` object with
[`toEqual`](https://jestjs.io/docs/expect#toequalvalue):
```javascript
// good
expect(wrapper.props('propA')).toBe('valueA');
expect(wrapper.props('propB')).toBe('valueB');
expect(wrapper.props('propC')).toBe('valueC');
// better
expect(wrapper.props()).toEqual({
propA: 'valueA',
propB: 'valueB',
propC: 'valueC',
});
```
1. If you are only interested in some of the props, you can use
[`toMatchObject`](https://jestjs.io/docs/expect#tomatchobjectobject). Prefer `toMatchObject`
over [`expect.objectContaining`](https://jestjs.io/docs/expect#expectobjectcontainingobject):
```javascript
// good
expect(wrapper.props()).toEqual(expect.objectContaining({
propA: 'valueA',
propB: 'valueB',
}));
// better
expect(wrapper.props()).toMatchObject({
propA: 'valueA',
propB: 'valueB',
});
```
### Testing props validation
When checking component props, use the `assertProps` helper. Props validation failures are thrown as errors:
```javascript
import { assertProps } from 'helpers/assert_props'
// ...
expect(() => assertProps(SomeComponent, { invalidPropValue: '1', someOtherProp: 2 })).toThrow()
```
# Frontend style guides
See below for the relevant style guides, guidelines, linting, and other information for developing GitLab.
## JavaScript style guide
We use `eslint` to enforce our [JavaScript style guides](javascript.md). Our guide is based on
the excellent [Airbnb](https://github.com/airbnb/javascript) style guide with a few small
changes.
## SCSS style guide
Our [SCSS conventions](scss.md) are enforced through [`stylelint`](https://stylelint.io).
## HTML style guide
Guidelines for writing [HTML code](html.md) consistent with the rest of the codebase.
## Vue style guide
Guidelines and conventions for Vue code may be found within the [Vue style guide](vue.md).
# JavaScript style guide
We use [the Airbnb JavaScript Style Guide](https://github.com/airbnb/javascript) and its accompanying
linter to manage most of our JavaScript style guidelines.
In addition to the style guidelines set by Airbnb, we also have a few specific rules
listed below.
{{< alert type="note" >}}
You can run ESLint locally by running `yarn run lint:eslint:all` or `yarn run lint:eslint $PATH_TO_FILE`.
{{< /alert >}}
## Avoid `forEach`
Avoid `forEach` when mutating data. Use `map`, `reduce` or `filter` instead.
This minimizes mutations in functions,
which aligns with [the Airbnb style guide](https://github.com/airbnb/javascript#testing--for-real).
```javascript
// bad
users.forEach((user, index) => {
user.id = index;
});
// good
const usersWithId = users.map((user, index) => {
return Object.assign({}, user, { id: index });
});
```
## Limit number of parameters
If your function or method has more than 3 parameters, use an object as a parameter
instead.
```javascript
// bad
function a(p1, p2, p3, p4) {
// ...
};
// good
function a({ p1, p2, p3, p4 }) {
// ...
};
```
## Avoid classes to handle DOM events
If the only purpose of the class is to bind a DOM event and handle the callback, prefer
using a function.
```javascript
// bad
class myClass {
constructor(config) {
this.config = config;
}
init() {
document.addEventListener('click', () => {});
}
}
// good
const myFunction = () => {
document.addEventListener('click', () => {
// handle callback here
});
}
```
## Pass element container to constructor
When your class manipulates the DOM, receive the element container as a parameter.
This is more maintainable and performant.
```javascript
// bad
class a {
constructor() {
document.querySelector('.b');
}
}
// good
class a {
constructor(options) {
options.container.querySelector('.b');
}
}
```
## Converting Strings to Integers
When converting strings to integers, `Number` is semantic and can be more readable. Both are allowable, but `Number` has a slight maintainability advantage.
{{< alert type="warning" >}}
`parseInt` **must** include the [radix argument](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/parseInt).
{{< /alert >}}
```javascript
// bad (missing radix argument)
parseInt('10');
// good
parseInt("106", 10);
// good
Number("106");
```
```javascript
// bad (missing radix argument)
things.map(parseInt);
// good
things.map(Number);
```
{{< alert type="note" >}}
If the String could represent a non-integer (a number that includes a decimal), **do not** use `parseInt`. Consider `Number` or `parseFloat` instead.
{{< /alert >}}
## CSS Selectors - Use `js-` prefix
If a CSS class is only being used in JavaScript as a reference to the element, prefix
the class name with `js-`.
```html
// bad
<button class="add-user"></button>
// good
<button class="js-add-user"></button>
```
## ES Module Syntax
For most JavaScript files, use ES module syntax to import or export from modules.
Prefer named exports, as they improve name consistency.
```javascript
// bad (with exceptions, see below)
export default SomeClass;
import SomeClass from 'file';
// good
export { SomeClass };
import { SomeClass } from 'file';
```
Using default exports is acceptable in a few particular circumstances:
- Vue Single File Components (SFCs)
- Vuex mutation files
For more information, see [RFC 20](https://gitlab.com/gitlab-org/frontend/rfcs/-/issues/20).
## CommonJS Module Syntax
Our Node configuration requires CommonJS module syntax. Prefer named exports.
```javascript
// bad
module.exports = SomeClass;
const SomeClass = require('./some_class');
// good
module.exports = { SomeClass };
const { SomeClass } = require('./some_class');
```
## Absolute vs relative paths for modules
Use relative paths if the module you are importing is less than two levels up.
```javascript
// bad
import GitLabStyleGuide from '~/guides/GitLabStyleGuide';
// good
import GitLabStyleGuide from '../GitLabStyleGuide';
```
If the module you are importing is two or more levels up, use an absolute path instead:
```javascript
// bad
import GitLabStyleGuide from '../../../guides/GitLabStyleGuide';
// good
import GitLabStyleGuide from '~/GitLabStyleGuide';
```
Additionally, **do not add to global namespace**.
## Do not use `DOMContentLoaded` in non-page modules
Imported modules should act the same each time they are loaded. `DOMContentLoaded`
events are only allowed on modules loaded in the `/pages/*` directory because those
are loaded dynamically with webpack.
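As a sketch of the split (the paths and the `initMyWidget` helper are hypothetical):
```javascript
// bad - a shared, importable module registers the listener as a side effect
document.addEventListener('DOMContentLoaded', () => {
  // runs every time any importer loads this module
});

// good - the shared module only exports an init function
export const initMyWidget = () => {
  // query the DOM and set things up here
};

// good - the entry point under /pages/* decides when initialization happens
import { initMyWidget } from '~/my_widget';

initMyWidget();
```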
## Avoid XSS
Do not use `innerHTML`, `append()` or `html()` to set content. It opens up too many
vulnerabilities.
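A minimal sketch of the difference (the variables are placeholders); prefer APIs that treat input as plain text:
```javascript
// bad - renders whatever markup the user supplied
element.innerHTML = userProvidedValue;

// good - the value is treated as plain text, not markup
element.textContent = userProvidedValue;
```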
## ESLint
ESLint behavior can be found in our [tooling guide](../tooling.md).
## IIFEs
Avoid using IIFEs (Immediately-Invoked Function Expressions). Although
we have a lot of examples of files which wrap their contents in IIFEs,
this is no longer necessary after the transition from Sprockets to webpack.
Do not use them anymore and feel free to remove them when refactoring legacy code.
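For example, a sketch of the pattern (the constant is arbitrary):
```javascript
// bad - legacy IIFE wrapper
(function() {
  const MAX_RETRIES = 3;
  // ...
})();

// good - module scope already keeps these bindings private under webpack
const MAX_RETRIES = 3;
// ...
```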
## Global namespace
Avoid adding to the global namespace.
```javascript
// bad
window.MyClass = class { /* ... */ };
// good
export default class MyClass { /* ... */ }
```
## Side effects
### Top-level side effects
Top-level side effects are forbidden in any script which contains `export`:
```javascript
// bad
export default class MyClass { /* ... */ }
document.addEventListener("DOMContentLoaded", function(event) {
new MyClass();
});
```
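A counterpart sketch (file names are hypothetical): keep the exporting module side-effect free and move the wiring into an entry point that exports nothing:
```javascript
// good - my_class.js only exports
export default class MyClass { /* ... */ }

// good - a separate entry point (with no exports) performs the side effect
import MyClass from '~/my_class';

new MyClass();
```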
### Avoid side effects in constructors
Avoid making asynchronous calls, API requests or DOM manipulations in the `constructor`.
Move them into separate functions instead. This makes tests easier to write and
avoids violating the [Single Responsibility Principle](https://en.wikipedia.org/wiki/Single_responsibility_principle).
```javascript
// bad
class myClass {
constructor(config) {
this.config = config;
axios.get(this.config.endpoint)
}
}
// good
class myClass {
constructor(config) {
this.config = config;
}
makeRequest() {
axios.get(this.config.endpoint)
}
}
const instance = new myClass();
instance.makeRequest();
```
## Pure Functions and Data Mutation
Strive to write many small pure functions and minimize where mutations occur.
```javascript
// bad
const values = { foo: 1 };

function impureFunction(items) {
  const bar = 1;

  items.foo = items.foo * bar + 2;

  return items.foo;
}

const c = impureFunction(values);

// good
const values = { foo: 1 };

function pureFunction(foo) {
  const bar = 1;

  return foo * bar + 2;
}

const c = pureFunction(values.foo);
```
## Export constants as primitives
Prefer exporting constant primitives with a common namespace over exporting objects. This allows for better compile-time reference checks and helps to avoid accidental `undefined`s at runtime. In addition, it helps in reducing bundle sizes.
Only export the constants as a collection (array, or object) when there is a need to iterate over them, for instance, for a prop validator.
```javascript
// bad
export const VARIANT = {
WARNING: 'warning',
ERROR: 'error',
};
// good
export const VARIANT_WARNING = 'warning';
export const VARIANT_ERROR = 'error';
// good, if the constants need to be iterated over
export const VARIANTS = [VARIANT_WARNING, VARIANT_ERROR];
```
## Error handling
For internal server errors when the server returns `500`, you should return a
generic error message.
When the backend returns errors, the errors should be
suitable to display back to the user.
If, for some reason, it is difficult to do so, as a last resort you can
select particular error messages with prefixing:
1. Ensure that the backend prefixes the error messages to be displayed with:
```ruby
Gitlab::Utils::ErrorMessage.to_user_facing('Example user-facing error-message')
```
1. Use the error message utility function contained in `app/assets/javascripts/lib/utils/error_message.js`.
This utility accepts two parameters: the error object received from the server response and a
default error message. The utility examines the message in the error object for a prefix that
indicates whether the message is meant to be user-facing or not. If the message is intended
to be user-facing, the utility returns it as is. Otherwise, it returns the default error
message passed as a parameter.
```javascript
import { parseErrorMessage } from '~/lib/utils/error_message';
onError(error) {
const errorMessage = parseErrorMessage(error, genericErrorText);
}
```
Note that this prefixing must not be used for API responses. Instead follow the
[REST API](../../../api/rest/troubleshooting.md#status-code-400),
or [GraphQL guides](../../api_graphql_styleguide.md#error-handling) on how to consume error objects.
# SCSS style guide
## Utility Classes
In order to reduce the generation of more CSS as our site grows, prefer the use
of utility classes over adding new CSS. In complex cases, CSS can be addressed
by adding component classes.
### Where are CSS utility classes defined?
Utility classes are generated by [Tailwind CSS](https://tailwindcss.com/). There are three ways to view Tailwind CSS classes:
- [GitLab Tailwind CSS documentation](https://gitlab-org.gitlab.io/frontend/tailwind-documentation): A documentation site specific to the GitLab Tailwind configuration. It is a searchable list of all available Tailwind CSS classes.
- [Tailwind CSS autocomplete](#tailwind-css-autocomplete): Can be used in VS Code or RubyMine.
- [Tailwind CSS config viewer](https://gitlab-org.gitlab.io/gitlab-ui/tailwind-config-viewer/): A visual view of Tailwind CSS classes specific to our design system (spacing, colors, sizing, etc). Does not show all available Tailwind CSS classes.
### What CSS utility classes are deprecated?
Classes in [`common.scss`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/assets/stylesheets/framework/common.scss)
are being deprecated. Classes in [`common.scss`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/assets/stylesheets/framework/common.scss)
that use non-design-system values should be avoided. Use classes with conforming values instead.
Avoid [Bootstrap's Utility Classes](https://getbootstrap.com/docs/4.3/utilities/).
{{< alert type="note" >}}
While migrating [Bootstrap's Utility Classes](https://getbootstrap.com/docs/4.3/utilities/)
to the [GitLab UI](https://gitlab.com/gitlab-org/gitlab-services/design.gitlab.com/-/blob/main/packages/gitlab-ui/doc/css.md#utilities)
utility classes, note both the classes for margin and padding differ. The size scale used at
GitLab differs from the scale used in the Bootstrap library. For a Bootstrap padding or margin
utility, you may need to double the size of the applied utility to achieve the same visual
result (such as `ml-1` becoming `gl-ml-2`).
{{< /alert >}}
### Tailwind CSS
As of August 2024, we are using [Tailwind CSS](https://tailwindcss.com/) as our CSS utilities provider.
This replaces the previous, custom-built solution. See the [Tailwind CSS design document](https://handbook.gitlab.com/handbook/engineering/architecture/design-documents/tailwindcss/)
for motivation, proposal, and implementation details.
#### Tailwind CSS basics
Below are some Tailwind CSS basics and information about how it has been
configured to use the [Pajamas design system](https://design.gitlab.com/). For a
more in-depth guide see the [official Tailwind CSS documentation](https://tailwindcss.com/docs/utility-first).
##### Prefix
We have configured Tailwind CSS to use a
[prefix](https://tailwindcss.com/docs/configuration#prefix) so all utility classes are prefixed with `gl-`.
When using responsive utilities or state modifiers the prefix goes after the colon.
**Examples**: `gl-mt-5`, `lg:gl-mt-5`.
##### Responsive CSS utility classes
[Responsive CSS utility classes](https://tailwindcss.com/docs/responsive-design) are prefixed with the breakpoint name, followed by the `:` character.
The available breakpoints are configured in [tailwind.defaults.js](https://gitlab.com/gitlab-org/gitlab-ui/-/blob/7c0fb4b07a0f0d0a58dd0137831412dbf53ea498/tailwind.defaults.js#L482).
**Example**: `lg:gl-mt-5`
##### Hover, focus, and other state modifiers
[State modifiers](https://tailwindcss.com/docs/hover-focus-and-other-states)
can be used to conditionally apply any Tailwind CSS class. Prefix the CSS utility class
with the name of the modifier, followed by the `:` character.
**Example**: `hover:gl-underline`
##### `!important` modifier
You can use the [important modifier](https://tailwindcss.com/docs/configuration#important-modifier) by adding `!` to the beginning of the CSS utility class. When using in conjunction with responsive utility classes or state modifiers the `!` goes after the `:` character.
**Examples**: `!gl-mt-5`, `lg:!gl-mt-5`, `hover:!gl-underline`
##### Spacing and sizing CSS utility classes
Spacing and sizing CSS utility classes (for example, `margin`, `padding`, `width`, `height`) use our spacing scale defined in
[src/tokens/build/tailwind/tokens.cjs](https://gitlab.com/gitlab-org/gitlab-ui/-/blob/7c0fb4b07a0f0d0a58dd0137831412dbf53ea498/src/tokens/build/tailwind/tokens.cjs). See [https://gitlab-org.gitlab.io/frontend/tailwind-documentation/margin](https://gitlab-org.gitlab.io/frontend/tailwind-documentation/margin) for available CSS utility classes.
**Example**: `gl-mt-5` is `margin-top: 1rem;`
##### Color CSS utility classes
Color CSS utility classes (e.g. `color` and `background-color`) use colors defined in
[src/tokens/build/tailwind/tokens.cjs](https://gitlab.com/gitlab-org/gitlab-ui/-/blob/7c0fb4b07a0f0d0a58dd0137831412dbf53ea498/src/tokens/build/tailwind/tokens.cjs).
See [https://gitlab-org.gitlab.io/frontend/tailwind-documentation/text-color](https://gitlab-org.gitlab.io/frontend/tailwind-documentation/text-color) for available CSS utility classes.
**Example**: `gl-text-subtle` is `color: var(--gl-text-color-subtle, #626168);`
#### Building the Tailwind CSS bundle
When using Vite or Webpack with the GitLab Development Kit, Tailwind CSS watches for file changes to
build detected utilities on the fly.
To build a fresh Tailwind CSS bundle, run `yarn tailwindcss:build`. This is the script that gets
called internally when building production assets with `bundle exec rake gitlab:assets:compile`.
However the bundle gets built, the output is saved to `app/assets/builds/tailwind.css`.
#### Tailwind CSS autocomplete
Tailwind CSS autocomplete lists all available classes in your code editor.
##### VS Code
{{< alert type="note" >}}
If you are having trouble with slow autocomplete you may need to [increase the amount of memory the TS server is allowed to use](../type_hinting.md#vs-code-settings).
{{< /alert >}}
Install the [Tailwind CSS IntelliSense](https://marketplace.visualstudio.com/items?itemName=bradlc.vscode-tailwindcss)
extension. For HAML and custom `*-class` prop support these are the recommended settings:
```json
{
"tailwindCSS.experimental.classRegex": [
["class: [\"|']+([^\"|']*)[\"|']+", "([a-zA-Z0-9\\-:!/]+)"],
["(\\.[\\w\\-.]+)[\\n\\=\\{\\s]", "([\\w\\-]+)"],
["[a-z]+-class(?:es)?=\"([^'\"]*)\""]
],
"tailwindCSS.emmetCompletions": true
}
```
##### RubyMine
Tailwind CSS autocomplete is [enabled by default](https://www.jetbrains.com/help/ruby/tailwind-css.html).
For full HAML and custom `*-class` prop support these are the recommended updates to the default settings:
```json
{
"includeLanguages": {
"haml": "html"
},
"emmetCompletions": true,
"experimental": {
"classRegex": [
["class: [\"|']+([^\"|']*)[\"|']+", "([a-zA-Z0-9\\-:!/]+)"],
["(\\.[\\w\\-.]+)[\\n\\=\\{\\s]", "([\\w\\-]+)"],
["[a-z]+-class(?:es)?=\"([^'\"]*)\""]
]
}
}
```
### Where should you put new utility classes?
Utility classes are generated by [Tailwind CSS](https://tailwindcss.com/) which
supports most CSS features. If there is something that is not available we should
update [tailwind.defaults.js](https://gitlab.com/gitlab-org/gitlab-ui/-/blob/7c0fb4b07a0f0d0a58dd0137831412dbf53ea498/tailwind.defaults.js) in GitLab UI.
### When should you create component classes?
We recommend a "utility-first" approach.
1. Start with utility classes.
1. If composing utility classes into a component class removes code duplication and encapsulates a clear responsibility, do it.
This encourages an organic growth of component classes and prevents the creation of
one-off non-reusable classes. Also, the kind of classes that emerge from "utility-first"
tend to be design-centered (for example, `.button`, `.alert`, `.card`) rather than
domain-centered (for example, `.security-report-widget`, `.commit-header-icon`).
Inspiration:
- <https://tailwindcss.com/docs/utility-first>
- <https://tailwindcss.com/docs/extracting-components>
### Leveraging Tailwind CSS in HTML and in stylesheets
When writing component classes, it's important to effectively integrate Tailwind CSS's utility classes to
maintain consistency with the design system and keep the CSS bundles small.
**Utility CSS Classes in HTML vs. in stylesheets**:
By using the utility classes directly in the HTML, we can keep the CSS file size smaller and adhere
to the utility-first philosophy. By avoiding combining utility classes with custom styles in one component class
unless absolutely necessary, we can prevent confusion and potential conflicts.
- **Reasons for the Preference**:
- **Smaller CSS File Size**: Utilizing utility classes directly can lead to more compact CSS files and
promote a more consistent design system.
- **Clarity and Maintainability**: When utility classes are used in HTML, it's clearer how styles are
applied, reducing the risk of conflicts and regressions.
- **Potential Issues with Combining Styles**:
- **Conflicts**: If utility classes and custom styles are combined in a single class, conflicts can arise,
especially when the styles have interdependencies.
- **Regressions**: It becomes less obvious how styles should resolve, leading to possible regressions
or unexpected behavior.
By following these guidelines, we can create clean, maintainable stylesheets that leverage Tailwind CSS effectively.
#### 1. Use utility classes directly in HTML (preferred approach)
For better maintainability and to adhere to the utility-first principle, add utility classes directly
to the HTML element. A component class should primarily contain only the non-utility CSS styles.
In the following example, you add the utility classes `gl-fixed` and `gl-inset-x-0`, instead of adding
`position: fixed; right: 0; left: 0;` to the SCSS file:
```html
<!-- Bad -->
<div class="my-class"></div>
<style>
.my-class {
top: $header-height;
min-height: $comparison-empty-state-height;
position: fixed;
left: 0px;
right: 0px;
}
</style>
<!-- Good -->
<div class="my-class gl-fixed gl-inset-x-0"></div>
<style>
.my-class {
top: $header-height;
min-height: $comparison-empty-state-height;
}
</style>
```
#### 2. Apply utility classes in component classes (when necessary)
Sometimes it might not be feasible to use utility classes directly in HTML, and you need to include them in our
custom SCSS files. Then, you might want to inherit style definitions from the design system without needing to figure
out the relevant properties or values. To simplify this process, you can use Tailwind CSS's
[`@apply` directive](https://tailwindcss.com/docs/reusing-styles#extracting-classes-with-apply)
to include utilities' style definitions in your custom styles.
Using `@apply` is _encouraged_ for applying CSS properties that depend on the design system (e.g. `margin`, `padding`).
For CSS properties that are unit-less (e.g. `display: flex`), it is okay to use CSS properties directly.
```scss
// Bad
.my-class {
margin-top: 0.5rem;
}
// Okay
.my-class {
display: flex;
}
// Good
.my-class {
@apply gl-mt-5 gl-flex;
}
```
The preferred way to use `@apply` is to combine multiple CSS classes in a single line or at most two,
like in the example above. This approach keeps the CSS concise and easy to read:
```css
// Good
.my-class {
@apply gl-mt-5 gl-flex gl-items-center;
}
```
Avoid splitting classes across multiple lines, as shown below.
```css
// Avoid
@apply gl-mt-5;
@apply gl-flex;
@apply gl-items-center;
```
The reason for this is that IDE extensions might only be able to detect conflicts when
the CSS Classes are in one line:
```css
// ✅ Conflict detected: 'gl-bg-subtle' applies the same CSS properties as 'gl-bg-default'.(cssConflict)
@apply gl-bg-default gl-bg-subtle;
// ❌ No conflict detected
@apply gl-bg-default;
@apply gl-bg-subtle;
```
The exception to this rule is when working with `!important`. Since `!important` applies to
the entire line, each class that requires it should be applied on its own line. For instance:
```css
@apply gl-flex gl-items-center;
@apply gl-mt-5 #{!important};
```
This ensures that `!important` applies only where intended without affecting other classes in the same line.
## Responsive design
Our UI should work well on mobile and desktop. To accomplish this we use [CSS media queries](https://developer.mozilla.org/en-US/docs/Web/CSS/CSS_media_queries/Using_media_queries). In general, we should take a mobile-first approach to media queries: write CSS for mobile, then use min-width media queries to override styles on desktop. An exception to this rule is setting the display mode on child components. For example, when hiding `GlButton` on mobile we don't want to override the display mode set by our component CSS, so we should use a max-width media query such as `max-lg:gl-hidden`.
### Tailwind CSS classes
```html
<!-- Bad -->
<div class="gl-mt-5 max-lg:gl-mt-3"></div>
<!-- Good -->
<div class="gl-mt-3 md:gl-mt-5"></div>
<!-- Bad -->
<div class="gl-mt-3 sm:max-lg:gl-mt-5"></div>
<!-- Good -->
<div class="gl-mt-3 sm:gl-mt-5 lg:gl-mt-3"></div>
<!-- Bad -->
<!-- Changing the display mode of child components can cause visual regressions. -->
<gl-button class="gl-hidden lg:gl-flex">Edit</gl-button>
<!-- Good -->
<gl-button class="max-lg:gl-hidden">Edit</gl-button>
```
### Component classes
```scss
// Bad
.class-name {
@apply gl-mt-5 max-lg:gl-mt-3;
}
// Good
.class-name {
@apply gl-mt-3 lg:gl-mt-5;
}
// Bad
.class-name {
display: block;
@include media-breakpoint-down(lg) {
display: flex;
}
}
// Good
.class-name {
display: flex;
@include media-breakpoint-up(lg) {
display: block;
}
}
```
## Naming
Filenames should use `snake_case`.
CSS classes should use the `lowercase-hyphenated` format rather than
`snake_case` or `camelCase`.
```scss
// Bad
.class_name {
color: #fff;
}
// Bad
.className {
color: #fff;
}
// Good
.class-name {
color: #fff;
}
```
Avoid making compound class names with SCSS `&` features. It makes
searching for usages harder, and provides limited benefit.
```scss
// Bad
.class {
&-name {
color: orange;
}
}
// Good
.class-name {
color: #fff;
}
```
Class names should be used instead of tag name selectors.
Using tag name selectors is discouraged because they can affect
unintended elements in the hierarchy.
```scss
// Bad
ul {
color: #fff;
}
// Good
.class-name {
color: #fff;
}
// Best
// prefer an existing utility class over adding existing styles
```
Class names are also preferable to IDs. Rules that use IDs
are not reusable, as there can only be one affected element on
the page.
```scss
// Bad
#my-element {
padding: 0;
}
// Good
.my-element {
padding: 0;
}
```
## Nesting
Avoid unnecessary nesting. The extra specificity of a wrapper component
makes things harder to override.
```scss
// Bad
.component-container {
.component-header {
/* ... */
}
.component-body {
/* ... */
}
}
// Good
.component-container {
/* ... */
}
.component-header {
/* ... */
}
.component-body {
/* ... */
}
```
## Selectors with a `js-` Prefix
Do not use any selector prefixed with `js-` for styling purposes. These
selectors are intended for use only with JavaScript to allow for removal or
renaming without breaking styling.
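For example (the class names are arbitrary):
```scss
// Bad
.js-add-user {
  color: #fff;
}

// Good
.add-user-button {
  color: #fff;
}
```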
## Selectors with Util CSS Classes
Do not use utility CSS classes as selectors in your stylesheets. These classes
are likely to change, requiring updates to the selectors and making the
implementation harder to maintain. Instead, use another existing CSS class or
add a new custom CSS class for styling elements. This approach improves
maintainability and reduces the risk of bugs.
```scss
// ❌ Bad
.gl-mb-5 {
/* ... */
}
// ✅ Good
.component-header {
/* ... */
}
```
## Selectors with ARIA attributes
Do not use any attribute selector with ARIA for styling purposes. These
attributes and roles are intended for supporting assistive technology.
The structure of components annotated with ARIA might change, and so might their styling.
We need to be able to move these roles and attributes to different elements without breaking styling.
```scss
// Bad
&[aria-expanded=false] &-header {
border-bottom: 0;
}
// Good
&.is-collapsed &-header {
border-bottom: 0;
}
```
## Using `extend` at-rule
Usage of the `extend` at-rule is prohibited due to
[memory leaks](https://gitlab.com/gitlab-org/gitlab/-/issues/323021) and
[the rule doesn't work as it should](https://sass-lang.com/documentation/breaking-changes/extend-compound/).
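As a sketch of an alternative (the names and values are arbitrary), a mixin, or utility classes in the markup, achieves the same reuse without the at-rule:
```scss
// Bad
%alert-base {
  padding: 0.5rem;
}

.alert-danger {
  @extend %alert-base;
}

// Good
@mixin alert-base {
  padding: 0.5rem;
}

.alert-danger {
  @include alert-base;
}
```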
## Linting
We use [stylelint](https://stylelint.io) to check for style guide conformity. It uses the
ruleset in `.stylelintrc` and rules from
[our SCSS configuration](https://gitlab.com/gitlab-org/frontend/gitlab-stylelint-config).
`.stylelintrc` is located in the root directory of the project.
To check if any warnings are produced by your changes, run `yarn lint:stylelint`
in the GitLab directory. Stylelint also runs in GitLab CI/CD to
catch any warnings.
If the lint task is throwing warnings you don't understand, the stylelint
documentation includes [a full list of its rules](https://stylelint.io/user-guide/rules/).
# TypeScript
## History with GitLab
TypeScript has been [considered](https://gitlab.com/gitlab-org/frontend/rfcs/-/issues/35),
discussed, promoted, and rejected for years at GitLab. The general
conclusion is that we are unable to integrate TypeScript into the main
project because the costs outweigh the benefits.
- The main project has **a lot** of pre-existing code that is not strongly typed.
- The main contributors to the main project are not all familiar with TypeScript.
Apart from the main project, TypeScript has been profitably employed in
a handful of satellite projects.
## Projects using TypeScript
The following GitLab projects use TypeScript:
- [`gitlab-web-ide`](https://gitlab.com/gitlab-org/gitlab-web-ide/)
- [`gitlab-vscode-extension`](https://gitlab.com/gitlab-org/gitlab-vscode-extension/)
- [`gitlab-language-server-for-code-suggestions`](https://gitlab.com/gitlab-org/editor-extensions/gitlab-language-server-for-code-suggestions)
- [`gitlab-org/cluster-integration/javascript-client`](https://gitlab.com/gitlab-org/cluster-integration/javascript-client)
## Recommendations
### Set up ESLint and TypeScript configuration
When setting up a new TypeScript project, configure strict type-safety rules for
ESLint and TypeScript. This ensures that the project remains as type-safe as possible.
The [GitLab Workflow Extension](https://gitlab.com/gitlab-org/gitlab-vscode-extension/)
project is a good model for a TypeScript project's boilerplate and configuration.
Consider copying the `tsconfig.json` and `.eslintrc.json` from there.
For `tsconfig.json`:
- Use [`"strict": true`](https://www.typescriptlang.org/tsconfig/#strict).
This enforces the strongest type-checking capabilities in the project and
prohibits overriding type-safety.
- Use [`"skipLibCheck": true`](https://www.typescriptlang.org/tsconfig/#skipLibCheck).
This improves compile time by checking only the referenced `.d.ts`
files, as opposed to all `.d.ts` files in `node_modules`.
For `.eslintrc.json` (or `.eslintrc.js`):
- Make sure that TypeScript-specific parsing and linting are placed in an `overrides`
entry for `**/*.ts` files. This way, linting regular `.js` files
remains unaffected by the TypeScript-specific rules.
- Extend from [`plugin:@typescript-eslint/recommended`](https://typescript-eslint.io/rules/?supported-rules=recommended)
which has some very sensible defaults, such as:
- [`"@typescript-eslint/no-explicit-any": "error"`](https://typescript-eslint.io/rules/no-explicit-any/)
- [`"@typescript-eslint/no-unsafe-assignment": "error"`](https://typescript-eslint.io/rules/no-unsafe-assignment/)
- [`"@typescript-eslint/no-unsafe-return": "error"`](https://typescript-eslint.io/rules/no-unsafe-return/)
### Avoid `any`
Avoid `any` at all costs. This should already be configured in the project's linter,
but it's worth calling out here.
Developers commonly resort to `any` when dealing with data structures that cross
domain boundaries, such as handling HTTP responses or interacting with untyped
libraries. This appears convenient at first. However, opting for a well-defined type (or using
`unknown` and employing type narrowing through predicates) carries substantial benefits.
```typescript
// Bad :(
function handleMessage(data: any) {
console.log("We don't know what data is. This could blow up!", data.special.stuff);
}
// Good :)
function handleMessage(data: unknown) {
console.log("Sometimes it's okay that it remains unknown.", JSON.stringify(data));
}
// Also good :)
function isFooMessage(data: unknown): data is { foo: string } {
  return typeof data === 'object' && data !== null && 'foo' in data;
}
function handleMessage(data: unknown) {
if (isFooMessage(data)) {
console.log("We know it's a foo now. This is safe!", data.foo);
}
}
```
### Avoid casting with `<>` or `as`
Avoid casting with `<>` or `as` as much as possible.
Type casting explicitly circumvents type-safety. Consider using
[type predicates](https://www.typescriptlang.org/docs/handbook/2/narrowing.html#using-type-predicates).
```typescript
// Bad :(
function handler(data: unknown) {
console.log((data as StuffContainer).stuff);
}
// Good :)
function hasStuff(data: unknown): data is StuffContainer {
if (data && typeof data === 'object') {
return 'stuff' in data;
}
return false;
}
function handler(data: unknown) {
if (hasStuff(data)) {
// No casting needed :)
    console.log(data.stuff);
    return;
  }
throw new Error('Expected data to have stuff. Catastrophic consequences might follow...');
}
```
There are some rare cases where this might be acceptable (consider
[this test utility](https://gitlab.com/gitlab-org/gitlab-web-ide/-/blob/3ea8191ed066811caa4fb108713e7538b8d8def1/packages/vscode-extension-web-ide/test-utils/createFakePartial.ts#L1)). However, 99% of the
time, there's a better way.
### Prefer `interface` over `type` for new structures
Prefer declaring a new `interface` over declaring a new `type` alias when defining new structures.
Interfaces and type aliases have a lot of cross-over, but only interfaces can be used
with the `implements` keyword. A class is not able to `implement` a `type` (only an `interface`),
so using `type` would restrict the usability of the structure.
```typescript
// Bad :(
type Fooer = {
foo: () => string;
}
// Good :)
interface Fooer {
foo: () => string;
}
```
From the [TypeScript guide](https://www.typescriptlang.org/docs/handbook/2/everyday-types.html#differences-between-type-aliases-and-interfaces):
> If you would like a heuristic, use `interface` until you need to use features from `type`.
### Use `type` to define aliases for existing types
Use `type` to define aliases for existing types, classes, or interfaces. Use
the TypeScript [Utility Types](https://www.typescriptlang.org/docs/handbook/utility-types.html)
to provide transformations.
```typescript
interface Config {
foo: string;
isBad: boolean;
}
// Bad :(
type PartialConfig = {
foo?: string;
isBad?: boolean;
}
// Good :)
type PartialConfig = Partial<Config>;
```
### Use union types to improve inference
```typescript
// Bad :(
interface Foo { type: string }
interface FooBar extends Foo { bar: string }
interface FooZed extends Foo { zed: string }
const doThing = (foo: Foo) => {
if (foo.type === 'bar') {
// Casting bad :(
console.log((foo as FooBar).bar);
}
}
// Good :)
interface FooBar { type: 'bar', bar: string }
interface FooZed { type: 'zed', zed: string }
type Foo = FooBar | FooZed;
const doThing = (foo: Foo) => {
if (foo.type === 'bar') {
// No casting needed :) - TS knows we are FooBar now
console.log(foo.bar);
}
}
```
## Future plans
- Shared ESLint configuration to reuse across TypeScript projects.
## Related topics
- [TypeScript Handbook](https://www.typescriptlang.org/docs/handbook/intro.html)
- [TypeScript notes in GitLab Workflow Extension](https://gitlab.com/gitlab-org/gitlab-vscode-extension/-/blob/main/docs/developer/coding-guidelines.md?ref_type=heads#typescript)
# Frontend onboarding course
Welcome to the Frontend Onboarding Course at GitLab!
In this course, we walk you through your first professional frontend development experience, helping you gain real-world skills and learn how to contribute to a large-scale codebase effectively.
Throughout the course, we'll follow a structured approach.
Each lesson focuses on solving a specific problem on GitLab, giving you hands-on experience.
You'll learn theory that you can immediately put into practice.
By working on real-world GitLab issues, you'll encounter challenges and learn how to navigate the codebase effectively while improving the GitLab product at the same time.
We believe in an interactive learning experience.
You'll have the opportunity to ask questions and seek help from the GitLab community.
We appreciate your contributions and are here to support your learning while at the same time making GitLab better.
Our teaching style prioritizes practical learning.
Lessons include an introduction to the problem, theory, live coding walkthroughs, and similar issues for you to tackle.
As you progress, the complexity of the tasks increases, helping you grow your skills.
Join us on this journey of front-end development at GitLab. Say hello in [the Discord community](https://discord.com/invite/gitlab) and let's learn and improve together.
## Lessons
- [Lesson 1](lesson_1.md)
## Structure and timings
The course is run over 6 weeks, with a required time commitment of 5-10 hours per week.
The course is free of charge, but we do ask for a commitment to complete the curriculum (including 10 merged merge requests).
After completing the course, you receive a certificate and GitLab achievement.
Each week consists of the following sessions:
- 1-hour relaxed discussion-style lesson with an explanation of how the GitLab frontend works. Each week features a different guest and includes an AMA portion.
- 2-hour live coding lesson with a practical task for participants to complete.
- 2 x 2-hour dedicated office hours sessions where participants can work on the task assigned in the lesson with GitLab frontend engineers. (The two sessions run in different time zones because they require participants to join synchronously.)
A fortnightly 1-on-1 mentoring session is also available to each participant.
There are 10 places available on the course.
The date will be set after the course material has been prepared.
Complete the [Frontend Onboarding Course Application Form](https://forms.gle/39Rs4w4ZxQuByhE4A) to apply.
You may also participate in the course informally at your own pace, without the benefit of the synchronous office hours or mentoring session.
GitLab team members are happy to support you regardless.
## Curriculum summary
### Lesson 1
- What is a development environment?
- What is the GDK?
- Installing the GDK.
- GDK tips and tricks.
- Using Gitpod to run the GDK.
- Navigating the GitLab codebase.
- Writing a good merge request.