Source: https://docs.gitlab.com/reply_by_email
Repository file: https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/reply_by_email.md
Date extracted: 2025-08-13
---
stage: Monitor
group: Platform Insights
info: To determine the technical writer assigned to the Stage/Group associated with
this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Reply by email
description: Configure comments on issues and merge requests with replies by email.
breadcrumbs:
- doc
- administration
---
{{< details >}}
- Tier: Free, Premium, Ultimate
- Offering: GitLab Self-Managed
{{< /details >}}
GitLab can be set up to allow users to comment on issues and merge requests by
replying to notification emails.
## Prerequisite
Make sure [incoming email](incoming_email.md) is set up.
## How replying by email works
Replying by email happens in three steps:
1. GitLab sends a notification email.
1. You reply to the notification email.
1. GitLab receives your reply to the notification email.
### GitLab sends a notification email
When GitLab sends a notification email:
- The `Reply-To` header is set to your configured email address.
- If the address contains a `%{key}` placeholder, it's replaced with a specific reply key.
- The reply key is added to the `References` header.
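As an illustration (the address, key, and `Message-ID` values below are hypothetical), a notification email for an issue might carry headers like:
```plaintext
Reply-To: reply+59d8df8370b7e95c@gitlab.example.com
Message-ID: <issue_22@gitlab.example.com>
References: <reply-59d8df8370b7e95c@gitlab.example.com>
```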
### You reply to the notification email
When you reply to the notification email, your email client:
- Sends the email to the `Reply-To` address it got from the notification email.
- Sets the `In-Reply-To` header to the value of the `Message-ID` header from the
notification email.
- Sets the `References` header to the value of the `Message-ID` plus the value of
the notification email's `References` header.
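For example (values hypothetical), a reply to a notification that had `Message-ID: <issue_22@gitlab.example.com>` and a reply key in its `References` header would typically be sent with:
```plaintext
To: reply+59d8df8370b7e95c@gitlab.example.com
In-Reply-To: <issue_22@gitlab.example.com>
References: <reply-59d8df8370b7e95c@gitlab.example.com> <issue_22@gitlab.example.com>
```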
### GitLab receives your reply to the notification email
When GitLab receives your reply, it looks for the reply key in the
[list of accepted headers](incoming_email.md#accepted-headers).
If a reply key is found, your response appears as a comment on the relevant issue,
merge request, commit, or other item that triggered the notification.
For more information about the `Message-ID`, `In-Reply-To`, and `References` headers,
see [RFC 5322](https://www.rfc-editor.org/rfc/rfc5322#section-3.6.4).
Source: https://docs.gitlab.com/housekeeping
Repository file: https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/housekeeping.md
Date extracted: 2025-08-13
---
stage: Data Access
group: Gitaly
info: To determine the technical writer assigned to the Stage/Group associated with
this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Housekeeping
breadcrumbs:
- doc
- administration
---
{{< details >}}
- Tier: Free, Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated
{{< /details >}}
GitLab supports and automates housekeeping tasks in Git repositories to ensure
that they can be served as efficiently as possible. Housekeeping tasks include:
- Compressing Git objects and revisions.
- Removing unreachable objects.
- Removing stale data like lock files.
- Maintaining data structures that improve performance.
- Updating object pools to improve object deduplication across forks.
{{< alert type="warning" >}}
Do not manually execute Git commands to perform housekeeping in Git
repositories that are controlled by GitLab. Doing so may lead to corrupt
repositories and data loss.
{{< /alert >}}
## Housekeeping strategy
Gitaly can perform housekeeping tasks in a Git repository in two ways:
- [Eager housekeeping](#eager-housekeeping) executes specific housekeeping tasks
independent of the state a repository is in.
- [Heuristical housekeeping](#heuristical-housekeeping) executes housekeeping
tasks based on a set of heuristics that determine what housekeeping tasks need
to be executed based on the repository state.
### Eager housekeeping
The "eager" housekeeping strategy executes housekeeping tasks in a repository
independent of the repository state. This is the default strategy as used by the
[manual trigger](#manual-trigger) and the push-based trigger.
The eager housekeeping strategy is controlled by the GitLab application.
Depending on the trigger that caused the housekeeping job to run, GitLab asks
Gitaly to perform specific housekeeping tasks. Gitaly performs these tasks even
if the repository is in an optimized state. As a result, this strategy can be
inefficient in large repositories where performing the housekeeping tasks may
be slow.
### Heuristical housekeeping
{{< history >}}
- [Introduced](https://gitlab.com/gitlab-org/gitaly/-/issues/2634) in GitLab 14.9 for the [manual trigger](#manual-trigger) and the push-based trigger [with a flag](feature_flags/_index.md) named `optimized_housekeeping`. Enabled by default.
- [Enabled on GitLab.com](https://gitlab.com/gitlab-org/gitlab/-/issues/353607) in GitLab 14.10.
- [Generally available](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/107661) in GitLab 15.8. Feature flag `optimized_housekeeping` removed.
{{< /history >}}
The heuristical (or "opportunistic") housekeeping strategy analyzes the
repository's state and executes housekeeping tasks only when it finds one or
more data structures are insufficiently optimized. This is the strategy used by
[scheduled housekeeping](#scheduled-housekeeping).
Heuristical housekeeping uses the following information to decide on the tasks
it needs to run:
- The number of loose and stale objects.
- The number of packfiles that contain already-compressed objects.
- The number of loose references.
- The presence of a commit-graph.
The decision whether any of the analyzed data structures need to be optimized is
based on the size of the repository:
- Objects are repacked less frequently the bigger the total size of all objects.
- References are repacked less frequently the more references there are in
total.
Gitaly does this to offset the fact that optimizing those data structures takes
more time the bigger they get. It is especially important in large
monorepos (which receive a lot of traffic) to avoid optimizing them too
frequently.
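This scaling idea can be sketched as follows. This is an illustrative model only, not Gitaly's actual heuristic; the function name and threshold formula are invented for the example:
```ruby
# Illustrative only: repack objects less often as a repository grows, by
# letting the loose-object threshold scale with the total object size.
def needs_object_repack?(loose_object_count, total_object_size_bytes)
  # Hypothetical scaling: a bigger repository tolerates more loose objects
  # before a (costly) repack is triggered.
  threshold = [1024, Math.log2(total_object_size_bytes + 2).floor * 256].max
  loose_object_count > threshold
end

needs_object_repack?(6000, 1 << 20)  # => true: small repository, repack now
needs_object_repack?(6000, 1 << 40)  # => false: large repository tolerates more loose objects
```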
You can change how often Gitaly is asked to optimize a repository:
1. On the left sidebar, at the bottom, select **Admin**.
1. Select **Settings** > **Repository**.
1. Expand **Repository maintenance**.
1. In the **Housekeeping** section, configure the housekeeping options:
   - **Enable automatic repository housekeeping**: Regularly ask Gitaly to run repository optimization. If you
     keep this setting disabled for a long time, Git repository access on your GitLab server becomes
     slower and your repositories use more disk space.
   - **Optimize repository period**: Number of Git pushes after which Gitaly is asked to optimize a repository.
1. Select **Save changes**.
## Running housekeeping tasks
There are different ways in which GitLab runs housekeeping tasks:
- A project's administrator can [manually trigger](#manual-trigger) repository
housekeeping tasks.
- GitLab can automatically schedule housekeeping tasks after a number of Git pushes.
- GitLab can [schedule a job](#scheduled-housekeeping) that runs housekeeping
tasks for all repositories in a configurable time frame.
### Manual trigger
Administrators of repositories can manually trigger housekeeping tasks in a
repository. In general this is not required as GitLab knows to automatically run
housekeeping tasks. The manual trigger can be useful when either:
- A repository is known to require housekeeping.
- Automated push-based scheduling of housekeeping tasks has been disabled.
To trigger housekeeping tasks manually:
1. On the left sidebar, select **Search or go to** and find your project.
1. Select **Settings** > **General**.
1. Expand **Advanced**.
1. Select **Run housekeeping**.
This starts an asynchronous background worker for the project's repository. The
background worker asks Gitaly to perform a number of optimizations.
Housekeeping also [removes unreferenced LFS files](raketasks/cleanup.md#remove-unreferenced-lfs-files)
from your project every `200` pushes, freeing up storage space for your project.
### Prune unreachable objects
Unreachable objects are pruned as part of scheduled housekeeping. However, you can trigger
manual pruning as well. Triggering housekeeping prunes unreachable objects with a grace
period of two weeks. When you manually trigger the pruning of unreachable objects, the
grace period is reduced to 30 minutes.
{{< alert type="warning" >}}
Pruning unreachable objects does not guarantee the removal of leaked secrets and other sensitive information. For information on how to remove secrets that
were committed but not pushed, see the [remove a secret from your commits tutorial](../user/application_security/secret_detection/remove_secrets_tutorial.md).
Additionally, you can [remove blobs individually](../user/project/repository/repository_size.md#remove-blobs). Refer to that documentation for possible
consequences of performing that operation.
If a concurrent process (like `git push`) has created an object but hasn't created
a reference to the object yet, your repository can become corrupted if a reference
to the object is added after the object is deleted. The grace period exists to
reduce the likelihood of such race conditions.
For example, if pushing many large objects frequently over a sometimes very slow connection,
then the risk that comes with pruning unreachable objects is much higher than in a corporate
environment where the project can be accessed only from inside the company with a performant
connection. Consider the project usage profile when using this option and select a quiet period.
{{< /alert >}}
To trigger a manual prune of unreachable objects:
1. On the left sidebar, select **Search or go to** and find your project.
1. Select **Settings** > **General**.
1. Expand **Advanced**.
1. Select **Run housekeeping**.
1. Wait 30 minutes for the operation to complete.
1. Return to the page where you selected **Run housekeeping**, and select **Prune unreachable objects**.
### Scheduled housekeeping
{{< details >}}
- Offering: GitLab Self-Managed
{{< /details >}}
While GitLab automatically performs housekeeping tasks based on the number of
pushes, it does not maintain repositories that don't receive any pushes at all.
As a result, dormant repositories or repositories that are only getting read
requests may not benefit from improvements in the repository housekeeping
strategy.
Administrators can enable a background job that performs housekeeping in all
repositories at a customizable interval to remedy this situation. This
background job processes all repositories hosted by a Gitaly node in a random
order and eagerly performs housekeeping tasks on them. The Gitaly node stops
processing repositories when the run exceeds the configured duration.
#### Configure scheduled housekeeping
Background maintenance of Git repositories is configured in Gitaly. By default,
Gitaly performs background repository maintenance every day at 12:00 noon for a
duration of 10 minutes.
You can change this default in Gitaly configuration.
For environments with Gitaly Cluster (Praefect), the scheduled housekeeping start time can be
staggered across Gitaly nodes so the scheduled housekeeping is not running
simultaneously on multiple nodes.
If a scheduled housekeeping run reaches the `duration` specified, the running tasks are
gracefully canceled. On subsequent scheduled housekeeping runs, Gitaly randomly shuffles
the repository list to process.
The following snippet enables daily background repository maintenance starting at
23:00 for 1 hour for the `default` storage:
{{< tabs >}}
{{< tab title="Self-compiled (source)" >}}
```toml
[daily_maintenance]
start_hour = 23
start_minute = 0
duration = "1h"
storages = ["default"]
```
Use the following snippet to completely disable background repository
maintenance:
```toml
[daily_maintenance]
disabled = true
```
{{< /tab >}}
{{< tab title="Linux package (Omnibus)" >}}
```ruby
gitaly['configuration'] = {
  daily_maintenance: {
    disabled: false,
    start_hour: 23,
    start_minute: 0,
    duration: '1h',
    storages: ['default'],
  },
}
```
Use the following snippet to completely disable background repository
maintenance:
```ruby
gitaly['configuration'] = {
  daily_maintenance: {
    disabled: true,
  },
}
```
{{< /tab >}}
{{< /tabs >}}
When the scheduled housekeeping is executed, you can see the following entries in
your [Gitaly log](logs/_index.md#gitaly-logs):
```json
# When the scheduled housekeeping starts
{"level":"info","msg":"maintenance: daily scheduled","pid":197260,"scheduled":"2023-09-27T13:10:00+13:00","time":"2023-09-27T00:08:31.624Z"}
# When the scheduled housekeeping completes
{"actual_duration":321181874818,"error":null,"level":"info","max_duration":"1h0m0s","msg":"maintenance: daily completed","pid":197260,"time":"2023-09-27T00:15:21.182Z"}
```
The `actual_duration` (in nanoseconds) indicates how long the scheduled maintenance
took to execute. In the previous example, the scheduled housekeeping completed
in just over 5 minutes.
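The conversion from the logged value can be checked directly; a small sketch:
```ruby
# actual_duration from the log entry above is reported in nanoseconds.
actual_duration_ns = 321_181_874_818
minutes = actual_duration_ns / 1_000_000_000.0 / 60
puts format('%.1f minutes', minutes)  # prints "5.4 minutes"
```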
## Object pool repositories
{{< details >}}
- Offering: GitLab Self-Managed
{{< /details >}}
Object pool repositories are used by GitLab to deduplicate objects across forks
of a repository. When creating the first fork, we:
1. Create an object pool repository that contains all objects of the repository
that is about to be forked.
1. Link the repository to this new object pool by using the alternates mechanism of Git.
1. Repack the repository so that it uses objects from the object pool. It thus
can drop its own copy of the objects.
Any forks of this repository can now link against the object pool and thus only
have to keep objects that diverge from the primary repository.
GitLab needs to perform special housekeeping operations in object pools:
- Gitaly cannot ever delete unreachable objects from object pools because they
might be used by any of the forks that are connected to it.
- Gitaly must keep all objects reachable due to the same reason. Object pools
thus maintain references to unreachable "dangling" objects so that they don't
ever get deleted.
- GitLab must update object pools regularly to pull in new objects that have
been added in the primary repository. Otherwise, an object pool becomes
increasingly inefficient at deduplicating objects.
These housekeeping operations are performed by the specialized
`FetchIntoObjectPool` RPC that handles all of these special tasks while also
executing the regular housekeeping tasks we execute for standard Git
repositories.
Object pools are optimized automatically whenever the primary member is
garbage collected. The cadence can therefore be configured using the
same Git GC period in that project.
If you need to manually invoke the RPC from a [Rails console](operations/rails_console.md),
you can call `project.pool_repository.object_pool.fetch`. This is a potentially
long-running task, though Gitaly times out after about 8 hours.
Source: https://docs.gitlab.com/consul
Repository file: https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/consul.md
Date extracted: 2025-08-13
---
stage: GitLab Delivery
group: Self Managed
info: To determine the technical writer assigned to the Stage/Group associated with
  this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: How to set up Consul
description: Configure a Consul cluster.
breadcrumbs:
- doc
- administration
---
|
{{< details >}}
- Tier: Premium, Ultimate
- Offering: GitLab Self-Managed
{{< /details >}}
A Consul cluster consists of both
[server and client agents](https://developer.hashicorp.com/consul/docs/agent).
The servers run on their own nodes and the clients run on other nodes that in
turn communicate with the servers.
GitLab Premium includes a bundled version of [Consul](https://www.consul.io/),
a service networking solution that you can manage by using `/etc/gitlab/gitlab.rb`.
## Prerequisites
Before configuring Consul:
1. Review the [reference architecture](reference_architectures/_index.md#available-reference-architectures)
documentation to determine the number of Consul server nodes you should have.
1. If necessary, ensure the [appropriate ports are open](package_information/defaults.md#ports) in your firewall.
## Configure the Consul nodes
On each Consul server node:
1. Follow the instructions to [install](https://about.gitlab.com/install/)
GitLab by choosing your preferred platform, but do not supply the
`EXTERNAL_URL` value when asked.
1. Edit `/etc/gitlab/gitlab.rb`, and add the following, replacing the values
noted in the `retry_join` section. In the example below there are three
nodes: two denoted by their IP addresses and one by its FQDN. You can use
either notation:
```ruby
# Disable all components except Consul
roles ['consul_role']
# Consul nodes: can be FQDN or IP, separated by a whitespace
consul['configuration'] = {
  server: true,
  retry_join: %w(10.10.10.1 consul1.gitlab.example.com 10.10.10.2)
}
# Disable auto migrations
gitlab_rails['auto_migrate'] = false
```
1. [Reconfigure GitLab](restart_gitlab.md#reconfigure-a-linux-package-installation) for the changes
to take effect.
1. Run the following command to ensure Consul is both configured correctly and
to verify that all server nodes are communicating:
```shell
sudo /opt/gitlab/embedded/bin/consul members
```
The output should be similar to:
```plaintext
Node Address Status Type Build Protocol DC
CONSUL_NODE_ONE XXX.XXX.XXX.YYY:8301 alive server 0.9.2 2 gitlab_consul
CONSUL_NODE_TWO XXX.XXX.XXX.YYY:8301 alive server 0.9.2 2 gitlab_consul
CONSUL_NODE_THREE XXX.XXX.XXX.YYY:8301 alive server 0.9.2 2 gitlab_consul
```
If the results display any nodes with a status that isn't `alive`, or if any
of the three nodes are missing, see the [Troubleshooting section](#troubleshooting-consul).
## Securing the Consul nodes
You can secure communication between the Consul nodes in two ways: with TLS or with gossip encryption.
### TLS encryption
By default, TLS is not enabled for the Consul cluster. The configuration
options and their defaults are:
```ruby
consul['use_tls'] = false
consul['tls_ca_file'] = nil
consul['tls_certificate_file'] = nil
consul['tls_key_file'] = nil
consul['tls_verify_client'] = nil
```
These configuration options apply to both client and server nodes.
To enable TLS on a Consul node, start with `consul['use_tls'] = true`. Depending
on the role of the node (server or client) and your TLS preferences, you must
provide further configuration:
- On a server node you must at least specify `tls_ca_file`,
`tls_certificate_file`, and `tls_key_file`.
- On a client node, when client TLS authentication is disabled on the server
(it is enabled by default), you must at least specify `tls_ca_file`. Otherwise,
you must also pass the client TLS certificate and key using
`tls_certificate_file` and `tls_key_file`.
When TLS is enabled, by default the server uses mTLS and listens on both HTTPS
and HTTP (and TLS and non-TLS RPC). It expects clients to use TLS
authentication. You can disable client TLS authentication by setting
`consul['tls_verify_client'] = false`.
Clients, on the other hand, only use TLS for outgoing connections to server nodes,
and only listen on HTTP (and non-TLS RPC) for incoming requests. You can require
client Consul agents to use TLS for incoming connections by setting
`consul['https_port']` to a non-negative integer (`8501` is Consul's default
HTTPS port). You must also pass `tls_certificate_file` and `tls_key_file` for
this to work. When server nodes use client TLS authentication, the client TLS
certificate and key are used for both TLS authentication and incoming HTTPS
connections.
Consul client nodes do not use TLS client authentication by default (as opposed
to servers); you must enable it explicitly by setting
`consul['tls_verify_client'] = true`.
The following examples show common TLS configurations.
#### Minimal TLS support
In the following example, the server uses TLS for incoming connections (without client TLS authentication).
{{< tabs >}}
{{< tab title="Consul server node" >}}
1. Edit `/etc/gitlab/gitlab.rb`:
```ruby
consul['enable'] = true
consul['configuration'] = {
'server' => true
}
consul['use_tls'] = true
consul['tls_ca_file'] = '/path/to/ca.crt.pem'
consul['tls_certificate_file'] = '/path/to/server.crt.pem'
consul['tls_key_file'] = '/path/to/server.key.pem'
consul['tls_verify_client'] = false
```
1. Reconfigure GitLab:
```shell
sudo gitlab-ctl reconfigure
```
{{< /tab >}}
{{< tab title="Consul client node" >}}
For example, the following can be configured on a Patroni node.
1. Edit `/etc/gitlab/gitlab.rb`:
```ruby
consul['enable'] = true
consul['use_tls'] = true
consul['tls_ca_file'] = '/path/to/ca.crt.pem'
patroni['consul']['url'] = 'http://localhost:8500'
```
1. Reconfigure GitLab:
```shell
sudo gitlab-ctl reconfigure
```
Patroni talks to the local Consul agent which does not use TLS for incoming
connections. Hence the HTTP URL for `patroni['consul']['url']`.
{{< /tab >}}
{{< /tabs >}}
#### Default TLS support
In the following example, the server uses mutual TLS authentication.
{{< tabs >}}
{{< tab title="Consul server node" >}}
1. Edit `/etc/gitlab/gitlab.rb`:
```ruby
consul['enable'] = true
consul['configuration'] = {
'server' => true
}
consul['use_tls'] = true
consul['tls_ca_file'] = '/path/to/ca.crt.pem'
consul['tls_certificate_file'] = '/path/to/server.crt.pem'
consul['tls_key_file'] = '/path/to/server.key.pem'
```
1. Reconfigure GitLab:
```shell
sudo gitlab-ctl reconfigure
```
{{< /tab >}}
{{< tab title="Consul client node" >}}
For example, the following can be configured on a Patroni node.
1. Edit `/etc/gitlab/gitlab.rb`:
```ruby
consul['enable'] = true
consul['use_tls'] = true
consul['tls_ca_file'] = '/path/to/ca.crt.pem'
consul['tls_certificate_file'] = '/path/to/client.crt.pem'
consul['tls_key_file'] = '/path/to/client.key.pem'
patroni['consul']['url'] = 'http://localhost:8500'
```
1. Reconfigure GitLab:
```shell
sudo gitlab-ctl reconfigure
```
Patroni talks to the local Consul agent which does not use TLS for incoming
connections, even though it uses TLS authentication to Consul server nodes.
Hence the HTTP URL for `patroni['consul']['url']`.
{{< /tab >}}
{{< /tabs >}}
#### Full TLS support
In the following example, both client and server use mutual TLS authentication.
The Consul server, client, and Patroni client certificates must be issued by the
same CA for mutual TLS authentication to work.
{{< tabs >}}
{{< tab title="Consul server node" >}}
1. Edit `/etc/gitlab/gitlab.rb`:
```ruby
consul['enable'] = true
consul['configuration'] = {
'server' => true
}
consul['use_tls'] = true
consul['tls_ca_file'] = '/path/to/ca.crt.pem'
consul['tls_certificate_file'] = '/path/to/server.crt.pem'
consul['tls_key_file'] = '/path/to/server.key.pem'
```
1. Reconfigure GitLab:
```shell
sudo gitlab-ctl reconfigure
```
{{< /tab >}}
{{< tab title="Consul client node" >}}
For example, the following can be configured on a Patroni node.
1. Edit `/etc/gitlab/gitlab.rb`:
```ruby
consul['enable'] = true
consul['use_tls'] = true
consul['tls_verify_client'] = true
consul['tls_ca_file'] = '/path/to/ca.crt.pem'
consul['tls_certificate_file'] = '/path/to/client.crt.pem'
consul['tls_key_file'] = '/path/to/client.key.pem'
consul['https_port'] = 8501
patroni['consul']['url'] = 'https://localhost:8501'
patroni['consul']['cacert'] = '/path/to/ca.crt.pem'
patroni['consul']['cert'] = '/opt/tls/patroni.crt.pem'
patroni['consul']['key'] = '/opt/tls/patroni.key.pem'
patroni['consul']['verify'] = true
```
1. Reconfigure GitLab:
```shell
sudo gitlab-ctl reconfigure
```
{{< /tab >}}
{{< /tabs >}}
### Gossip encryption
The Gossip protocol can be encrypted to secure communication between Consul
agents. By default, encryption is not enabled. To enable encryption, a shared
encryption key is required. For convenience, you can generate the key by using
the `gitlab-ctl consul keygen` command. The key must be 32 bytes long,
Base64-encoded, and shared by all agents.
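For reference, a gossip key is just 32 random bytes encoded as Base64. On a
GitLab node, `gitlab-ctl consul keygen` produces one; the sketch below shows
the equivalent with standard tools (an illustration, assuming GNU coreutils):

```shell
# Generate a 32-byte, Base64-encoded gossip encryption key.
# On a GitLab node you can instead run: sudo gitlab-ctl consul keygen
key="$(head -c 32 /dev/urandom | base64 | tr -d '\n')"

# Sanity check: a valid key decodes back to exactly 32 bytes.
decoded_bytes="$(printf '%s' "$key" | base64 -d | wc -c)"
echo "key: $key (decodes to $decoded_bytes bytes)"
```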
The following options work on both client and server nodes.
To enable the gossip protocol:
1. Edit `/etc/gitlab/gitlab.rb`:
```ruby
consul['encryption_key'] = '<base-64-key>'
consul['encryption_verify_incoming'] = true
consul['encryption_verify_outgoing'] = true
```
1. Reconfigure GitLab:
```shell
sudo gitlab-ctl reconfigure
```
To [enable encryption in an existing datacenter](https://developer.hashicorp.com/consul/docs/security/encryption#enable-on-an-existing-consul-datacenter),
manually set these options for a rolling update.
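For example, per Consul's documented procedure, the rolling update can be staged
like this in `/etc/gitlab/gitlab.rb` (a sketch; apply each phase to every agent,
reconfiguring and restarting one node at a time):

```ruby
# Phase 1: distribute the key without enforcing it, so encrypted and
# unencrypted agents can still communicate during the rollout.
consul['encryption_key'] = '<base-64-key>'
consul['encryption_verify_incoming'] = false
consul['encryption_verify_outgoing'] = false

# Phase 2 (after phase 1 is rolled out everywhere): enforce outgoing encryption.
# consul['encryption_verify_outgoing'] = true

# Phase 3 (after phase 2 is rolled out everywhere): enforce incoming encryption.
# consul['encryption_verify_incoming'] = true
```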
## Upgrade the Consul nodes
To upgrade your Consul nodes, upgrade the GitLab package.
Nodes should be:
- Members of a healthy cluster prior to upgrading the Linux package.
- Upgraded one node at a time.
Identify any existing health issues in the cluster by running the following command
on each node. The command returns an empty array if the cluster is healthy:
```shell
curl "http://127.0.0.1:8500/v1/health/state/critical"
```
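As a convenience, that check can be wrapped in a small helper that treats an
empty JSON array as healthy (a sketch; the endpoint URL is the one shown above):

```shell
# is_healthy: succeeds if the critical-health output is an empty JSON array.
is_healthy() {
  [ "$1" = "[]" ]
}

# Example usage against a live agent (endpoint as shown above):
#   if is_healthy "$(curl -s "http://127.0.0.1:8500/v1/health/state/critical")"; then
#     echo "cluster healthy"
#   fi
```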
If the Consul version has changed, you see a notice at the end of `gitlab-ctl reconfigure`
informing you that Consul must be restarted for the new version to be used.
Restart Consul one node at a time:
```shell
sudo gitlab-ctl restart consul
```
Consul nodes communicate using the raft protocol. If the current leader goes
offline, there must be a leader election. A leader node must exist to facilitate
synchronization across the cluster. If too many nodes go offline at the same time,
the cluster loses quorum and doesn't elect a leader due to
[broken consensus](https://developer.hashicorp.com/consul/docs/architecture/consensus).
Consult the [troubleshooting section](#troubleshooting-consul) if the cluster is not
able to recover after the upgrade. The [outage recovery](#outage-recovery) may
be of particular interest.
GitLab uses Consul to store only easily regenerated, transient data. If the
bundled Consul wasn't used by any process other than GitLab itself, you can
[rebuild the cluster from scratch](#recreate-from-scratch).
## Troubleshooting Consul
Below are some operations that can help you debug any issues.
You can see any error logs by running:
```shell
sudo gitlab-ctl tail consul
```
### Check the cluster membership
To determine which nodes are part of the cluster, run the following on any member in the cluster:
```shell
sudo /opt/gitlab/embedded/bin/consul members
```
The output should be similar to:
```plaintext
Node Address Status Type Build Protocol DC
consul-a XX.XX.X.Y:8301 alive server 0.9.0 2 gitlab_consul
consul-b XX.XX.X.Y:8301 alive server 0.9.0 2 gitlab_consul
consul-c XX.XX.X.Y:8301 alive server 0.9.0 2 gitlab_consul
db-a XX.XX.X.Y:8301 alive client 0.9.0 2 gitlab_consul
db-b XX.XX.X.Y:8301 alive client 0.9.0 2 gitlab_consul
```
Ideally all nodes have a `Status` of `alive`.
### Restart Consul
If it is necessary to restart Consul, do so in a controlled manner to maintain
quorum. If quorum is lost, you must follow the Consul
[outage recovery](#outage-recovery) process to recover the cluster.
To be safe, you should restart Consul on only one node at a time to ensure the
cluster remains intact. For larger clusters, it is possible to restart multiple
nodes at a time. See the
[Consul consensus document](https://developer.hashicorp.com/consul/docs/architecture/consensus#deployment-table)
for the number of failures a cluster can tolerate. This is the number of
simultaneous restarts it can sustain.
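The failure tolerance follows directly from Raft quorum arithmetic: a cluster of
`n` servers needs `floor(n/2) + 1` members to maintain quorum, so it tolerates
`n - (floor(n/2) + 1)` simultaneous failures. A quick sketch:

```shell
# Raft quorum size for an n-server cluster: floor(n/2) + 1.
quorum() { echo $(( $1 / 2 + 1 )); }

# Failure (and simultaneous-restart) tolerance: n - quorum(n).
tolerance() { echo $(( $1 - ($1 / 2 + 1) )); }

for n in 1 3 5 7; do
  echo "servers=$n quorum=$(quorum "$n") tolerance=$(tolerance "$n")"
done
```

This reproduces Consul's deployment table: a 3-server cluster tolerates 1
failure, a 5-server cluster tolerates 2, and so on.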
To restart Consul:
```shell
sudo gitlab-ctl restart consul
```
### Consul nodes unable to communicate
By default, Consul attempts to
[bind](https://developer.hashicorp.com/consul/docs/agent/config/config-files#bind_addr) to `0.0.0.0`, but
it advertises the first private IP address on the node for other Consul nodes
to communicate with it. If the other nodes cannot communicate with a node on
this address, then the cluster has a failed status.
If you run into this issue, then messages like the following are output in `gitlab-ctl tail consul`:
```plaintext
2017-09-25_19:53:39.90821 2017/09/25 19:53:39 [WARN] raft: no known peers, aborting election
2017-09-25_19:53:41.74356 2017/09/25 19:53:41 [ERR] agent: failed to sync remote state: No cluster leader
```
To fix this:
1. Pick an address on each node that all of the other nodes can reach.
1. Update your `/etc/gitlab/gitlab.rb`:
```ruby
consul['configuration'] = {
...
bind_addr: 'IP ADDRESS'
}
```
1. Reconfigure GitLab:
```shell
sudo gitlab-ctl reconfigure
```
If you still see the errors, you may have to
[erase the Consul database and reinitialize](#recreate-from-scratch) on the affected node.
### Consul does not start - multiple private IPs
If a node has multiple private IPs, Consul doesn't know which of the private
addresses to advertise, and immediately exits on start.
Messages like the following are output in `gitlab-ctl tail consul`:
```plaintext
2017-11-09_17:41:45.52876 ==> Starting Consul agent...
2017-11-09_17:41:45.53057 ==> Error creating agent: Failed to get advertise address: Multiple private IPs found. Please configure one.
```
To fix this:
1. Pick an address on the node that all of the other nodes can reach.
1. Update your `/etc/gitlab/gitlab.rb`:
```ruby
consul['configuration'] = {
...
bind_addr: 'IP ADDRESS'
}
```
1. Reconfigure GitLab:
```shell
sudo gitlab-ctl reconfigure
```
### Outage recovery
If you have lost enough Consul nodes in the cluster to break quorum, then the cluster
is considered to have failed and cannot function without manual intervention.
In that case, you can either recreate the nodes from scratch or attempt a
recovery.
#### Recreate from scratch
By default, GitLab does not store anything in the Consul node that cannot be
recreated. To erase the Consul database and reinitialize:
```shell
sudo gitlab-ctl stop consul
sudo rm -rf /var/opt/gitlab/consul/data
sudo gitlab-ctl start consul
```
After this, the node should start back up, and the rest of the server agents rejoin.
Shortly after that, the client agents should rejoin as well.
If they do not join, you might also need to erase the Consul data on the client:
```shell
sudo rm -rf /var/opt/gitlab/consul/data
```
#### Recover a failed node
If you have taken advantage of Consul to store other data and want to restore
the failed node, follow the
[Consul guide](https://developer.hashicorp.com/consul/tutorials/operate-consul/recovery-outage)
to recover a failed cluster.
---
url: https://docs.gitlab.com/repository_storage_paths
repo_url: https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/repository_storage_paths.md
date_extracted: 2025-08-13
stage: Data Access
group: Gitaly
info: To determine the technical writer assigned to the Stage/Group associated with
  this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Repository storage
description: How GitLab stores repository data.
breadcrumbs:
  - doc
  - administration
---
{{< details >}}
- Tier: Free, Premium, Ultimate
- Offering: GitLab Self-Managed
{{< /details >}}
GitLab stores [repositories](../user/project/repository/_index.md) on repository storage. Repository
storage is either:
- Physical storage configured with a `gitaly_address` that points to a [Gitaly node](gitaly/_index.md).
- [Virtual storage](gitaly/praefect/_index.md#virtual-storage) that stores repositories on a Gitaly Cluster (Praefect).
{{< alert type="warning" >}}
Repository storage could be configured as a `path` that points directly to the directory where the repositories are
stored. GitLab directly accessing a directory containing repositories is deprecated. You should configure GitLab to
access repositories through a physical or virtual storage.
{{< /alert >}}
For more information on:
- Configuring Gitaly, see [Configure Gitaly](gitaly/configure_gitaly.md).
- Configuring Gitaly Cluster (Praefect), see [Configure Gitaly Cluster (Praefect)](gitaly/praefect/configure.md).
## Hashed storage
{{< history >}}
- Support for legacy storage, where repository paths were generated based on the project path, has been completely removed in GitLab 14.0.
- **Storage name** field [renamed](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/128416) from **Gitaly storage name** and **Relative path** field [renamed](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/128416) from **Gitaly relative path** in GitLab 16.3.
{{< /history >}}
Hashed storage stores projects on disk in a location based on a hash of the project's ID. This makes the folder
structure immutable and eliminates the need to synchronize state from URLs to disk structure. This means that renaming a
group, user, or project:
- Costs only the database transaction.
- Takes effect immediately.
The hash also helps spread the repositories more evenly on the disk. The top-level directory
contains fewer folders than the total number of top-level namespaces.
The hash format is based on the hexadecimal representation of a SHA256, calculated with
`SHA256(project.id)`. The top-level folder uses the first two characters, followed by another folder
with the next two characters. They are both stored in a special `@hashed` folder so they can
co-exist with existing legacy storage projects. For example:
```ruby
# Project's repository:
"@hashed/#{hash[0..1]}/#{hash[2..3]}/#{hash}.git"
# Wiki's repository:
"@hashed/#{hash[0..1]}/#{hash[2..3]}/#{hash}.wiki.git"
```
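The same path can be reproduced outside of Rails with standard tools. For
example, for project ID `16` (a sketch, assuming `sha256sum` from GNU
coreutils):

```shell
# Compute the hashed storage path for a project ID. Hash the ID as a string
# with no trailing newline, matching SHA256(project.id) above.
project_id=16
hash="$(printf '%s' "$project_id" | sha256sum | awk '{print $1}')"
p1="$(printf '%s' "$hash" | cut -c1-2)"
p2="$(printf '%s' "$hash" | cut -c3-4)"
echo "@hashed/$p1/$p2/$hash.git"
# → @hashed/b1/7e/b17ef6d19c7a5b1ee83b907c595526dcb1eb06db8227d650d5dda0a9f4ce8cd9.git
```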
### Translate hashed storage paths
Troubleshooting problems with the Git repositories, adding hooks, and other tasks
require you to translate between the human-readable project name and the hashed
storage path. You can translate:
- From a [project's name to its hashed path](#from-project-name-to-hashed-path).
- From a [hashed path to a project's name](#from-hashed-path-to-project-name).
#### From project name to hashed path
Administrators can look up a project's hashed path from its name or ID using:
- The [**Admin** area](admin_area.md#administering-projects).
- A Rails console.
To look up a project's hashed path in the **Admin** area:
1. On the left sidebar, at the bottom, select **Admin**.
1. Select **Overview > Projects** and select the project.
1. Locate the **Relative path** field. The value is similar to:
```plaintext
"@hashed/b1/7e/b17ef6d19c7a5b1ee83b907c595526dcb1eb06db8227d650d5dda0a9f4ce8cd9.git"
```
To look up a project's hashed path using a Rails console:
1. Start a [Rails console](operations/rails_console.md#starting-a-rails-console-session).
1. Run a command similar to this example (use either the project's ID or its name):
```ruby
Project.find(16).disk_path
Project.find_by_full_path('group/project').disk_path
```
#### From hashed path to project name
Administrators can look up a project's name from its hashed relative path using:
- A Rails console.
- The `config` file in the `*.git` directory.
To look up a project's name using the Rails console:
1. Start a [Rails console](operations/rails_console.md#starting-a-rails-console-session).
1. Run a command similar to this example:
```ruby
ProjectRepository.find_by(disk_path: '@hashed/b1/7e/b17ef6d19c7a5b1ee83b907c595526dcb1eb06db8227d650d5dda0a9f4ce8cd9').project
```
The quoted string in that command is the relative path you can find on your GitLab server, with the
trailing `.git` removed. For example, on a default Linux package installation, the repository is at
`/var/opt/gitlab/git-data/repositories/@hashed/b1/7e/b17ef6d19c7a5b1ee83b907c595526dcb1eb06db8227d650d5dda0a9f4ce8cd9.git`.
The output includes the project ID and the project name. For example:
```plaintext
=> #<Project id:16 it/supportteam/ticketsystem>
```
### Hashed object pools
Object pools are repositories used to deduplicate [forks of public and internal projects](../user/project/repository/forking_workflow.md) and
contain the objects from the source project. Using `objects/info/alternates`, the source project and
forks use the object pool for shared objects. For more information, see
Git object deduplication information in the GitLab development documentation.
Objects are moved from the source project to the object pool when housekeeping is run on the source
project. Object pool repositories are stored like regular repositories, but in a directory called `@pools` instead of `@hashed`:
```ruby
# object pool paths
"@pools/#{hash[0..1]}/#{hash[2..3]}/#{hash}.git"
```
{{< alert type="warning" >}}
Do not run `git prune` or `git gc` in object pool repositories, which are stored in the `@pools` directory.
This can cause data loss in the regular repositories that depend on the object pool.
{{< /alert >}}
### Translate hashed object pool storage paths
To look up a project's object pool using a Rails console:
1. Start a [Rails console](operations/rails_console.md#starting-a-rails-console-session).
1. Run a command similar to the following example:
```ruby
project_id = 1
pool_repository = Project.find(project_id).pool_repository
pool_repository = Project.find_by_full_path('group/project').pool_repository
# Get more details about the pool repository
pool_repository.source_project
pool_repository.member_projects
pool_repository.shard
pool_repository.disk_path
```
### Group wiki storage
Unlike project wikis that are stored in the `@hashed` directory, group wikis are stored in a directory called `@groups`.
Like project wikis, group wikis follow the hashed storage folder convention, but use a hash of the group ID rather than the project ID.
For example:
```ruby
# group wiki paths
"@groups/#{hash[0..1]}/#{hash[2..3]}/#{hash}.wiki.git"
```
### Gitaly Cluster (Praefect) storage
If Gitaly Cluster (Praefect) is used, Praefect manages storage locations. The internal path used by Praefect for the repository
differs from the hashed path. For more information, see
[Praefect-generated replica paths](gitaly/praefect/_index.md#praefect-generated-replica-paths).
### Repository file archive cache
Users can download an archive of a repository in formats such as `.zip` or `.tar.gz` by using either:
- The GitLab UI.
- The [Repositories API](../api/repositories.md#get-file-archive).
GitLab stores this archive in a cache in a directory on the GitLab server.
A background job running on Sidekiq periodically cleans out stale
archives from this directory. For this reason, this directory must be
accessible by both the Sidekiq and GitLab Workhorse services. If Sidekiq
can't access the same directory used by GitLab Workhorse, the [disk containing the directory fills up](https://gitlab.com/gitlab-org/omnibus-gitlab/-/issues/6005).
If you don't want to use a shared mount for Sidekiq and GitLab
Workhorse, you can instead configure a separate `cron` job to delete
files from this directory.
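If you take the separate-job route, the cleanup amounts to deleting stale entries from the cache directory. A minimal Ruby sketch, assuming the default Linux package cache path and an arbitrary six-hour age threshold:

```ruby
require 'fileutils'
require 'find'

# Hypothetical cleanup job: remove archive files older than MAX_AGE.
# CACHE_DIR is the default Linux package location; adjust for your setup.
CACHE_DIR = '/var/opt/gitlab/gitlab-rails/shared/cache/archive'
MAX_AGE = 6 * 60 * 60 # six hours, in seconds

def prune_stale_archives(dir, max_age, now: Time.now)
  pruned = []
  Find.find(dir) do |path|
    next unless File.file?(path)
    # Delete files whose last modification is older than the threshold.
    if now - File.mtime(path) > max_age
      File.delete(path)
      pruned << path
    end
  end
  pruned
end

# Usage (for example, from a cron-invoked script):
# prune_stale_archives(CACHE_DIR, MAX_AGE)
```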
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
The default directory for the file archive cache is `/var/opt/gitlab/gitlab-rails/shared/cache/archive`. You can
configure this with the `gitlab_rails['gitlab_repository_downloads_path']` setting in `/etc/gitlab/gitlab.rb`.
To disable the cache:
1. Set the `WORKHORSE_ARCHIVE_CACHE_DISABLED` environment variable on all nodes that run Puma:
```shell
sudo -e /etc/gitlab/gitlab.rb
```
```ruby
gitlab_rails['env'] = { 'WORKHORSE_ARCHIVE_CACHE_DISABLED' => '1' }
```
1. Reconfigure the updated nodes for the change to take effect:
```shell
sudo gitlab-ctl reconfigure
```
{{< /tab >}}
{{< tab title="Helm chart (Kubernetes)" >}}
The Helm chart stores the cache in `/srv/gitlab/shared/cache/archive`.
The directory cannot be configured.
To disable the cache, you can use `--set gitlab.webservice.extraEnv.WORKHORSE_ARCHIVE_CACHE_DISABLED="1"`, or
specify the following in your values file:
```yaml
gitlab:
webservice:
extraEnv:
WORKHORSE_ARCHIVE_CACHE_DISABLED: "1"
```
{{< /tab >}}
{{< /tabs >}}
### Object storage support
This table shows which storable objects can be stored in each storage type:
| Storable object | Hashed storage | S3 compatible |
|:-----------------|:---------------|:--------------|
| Repository | Yes | - |
| Attachments | Yes | - |
| Avatars | No | - |
| Pages | No | - |
| Docker Registry | No | - |
| CI/CD job logs | No | - |
| CI/CD artifacts | No | Yes |
| CI/CD cache | No | Yes |
| LFS objects | Similar | Yes |
| Repository pools | Yes | - |
Files stored in an S3-compatible endpoint can have the same advantages as
[hashed storage](#hashed-storage), as long as they are not prefixed with
`#{namespace}/#{project_name}`. This is true for CI/CD cache and LFS objects.
#### Avatars
Each file is stored in a directory that matches the `id` assigned to it in the database. The
filename is always `avatar.png` for user avatars. When an avatar is replaced, the `Upload` model is
destroyed and a new one takes its place with a different `id`.
#### CI/CD artifacts
CI/CD artifacts are S3-compatible.
#### LFS objects
[LFS Objects in GitLab](../topics/git/lfs/_index.md) implement a similar
storage pattern using two characters and two-level folders, following the Git implementation:
```ruby
"shared/lfs-objects/#{oid[0..1]}/#{oid[2..3]}/#{oid[4..-1]}"
# For example, for object `oid` `8909029eb962194cfb326259411b22ae3f4a814b5be4f80651735aeef9f3229c`, the path is:
"shared/lfs-objects/89/09/029eb962194cfb326259411b22ae3f4a814b5be4f80651735aeef9f3229c"
```
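The two-level fan-out above can be sketched as a helper. This is a hypothetical illustration; `lfs_object_path` is an assumed name, not a GitLab API:

```ruby
# Sketch of the LFS object path layout described above:
# two characters per level, then the remainder of the OID.
def lfs_object_path(oid)
  "shared/lfs-objects/#{oid[0..1]}/#{oid[2..3]}/#{oid[4..-1]}"
end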
LFS objects are also [S3-compatible](lfs/_index.md#storing-lfs-objects-in-remote-object-storage).
## Configure where new repositories are stored
After you [configure multiple repository storages](https://docs.gitlab.com/omnibus/settings/configuration.html#store-git-data-in-an-alternative-directory), you can choose where new repositories are stored:
1. On the left sidebar, at the bottom, select **Admin**.
1. Select **Settings > Repository**.
1. Expand **Repository storage**.
1. Enter values in the **Storage nodes for new repositories** fields.
1. Select **Save changes**.
Each repository storage path can be assigned a weight from 0-100. When a new project is created,
these weights are used to determine the storage location the repository is created on.
The higher the weight of a given repository storage path relative to other repository storage
paths, the more often it is chosen (`(storage weight) / (sum of all weights) * 100 = chance %`).
By default, if repository weights have not been configured earlier:
- `default` is weighted `100`.
- All other storages are weighted `0`.
{{< alert type="note" >}}
If all storage weights are `0` (for example, when `default` does not exist), GitLab attempts to
create new repositories on `default`, regardless of the configuration and whether `default` exists.
See [the tracking issue](https://gitlab.com/gitlab-org/gitlab/-/issues/36175) for more information.
{{< /alert >}}
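The weighted selection described above can be sketched as follows. This is hypothetical code for illustration, not GitLab's implementation; it also mirrors the `default` fallback from the note:

```ruby
# Weighted random pick over storage weights, as described above.
# Falls back to 'default' when all weights are zero.
def pick_storage(weights)
  total = weights.values.sum
  return 'default' if total.zero?
  point = rand(total)
  weights.each do |name, weight|
    return name if point < weight
    point -= weight
  end
end

# A storage with weight 50 out of a total of 100 is chosen about 50% of the time:
# pick_storage({ 'default' => 50, 'storage2' => 50 })
```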
## Move repositories
To move a repository to a different repository storage (for example, from `default` to `storage2`), use the
same process as [migrating to Gitaly Cluster (Praefect)](gitaly/praefect/_index.md#migrate-to-gitaly-cluster-praefect).
---
stage: none
group: unassigned
info: For assistance with this What's new page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments-to-other-projects-and-subjects.
title: What's new
description: Configure the What's new feature.
breadcrumbs:
- doc
- administration
---
{{< details >}}
- Tier: Free, Premium, Ultimate
- Offering: GitLab Self-Managed, GitLab Dedicated
{{< /details >}}
The **What's new** feature shows users some of the highlights of new features from the last 10 GitLab versions.
All users can see the feature list, but the entries might differ depending on the subscription type:
- Features only available on GitLab.com are not shown on GitLab Self-Managed instances.
- Features only available to GitLab Self-Managed instances are not shown on GitLab.com.
For GitLab Self-Managed, the updated **What's new** is included in the first patch release after a new version. For
example, `13.10.1`.
## Configure What's new
You can configure **What's new** to display features based on the tier, or you can hide it. To configure it:
1. On the left sidebar, at the bottom, select **Admin**.
1. Select **Settings > Preferences**.
1. Expand **What's new**, and choose the required option.
1. Select **Save changes**.
---
stage: Software Supply Chain Security
group: Authentication
info: To determine the technical writer assigned to the Stage/Group associated with
  this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: External users
breadcrumbs:
- doc
- administration
---
{{< details >}}
- Tier: Free, Premium, Ultimate
- Offering: GitLab Self-Managed, GitLab Dedicated
{{< /details >}}
External users have limited access to internal or private groups and projects in the instance. Unlike regular users, external users must be explicitly added to a group or project. However, like regular users, external users are assigned a member role and gain all the associated [permissions](../user/permissions.md#project-members-permissions).
External users:
- Can access public groups, projects, and snippets.
- Can access internal or private groups and projects where they are members.
- Can create subgroups, projects, and snippets in any top-level groups where they are members.
- Cannot create groups, projects, or snippets in their personal namespace.
External users are commonly created when a user outside an organization needs access to only a
specific project. When assigning a role to an external user, you should be aware of the
[project visibility](../user/public_access.md#change-project-visibility) and
[permissions](../user/project/settings/_index.md#configure-project-features-and-permissions)
associated with the role. For example, if an external user is assigned the Guest role for a
private project, they cannot access the code.
{{< alert type="note" >}}
An external user counts as a billable user and consumes a license seat.
{{< /alert >}}
## Create an external user
To create a new external user:
1. On the left sidebar, at the bottom, select **Admin**.
1. Select **Overview > Users**.
1. Select **New user**.
1. In the **Account** section, enter the required account information.
1. Optional. In the **Access** section, configure any project limits or user type settings.
1. Select the **External** checkbox.
1. Select **Create user**.
You can also create external users with:
- [SAML groups](../integration/saml.md#external-groups).
- [LDAP groups](auth/ldap/ldap_synchronization.md#external-groups).
- The [External providers list](../integration/omniauth.md#create-an-external-providers-list).
- The [users API](../api/users.md).
## Make new users external by default
You can configure your instance to make all new users external by default. You can modify these user
accounts later to remove the external designation.
When you configure this feature, you can also define a regular expression used to identify email
addresses. New users with a matching email are excluded and not marked as an external user. This
regular expression must:
- Use the Ruby format.
- Be convertible to JavaScript.
- Have the ignore case flag set (`/regex pattern/i`).
For example:
- `\.int@example\.com$`: Matches email addresses that end with `.int@example.com`.
- `^(?:(?!\.ext@example\.com).)*$\r?`: Matches email addresses that don't include `.ext@example.com`.
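A quick way to sanity-check such a pattern before saving it is to try it against sample addresses. This hypothetical snippet uses the first example pattern above, with the required ignore-case flag:

```ruby
# Check sample addresses against the first example pattern above.
# The pattern must have the ignore-case flag (`i`) set.
pattern = /\.int@example\.com$/i

matches = ['dev.INT@example.com', 'user@example.com'].map do |email|
  !(email =~ pattern).nil?
end
# matches => [true, false]
```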
{{< alert type="warning" >}}
Adding a regular expression can increase the risk of a regular expression denial of service (ReDoS) attack.
{{< /alert >}}
Prerequisites:
- You must be an administrator for the GitLab Self-Managed instance.
To make new users external by default:
1. On the left sidebar, at the bottom, select **Admin**.
1. Select **Settings > General**.
1. Expand the **Account and limit** section.
1. Select the **Make new users external by default** checkbox.
1. Optional. In the **Email exclusion pattern** field, enter a regular expression.
1. Select **Save changes**.
---
stage: Tenant Scale
group: Geo
info: To determine the technical writer assigned to the Stage/Group associated with
  this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: GitLab Silent Mode
breadcrumbs:
- doc
- administration
- silent_mode
---
{{< details >}}
- Tier: Free, Premium, Ultimate
- Offering: GitLab Self-Managed
{{< /details >}}
{{< history >}}
- [Introduced](https://gitlab.com/groups/gitlab-org/-/epics/9826) in GitLab 15.11. This feature was an [experiment](../../policy/development_stages_support.md#experiment).
- Enabling and disabling Silent Mode through the web UI was [introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/131090) in GitLab 16.4.
- [Generally available](../../policy/development_stages_support.md#generally-available) in GitLab 16.6.
{{< /history >}}
Silent Mode allows you to silence outbound communication, such as emails, from GitLab. Silent Mode is not intended for environments that are in use. Two use cases are:
- Validating Geo site promotion. You have a secondary Geo site as part of your
  [disaster recovery](../geo/disaster_recovery/_index.md) solution. As a best
  practice, you regularly test promoting it to become a primary Geo site, to
  ensure your disaster recovery plan actually works. However, you don't want to
  perform an entire failover, because the primary site lives in the region that
  provides the lowest latency to your users, and you don't want downtime during
  every regular test. So you leave the primary site up while you promote the
  secondary site, and start smoke testing the promoted site. Without
  intervention, the promoted site would start emailing users and its push
  mirrors would push changes to external Git repositories. Enabling Silent Mode
  as part of site promotion avoids this issue.
- Validating GitLab backups. You set up a testing instance to test that your
backups restore successfully. As part of the restore, you enable Silent Mode,
for example to avoid sending invalid emails to users.
## Enable Silent Mode
Prerequisites:
- You must have administrator access.
There are multiple ways to enable Silent Mode:
- **Web UI**
1. On the left sidebar, at the bottom, select **Admin**.
1. On the left sidebar, select **Settings > General**.
1. Expand **Silent Mode**, and toggle **Enable Silent Mode**.
1. Changes are saved immediately.
- [**API**](../../api/settings.md):
```shell
curl --request PUT --header "PRIVATE-TOKEN:$ADMIN_TOKEN" "<gitlab-url>/api/v4/application/settings?silent_mode_enabled=true"
```
- [**Rails console**](../operations/rails_console.md#starting-a-rails-console-session):
```ruby
::Gitlab::CurrentSettings.update!(silent_mode_enabled: true)
```
It may take up to a minute to take effect. [Issue 405433](https://gitlab.com/gitlab-org/gitlab/-/issues/405433) proposes removing this delay.
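To confirm the change, the JSON returned by the [settings API](../../api/settings.md) includes a `silent_mode_enabled` boolean. As a minimal sketch of checking such a payload (the helper name is illustrative, not GitLab code):

```ruby
require "json"

# Parse an application settings payload (as returned by
# GET /api/v4/application/settings) and report the Silent Mode state.
# Hypothetical helper for illustration only.
def silent_mode_enabled?(settings_json)
  JSON.parse(settings_json).fetch("silent_mode_enabled", false)
end

puts silent_mode_enabled?('{"silent_mode_enabled": true}')  # true
puts silent_mode_enabled?('{}')                             # false
```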
## Disable Silent Mode
Prerequisites:
- You must have administrator access.
There are multiple ways to disable Silent Mode:
- **Web UI**
1. On the left sidebar, at the bottom, select **Admin**.
1. On the left sidebar, select **Settings > General**.
1. Expand **Silent Mode**, and toggle **Enable Silent Mode**.
1. Changes are saved immediately.
- [**API**](../../api/settings.md):
```shell
curl --request PUT --header "PRIVATE-TOKEN:$ADMIN_TOKEN" "<gitlab-url>/api/v4/application/settings?silent_mode_enabled=false"
```
- [**Rails console**](../operations/rails_console.md#starting-a-rails-console-session):
```ruby
::Gitlab::CurrentSettings.update!(silent_mode_enabled: false)
```
It may take up to a minute to take effect. [Issue 405433](https://gitlab.com/gitlab-org/gitlab/-/issues/405433) proposes removing this delay.
## Behavior of GitLab features in Silent Mode
This section documents the current behavior of GitLab when Silent Mode is enabled. The work for the first iteration of Silent Mode is tracked by [Epic 9826](https://gitlab.com/groups/gitlab-org/-/epics/9826).
When Silent Mode is enabled, a banner is displayed at the top of the page for all users stating the setting is enabled and **All outbound communications are blocked**.
### Outbound communications that are silenced
Outbound communications from the following features are silenced by Silent Mode.
| Feature | Notes |
| ------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| [GitLab Duo](../../user/gitlab_duo_chat/_index.md) | GitLab Duo features cannot contact external language model providers. |
| [Project and group webhooks](../../user/project/integrations/webhooks.md) | Triggering webhook tests via the UI results in HTTP status 500 responses. |
| [System hooks](../system_hooks.md) | |
| [Remote mirrors](../../user/project/repository/mirror/_index.md) | Pushes to and pulls from remote mirrors are skipped. |
| [Executable integrations](../../user/project/integrations/_index.md) | The integrations are not executed. |
| [Service Desk](../../user/project/service_desk/_index.md) | Incoming emails still raise issues, but the users who sent the emails to Service Desk are not notified of issue creation or comments on their issues. |
| Outbound emails | At the moment when an email should be sent by GitLab, it is instead dropped. It is not queued anywhere. |
| Outbound HTTP requests | Many HTTP requests are blocked where features are not blocked or skipped explicitly. These may produce errors. If a particular error is problematic for testing during Silent Mode, consult [GitLab Support](https://about.gitlab.com/support/). |
### Outbound communications that are not silenced
Outbound communications from the following features are not silenced by Silent Mode.
| Feature | Notes |
| ----------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| [Dependency proxy](../packages/dependency_proxy.md) | Pulling images that are not cached will fetch from the source as usual. Consider pull rate limits. |
| [File hooks](../file_hooks.md) | |
| [Server hooks](../server_hooks.md) | |
| [Advanced search](../../integration/advanced_search/elasticsearch.md) | If two GitLab instances are using the same Advanced Search instance, then they can both modify Search data. This is a split-brain scenario which can occur for example after promoting a secondary Geo site while the primary Geo site is live. |
| Snowplow | There is [a proposal to silence these requests](https://gitlab.com/gitlab-org/gitlab/-/issues/409661). |
| [Deprecated Kubernetes Connections](../../user/clusters/agent/_index.md) | There is [a proposal to silence these requests](https://gitlab.com/gitlab-org/gitlab/-/issues/396470). |
| [Container registry webhooks](../packages/container_registry.md#configure-container-registry-notifications) | There is [a proposal to silence these requests](https://gitlab.com/gitlab-org/gitlab/-/issues/409682). |
---
stage: Package
group: Container Registry
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: GitLab Dependency Proxy administration
---
{{< details >}}
- Tier: Free, Premium, Ultimate
- Offering: GitLab Self-Managed
{{< /details >}}
{{< history >}}
- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/7934) in [GitLab Premium](https://about.gitlab.com/pricing/) 11.11.
- [Moved](https://gitlab.com/gitlab-org/gitlab/-/issues/273655) from GitLab Premium to GitLab Free in 13.6.
{{< /history >}}
You can use GitLab as a dependency proxy for frequently-accessed upstream artifacts, including container images and packages.
This is the administration documentation. To learn how to use the
dependency proxies, see:
- The [dependency proxy for container images](../../user/packages/dependency_proxy/_index.md) user guide
- The [virtual registry](../../user/packages/virtual_registry/_index.md) user guide
The GitLab Dependency Proxy:
- Is turned on by default.
- Can be turned off by an administrator.
## Turn off the Dependency Proxy
The Dependency Proxy is turned on by default. If you are an administrator, you
can turn it off by following the instructions that correspond to your GitLab
installation.
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
1. Edit `/etc/gitlab/gitlab.rb` and add the following line:
```ruby
gitlab_rails['dependency_proxy_enabled'] = false
```
1. Save the file and [reconfigure GitLab](../restart_gitlab.md#reconfigure-a-linux-package-installation)
for the changes to take effect.
{{< /tab >}}
{{< tab title="Helm chart (Kubernetes)" >}}
After the installation is complete, update the global `appConfig` to turn off the Dependency Proxy:
```yaml
global:
appConfig:
dependencyProxy:
enabled: false
bucket: gitlab-dependency-proxy
connection:
secret:
key:
```
For more information, see [Configure Charts using Globals](https://docs.gitlab.com/charts/charts/globals.html#configure-appconfig-settings).
{{< /tab >}}
{{< tab title="Self-compiled (source)" >}}
1. After the installation is complete, configure the `dependency_proxy` section in
`config/gitlab.yml`. Set `enabled` to `false` to turn off the Dependency Proxy:
```yaml
dependency_proxy:
enabled: false
```
1. [Restart GitLab](../restart_gitlab.md#self-compiled-installations) for the changes to take effect.
{{< /tab >}}
{{< /tabs >}}
### Multi-node GitLab installations
Follow the steps for Linux package installations for each Web and Sidekiq node.
## Turn on the Dependency Proxy
The Dependency Proxy is turned on by default, but can be turned off by an
administrator. To turn it back on, follow the instructions in
[Turn off the Dependency Proxy](#turn-off-the-dependency-proxy), setting the relevant option to `true` instead of `false`.
## Changing the storage path
By default, the Dependency Proxy files are stored locally, but you can change the default
local location or even use object storage.
### Changing the local storage path
The Dependency Proxy files for Linux package installations are stored under
`/var/opt/gitlab/gitlab-rails/shared/dependency_proxy/` and for source
installations under `shared/dependency_proxy/` (relative to the Git home directory).
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
1. Edit `/etc/gitlab/gitlab.rb` and add the following line:
```ruby
gitlab_rails['dependency_proxy_storage_path'] = "/mnt/dependency_proxy"
```
1. Save the file and [reconfigure GitLab](../restart_gitlab.md#reconfigure-a-linux-package-installation) for the changes to take effect.
{{< /tab >}}
{{< tab title="Self-compiled (source)" >}}
1. Edit the `dependency_proxy` section in `config/gitlab.yml`:
```yaml
dependency_proxy:
enabled: true
storage_path: shared/dependency_proxy
```
1. [Restart GitLab](../restart_gitlab.md#self-compiled-installations) for the changes to take effect.
{{< /tab >}}
{{< /tabs >}}
### Using object storage
Instead of relying on the local storage, you can use the
[consolidated object storage settings](../object_storage.md#configure-a-single-storage-connection-for-all-object-types-consolidated-form).
This section describes the earlier configuration format. [Migration steps still apply](#migrate-local-dependency-proxy-blobs-and-manifests-to-object-storage).
[Read more about using object storage with GitLab](../object_storage.md).
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
1. Edit `/etc/gitlab/gitlab.rb` and add the following lines (uncomment where
necessary):
```ruby
gitlab_rails['dependency_proxy_enabled'] = true
gitlab_rails['dependency_proxy_storage_path'] = "/var/opt/gitlab/gitlab-rails/shared/dependency_proxy"
gitlab_rails['dependency_proxy_object_store_enabled'] = true
gitlab_rails['dependency_proxy_object_store_remote_directory'] = "dependency_proxy" # The bucket name.
gitlab_rails['dependency_proxy_object_store_proxy_download'] = false # Passthrough all downloads via GitLab instead of using Redirects to Object Storage.
gitlab_rails['dependency_proxy_object_store_connection'] = {
##
## If the provider is AWS S3, uncomment the following
##
#'provider' => 'AWS',
#'region' => 'eu-west-1',
#'aws_access_key_id' => 'AWS_ACCESS_KEY_ID',
#'aws_secret_access_key' => 'AWS_SECRET_ACCESS_KEY',
##
## If the provider is other than AWS (an S3-compatible one), uncomment the following
##
#'host' => 's3.amazonaws.com',
#'aws_signature_version' => 4 # For creation of signed URLs. Set to 2 if provider does not support v4.
#'endpoint' => 'https://s3.amazonaws.com' # Useful for S3-compliant services such as DigitalOcean Spaces.
#'path_style' => false # If true, use 'host/bucket_name/object' instead of 'bucket_name.host/object'.
}
```
1. Save the file and [reconfigure GitLab](../restart_gitlab.md#reconfigure-a-linux-package-installation) for the changes to take effect.
{{< /tab >}}
{{< tab title="Self-compiled (source)" >}}
1. Edit the `dependency_proxy` section in `config/gitlab.yml` (uncomment where necessary):
```yaml
dependency_proxy:
enabled: true
##
## The location where Dependency Proxy files are stored (default: shared/dependency_proxy).
##
# storage_path: shared/dependency_proxy
object_store:
enabled: false
remote_directory: dependency_proxy # The bucket name.
# proxy_download: false # Passthrough all downloads via GitLab instead of using Redirects to Object Storage.
connection:
##
## If the provider is AWS S3, use the following
##
provider: AWS
region: us-east-1
aws_access_key_id: AWS_ACCESS_KEY_ID
aws_secret_access_key: AWS_SECRET_ACCESS_KEY
##
## If the provider is other than AWS (an S3-compatible one), comment out the previous 4 lines and use the following instead:
##
# host: 's3.amazonaws.com' # default: s3.amazonaws.com.
# aws_signature_version: 4 # For creation of signed URLs. Set to 2 if provider does not support v4.
# endpoint: 'https://s3.amazonaws.com' # Useful for S3-compliant services such as DigitalOcean Spaces.
# path_style: false # If true, use 'host/bucket_name/object' instead of 'bucket_name.host/object'.
```
1. [Restart GitLab](../restart_gitlab.md#self-compiled-installations) for the changes to take effect.
{{< /tab >}}
{{< /tabs >}}
#### Migrate local Dependency Proxy blobs and manifests to object storage
After [configuring object storage](#using-object-storage),
use the following task to migrate existing Dependency Proxy blobs and manifests from local storage
to remote storage. The processing is done in a background worker and requires no downtime.
- For Linux package installations:
```shell
sudo gitlab-rake "gitlab:dependency_proxy:migrate"
```
- For self-compiled installations:
```shell
RAILS_ENV=production sudo -u git -H bundle exec rake gitlab:dependency_proxy:migrate
```
You can optionally track progress and verify that all Dependency Proxy blobs and manifests migrated successfully using the
[PostgreSQL console](https://docs.gitlab.com/omnibus/settings/database.html#connecting-to-the-bundled-postgresql-database):
- `sudo gitlab-rails dbconsole` for Linux package installations running version 14.1 and earlier.
- `sudo gitlab-rails dbconsole --database main` for Linux package installations running version 14.2 and later.
- `sudo -u git -H psql -d gitlabhq_production` for self-compiled instances.
Verify that `objectstg` (where `file_store = '2'`) has the count of all Dependency Proxy blobs and
manifests for each respective query:
```shell
gitlabhq_production=# SELECT count(*) AS total, sum(case when file_store = '1' then 1 else 0 end) AS filesystem, sum(case when file_store = '2' then 1 else 0 end) AS objectstg FROM dependency_proxy_blobs;
total | filesystem | objectstg
------+------------+-----------
22 | 0 | 22
gitlabhq_production=# SELECT count(*) AS total, sum(case when file_store = '1' then 1 else 0 end) AS filesystem, sum(case when file_store = '2' then 1 else 0 end) AS objectstg FROM dependency_proxy_manifests;
total | filesystem | objectstg
------+------------+-----------
10 | 0 | 10
```
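The same completeness check, expressed as a hedged Ruby sketch: migration is done when every row reports `file_store` equal to `2` (object storage). The helper and sample rows are illustrative only, not GitLab code:

```ruby
# Illustration of the SQL check above: migration is complete when every
# Dependency Proxy row reports file_store == 2 (object storage).
OBJECT_STORAGE = 2

def migration_complete?(rows)
  rows.all? { |row| row[:file_store] == OBJECT_STORAGE }
end

blobs = [{ file_store: 2 }, { file_store: 2 }]
puts migration_complete?(blobs)                        # true
puts migration_complete?(blobs + [{ file_store: 1 }])  # false
```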
Verify that there are no files on disk in the `dependency_proxy` folder:
```shell
sudo find /var/opt/gitlab/gitlab-rails/shared/dependency_proxy -type f | grep -v tmp | wc -l
```
## Changing the JWT expiration
The Dependency Proxy follows the [Docker v2 token authentication flow](https://distribution.github.io/distribution/spec/auth/token/),
issuing the client a JWT to use for pull requests. The token expiration time is
configurable using the application setting `container_registry_token_expire_delay`. It can be changed from the
Rails console:
```ruby
# update the JWT expiration to 30 minutes
ApplicationSetting.update(container_registry_token_expire_delay: 30)
```
The default expiration and the expiration on GitLab.com is 15 minutes.
## Using the dependency proxy behind a proxy
1. Edit `/etc/gitlab/gitlab.rb` and add the following lines:
```ruby
gitlab_workhorse['env'] = {
"http_proxy" => "http://USERNAME:PASSWORD@example.com:8080",
"https_proxy" => "http://USERNAME:PASSWORD@example.com:8080"
}
```
1. Save the file and [reconfigure GitLab](../restart_gitlab.md#reconfigure-a-linux-package-installation) for the changes to take effect.
```
## Changing the JWT expiration
The Dependency Proxy follows the [Docker v2 token authentication flow](https://distribution.github.io/distribution/spec/auth/token/),
issuing the client a JWT to use for pull requests. The token expiration time is configurable
using the `container_registry_token_expire_delay` application setting. It can be changed from the
Rails console:
```ruby
# update the JWT expiration to 30 minutes
ApplicationSetting.update(container_registry_token_expire_delay: 30)
```
The default expiration and the expiration on GitLab.com is 15 minutes.
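As a sanity check, a token's lifetime can be read from its payload. The following sketch builds and decodes an illustrative JWT-style payload (the claim values are made up for the example, not copied from a real registry token):

```ruby
require 'base64'
require 'json'

# Illustrative only: build a JWT-style payload with a 30-minute expiry,
# then decode it the way you would inspect a real token's claims.
issued_at = Time.now.to_i
payload = { 'iss' => 'omnibus-gitlab-issuer', 'iat' => issued_at, 'exp' => issued_at + 30 * 60 }
encoded = Base64.urlsafe_encode64(JSON.generate(payload), padding: false)

decoded = JSON.parse(Base64.urlsafe_decode64(encoded))
lifetime_minutes = (decoded['exp'] - decoded['iat']) / 60
puts "Token lifetime: #{lifetime_minutes} minutes"
# => Token lifetime: 30 minutes
```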
## Using the dependency proxy behind a proxy
1. Edit `/etc/gitlab/gitlab.rb` and add the following lines:
```ruby
gitlab_workhorse['env'] = {
"http_proxy" => "http://USERNAME:PASSWORD@example.com:8080",
"https_proxy" => "http://USERNAME:PASSWORD@example.com:8080"
}
```
1. Save the file and [reconfigure GitLab](../restart_gitlab.md#reconfigure-a-linux-package-installation) for the changes to take effect.
---
stage: Package
group: Container Registry
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Troubleshooting the container registry
---
Before investigating specific issues, try these troubleshooting steps:
1. Verify that the system clock on your Docker client and GitLab server are synchronized (for example, through NTP).
1. For S3-backed registries, verify your IAM permissions and S3 credentials (including region) are correct.
For more information, see the [sample IAM policy](https://distribution.github.io/distribution/storage-drivers/s3/).
1. Check for errors in the registry logs (for example, `/var/log/gitlab/registry/current`) and the GitLab production logs
(for example, `/var/log/gitlab/gitlab-rails/production.log`).
1. Review the NGINX configuration file for the container registry (for example, `/var/opt/gitlab/nginx/conf/gitlab-registry.conf`)
to confirm which port receives requests.
1. Verify that requests are correctly forwarded to the container registry:
```shell
curl --verbose --noproxy "*" https://<hostname>:<port>/v2/_catalog
```
The response should include a line with `Www-Authenticate: Bearer` containing `service="container_registry"`. For example:
```plaintext
< HTTP/1.1 401 Unauthorized
< Server: nginx
< Date: Fri, 07 Mar 2025 08:24:43 GMT
< Content-Type: application/json
< Content-Length: 162
< Connection: keep-alive
< Docker-Distribution-Api-Version: registry/2.0
< Www-Authenticate: Bearer realm="https://<hostname>/jwt/auth",service="container_registry",scope="registry:catalog:*"
< X-Content-Type-Options: nosniff
<
{"errors":[{"code":"UNAUTHORIZED","message":"authentication required","detail":
[{"Type":"registry","Class":"","Name":"catalog","ProjectPath":"","Action":"*"}]}]}
* Connection #0 to host <hostname> left intact
```
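If you want to check the response programmatically, a small parser over the `Www-Authenticate` value is enough. The header string below is illustrative, not live output:

```ruby
# Parse the Www-Authenticate header and confirm the service name.
header = 'Bearer realm="https://gitlab.example.com/jwt/auth",service="container_registry",scope="registry:catalog:*"'

scheme, params = header.split(' ', 2)
fields = params.scan(/(\w+)="([^"]*)"/).to_h

puts scheme            # => Bearer
puts fields['service'] # => container_registry
```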
## Using self-signed certificates with container registry
If you're using a self-signed certificate with your container registry, you
might encounter issues during the CI jobs like the following:
```plaintext
Error response from daemon: Get registry.example.com/v1/users/: x509: certificate signed by unknown authority
```
The Docker daemon running the command expects a cert signed by a recognized CA,
thus the previous error.
While GitLab doesn't support using self-signed certificates with Container
Registry out of the box, it is possible to make it work by
[instructing the Docker daemon to trust the self-signed certificates](https://distribution.github.io/distribution/about/insecure/#use-self-signed-certificates),
mounting the Docker daemon socket and setting `privileged = false` in the GitLab Runner
`config.toml` file. Setting `privileged = true` takes precedence over the mounted Docker daemon socket:
```toml
[runners.docker]
image = "ruby:2.6"
privileged = false
volumes = ["/var/run/docker.sock:/var/run/docker.sock", "/cache"]
```
Additional information about this: [issue 18239](https://gitlab.com/gitlab-org/gitlab-foss/-/issues/18239).
## Docker login attempt fails with: 'token signed by untrusted key'
The [Registry relies on GitLab to validate credentials](container_registry.md#container-registry-architecture).
If the registry fails to authenticate valid login attempts, you get the following error message:
```shell
# docker login gitlab.company.com:4567
Username: user
Password:
Error response from daemon: login attempt to https://gitlab.company.com:4567/v2/ failed with status: 401 Unauthorized
```
And more specifically, this appears in the `/var/log/gitlab/registry/current` log file:
```plaintext
level=info msg="token signed by untrusted key with ID: "TOKE:NL6Q:7PW6:EXAM:PLET:OKEN:BG27:RCIB:D2S3:EXAM:PLET:OKEN""
level=warning msg="error authorizing context: invalid token" go.version=go1.12.7 http.request.host="gitlab.company.com:4567" http.request.id=74613829-2655-4f96-8991-1c9fe33869b8 http.request.method=GET http.request.remoteaddr=10.72.11.20 http.request.uri="/v2/" http.request.useragent="docker/19.03.2 go/go1.12.8 git-commit/6a30dfc kernel/3.10.0-693.2.2.el7.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/19.03.2 \(linux\))"
```
GitLab uses the two halves of a certificate-key pair to sign and validate the authentication token
for the Registry. This message means that the two halves do not match.
Check which files are in use:
- `grep -A6 'auth:' /var/opt/gitlab/registry/config.yml`
```yaml
## Container registry certificate
auth:
token:
realm: https://gitlab.my.net/jwt/auth
service: container_registry
issuer: omnibus-gitlab-issuer
--> rootcertbundle: /var/opt/gitlab/registry/gitlab-registry.crt
autoredirect: false
```
- `grep -A9 'Container Registry' /var/opt/gitlab/gitlab-rails/etc/gitlab.yml`
```yaml
## Container registry key
registry:
enabled: true
host: gitlab.company.com
port: 4567
api_url: http://127.0.0.1:5000 # internal address to the registry, is used by GitLab to directly communicate with API
path: /var/opt/gitlab/gitlab-rails/shared/registry
--> key: /var/opt/gitlab/gitlab-rails/etc/gitlab-registry.key
issuer: omnibus-gitlab-issuer
notification_secret:
```
The output of these `openssl` commands should match, proving that the cert-key pair is a match:
```shell
/opt/gitlab/embedded/bin/openssl x509 -noout -modulus -in /var/opt/gitlab/registry/gitlab-registry.crt | /opt/gitlab/embedded/bin/openssl sha256
/opt/gitlab/embedded/bin/openssl rsa -noout -modulus -in /var/opt/gitlab/gitlab-rails/etc/gitlab-registry.key | /opt/gitlab/embedded/bin/openssl sha256
```
If the two pieces of the certificate do not align, remove the files and run `gitlab-ctl reconfigure`
to regenerate the pair. The pair is recreated using the existing values in `/etc/gitlab/gitlab-secrets.json` if they exist. To generate a new pair,
delete the `registry` section in your `/etc/gitlab/gitlab-secrets.json` before running `gitlab-ctl reconfigure`.
If you have overridden the automatically generated self-signed pair with
your own certificates and have made sure that their contents align, you can delete the 'registry'
section in your `/etc/gitlab/gitlab-secrets.json` and run `gitlab-ctl reconfigure`.
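The modulus comparison above can also be expressed with Ruby's OpenSSL bindings. This self-contained sketch generates a throwaway key and self-signed certificate to show that `check_private_key` is true only for a matching pair (all names are illustrative, not your production files):

```ruby
require 'openssl'

# Throwaway key and self-signed certificate for demonstration.
key  = OpenSSL::PKey::RSA.new(2048)

cert = OpenSSL::X509::Certificate.new
cert.serial     = 1
cert.subject    = cert.issuer = OpenSSL::X509::Name.parse('/CN=gitlab-registry-test')
cert.public_key = key.public_key
cert.not_before = Time.now
cert.not_after  = Time.now + 3600
cert.sign(key, OpenSSL::Digest.new('SHA256'))

# check_private_key is true only when the certificate belongs to the key,
# which is what the matching modulus digests prove.
puts cert.check_private_key(key)                          # => true
puts cert.check_private_key(OpenSSL::PKey::RSA.new(2048)) # => false
```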
## AWS S3 with the GitLab registry error when pushing large images
When using AWS S3 with the GitLab registry, an error may occur when pushing
large images. Look in the Registry log for the following error:
```plaintext
level=error msg="response completed with error" err.code=unknown err.detail="unexpected EOF" err.message="unknown error"
```
To resolve the error, specify a `chunksize` value in the Registry configuration.
Start with a value between `25000000` (25 MB) and `50000000` (50 MB).
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
1. Edit `/etc/gitlab/gitlab.rb`:
```ruby
registry['storage'] = {
's3' => {
'accesskey' => 'AKIAKIAKI',
'secretkey' => 'secret123',
'bucket' => 'gitlab-registry-bucket-AKIAKIAKI',
'chunksize' => 25000000
}
}
```
1. Save the file and [reconfigure GitLab](../restart_gitlab.md#reconfigure-a-linux-package-installation) for the changes to take effect.
{{< /tab >}}
{{< tab title="Self-compiled (source)" >}}
1. Edit `config/gitlab.yml`:
```yaml
storage:
s3:
accesskey: 'AKIAKIAKI'
secretkey: 'secret123'
bucket: 'gitlab-registry-bucket-AKIAKIAKI'
chunksize: 25000000
```
1. Save the file and [restart GitLab](../restart_gitlab.md#self-compiled-installations) for the changes to take effect.
{{< /tab >}}
{{< /tabs >}}
## Supporting older Docker clients
The Docker container registry shipped with GitLab disables the schema1 manifest
by default. If you are still using older Docker clients (1.9 or older), you may
experience an error pushing images. See
[issue 4145](https://gitlab.com/gitlab-org/omnibus-gitlab/-/issues/4145) for more details.
You can add a configuration option for backwards compatibility.
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
1. Edit `/etc/gitlab/gitlab.rb`:
```ruby
registry['compatibility_schema1_enabled'] = true
```
1. Save the file and [reconfigure GitLab](../restart_gitlab.md#reconfigure-a-linux-package-installation) for the changes to take effect.
{{< /tab >}}
{{< tab title="Self-compiled (source)" >}}
1. Edit the YAML configuration file you created when you deployed the registry. Add the following snippet:
```yaml
compatibility:
schema1:
enabled: true
```
1. Restart the registry for the changes to take effect.
{{< /tab >}}
{{< /tabs >}}
## Docker connection error
A Docker connection error can occur when there are special characters in the group,
project, or branch name. Special characters can include:
- Leading underscore
- Trailing hyphen/dash
- Double hyphen/dash
To get around this, you can [change the group path](../../user/group/manage.md#change-a-groups-path),
[change the project path](../../user/project/working_with_projects.md#rename-a-repository) or change the
branch name. Another option is to create a [push rule](../../user/project/repository/push_rules.md) to prevent
this error for the entire instance.
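A quick way to pre-check names is a small predicate like the following. The rules reflect the list above; they are an approximation, not GitLab's exact validation:

```ruby
# Flag path components known to trigger Docker connection errors.
def safe_path_component?(name)
  return false if name.start_with?('_') # leading underscore
  return false if name.end_with?('-')   # trailing hyphen/dash
  return false if name.include?('--')   # double hyphen/dash
  true
end

puts safe_path_component?('my-project')  # => true
puts safe_path_component?('_internal')   # => false
puts safe_path_component?('release-')    # => false
puts safe_path_component?('my--project') # => false
```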
## Image push errors
You might get stuck in retry loops when pushing Docker images, even though `docker login` succeeds.
This issue occurs when NGINX isn't properly forwarding headers to the registry, typically in custom
setups where SSL is offloaded to a third-party reverse proxy.
For more information, see [Docker push through NGINX proxy fails trying to send a 32B layer #970](https://github.com/docker/distribution/issues/970).
To resolve this issue, update your NGINX configuration to enable relative URLs in the registry:
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
1. Edit `/etc/gitlab/gitlab.rb`:
```ruby
registry['env'] = {
"REGISTRY_HTTP_RELATIVEURLS" => true
}
```
1. Save the file and [reconfigure GitLab](../restart_gitlab.md#reconfigure-a-linux-package-installation) for the changes to take effect.
{{< /tab >}}
{{< tab title="Self-compiled (source)" >}}
1. Edit the YAML configuration file you created when you deployed the registry. Add the following snippet:
```yaml
http:
relativeurls: true
```
1. Save the file and [restart GitLab](../restart_gitlab.md#self-compiled-installations) for the changes to take effect.
{{< /tab >}}
{{< tab title="Docker Compose" >}}
1. Edit your `docker-compose.yaml` file:
```yaml
GITLAB_OMNIBUS_CONFIG: |
registry['env'] = {
"REGISTRY_HTTP_RELATIVEURLS" => true
}
```
1. If the issue persists, ensure both URLs use HTTPS:
```yaml
GITLAB_OMNIBUS_CONFIG: |
external_url 'https://git.example.com'
registry_external_url 'https://git.example.com:5050'
```
1. Save the file and restart the container:
```shell
sudo docker restart gitlab
```
{{< /tab >}}
{{< /tabs >}}
## Enable the Registry debug server
You can use the container registry debug server to diagnose problems. The debug endpoint can monitor metrics and health, and can also be used for profiling.
{{< alert type="warning" >}}
Sensitive information may be available from the debug endpoint.
Access to the debug endpoint must be locked down in a production environment.
{{< /alert >}}
The optional debug server can be enabled by setting the registry debug address
in your `gitlab.rb` configuration.
```ruby
registry['debug_addr'] = "localhost:5001"
```
After adding the setting, [reconfigure GitLab](../restart_gitlab.md#reconfigure-a-linux-package-installation) to apply the change.
Use curl to request debug output from the debug server:
```shell
curl "localhost:5001/debug/health"
curl "localhost:5001/debug/vars"
```
## Enable registry debug logs
You can enable debug logs to help troubleshoot issues with the container registry.
{{< alert type="warning" >}}
Debug logs may contain sensitive information such as authentication details, tokens, or repository information.
Enable debug logs only when necessary, and disable them when troubleshooting is complete.
{{< /alert >}}
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
1. Edit `/var/opt/gitlab/registry/config.yml`:
```yaml
level: debug
```
1. Save the file and restart the registry:
```shell
sudo gitlab-ctl restart registry
```
This configuration is temporary and is discarded when you run `gitlab-ctl reconfigure`.
{{< /tab >}}
{{< tab title="Helm chart (Kubernetes)" >}}
1. Export the Helm values:
```shell
helm get values gitlab > gitlab_values.yaml
```
1. Edit `gitlab_values.yaml`:
```yaml
registry:
log:
level: debug
```
1. Save the file and apply the new values:
```shell
helm upgrade -f gitlab_values.yaml gitlab gitlab/gitlab --namespace <namespace>
```
{{< /tab >}}
{{< /tabs >}}
### Enable Registry Prometheus Metrics
If the debug server is enabled, you can also enable Prometheus metrics. This endpoint exposes highly detailed telemetry
related to almost all registry operations.
```ruby
registry['debug'] = {
'prometheus' => {
'enabled' => true,
'path' => '/metrics'
}
}
```
Use curl to request debug output from Prometheus:
```shell
curl "localhost:5001/debug/metrics"
```
## Tags with an empty name
If using [AWS DataSync](https://aws.amazon.com/datasync/)
to copy the registry data to or between S3 buckets, an empty metadata object is created in the root
path of each container repository in the destination bucket. This causes the registry to interpret
such files as a tag that appears with no name in the GitLab UI and API. For more information, see
[this issue](https://gitlab.com/gitlab-org/container-registry/-/issues/341).
To fix this you can do one of two things:
- Use the AWS CLI [`rm`](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3/rm.html)
command to remove the empty objects from the root of **each** affected repository. Pay special
attention to the trailing `/` and make sure **not** to use the `--recursive` option:
```shell
aws s3 rm s3://<bucket>/docker/registry/v2/repositories/<path to repository>/
```
- Use the AWS CLI [`sync`](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3/sync.html)
command to copy the registry data to a new bucket and configure the registry to use it. This
leaves the empty objects behind.
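To get a feel for what these stray objects look like, the following sketch flags zero-byte objects sitting at a repository root. The keys and sizes are made up for the example:

```ruby
# Map of object keys to sizes, as you might list them from the bucket.
objects = {
  'docker/registry/v2/repositories/group/app/' => 0, # stray empty metadata object
  'docker/registry/v2/repositories/group/app/_manifests/tags/v1/current/link' => 71
}

# A zero-byte object whose key ends in '/' at the repository root is the
# artifact DataSync leaves behind.
stray = objects.select { |key, size| key.end_with?('/') && size.zero? }.keys
stray.each { |key| puts "would remove: #{key}" }
```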
## Advanced Troubleshooting
We use a concrete example to illustrate how to
diagnose a problem with the S3 setup.
### Investigate a cleanup policy
If you're unsure why your cleanup policy did or didn't delete a tag, execute the policy line by line
by running the below script from the [Rails console](../operations/rails_console.md).
This can help diagnose problems with the policy.
```ruby
repo = ContainerRepository.find(<repository_id>)
policy = repo.project.container_expiration_policy
tags = repo.tags
tags.map(&:name)
tags.reject!(&:latest?)
tags.map(&:name)
regex_delete = ::Gitlab::UntrustedRegexp.new("\\A#{policy.name_regex}\\z")
regex_retain = ::Gitlab::UntrustedRegexp.new("\\A#{policy.name_regex_keep}\\z")
tags.select! { |tag| regex_delete.match?(tag.name) && !regex_retain.match?(tag.name) }
tags.map(&:name)
now = DateTime.current
tags.sort_by! { |tag| tag.created_at || now }.reverse! # Lengthy operation
tags = tags.drop(policy.keep_n)
tags.map(&:name)
older_than_timestamp = ChronicDuration.parse(policy.older_than).seconds.ago
tags.select! { |tag| tag.created_at && tag.created_at < older_than_timestamp }
tags.map(&:name)
```
- The script builds the list of tags to delete (`tags`).
- `tags.map(&:name)` prints a list of tags to remove. This may be a lengthy operation.
- After each filter, check the list of `tags` to see if it contains the intended tags to destroy.
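To understand the filtering steps without a GitLab instance, the same logic can be simulated with plain Ruby objects. The tag names, regexes, and policy values below are illustrative:

```ruby
# Stand-in for ContainerRepository tags.
Tag = Struct.new(:name, :created_at, :latest) do
  def latest?
    latest
  end
end

now = Time.now
tags = [
  Tag.new('latest',  now,                true),
  Tag.new('v1.0.0',  now - 90 * 86_400,  false),
  Tag.new('v1.1.0',  now - 40 * 86_400,  false),
  Tag.new('dev-old', now - 60 * 86_400,  false),
  Tag.new('dev-new', now - 10 * 86_400,  false)
]

name_regex      = /\A.*\z/  # policy.name_regex: candidates for deletion
name_regex_keep = /\Av.+\z/ # policy.name_regex_keep: retain release tags
keep_n          = 1
older_than_days = 30

tags = tags.reject(&:latest?)
tags = tags.select { |t| t.name.match?(name_regex) && !t.name.match?(name_regex_keep) }
tags = tags.sort_by(&:created_at).reverse.drop(keep_n)
tags = tags.select { |t| t.created_at < now - older_than_days * 86_400 }

puts tags.map(&:name).inspect # tags the policy would delete
# => ["dev-old"]
```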
### Unexpected 403 error during push
A user attempted to enable an S3-backed Registry. The `docker login` step went
fine. However, when pushing an image, the output showed:
```plaintext
The push refers to a repository [s3-testing.myregistry.com:5050/root/docker-test/docker-image]
dc5e59c14160: Pushing [==================================================>] 14.85 kB
03c20c1a019a: Pushing [==================================================>] 2.048 kB
a08f14ef632e: Pushing [==================================================>] 2.048 kB
228950524c88: Pushing 2.048 kB
6a8ecde4cc03: Pushing [==> ] 9.901 MB/205.7 MB
5f70bf18a086: Pushing 1.024 kB
737f40e80b7f: Waiting
82b57dbc5385: Waiting
19429b698a22: Waiting
9436069b92a3: Waiting
error parsing HTTP 403 response body: unexpected end of JSON input: ""
```
This error is ambiguous because it's not clear whether the 403 is coming from the
GitLab Rails application, the Docker Registry, or something else. In this
case, because we know that the login succeeded, we probably need to look
at the communication between the client and the Registry.
The REST API between the Docker client and Registry is described
[in the Docker documentation](https://distribution.github.io/distribution/spec/api/). Usually, one would just
use Wireshark or tcpdump to capture the traffic and see where things went
wrong. However, because all communications between Docker clients and servers
are done over HTTPS, it's a bit difficult to decrypt the traffic quickly even
if you know the private key. What can we do instead?
One way would be to disable HTTPS by setting up an
[insecure Registry](https://distribution.github.io/distribution/about/insecure/). This could introduce a
security hole and is only recommended for local testing. If you have a
production system and can't or don't want to do this, there is another way:
use mitmproxy, which stands for Man-in-the-Middle Proxy.
### mitmproxy
[mitmproxy](https://mitmproxy.org/) allows you to place a proxy between your
client and server to inspect all traffic. One wrinkle is that your system
needs to trust the mitmproxy SSL certificates for this to work.
The following installation instructions assume you are running Ubuntu:
1. [Install mitmproxy](https://docs.mitmproxy.org/stable/overview-installation/).
1. Run `mitmproxy --port 9000` to generate its certificates.
Enter <kbd>CTRL</kbd>-<kbd>C</kbd> to quit.
1. Install the certificate from `~/.mitmproxy` to your system:
```shell
sudo cp ~/.mitmproxy/mitmproxy-ca-cert.pem /usr/local/share/ca-certificates/mitmproxy-ca-cert.crt
sudo update-ca-certificates
```
If successful, the output should indicate that a certificate was added:
```shell
Updating certificates in /etc/ssl/certs... 1 added, 0 removed; done.
Running hooks in /etc/ca-certificates/update.d....done.
```
To verify that the certificates are properly installed, run:
```shell
mitmproxy --port 9000
```
This command runs mitmproxy on port `9000`. In another window, run:
```shell
curl --proxy "http://localhost:9000" "https://httpbin.org/status/200"
```
If everything is set up correctly, information is displayed on the mitmproxy window and
no errors are generated by the curl commands.
### Running the Docker daemon with a proxy
For Docker to connect through a proxy, you must start the Docker daemon with the
proper environment variables. The easiest way is to shut down Docker (for example, `sudo initctl stop docker`)
and then run Docker by hand. As root, run:
```shell
export HTTP_PROXY="http://localhost:9000"
export HTTPS_PROXY="http://localhost:9000"
docker daemon --debug
```
This command launches the Docker daemon and proxies all connections through mitmproxy.
### Running the Docker client
Now that we have mitmproxy and Docker running, we can attempt to sign in and
push a container image. You may need to run as root to do this. For example:
```shell
docker login example.s3.amazonaws.com:5050
docker push example.s3.amazonaws.com:5050/root/docker-test/docker-image
```
In the previous example, we see the following trace on the mitmproxy window:
```plaintext
PUT https://example.s3.amazonaws.com:4567/v2/root/docker-test/blobs/uploads/(UUID)/(QUERYSTRING)
← 201 text/plain [no content] 661ms
HEAD https://example.s3.amazonaws.com:4567/v2/root/docker-test/blobs/sha256:(SHA)
← 307 application/octet-stream [no content] 93ms
HEAD https://example.s3.amazonaws.com:4567/v2/root/docker-test/blobs/sha256:(SHA)
← 307 application/octet-stream [no content] 101ms
HEAD https://example.s3.amazonaws.com:4567/v2/root/docker-test/blobs/sha256:(SHA)
← 307 application/octet-stream [no content] 87ms
HEAD https://amazonaws.example.com/docker/registry/vs/blobs/sha256/dd/(UUID)/data(QUERYSTRING)
← 403 application/xml [no content] 80ms
HEAD https://amazonaws.example.com/docker/registry/vs/blobs/sha256/dd/(UUID)/data(QUERYSTRING)
← 403 application/xml [no content] 62ms
```
This output shows:
- The initial PUT requests went through fine with a `201` status code.
- The `201` redirected the client to the Amazon S3 bucket.
- The HEAD request to the AWS bucket reported a `403 Unauthorized`.
What does this mean? This strongly suggests that the S3 user does not have the right
[permissions to perform a HEAD request](https://docs.aws.amazon.com/AmazonS3/latest/API/API_HeadObject.html).
The solution: check the [IAM permissions again](https://distribution.github.io/distribution/storage-drivers/s3/).
After the right permissions were set, the error went away.
## Missing `gitlab-registry.key` prevents container repository deletion
If you disable your GitLab instance's container registry and try to remove a project that has
container repositories, the following error occurs:
```plaintext
Errno::ENOENT: No such file or directory @ rb_sysopen - /var/opt/gitlab/gitlab-rails/etc/gitlab-registry.key
```
In this case, follow these steps:
1. Temporarily enable the instance-wide setting for the container registry in your `gitlab.rb`:
```ruby
gitlab_rails['registry_enabled'] = true
```
1. Save the file and [reconfigure GitLab](../restart_gitlab.md#reconfigure-a-linux-package-installation)
for the changes to take effect.
1. Try the removal again.
If you still can't remove the repository using the common methods, you can use the
[GitLab Rails console](../operations/rails_console.md)
to remove the project by force:
```ruby
# Path to the project you'd like to remove
prj = Project.find_by_full_path(<project_path>)
# The following will delete the project's container registry, so be sure to double-check the path beforehand!
if prj.has_container_registry_tags?
prj.container_repositories.each { |p| p.destroy }
end
```
## Registry service listens on IPv6 address instead of IPv4
You might see the following error if the `localhost` hostname resolves to an IPv6
loopback address (`::1`) on your GitLab server and GitLab expects the registry service
to be available on the IPv4 loopback address (`127.0.0.1`):
```plaintext
request: "GET /v2/ HTTP/1.1", upstream: "http://[::1]:5000/v2/", host: "registry.example.com:5005"
[error] 1201#0: *13442797 connect() failed (111: Connection refused) while connecting to upstream, client: x.x.x.x, server: registry.example.com, request: "GET /v2/<path> HTTP/1.1", upstream: "http://[::1]:5000/v2/<path>", host: "registry.example.com:5005"
```
To fix the error, change `registry['registry_http_addr']` to an IPv4 address in `/etc/gitlab/gitlab.rb`. For example:
```ruby
registry['registry_http_addr'] = "127.0.0.1:5000"
```
See [issue 5449](https://gitlab.com/gitlab-org/omnibus-gitlab/-/issues/5449) for more details.
## Push failures and high CPU usage with Google Cloud Storage (GCS)
You might get a `502 Bad Gateway` error when pushing container images to a registry that uses GCS as the backend. The registry might also experience CPU usage spikes when pushing large images.
This issue occurs when the registry communicates with GCS using the HTTP/2 protocol.
The workaround is to disable HTTP/2 in your registry deployment by setting the `GODEBUG` environment variable to `http2client=0`.
For more information, see [issue 1425](https://gitlab.com/gitlab-org/container-registry/-/issues/1425).
|
---
stage: Package
group: Container Registry
info: To determine the technical writer assigned to the Stage/Group associated with
this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Troubleshooting the container registry
breadcrumbs:
- doc
- administration
- packages
---
Before investigating specific issues, try these troubleshooting steps:
1. Verify that the system clock on your Docker client and GitLab server are synchronized (for example, through NTP).
1. For S3-backed registries, verify your IAM permissions and S3 credentials (including region) are correct.
For more information, see the [sample IAM policy](https://distribution.github.io/distribution/storage-drivers/s3/).
1. Check for errors in the registry logs (for example, `/var/log/gitlab/registry/current`) and the GitLab production logs
(for example, `/var/log/gitlab/gitlab-rails/production.log`).
1. Review the NGINX configuration file for the container registry (for example, `/var/opt/gitlab/nginx/conf/gitlab-registry.conf`)
to confirm which port receives requests.
1. Verify that requests are correctly forwarded to the container registry:
```shell
curl --verbose --noproxy "*" https://<hostname>:<port>/v2/_catalog
```
The response should include a line with `Www-Authenticate: Bearer` containing `service="container_registry"`. For example:
```plaintext
< HTTP/1.1 401 Unauthorized
< Server: nginx
< Date: Fri, 07 Mar 2025 08:24:43 GMT
< Content-Type: application/json
< Content-Length: 162
< Connection: keep-alive
< Docker-Distribution-Api-Version: registry/2.0
< Www-Authenticate: Bearer realm="https://<hostname>/jwt/auth",service="container_registry",scope="registry:catalog:*"
< X-Content-Type-Options: nosniff
<
{"errors":[{"code":"UNAUTHORIZED","message":"authentication required","detail":
[{"Type":"registry","Class":"","Name":"catalog","ProjectPath":"","Action":"*"}]}]}
* Connection #0 to host <hostname> left intact
```
## Using self-signed certificates with container registry
If you're using a self-signed certificate with your container registry, you
might encounter issues during the CI jobs like the following:
```plaintext
Error response from daemon: Get registry.example.com/v1/users/: x509: certificate signed by unknown authority
```
The Docker daemon running the command expects a cert signed by a recognized CA,
thus the previous error.
While GitLab doesn't support using self-signed certificates with Container
Registry out of the box, it is possible to make it work by
[instructing the Docker daemon to trust the self-signed certificates](https://distribution.github.io/distribution/about/insecure/#use-self-signed-certificates),
mounting the Docker daemon and setting `privileged = false` in the GitLab Runner
`config.toml` file. Setting `privileged = true` takes precedence over the Docker daemon:
```toml
[runners.docker]
image = "ruby:2.6"
privileged = false
volumes = ["/var/run/docker.sock:/var/run/docker.sock", "/cache"]
```
Additional information about this: [issue 18239](https://gitlab.com/gitlab-org/gitlab-foss/-/issues/18239).
## Docker login attempt fails with: 'token signed by untrusted key'
[Registry relies on GitLab to validate credentials](container_registry.md#container-registry-architecture)
If the registry fails to authenticate valid login attempts, you get the following error message:
```shell
# docker login gitlab.company.com:4567
Username: user
Password:
Error response from daemon: login attempt to https://gitlab.company.com:4567/v2/ failed with status: 401 Unauthorized
```
And more specifically, this appears in the `/var/log/gitlab/registry/current` log file:
```plaintext
level=info msg="token signed by untrusted key with ID: "TOKE:NL6Q:7PW6:EXAM:PLET:OKEN:BG27:RCIB:D2S3:EXAM:PLET:OKEN""
level=warning msg="error authorizing context: invalid token" go.version=go1.12.7 http.request.host="gitlab.company.com:4567" http.request.id=74613829-2655-4f96-8991-1c9fe33869b8 http.request.method=GET http.request.remoteaddr=10.72.11.20 http.request.uri="/v2/" http.request.useragent="docker/19.03.2 go/go1.12.8 git-commit/6a30dfc kernel/3.10.0-693.2.2.el7.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/19.03.2 \(linux\))"
```
GitLab uses the contents of the certificate key pair's two sides to encrypt the authentication token
for the Registry. This message means that those contents do not align.
Check which files are in use:
- `grep -A6 'auth:' /var/opt/gitlab/registry/config.yml`
```yaml
## Container registry certificate
auth:
token:
realm: https://gitlab.my.net/jwt/auth
service: container_registry
issuer: omnibus-gitlab-issuer
--> rootcertbundle: /var/opt/gitlab/registry/gitlab-registry.crt
autoredirect: false
```
- `grep -A9 'Container Registry' /var/opt/gitlab/gitlab-rails/etc/gitlab.yml`
```yaml
## Container registry key
registry:
enabled: true
host: gitlab.company.com
port: 4567
api_url: http://127.0.0.1:5000 # internal address to the registry, is used by GitLab to directly communicate with API
path: /var/opt/gitlab/gitlab-rails/shared/registry
--> key: /var/opt/gitlab/gitlab-rails/etc/gitlab-registry.key
issuer: omnibus-gitlab-issuer
notification_secret:
```
The output of these `openssl` commands should match, proving that the cert-key pair is a match:
```shell
/opt/gitlab/embedded/bin/openssl x509 -noout -modulus -in /var/opt/gitlab/registry/gitlab-registry.crt | /opt/gitlab/embedded/bin/openssl sha256
/opt/gitlab/embedded/bin/openssl rsa -noout -modulus -in /var/opt/gitlab/gitlab-rails/etc/gitlab-registry.key | /opt/gitlab/embedded/bin/openssl sha256
```
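The comparison can also be scripted by checking the moduli directly, without the `sha256` step used above for eyeballing. A sketch, assuming the default Linux package paths:

```shell
# Succeed when the certificate and private key share the same RSA modulus.
pair_matches() {
  crt_mod=$(openssl x509 -noout -modulus -in "$1" 2>/dev/null) || return 2
  key_mod=$(openssl rsa -noout -modulus -in "$2" 2>/dev/null) || return 2
  [ "$crt_mod" = "$key_mod" ]
}

# Example with the default Linux package paths:
# pair_matches /var/opt/gitlab/registry/gitlab-registry.crt \
#              /var/opt/gitlab/gitlab-rails/etc/gitlab-registry.key \
#   && echo "pair matches" || echo "pair does NOT match"
```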
If the two pieces of the certificate do not align, remove the files and run `gitlab-ctl reconfigure`
to regenerate the pair. The pair is recreated using the existing values in `/etc/gitlab/gitlab-secrets.json` if they exist. To generate a new pair,
delete the `registry` section in your `/etc/gitlab/gitlab-secrets.json` before running `gitlab-ctl reconfigure`.
If you have overridden the automatically generated self-signed pair with
your own certificates, and have made sure that their contents match, you can delete the `registry`
section in your `/etc/gitlab/gitlab-secrets.json` and run `gitlab-ctl reconfigure`.
## AWS S3 with the GitLab registry error when pushing large images
When using AWS S3 with the GitLab registry, an error may occur when pushing
large images. Look in the Registry log for the following error:
```plaintext
level=error msg="response completed with error" err.code=unknown err.detail="unexpected EOF" err.message="unknown error"
```
To resolve the error, specify a `chunksize` value in the Registry configuration.
Start with a value between `25000000` (25 MB) and `50000000` (50 MB).
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
1. Edit `/etc/gitlab/gitlab.rb`:
```ruby
registry['storage'] = {
's3' => {
'accesskey' => 'AKIAKIAKI',
'secretkey' => 'secret123',
'bucket' => 'gitlab-registry-bucket-AKIAKIAKI',
'chunksize' => 25000000
}
}
```
1. Save the file and [reconfigure GitLab](../restart_gitlab.md#reconfigure-a-linux-package-installation) for the changes to take effect.
{{< /tab >}}
{{< tab title="Self-compiled (source)" >}}
1. Edit `config/gitlab.yml`:
```yaml
storage:
s3:
accesskey: 'AKIAKIAKI'
secretkey: 'secret123'
bucket: 'gitlab-registry-bucket-AKIAKIAKI'
chunksize: 25000000
```
1. Save the file and [restart GitLab](../restart_gitlab.md#self-compiled-installations) for the changes to take effect.
{{< /tab >}}
{{< /tabs >}}
## Supporting older Docker clients
The Docker container registry shipped with GitLab disables the schema1 manifest
by default. If you are still using older Docker clients (1.9 or older), you may
experience an error pushing images. See
[issue 4145](https://gitlab.com/gitlab-org/omnibus-gitlab/-/issues/4145) for more details.
You can add a configuration option for backwards compatibility.
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
1. Edit `/etc/gitlab/gitlab.rb`:
```ruby
registry['compatibility_schema1_enabled'] = true
```
1. Save the file and [reconfigure GitLab](../restart_gitlab.md#reconfigure-a-linux-package-installation) for the changes to take effect.
{{< /tab >}}
{{< tab title="Self-compiled (source)" >}}
1. Edit the YAML configuration file you created when you deployed the registry. Add the following snippet:
```yaml
compatibility:
schema1:
enabled: true
```
1. Restart the registry for the changes to take effect.
{{< /tab >}}
{{< /tabs >}}
## Docker connection error
A Docker connection error can occur when the group, project, or branch name
contains special characters, including:
- Leading underscore
- Trailing hyphen/dash
- Double hyphen/dash
To get around this, you can [change the group path](../../user/group/manage.md#change-a-groups-path),
[change the project path](../../user/project/working_with_projects.md#rename-a-repository) or change the
branch name. Another option is to create a [push rule](../../user/project/repository/push_rules.md) to prevent
this error for the entire instance.
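If you want to check a name before creating it, the problematic patterns can be matched with a shell `case` statement. A sketch:

```shell
# Return success when a group, project, or branch name avoids the
# path patterns that break Docker image references.
docker_path_ok() {
  case "$1" in
    _*|*-|*--*) return 1 ;;  # leading underscore, trailing hyphen, double hyphen
    *) return 0 ;;
  esac
}
```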
## Image push errors
You might get stuck in retry loops when pushing Docker images, even though `docker login` succeeds.
This issue occurs when NGINX isn't properly forwarding headers to the registry, typically in custom
setups where SSL is offloaded to a third-party reverse proxy.
For more information, see [Docker push through NGINX proxy fails trying to send a 32B layer #970](https://github.com/docker/distribution/issues/970).
To resolve this issue, update your NGINX configuration to enable relative URLs in the registry:
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
1. Edit `/etc/gitlab/gitlab.rb`:
```ruby
registry['env'] = {
"REGISTRY_HTTP_RELATIVEURLS" => true
}
```
1. Save the file and [reconfigure GitLab](../restart_gitlab.md#reconfigure-a-linux-package-installation) for the changes to take effect.
{{< /tab >}}
{{< tab title="Self-compiled (source)" >}}
1. Edit the YAML configuration file you created when you deployed the registry. Add the following snippet:
```yaml
http:
relativeurls: true
```
1. Save the file and [restart GitLab](../restart_gitlab.md#self-compiled-installations) for the changes to take effect.
{{< /tab >}}
{{< tab title="Docker Compose" >}}
1. Edit your `docker-compose.yaml` file:
```yaml
GITLAB_OMNIBUS_CONFIG: |
registry['env'] = {
"REGISTRY_HTTP_RELATIVEURLS" => true
}
```
1. If the issue persists, ensure both URLs use HTTPS:
```yaml
GITLAB_OMNIBUS_CONFIG: |
external_url 'https://git.example.com'
registry_external_url 'https://git.example.com:5050'
```
1. Save the file and restart the container:
```shell
sudo docker restart gitlab
```
{{< /tab >}}
{{< /tabs >}}
## Enable the Registry debug server
You can use the container registry debug server to diagnose problems. The debug endpoint exposes metrics, health checks, and profiling data.
{{< alert type="warning" >}}
Sensitive information may be available from the debug endpoint.
Access to the debug endpoint must be locked down in a production environment.
{{< /alert >}}
The optional debug server can be enabled by setting the registry debug address
in your `gitlab.rb` configuration.
```ruby
registry['debug_addr'] = "localhost:5001"
```
After adding the setting, [reconfigure GitLab](../restart_gitlab.md#reconfigure-a-linux-package-installation) to apply the change.
Use curl to request debug output from the debug server:
```shell
curl "localhost:5001/debug/health"
curl "localhost:5001/debug/vars"
```
## Enable registry debug logs
You can enable debug logs to help troubleshoot issues with the container registry.
{{< alert type="warning" >}}
Debug logs may contain sensitive information such as authentication details, tokens, or repository information.
Enable debug logs only when necessary, and disable them when troubleshooting is complete.
{{< /alert >}}
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
1. Edit `/var/opt/gitlab/registry/config.yml`:
```yaml
level: debug
```
1. Save the file and restart the registry:
```shell
sudo gitlab-ctl restart registry
```
This configuration is temporary and is discarded when you run `gitlab-ctl reconfigure`.
{{< /tab >}}
{{< tab title="Helm chart (Kubernetes)" >}}
1. Export the Helm values:
```shell
helm get values gitlab > gitlab_values.yaml
```
1. Edit `gitlab_values.yaml`:
```yaml
registry:
log:
level: debug
```
1. Save the file and apply the new values:
```shell
helm upgrade -f gitlab_values.yaml gitlab gitlab/gitlab --namespace <namespace>
```
{{< /tab >}}
{{< /tabs >}}
### Enable Registry Prometheus Metrics
If the debug server is enabled, you can also enable Prometheus metrics. This endpoint exposes highly detailed telemetry
related to almost all registry operations.
```ruby
registry['debug'] = {
'prometheus' => {
'enabled' => true,
'path' => '/metrics'
}
}
```
Use curl to request debug output from Prometheus:
```shell
curl "localhost:5001/debug/metrics"
```
## Tags with an empty name
If you use [AWS DataSync](https://aws.amazon.com/datasync/)
to copy registry data to or between S3 buckets, an empty metadata object is created in the root
path of each container repository in the destination bucket. The registry interprets each of these
objects as a tag that appears with no name in the GitLab UI and API. For more information, see
[this issue](https://gitlab.com/gitlab-org/container-registry/-/issues/341).
To fix this, do one of the following:
- Use the AWS CLI [`rm`](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3/rm.html)
command to remove the empty objects from the root of **each** affected repository. Pay special
attention to the trailing `/` and make sure **not** to use the `--recursive` option:
```shell
aws s3 rm s3://<bucket>/docker/registry/v2/repositories/<path to repository>/
```
- Use the AWS CLI [`sync`](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3/sync.html)
command to copy the registry data to a new bucket and configure the registry to use it. This
leaves the empty objects behind.
## Advanced troubleshooting
The following concrete example illustrates how to diagnose a problem with an S3 setup.
### Investigate a cleanup policy
If you're unsure why your cleanup policy did or didn't delete a tag, execute the policy line by line
by running the following script from the [Rails console](../operations/rails_console.md).
This can help diagnose problems with the policy.
```ruby
repo = ContainerRepository.find(<repository_id>)
policy = repo.project.container_expiration_policy
tags = repo.tags
tags.map(&:name)
tags.reject!(&:latest?)
tags.map(&:name)
regex_delete = ::Gitlab::UntrustedRegexp.new("\\A#{policy.name_regex}\\z")
regex_retain = ::Gitlab::UntrustedRegexp.new("\\A#{policy.name_regex_keep}\\z")
tags.select! { |tag| regex_delete.match?(tag.name) && !regex_retain.match?(tag.name) }
tags.map(&:name)
now = DateTime.current
tags.sort_by! { |tag| tag.created_at || now }.reverse! # Lengthy operation
tags = tags.drop(policy.keep_n)
tags.map(&:name)
older_than_timestamp = ChronicDuration.parse(policy.older_than).seconds.ago
tags.select! { |tag| tag.created_at && tag.created_at < older_than_timestamp }
tags.map(&:name)
```
- The script builds the list of tags to delete (`tags`).
- `tags.map(&:name)` prints a list of tags to remove. This may be a lengthy operation.
- After each filter, check the list of `tags` to see if it contains the intended tags to destroy.
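The central filter in the script combines a delete regex and a retain regex. The same logic can be reproduced standalone for quick experiments; this sketch uses `grep -E`, with `^` and `$` standing in for Ruby's `\A` and `\z` anchors:

```shell
# A tag is deletable when it matches the delete regex
# and does not match the retain regex.
tag_deletable() {
  tag=$1 delete_re=$2 retain_re=$3
  echo "$tag" | grep -Eq "^(${delete_re})$" &&
    ! echo "$tag" | grep -Eq "^(${retain_re})$"
}
```

For example, with a delete regex of `.*beta.*` and a retain regex of `release-.*`, the tag `1.2.3-beta` is deletable but `release-beta` is kept.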
### Unexpected 403 error during push
A user attempted to enable an S3-backed Registry. The `docker login` step went
fine. However, when pushing an image, the output showed:
```plaintext
The push refers to a repository [s3-testing.myregistry.com:5050/root/docker-test/docker-image]
dc5e59c14160: Pushing [==================================================>] 14.85 kB
03c20c1a019a: Pushing [==================================================>] 2.048 kB
a08f14ef632e: Pushing [==================================================>] 2.048 kB
228950524c88: Pushing 2.048 kB
6a8ecde4cc03: Pushing [==> ] 9.901 MB/205.7 MB
5f70bf18a086: Pushing 1.024 kB
737f40e80b7f: Waiting
82b57dbc5385: Waiting
19429b698a22: Waiting
9436069b92a3: Waiting
error parsing HTTP 403 response body: unexpected end of JSON input: ""
```
This error is ambiguous because it's not clear whether the 403 comes from the
GitLab Rails application, the Docker Registry, or something else. In this
case, because we know that the login succeeded, the next place to look is
the communication between the Docker client and the Registry.
The REST API between the Docker client and Registry is described
[in the Docker documentation](https://distribution.github.io/distribution/spec/api/). Usually, one would just
use Wireshark or tcpdump to capture the traffic and see where things went
wrong. However, because all communications between Docker clients and servers
are done over HTTPS, it's a bit difficult to decrypt the traffic quickly even
if you know the private key. What can we do instead?
One way would be to disable HTTPS by setting up an
[insecure Registry](https://distribution.github.io/distribution/about/insecure/). This could introduce a
security hole and is only recommended for local testing. If you have a
production system and can't or don't want to do this, there is another way:
use mitmproxy, which stands for Man-in-the-Middle Proxy.
### mitmproxy
[mitmproxy](https://mitmproxy.org/) allows you to place a proxy between your
client and server to inspect all traffic. One wrinkle is that your system
needs to trust the mitmproxy SSL certificates for this to work.
The following installation instructions assume you are running Ubuntu:
1. [Install mitmproxy](https://docs.mitmproxy.org/stable/overview-installation/).
1. Run `mitmproxy --port 9000` to generate its certificates.
Enter <kbd>CTRL</kbd>-<kbd>C</kbd> to quit.
1. Install the certificate from `~/.mitmproxy` to your system:
```shell
sudo cp ~/.mitmproxy/mitmproxy-ca-cert.pem /usr/local/share/ca-certificates/mitmproxy-ca-cert.crt
sudo update-ca-certificates
```
If successful, the output should indicate that a certificate was added:
```shell
Updating certificates in /etc/ssl/certs... 1 added, 0 removed; done.
Running hooks in /etc/ca-certificates/update.d....done.
```
To verify that the certificates are properly installed, run:
```shell
mitmproxy --port 9000
```
This command runs mitmproxy on port `9000`. In another window, run:
```shell
curl --proxy "http://localhost:9000" "https://httpbin.org/status/200"
```
If everything is set up correctly, information is displayed on the mitmproxy window and
no errors are generated by the curl commands.
### Running the Docker daemon with a proxy
For Docker to connect through a proxy, you must start the Docker daemon with the
proper environment variables. The easiest way is to shut down Docker (for example, `sudo initctl stop docker`)
and then run the daemon by hand. As root, run:
```shell
export HTTP_PROXY="http://localhost:9000"
export HTTPS_PROXY="http://localhost:9000"
docker daemon --debug
```
This command launches the Docker daemon and proxies all connections through mitmproxy.
### Running the Docker client
Now that we have mitmproxy and Docker running, we can attempt to sign in and
push a container image. You may need to run as root to do this. For example:
```shell
docker login example.s3.amazonaws.com:5050
docker push example.s3.amazonaws.com:5050/root/docker-test/docker-image
```
In the previous example, we see the following trace on the mitmproxy window:
```plaintext
PUT https://example.s3.amazonaws.com:4567/v2/root/docker-test/blobs/uploads/(UUID)/(QUERYSTRING)
← 201 text/plain [no content] 661ms
HEAD https://example.s3.amazonaws.com:4567/v2/root/docker-test/blobs/sha256:(SHA)
← 307 application/octet-stream [no content] 93ms
HEAD https://example.s3.amazonaws.com:4567/v2/root/docker-test/blobs/sha256:(SHA)
← 307 application/octet-stream [no content] 101ms
HEAD https://example.s3.amazonaws.com:4567/v2/root/docker-test/blobs/sha256:(SHA)
← 307 application/octet-stream [no content] 87ms
HEAD https://amazonaws.example.com/docker/registry/vs/blobs/sha256/dd/(UUID)/data(QUERYSTRING)
← 403 application/xml [no content] 80ms
HEAD https://amazonaws.example.com/docker/registry/vs/blobs/sha256/dd/(UUID)/data(QUERYSTRING)
← 403 application/xml [no content] 62ms
```
This output shows:
- The initial PUT request completed with a `201` status code.
- The subsequent HEAD requests to the Registry were answered with `307` redirects to the Amazon S3 bucket.
- The HEAD requests to the AWS bucket returned `403 Forbidden`.
What does this mean? This strongly suggests that the S3 user does not have the right
[permissions to perform a HEAD request](https://docs.aws.amazon.com/AmazonS3/latest/API/API_HeadObject.html).
The solution: check the [IAM permissions again](https://distribution.github.io/distribution/storage-drivers/s3/).
After the right permissions were set, the error went away.
## Missing `gitlab-registry.key` prevents container repository deletion
If you disable your GitLab instance's container registry and try to remove a project that has
container repositories, the following error occurs:
```plaintext
Errno::ENOENT: No such file or directory @ rb_sysopen - /var/opt/gitlab/gitlab-rails/etc/gitlab-registry.key
```
In this case, follow these steps:
1. Temporarily enable the instance-wide setting for the container registry in your `gitlab.rb`:
```ruby
gitlab_rails['registry_enabled'] = true
```
1. Save the file and [reconfigure GitLab](../restart_gitlab.md#reconfigure-a-linux-package-installation)
for the changes to take effect.
1. Try the removal again.
If you still can't remove the repository using the common methods, you can use the
[GitLab Rails console](../operations/rails_console.md)
to remove the project by force:
```ruby
# Path to the project you'd like to remove
prj = Project.find_by_full_path(<project_path>)
# The following will delete the project's container registry, so be sure to double-check the path beforehand!
if prj.has_container_registry_tags?
prj.container_repositories.each { |p| p.destroy }
end
```
## Registry service listens on IPv6 address instead of IPv4
You might see the following error if the `localhost` hostname resolves to an IPv6
loopback address (`::1`) on your GitLab server while GitLab expects the registry service
to be available on the IPv4 loopback address (`127.0.0.1`):
```plaintext
request: "GET /v2/ HTTP/1.1", upstream: "http://[::1]:5000/v2/", host: "registry.example.com:5005"
[error] 1201#0: *13442797 connect() failed (111: Connection refused) while connecting to upstream, client: x.x.x.x, server: registry.example.com, request: "GET /v2/<path> HTTP/1.1", upstream: "http://[::1]:5000/v2/<path>", host: "registry.example.com:5005"
```
To fix the error, change `registry['registry_http_addr']` to an IPv4 address in `/etc/gitlab/gitlab.rb`. For example:
```ruby
registry['registry_http_addr'] = "127.0.0.1:5000"
```
See [issue 5449](https://gitlab.com/gitlab-org/omnibus-gitlab/-/issues/5449) for more details.
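To check how a hostname resolves on the server, a quick sketch:

```shell
# Print the unique addresses a hostname resolves to; if only ::1 appears
# for localhost, NGINX connects to the registry over IPv6.
resolve_addrs() {
  getent ahosts "$1" | awk '{print $1}' | sort -u
}

resolve_addrs localhost
```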
## Push failures and high CPU usage with Google Cloud Storage (GCS)
You might get a `502 Bad Gateway` error when pushing container images to a registry that uses GCS as the backend. The registry might also experience CPU usage spikes when pushing large images.
This issue occurs when the registry communicates with GCS using the HTTP/2 protocol.
The workaround is to disable HTTP/2 in your registry deployment by setting the `GODEBUG` environment variable to `http2client=0`.
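On a Linux package installation, the workaround can be applied in `/etc/gitlab/gitlab.rb`. A sketch (run `sudo gitlab-ctl reconfigure` afterwards, and merge this with any existing `registry['env']` entries):

```ruby
registry['env'] = {
  "GODEBUG" => "http2client=0"
}
```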
For more information, see [issue 1425](https://gitlab.com/gitlab-org/container-registry/-/issues/1425).
---
stage: Package
group: Package Registry
info: To determine the technical writer assigned to the Stage/Group associated with
  this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: GitLab package registry administration
description: Administer the package registry.
---
{{< details >}}
- Tier: Free, Premium, Ultimate
- Offering: GitLab Self-Managed
{{< /details >}}
To use GitLab as a private repository for a variety of common package managers, use the package registry.
You can build and publish
packages, which can be consumed as dependencies in downstream projects.
## Supported formats
The package registry supports the following formats:
| Package type | GitLab version |
|--------------------------------------------------------------------|----------------|
| [Composer](../../user/packages/composer_repository/_index.md) | 13.2+ |
| [Conan 1](../../user/packages/conan_1_repository/_index.md) | 12.6+ |
| [Conan 2](../../user/packages/conan_2_repository/_index.md) | 18.1+ |
| [Go](../../user/packages/go_proxy/_index.md) | 13.1+ |
| [Maven](../../user/packages/maven_repository/_index.md) | 11.3+ |
| [npm](../../user/packages/npm_registry/_index.md) | 11.7+ |
| [NuGet](../../user/packages/nuget_repository/_index.md) | 12.8+ |
| [PyPI](../../user/packages/pypi_repository/_index.md) | 12.10+ |
| [Generic packages](../../user/packages/generic_packages/_index.md) | 13.5+ |
| [Helm Charts](../../user/packages/helm_repository/_index.md) | 14.1+ |
The package registry is also used to store [model registry data](../../user/project/ml/model_registry/_index.md).
## Accepting contributions
The following table lists package formats that are not supported.
Consider contributing to GitLab to add support for these formats.
<!-- vale gitlab_base.Spelling = NO -->
| Format | Status |
| ------ | ------ |
| Chef | [#36889](https://gitlab.com/gitlab-org/gitlab/-/issues/36889) |
| CocoaPods | [#36890](https://gitlab.com/gitlab-org/gitlab/-/issues/36890) |
| Conda | [#36891](https://gitlab.com/gitlab-org/gitlab/-/issues/36891) |
| CRAN | [#36892](https://gitlab.com/gitlab-org/gitlab/-/issues/36892) |
| Debian | [Draft: Merge request](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/50438) |
| Opkg | [#36894](https://gitlab.com/gitlab-org/gitlab/-/issues/36894) |
| P2 | [#36895](https://gitlab.com/gitlab-org/gitlab/-/issues/36895) |
| Puppet | [#36897](https://gitlab.com/gitlab-org/gitlab/-/issues/36897) |
| RPM | [#5932](https://gitlab.com/gitlab-org/gitlab/-/issues/5932) |
| RubyGems | [#803](https://gitlab.com/gitlab-org/gitlab/-/issues/803) |
| SBT | [#36898](https://gitlab.com/gitlab-org/gitlab/-/issues/36898) |
| Terraform | [Draft: Merge request](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/18834) |
| Vagrant | [#36899](https://gitlab.com/gitlab-org/gitlab/-/issues/36899) |
<!-- vale gitlab_base.Spelling = YES -->
## Rate limits
When downloading packages as dependencies in downstream projects, many requests are made through the
Packages API. You may therefore reach enforced user and IP rate limits. To address this issue, you
can define specific rate limits for the Packages API. For more details, see [package registry rate limits](../settings/package_registry_rate_limits.md).
## Enable or disable the package registry
The package registry is enabled by default. To disable it:
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
1. Edit `/etc/gitlab/gitlab.rb`:
```ruby
# Change to true to enable packages - enabled by default if not defined
gitlab_rails['packages_enabled'] = false
```
1. Save the file and reconfigure GitLab:
```shell
sudo gitlab-ctl reconfigure
```
{{< /tab >}}
{{< tab title="Helm chart (Kubernetes)" >}}
1. Export the Helm values:
```shell
helm get values gitlab > gitlab_values.yaml
```
1. Edit `gitlab_values.yaml`:
```yaml
global:
appConfig:
packages:
enabled: false
```
1. Save the file and apply the new values:
```shell
helm upgrade -f gitlab_values.yaml gitlab gitlab/gitlab
```
{{< /tab >}}
{{< tab title="Docker" >}}
1. Edit `docker-compose.yml`:
```yaml
version: "3.6"
services:
gitlab:
environment:
GITLAB_OMNIBUS_CONFIG: |
gitlab_rails['packages_enabled'] = false
```
1. Save the file and restart GitLab:
```shell
docker compose up -d
```
{{< /tab >}}
{{< tab title="Self-compiled (source)" >}}
1. Edit `/home/git/gitlab/config/gitlab.yml`:
```yaml
production: &base
packages:
enabled: false
```
1. Save the file and restart GitLab:
```shell
# For systems running systemd
sudo systemctl restart gitlab.target
# For systems running SysV init
sudo service gitlab restart
```
{{< /tab >}}
{{< /tabs >}}
## Change the storage path
By default, the packages are stored locally, but you can change the default
local location or even use object storage.
### Change the local storage path
By default, the packages are stored in a local path, relative to the GitLab
installation:
- Linux package (Omnibus): `/var/opt/gitlab/gitlab-rails/shared/packages/`
- Self-compiled (source): `/home/git/gitlab/shared/packages/`
To change the local storage path:
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
1. Edit `/etc/gitlab/gitlab.rb` and add the following line:
```ruby
gitlab_rails['packages_storage_path'] = "/mnt/packages"
```
1. Save the file and reconfigure GitLab:
```shell
sudo gitlab-ctl reconfigure
```
{{< /tab >}}
{{< tab title="Self-compiled (source)" >}}
1. Edit `/home/git/gitlab/config/gitlab.yml`:
```yaml
production: &base
packages:
enabled: true
storage_path: /mnt/packages
```
1. Save the file and restart GitLab:
```shell
# For systems running systemd
sudo systemctl restart gitlab.target
# For systems running SysV init
sudo service gitlab restart
```
{{< /tab >}}
{{< /tabs >}}
If you already had packages stored in the old storage path, move everything
from the old to the new location to ensure existing packages stay accessible:
```shell
mv /var/opt/gitlab/gitlab-rails/shared/packages/* /mnt/packages/
```
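A more cautious alternative to a one-shot `mv` is to copy the files, verify that the counts match, and only then remove the originals. A sketch with a hypothetical helper:

```shell
# Copy package files to the new location and verify nothing was missed.
migrate_packages() {
  src=$1 dst=$2
  mkdir -p "$dst"
  cp -a "$src"/. "$dst"/
  src_count=$(find "$src" -type f | wc -l)
  dst_count=$(find "$dst" -type f | wc -l)
  [ "$src_count" -eq "$dst_count" ]
}

# Example:
# migrate_packages /var/opt/gitlab/gitlab-rails/shared/packages /mnt/packages \
#   && echo "counts match; safe to remove the old files"
```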
Docker and Kubernetes do not use local storage.
- For the Helm chart (Kubernetes): Use object storage instead.
- For Docker: The `/var/opt/gitlab/` directory is already
mounted in a directory on the host. There's no need to change the local
storage path inside the container.
### Use object storage
Instead of relying on the local storage, you can use an object storage to store
packages.
For more information, see how to use the
[consolidated object storage settings](../object_storage.md#configure-a-single-storage-connection-for-all-object-types-consolidated-form).
### Migrate packages between object storage and local storage
After configuring object storage, you can use the following tasks to migrate packages between local and remote storage. The processing is done in a background worker and requires **no downtime**.
#### Migrate to object storage
1. Migrate the packages to object storage:
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
```shell
sudo gitlab-rake "gitlab:packages:migrate"
```
{{< /tab >}}
{{< tab title="Self-compiled (source)" >}}
```shell
RAILS_ENV=production sudo -u git -H bundle exec rake gitlab:packages:migrate
```
{{< /tab >}}
{{< /tabs >}}
1. Track the progress and verify that all packages migrated successfully using the PostgreSQL console:
{{< tabs >}}
{{< tab title="Linux package (Omnibus) 14.1 and earlier" >}}
```shell
sudo gitlab-rails dbconsole
```
{{< /tab >}}
{{< tab title="Linux package (Omnibus) 14.2 and later" >}}
```shell
sudo gitlab-rails dbconsole --database main
```
{{< /tab >}}
{{< tab title="Self-compiled (source)" >}}
```shell
RAILS_ENV=production sudo -u git -H psql -d gitlabhq_production
```
{{< /tab >}}
{{< /tabs >}}
1. Verify that all packages migrated to object storage with the following SQL query. The number of `objectstg` should be the same as `total`:
```sql
SELECT count(*) AS total,
sum(case when file_store = '1' then 1 else 0 end) AS filesystem,
sum(case when file_store = '2' then 1 else 0 end) AS objectstg
FROM packages_package_files;
```
Example output:
```plaintext
total | filesystem | objectstg
------+------------+-----------
34 | 0 | 34
```
1. Finally, verify that there are no files on disk in the `packages` directory:
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
```shell
sudo find /var/opt/gitlab/gitlab-rails/shared/packages -type f | grep -v tmp | wc -l
```
{{< /tab >}}
{{< tab title="Self-compiled (source)" >}}
```shell
sudo -u git find /home/git/gitlab/shared/packages -type f | grep -v tmp | wc -l
```
{{< /tab >}}
{{< /tabs >}}
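The final check from the SQL query can also be scripted. This sketch takes a result row such as `34 | 0 | 34` as a string and verifies that `objectstg` equals `total`:

```shell
# Given a result row like "34 | 0 | 34", succeed when every package file
# is in object storage (objectstg equals total).
migration_complete() {
  total=$(echo "$1" | awk -F'|' '{gsub(/ /, "", $1); print $1}')
  objectstg=$(echo "$1" | awk -F'|' '{gsub(/ /, "", $3); print $3}')
  [ "$total" -eq "$objectstg" ]
}
```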
#### Migrate from object storage to local storage
1. Migrate the packages from object storage to local storage:
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
```shell
sudo gitlab-rake "gitlab:packages:migrate[local]"
```
{{< /tab >}}
{{< tab title="Self-compiled (source)" >}}
```shell
RAILS_ENV=production sudo -u git -H bundle exec rake "gitlab:packages:migrate[local]"
```
{{< /tab >}}
{{< /tabs >}}
1. Track the progress and verify that all packages migrated successfully using the PostgreSQL console:
{{< tabs >}}
{{< tab title="Linux package (Omnibus) 14.1 and earlier" >}}
```shell
sudo gitlab-rails dbconsole
```
{{< /tab >}}
{{< tab title="Linux package (Omnibus) 14.2 and later" >}}
```shell
sudo gitlab-rails dbconsole --database main
```
{{< /tab >}}
{{< tab title="Self-compiled (source)" >}}
```shell
RAILS_ENV=production sudo -u git -H psql -d gitlabhq_production
```
{{< /tab >}}
{{< /tabs >}}
1. Verify that all packages migrated to local storage with the following SQL query. The number of `filesystem` should be the same as `total`:
```sql
SELECT count(*) AS total,
sum(case when file_store = '1' then 1 else 0 end) AS filesystem,
sum(case when file_store = '2' then 1 else 0 end) AS objectstg
FROM packages_package_files;
```
Example output:
```plaintext
total | filesystem | objectstg
------+------------+-----------
34 | 34 | 0
```
1. Finally, verify that the files exist in the `packages` directory:
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
```shell
sudo find /var/opt/gitlab/gitlab-rails/shared/packages -type f | grep -v tmp | wc -l
```
{{< /tab >}}
{{< tab title="Self-compiled (source)" >}}
```shell
sudo -u git find /home/git/gitlab/shared/packages -type f | grep -v tmp | wc -l
```
{{< /tab >}}
{{< /tabs >}}
|
---
stage: Package
group: Package Registry
info: To determine the technical writer assigned to the Stage/Group associated with
this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: GitLab package registry administration
description: Administer the package registry.
breadcrumbs:
- doc
- administration
- packages
---
{{< details >}}
- Tier: Free, Premium, Ultimate
- Offering: GitLab Self-Managed
{{< /details >}}
To use GitLab as a private repository for a variety of common package managers, use the package registry.
You can build and publish
packages, which can be consumed as dependencies in downstream projects.
## Supported formats
The package registry supports the following formats:
| Package type | GitLab version |
|--------------------------------------------------------------------|----------------|
| [Composer](../../user/packages/composer_repository/_index.md) | 13.2+ |
| [Conan 1](../../user/packages/conan_1_repository/_index.md) | 12.6+ |
| [Conan 2](../../user/packages/conan_2_repository/_index.md) | 18.1+ |
| [Go](../../user/packages/go_proxy/_index.md) | 13.1+ |
| [Maven](../../user/packages/maven_repository/_index.md) | 11.3+ |
| [npm](../../user/packages/npm_registry/_index.md) | 11.7+ |
| [NuGet](../../user/packages/nuget_repository/_index.md) | 12.8+ |
| [PyPI](../../user/packages/pypi_repository/_index.md) | 12.10+ |
| [Generic packages](../../user/packages/generic_packages/_index.md) | 13.5+ |
| [Helm Charts](../../user/packages/helm_repository/_index.md) | 14.1+ |
The package registry is also used to store [model registry data](../../user/project/ml/model_registry/_index.md).
## Accepting contributions
The following table lists package formats that are not supported.
Consider contributing to GitLab to add support for these formats.
<!-- vale gitlab_base.Spelling = NO -->
| Format | Status |
| ------ | ------ |
| Chef | [#36889](https://gitlab.com/gitlab-org/gitlab/-/issues/36889) |
| CocoaPods | [#36890](https://gitlab.com/gitlab-org/gitlab/-/issues/36890) |
| Conda | [#36891](https://gitlab.com/gitlab-org/gitlab/-/issues/36891) |
| CRAN | [#36892](https://gitlab.com/gitlab-org/gitlab/-/issues/36892) |
| Debian | [Draft: Merge request](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/50438) |
| Opkg | [#36894](https://gitlab.com/gitlab-org/gitlab/-/issues/36894) |
| P2 | [#36895](https://gitlab.com/gitlab-org/gitlab/-/issues/36895) |
| Puppet | [#36897](https://gitlab.com/gitlab-org/gitlab/-/issues/36897) |
| RPM | [#5932](https://gitlab.com/gitlab-org/gitlab/-/issues/5932) |
| RubyGems | [#803](https://gitlab.com/gitlab-org/gitlab/-/issues/803) |
| SBT | [#36898](https://gitlab.com/gitlab-org/gitlab/-/issues/36898) |
| Terraform | [Draft: Merge request](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/18834) |
| Vagrant | [#36899](https://gitlab.com/gitlab-org/gitlab/-/issues/36899) |
<!-- vale gitlab_base.Spelling = YES -->
## Rate limits
When downloading packages as dependencies in downstream projects, many requests are made through the
Packages API. You may therefore reach enforced user and IP rate limits. To address this issue, you
can define specific rate limits for the Packages API. For more details, see [package registry rate limits](../settings/package_registry_rate_limits.md).
## Enable or disable the package registry
The package registry is enabled by default. To disable it:
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
1. Edit `/etc/gitlab/gitlab.rb`:
```ruby
# Packages are enabled by default. Set to false to disable the package registry.
gitlab_rails['packages_enabled'] = false
```
1. Save the file and reconfigure GitLab:
```shell
sudo gitlab-ctl reconfigure
```
{{< /tab >}}
{{< tab title="Helm chart (Kubernetes)" >}}
1. Export the Helm values:
```shell
helm get values gitlab > gitlab_values.yaml
```
1. Edit `gitlab_values.yaml`:
```yaml
global:
appConfig:
packages:
enabled: false
```
1. Save the file and apply the new values:
```shell
helm upgrade -f gitlab_values.yaml gitlab gitlab/gitlab
```
{{< /tab >}}
{{< tab title="Docker" >}}
1. Edit `docker-compose.yml`:
```yaml
version: "3.6"
services:
gitlab:
environment:
GITLAB_OMNIBUS_CONFIG: |
gitlab_rails['packages_enabled'] = false
```
1. Save the file and restart GitLab:
```shell
docker compose up -d
```
{{< /tab >}}
{{< tab title="Self-compiled (source)" >}}
1. Edit `/home/git/gitlab/config/gitlab.yml`:
```yaml
production: &base
packages:
enabled: false
```
1. Save the file and restart GitLab:
```shell
# For systems running systemd
sudo systemctl restart gitlab.target
# For systems running SysV init
sudo service gitlab restart
```
{{< /tab >}}
{{< /tabs >}}
## Change the storage path
By default, the packages are stored locally, but you can change the default
local location or even use object storage.
### Change the local storage path
By default, the packages are stored in a local path, relative to the GitLab
installation:
- Linux package (Omnibus): `/var/opt/gitlab/gitlab-rails/shared/packages/`
- Self-compiled (source): `/home/git/gitlab/shared/packages/`
To change the local storage path:
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
1. Edit `/etc/gitlab/gitlab.rb` and add the following line:
```ruby
gitlab_rails['packages_storage_path'] = "/mnt/packages"
```
1. Save the file and reconfigure GitLab:
```shell
sudo gitlab-ctl reconfigure
```
{{< /tab >}}
{{< tab title="Self-compiled (source)" >}}
1. Edit `/home/git/gitlab/config/gitlab.yml`:
```yaml
production: &base
packages:
enabled: true
storage_path: /mnt/packages
```
1. Save the file and restart GitLab:
```shell
# For systems running systemd
sudo systemctl restart gitlab.target
# For systems running SysV init
sudo service gitlab restart
```
{{< /tab >}}
{{< /tabs >}}
If you already had packages stored in the old storage path, move everything
from the old to the new location to ensure existing packages stay accessible:
```shell
mv /var/opt/gitlab/gitlab-rails/shared/packages/* /mnt/packages/
```
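If you prefer to copy first and verify before deleting the old data, an `rsync`-based sketch (using the same default Linux package paths) could look like this:

```shell
# Copy packages while preserving permissions and ownership.
sudo rsync -a /var/opt/gitlab/gitlab-rails/shared/packages/ /mnt/packages/

# Verify the file counts match before removing the old copy.
sudo find /var/opt/gitlab/gitlab-rails/shared/packages -type f | wc -l
sudo find /mnt/packages -type f | wc -l
```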
Docker and Kubernetes do not use local storage.
- For the Helm chart (Kubernetes): Use object storage instead.
- For Docker: The `/var/opt/gitlab/` directory is already
mounted in a directory on the host. There's no need to change the local
storage path inside the container.
### Use object storage
Instead of relying on the local storage, you can use an object storage to store
packages.
For more information, see how to use the
[consolidated object storage settings](../object_storage.md#configure-a-single-storage-connection-for-all-object-types-consolidated-form).
### Migrate packages between object storage and local storage
After configuring object storage, you can use the following tasks to migrate packages between local and remote storage. The processing is done in a background worker and requires **no downtime**.
#### Migrate to object storage
1. Migrate the packages to object storage:
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
```shell
sudo gitlab-rake "gitlab:packages:migrate"
```
{{< /tab >}}
{{< tab title="Self-compiled (source)" >}}
```shell
RAILS_ENV=production sudo -u git -H bundle exec rake gitlab:packages:migrate
```
{{< /tab >}}
{{< /tabs >}}
1. Track the progress and verify that all packages migrated successfully using the PostgreSQL console:
{{< tabs >}}
{{< tab title="Linux package (Omnibus) 14.1 and earlier" >}}
```shell
sudo gitlab-rails dbconsole
```
{{< /tab >}}
{{< tab title="Linux package (Omnibus) 14.2 and later" >}}
```shell
sudo gitlab-rails dbconsole --database main
```
{{< /tab >}}
{{< tab title="Self-compiled (source)" >}}
```shell
RAILS_ENV=production sudo -u git -H psql -d gitlabhq_production
```
{{< /tab >}}
{{< /tabs >}}
1. Verify that all packages migrated to object storage with the following SQL query. The number of `objectstg` should be the same as `total`:
```sql
SELECT count(*) AS total,
sum(case when file_store = '1' then 1 else 0 end) AS filesystem,
sum(case when file_store = '2' then 1 else 0 end) AS objectstg
FROM packages_package_files;
```
Example output:
```plaintext
total | filesystem | objectstg
------+------------+-----------
34 | 0 | 34
```
1. Finally, verify that there are no files on disk in the `packages` directory:
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
```shell
sudo find /var/opt/gitlab/gitlab-rails/shared/packages -type f | grep -v tmp | wc -l
```
{{< /tab >}}
{{< tab title="Self-compiled (source)" >}}
```shell
sudo -u git find /home/git/gitlab/shared/packages -type f | grep -v tmp | wc -l
```
{{< /tab >}}
{{< /tabs >}}
#### Migrate from object storage to local storage
1. Migrate the packages from object storage to local storage:
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
```shell
sudo gitlab-rake "gitlab:packages:migrate[local]"
```
{{< /tab >}}
{{< tab title="Self-compiled (source)" >}}
```shell
RAILS_ENV=production sudo -u git -H bundle exec rake "gitlab:packages:migrate[local]"
```
{{< /tab >}}
{{< /tabs >}}
1. Track the progress and verify that all packages migrated successfully using the PostgreSQL console:
{{< tabs >}}
{{< tab title="Linux package (Omnibus) 14.1 and earlier" >}}
```shell
sudo gitlab-rails dbconsole
```
{{< /tab >}}
{{< tab title="Linux package (Omnibus) 14.2 and later" >}}
```shell
sudo gitlab-rails dbconsole --database main
```
{{< /tab >}}
{{< tab title="Self-compiled (source)" >}}
```shell
RAILS_ENV=production sudo -u git -H psql -d gitlabhq_production
```
{{< /tab >}}
{{< /tabs >}}
1. Verify that all packages migrated to local storage with the following SQL query. The number of `filesystem` should be the same as `total`:
```sql
SELECT count(*) AS total,
sum(case when file_store = '1' then 1 else 0 end) AS filesystem,
sum(case when file_store = '2' then 1 else 0 end) AS objectstg
FROM packages_package_files;
```
Example output:
```plaintext
total | filesystem | objectstg
------+------------+-----------
34 | 34 | 0
```
1. Finally, verify that the files exist in the `packages` directory:
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
```shell
sudo find /var/opt/gitlab/gitlab-rails/shared/packages -type f | grep -v tmp | wc -l
```
{{< /tab >}}
{{< tab title="Self-compiled (source)" >}}
```shell
sudo -u git find /home/git/gitlab/shared/packages -type f | grep -v tmp | wc -l
```
{{< /tab >}}
{{< /tabs >}}
---

https://docs.gitlab.com/administration/container_registry_metadata_database

https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/administration/container_registry_metadata_database.md

2025-08-13

# Container registry metadata database
{{< details >}}
- Tier: Free, Premium, Ultimate
- Offering: GitLab Self-Managed
{{< /details >}}
{{< history >}}
- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/423459) in GitLab 16.4 as a [beta feature](../../policy/development_stages_support.md) for GitLab Self-Managed.
- [Generally available](https://gitlab.com/gitlab-org/gitlab/-/issues/423459) in GitLab 17.3.
{{< /history >}}
The metadata database provides several [enhancements](#enhancements) to the container registry
that improve performance and add new features.
The work on the GitLab Self-Managed release of the registry metadata database feature
is tracked in [epic 5521](https://gitlab.com/groups/gitlab-org/-/epics/5521).
By default, the container registry uses object storage to persist metadata
related to container images. This method of storing metadata limits how efficiently
the data can be accessed, especially data spanning multiple images, such as when listing tags.
By using a database to store this data, many new features are possible, including
[online garbage collection](https://gitlab.com/gitlab-org/container-registry/-/blob/master/docs/spec/gitlab/online-garbage-collection.md)
which removes old data automatically with zero downtime.
This database works in conjunction with the object storage already used by the registry, but does not replace object storage.
You must continue to maintain an object storage solution even after performing a metadata import to the metadata database.
For Helm Charts installations, see [Manage the container registry metadata database](https://docs.gitlab.com/charts/charts/registry/metadata_database.html#create-the-database)
in the Helm Charts documentation.
## Enhancements
The metadata database architecture supports performance improvements, bug fixes, and new features
that are not available with the object storage metadata architecture. These enhancements include:
- Automatic [online garbage collection](../../user/packages/container_registry/delete_container_registry_images.md#garbage-collection)
- [Storage usage visibility](../../user/packages/container_registry/reduce_container_registry_storage.md#view-container-registry-usage) for repositories, projects, and groups
- [Image signing](../../user/packages/container_registry/_index.md#container-image-signatures)
- [Moving and renaming repositories](../../user/packages/container_registry/_index.md#move-or-rename-container-registry-repositories)
- [Protected tags](../../user/packages/container_registry/protected_container_tags.md)
- Performance improvements for [cleanup policies](../../user/packages/container_registry/reduce_container_registry_storage.md#cleanup-policy), enabling successful cleanup of large repositories
- Performance improvements for listing repository tags
- Tracking and displaying tag publish timestamps (see [issue 290949](https://gitlab.com/gitlab-org/gitlab/-/issues/290949))
- Sorting repository tags by additional attributes beyond name
Due to technical constraints of the object storage metadata architecture, new features are only
implemented for the metadata database version. Non-security bug fixes might be limited to the
metadata database version.
## Known limitations
- Metadata import for existing registries requires a period of read-only time.
- Geo functionality is limited. Additional features are proposed in [epic 15325](https://gitlab.com/groups/gitlab-org/-/epics/15325).
- Registry regular schema and post-deployment database migrations must be run manually when upgrading versions.
- No guarantee for registry [zero downtime during upgrades](../../update/zero_downtime.md) on multi-node Linux package environments.
## Metadata database feature support
You can import metadata from existing registries to the metadata database, and use online garbage collection.
Some database-enabled features are only enabled for GitLab.com, and automatic database provisioning for
the registry database is not available. Review the feature support table in the [feedback issue](https://gitlab.com/gitlab-org/gitlab/-/issues/423459#supported-feature-status)
for the status of features related to the container registry database.
## Enable the metadata database for Linux package installations
Prerequisites:
- GitLab 17.3 or later.
- PostgreSQL database [version 14 or later](../../install/requirements.md#postgresql). It must be accessible from the registry node.
Follow the instructions that match your situation:
- [New installations](#new-installations) or enabling the container registry for the first time.
- Import existing container image metadata to the metadata database:
- [One-step import](#one-step-import). Recommended only for relatively small registries, or when downtime during the import is acceptable.
- [Three-step import](#three-step-import). Recommended for larger container registries.
### Before you start
- All database connection values are placeholders. You must [create](../postgresql/external.md#container-registry-metadata-database), verify your ability to
connect to, and manage a new PostgreSQL database for the registry before completing any step.
- See the full [database configuration](https://gitlab.com/gitlab-org/container-registry/-/blob/master/docs/configuration.md?ref_type=heads#database).
- See [epic 17005](https://gitlab.com/groups/gitlab-org/-/epics/17005) for progress towards automatic registry database provisioning and management.
- After you enable the database, you must continue to use it. The database becomes the
  source of truth for registry metadata, and disabling it after this point causes the
  registry to lose visibility of all images written to it while the database was active.
- Never run [offline garbage collection](container_registry.md#container-registry-garbage-collection) at any point
after the import step has been completed. That command is not compatible with registries using
the metadata database and may delete data associated with tagged images.
- Verify you have not automated offline garbage collection.
- You can first [reduce the storage of your registry](../../user/packages/container_registry/reduce_container_registry_storage.md)
to speed up the process.
- Back up [your container registry data](../backup_restore/backup_gitlab.md#container-registry)
if possible.
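A quick way to confirm connectivity is to run a trivial query with `psql`, using the same placeholder connection values you plan to put in `gitlab.rb` (replace them with your actual details):

```shell
psql "host=<registry_database_host> port=5432 user=<registry_database_username> dbname=<registry_database_name> sslmode=require" -c 'SELECT 1;'
```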
### New installations
Prerequisites:
- Create an [external database](../postgresql/external.md#container-registry-metadata-database).
To enable the database:
1. Edit `/etc/gitlab/gitlab.rb` by adding your database connection details, but start with the metadata database **disabled**:
```ruby
registry['database'] = {
'enabled' => false,
'host' => '<registry_database_host_placeholder_change_me>',
'port' => 5432, # Default, but set to the port of your database instance if it differs.
'user' => '<registry_database_username_placeholder_change_me>',
'password' => '<registry_database_password_placeholder_change_me>',
'dbname' => '<registry_database_name_placeholder_change_me>',
'sslmode' => 'require', # See the PostgreSQL documentation for additional information https://www.postgresql.org/docs/16/libpq-ssl.html.
'sslcert' => '</path/to/cert.pem>',
'sslkey' => '</path/to/private.key>',
'sslrootcert' => '</path/to/ca.pem>'
}
```
1. Save the file and [reconfigure GitLab](../restart_gitlab.md#reconfigure-a-linux-package-installation).
1. [Apply database migrations](#apply-database-migrations).
1. Enable the database by editing `/etc/gitlab/gitlab.rb` and setting `enabled` to `true`:
```ruby
registry['database'] = {
'enabled' => true,
'host' => '<registry_database_host_placeholder_change_me>',
'port' => 5432, # Default, but set to the port of your database instance if it differs.
'user' => '<registry_database_username_placeholder_change_me>',
'password' => '<registry_database_password_placeholder_change_me>',
'dbname' => '<registry_database_name_placeholder_change_me>',
'sslmode' => 'require', # See the PostgreSQL documentation for additional information https://www.postgresql.org/docs/16/libpq-ssl.html.
'sslcert' => '</path/to/cert.pem>',
'sslkey' => '</path/to/private.key>',
'sslrootcert' => '</path/to/ca.pem>'
}
```
1. Save the file and [reconfigure GitLab](../restart_gitlab.md#reconfigure-a-linux-package-installation).
### Existing registries
You can import your existing container registry metadata in one step or three steps.
A few factors affect the duration of the import:
- The size of your existing registry data.
- The specifications of your PostgreSQL instance.
- The number of registry instances running.
- Network latency between the registry, PostgreSQL, and your configured object storage.
{{< alert type="note" >}}
The metadata import only targets tagged images. Untagged and unreferenced manifests, and the layers
exclusively referenced by them, are left behind and become inaccessible. Untagged images
were never visible through the GitLab UI or API, but they can become "dangling" and be
left behind in the backend. After the import, all images are subject to continuous
online garbage collection, which by default deletes any untagged and unreferenced manifests
and layers that remain for longer than 24 hours.
{{< /alert >}}
Choose the one-step or three-step method according to your registry installation.
#### One-step import
Prerequisites:
- Create an [external database](../postgresql/external.md#container-registry-metadata-database).
{{< alert type="warning" >}}
The registry must be shut down or remain in `read-only` mode during the import.
Only choose this method if you do not need to write to the registry during the import
and your registry contains a relatively small amount of data.
{{< /alert >}}
1. Add the `database` section to your `/etc/gitlab/gitlab.rb` file, but start with the metadata database **disabled**:
```ruby
registry['database'] = {
'enabled' => false, # Must be false!
'host' => '<registry_database_host_placeholder_change_me>',
'port' => 5432, # Default, but set to the port of your database instance if it differs.
'user' => '<registry_database_username_placeholder_change_me>',
'password' => '<registry_database_password_placeholder_change_me>',
'dbname' => '<registry_database_name_placeholder_change_me>',
'sslmode' => 'require', # See the PostgreSQL documentation for additional information https://www.postgresql.org/docs/16/libpq-ssl.html.
'sslcert' => '</path/to/cert.pem>',
'sslkey' => '</path/to/private.key>',
'sslrootcert' => '</path/to/ca.pem>'
}
```
1. Ensure the registry is set to `read-only` mode.
Edit your `/etc/gitlab/gitlab.rb` and add the `maintenance` section to the `registry['storage']` configuration.
For example, for a `gcs` backed registry using a `gs://my-company-container-registry` bucket,
the configuration could be:
```ruby
## Object Storage - Container Registry
registry['storage'] = {
'gcs' => {
'bucket' => '<my-company-container-registry>',
'chunksize' => 5242880
},
'maintenance' => {
'readonly' => {
'enabled' => true # Must be set to true.
}
}
}
```
1. Save the file and [reconfigure GitLab](../restart_gitlab.md#reconfigure-a-linux-package-installation).
1. [Apply database migrations](#apply-database-migrations) if you have not done so.
1. Run the following command:
```shell
sudo gitlab-ctl registry-database import
```
1. If the command completed successfully, the registry is now fully imported. You
can now enable the database, turn off read-only mode in the configuration, and
start the registry service:
```ruby
registry['database'] = {
'enabled' => true, # Must be enabled now!
'host' => '<registry_database_host_placeholder_change_me>',
'port' => 5432, # Default, but set to the port of your database instance if it differs.
'user' => '<registry_database_username_placeholder_change_me>',
'password' => '<registry_database_password_placeholder_change_me>',
'dbname' => '<registry_database_name_placeholder_change_me>',
'sslmode' => 'require', # See the PostgreSQL documentation for additional information https://www.postgresql.org/docs/16/libpq-ssl.html.
'sslcert' => '</path/to/cert.pem>',
'sslkey' => '</path/to/private.key>',
'sslrootcert' => '</path/to/ca.pem>'
}
## Object Storage - Container Registry
registry['storage'] = {
'gcs' => {
'bucket' => '<my-company-container-registry>',
'chunksize' => 5242880
},
'maintenance' => {
'readonly' => {
'enabled' => false
}
}
}
```
1. Save the file and [reconfigure GitLab](../restart_gitlab.md#reconfigure-a-linux-package-installation).
You can now use the metadata database for all operations!
#### Three-step import
Prerequisites:
- Create an [external database](../postgresql/external.md#container-registry-metadata-database).
Follow this guide to import your existing container registry metadata.
This procedure is recommended for larger sets of metadata or if you are
trying to minimize downtime while completing the import.
{{< alert type="note" >}}
Users have reported step one imports completing at [rates of 2 to 4 TB per hour](https://gitlab.com/gitlab-org/gitlab/-/issues/423459).
At the slower speed, registries with over 100 TB of data could take longer than 48 hours.
{{< /alert >}}
##### Pre-import repositories (step one)
For larger instances, this command can take hours to days to complete, depending
on the size of your registry. You may continue to use the registry as normal while
step one is being completed.
{{< alert type="warning" >}}
It is [not yet possible](https://gitlab.com/gitlab-org/container-registry/-/issues/1162)
to resume an interrupted import, so it's important to let the import run to completion.
If you must halt the operation, you have to run this step again from the beginning.
{{< /alert >}}
1. Add the `database` section to your `/etc/gitlab/gitlab.rb` file, but start with the metadata database **disabled**:
```ruby
registry['database'] = {
'enabled' => false, # Must be false!
'host' => '<registry_database_host_placeholder_change_me>',
'port' => 5432, # Default, but set to the port of your database instance if it differs.
'user' => '<registry_database_username_placeholder_change_me>',
'password' => '<registry_database_password_placeholder_change_me>',
'dbname' => '<registry_database_name_placeholder_change_me>',
'sslmode' => 'require', # See the PostgreSQL documentation for additional information https://www.postgresql.org/docs/16/libpq-ssl.html.
'sslcert' => '</path/to/cert.pem>',
'sslkey' => '</path/to/private.key>',
'sslrootcert' => '</path/to/ca.pem>'
}
```
1. Save the file and [reconfigure GitLab](../restart_gitlab.md#reconfigure-a-linux-package-installation).
1. [Apply database migrations](#apply-database-migrations) if you have not done so.
1. Run the first step to begin the import:
```shell
sudo gitlab-ctl registry-database import --step-one
```
{{< alert type="note" >}}
You should try to schedule the following step as soon as possible
to reduce the amount of downtime required. Ideally, less than one week
after step one completes. Any new data written to the registry between steps one and two
causes step two to take more time.
{{< /alert >}}
##### Import all repository data (step two)
This step requires the registry to be shut down or set in `read-only` mode.
Allow enough time for downtime while step two is being executed.
1. Ensure the registry is set to `read-only` mode.
Edit your `/etc/gitlab/gitlab.rb` and add the `maintenance` section to the `registry['storage']`
configuration. For example, for a `gcs` backed registry using a `gs://my-company-container-registry`
bucket, the configuration could be:
```ruby
## Object Storage - Container Registry
registry['storage'] = {
'gcs' => {
'bucket' => '<my-company-container-registry>',
'chunksize' => 5242880
},
'maintenance' => {
'readonly' => {
'enabled' => true # Must be set to true.
}
}
}
```
1. Save the file and [reconfigure GitLab](../restart_gitlab.md#reconfigure-a-linux-package-installation).
1. Run step two of the import:
```shell
sudo gitlab-ctl registry-database import --step-two
```
1. If the command completed successfully, all images are now fully imported. You
can now enable the database, turn off read-only mode in the configuration, and
start the registry service:
```ruby
registry['database'] = {
'enabled' => true, # Must be set to true!
'host' => '<registry_database_host_placeholder_change_me>',
'port' => 5432, # Default, but set to the port of your database instance if it differs.
'user' => '<registry_database_username_placeholder_change_me>',
'password' => '<registry_database_password_placeholder_change_me>',
'dbname' => '<registry_database_name_placeholder_change_me>',
'sslmode' => 'require', # See the PostgreSQL documentation for additional information https://www.postgresql.org/docs/16/libpq-ssl.html.
'sslcert' => '</path/to/cert.pem>',
'sslkey' => '</path/to/private.key>',
'sslrootcert' => '</path/to/ca.pem>'
}
## Object Storage - Container Registry
registry['storage'] = {
'gcs' => {
'bucket' => '<my-company-container-registry>',
'chunksize' => 5242880
},
'maintenance' => { # This section can be removed.
'readonly' => {
'enabled' => false
}
}
}
```
1. Save the file and [reconfigure GitLab](../restart_gitlab.md#reconfigure-a-linux-package-installation).
You can now use the metadata database for all operations!
##### Import remaining data (step three)
Even though the registry is now fully using the database for its metadata, it
does not yet have access to any potentially unused layer blobs, preventing these
blobs from being removed by the online garbage collector.
To complete the process, run the final step of the migration:
```shell
sudo gitlab-ctl registry-database import --step-three
```
After that command exits successfully, registry metadata is fully imported to the database.
#### Post import
It may take approximately 48 hours after the import for your registry storage to
decrease. This delay is a normal and expected part of online garbage collection, and
ensures that online garbage collection does not interfere with image pushes.
Check out the [monitor online garbage collection](#online-garbage-collection-monitoring) section
to see how to monitor the progress and health of the online garbage collector.
## Database migrations
You must manually execute database migrations after each GitLab upgrade. Support to automate database migrations after upgrades is proposed in [issue 8670](https://gitlab.com/gitlab-org/omnibus-gitlab/-/issues/8670).
The container registry supports two types of migrations:
- **Regular schema migrations**: Changes to the database structure that must run before deploying new application code, also known as pre-deployment migrations. These should be fast (no more than a few minutes) to avoid deployment delays.
- **Post-deployment migrations**: Changes to the database structure that can run while the application is running. Used for longer operations like creating indexes on large tables, avoiding startup delays and extended upgrade downtime.
By default, the registry applies both regular schema and post-deployment migrations simultaneously.
To reduce downtime during upgrades, you can skip post-deployment migrations and apply them manually after the application starts.
### Apply database migrations
To apply both regular schema and post-deployment migrations before the application starts:
1. Run database migrations:
```shell
sudo gitlab-ctl registry-database migrate up
```
To skip post-deployment migrations:
1. Run regular schema migrations only:
```shell
sudo gitlab-ctl registry-database migrate up --skip-post-deployment
```
As an alternative to the `--skip-post-deployment` flag, you can also set the `SKIP_POST_DEPLOYMENT_MIGRATIONS` environment variable to `true`:
```shell
SKIP_POST_DEPLOYMENT_MIGRATIONS=true sudo gitlab-ctl registry-database migrate up
```
1. After starting the application, apply any pending post-deployment migrations:
```shell
sudo gitlab-ctl registry-database migrate up
```
{{< alert type="note" >}}
The `migrate up` command offers some extra flags that can be used to control how the migrations are applied.
Run `sudo gitlab-ctl registry-database migrate up --help` for details.
{{< /alert >}}
## Online garbage collection monitoring
The initial runs of online garbage collection following the import process vary
in duration based on the number of imported images. You should monitor the efficiency and
health of your online garbage collection during this period.
### Monitor database performance
After completing an import, expect the database to experience a period of high load as
the garbage collection queues drain. This high load is caused by a high number of individual database calls
from the online garbage collector processing the queued tasks.
Regularly check PostgreSQL and registry logs for any errors or warnings. In the registry logs,
pay special attention to logs filtered by `component=registry.gc.*`.
### Track metrics
Use monitoring tools like Prometheus and Grafana to visualize and track garbage collection metrics,
focusing on metrics with a prefix of `registry_gc_*`. These include the number of objects
marked for deletion, objects successfully deleted, run intervals, and durations.
See [enable the registry debug server](container_registry_troubleshooting.md#enable-the-registry-debug-server)
for how to enable Prometheus.
### Queue monitoring
Check the size of the queues by counting the rows in the `gc_blob_review_queue` and
`gc_manifest_review_queue` tables. Large queues are expected initially, with the number of rows
proportional to the number of imported blobs and manifests. The queues should reduce over time,
indicating that garbage collection is successfully reviewing jobs.
```sql
SELECT COUNT(*) FROM gc_blob_review_queue;
SELECT COUNT(*) FROM gc_manifest_review_queue;
```
Interpreting queue sizes:
- Shrinking queues: indicate that garbage collection is successfully processing tasks.
- Near-zero `gc_manifest_review_queue`: most images flagged for potential deletion
  have been reviewed and classified either as still in use or removed.
- Overdue tasks: check for overdue GC tasks by running the following queries:
```sql
SELECT COUNT(*) FROM gc_blob_review_queue WHERE review_after < NOW();
SELECT COUNT(*) FROM gc_manifest_review_queue WHERE review_after < NOW();
```
Large queue sizes are not concerning as long as they are decreasing over time and the
number of overdue tasks stays close to zero. A high number of overdue tasks indicates a
problem and should prompt an urgent inspection of the logs.
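To see how far behind the oldest overdue tasks are, you can inspect the queue directly. This query uses only the `review_after` column shown in the queries above:

```sql
-- List the ten most overdue blob review tasks.
SELECT review_after
FROM gc_blob_review_queue
WHERE review_after < NOW()
ORDER BY review_after ASC
LIMIT 10;
```

If the oldest `review_after` timestamps keep moving forward over time, the workers are making progress; if they stall, inspect the logs.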
Check GC logs for messages indicating that blobs are still in use, for example `msg=the blob is not dangling`,
which implies they will not be deleted.
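For example, on a Linux package installation you might filter the registry logs like this. The log path below assumes the default location for the Linux package; adjust it for your installation:

```shell
sudo grep 'component=registry.gc' /var/log/gitlab/registry/current | grep 'not dangling' | tail -n 20
```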
### Adjust blobs interval
If the size of your `gc_blob_review_queue` is high and you want the garbage collection
blob and manifest workers to run more frequently, reduce the interval configuration
from the default (`5s`) to `1s`:
```ruby
registry['gc'] = {
'blobs' => {
'interval' => '1s'
},
'manifests' => {
'interval' => '1s'
}
}
```
After the import load has been cleared, you should fine-tune these settings for the long term
to avoid unnecessary CPU load on the database and registry instances. You can gradually increase
the interval to a value that balances performance and resource usage.
### Validate data consistency
To ensure data consistency after the import, use the [`crane validate`](https://github.com/google/go-containerregistry/blob/main/cmd/crane/doc/crane_validate.md)
tool. This tool checks that all image layers and manifests in your container registry
are accessible and correctly linked. By running `crane validate`, you confirm that
the images in your registry are complete and accessible, ensuring a successful import.
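For example, to validate a single image (the registry host and image path below are hypothetical):

```shell
crane validate --remote registry.example.com/my-group/my-project/my-image:latest
```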
### Review cleanup policies
If most of your images are tagged, garbage collection won't significantly reduce storage space
because it only deletes untagged images.
Implement cleanup policies to remove unneeded tags, which eventually causes images
to be removed through garbage collection and storage space to be recovered.
## Backup with metadata database
{{< alert type="note" >}}
If you have configured your own database for container registry metadata,
you must manage backups manually. `gitlab-backup` does not backup the metadata database.
{{< /alert >}}
When the metadata database is enabled, backups must capture both the object storage
used by the registry, as before, and the database. Backups of object storage
and the database should be coordinated to capture the state of the registry as close as possible
to the same point in time. To restore the registry, you must apply both backups together.
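For the database side, a coordinated dump can be taken with standard PostgreSQL tooling. This is a minimal sketch using the placeholder connection values from this page; replace them with your actual details:

```shell
# Dump the registry metadata database alongside your object storage backup.
pg_dump "host=<registry_database_host> port=5432 user=<registry_database_username> dbname=<registry_database_name> sslmode=require" \
  --format=custom \
  --file="registry-metadata-$(date +%F).dump"
```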
## Downgrade a registry
To downgrade the registry to a previous version after the import is complete,
you must restore a backup of the desired version.
## Database architecture with Geo
When using GitLab Geo with the container registry, you must configure separate database and
object storage stacks for the registry at each site. Geo replication to the
container registry uses events generated from registry notifications,
rather than by database replication.
### Prerequisites
Each Geo site requires a separate, site-specific:
1. PostgreSQL instance for the container registry database.
1. Object storage instance for the container registry.
1. Container registry configured to use these site-specific resources.
This diagram illustrates the data flow and basic architecture:
```mermaid
flowchart TB
subgraph "Primary site"
P_Rails[GitLab Rails]
P_Reg[Container registry]
P_RegDB[(Registry database)]
P_Obj[(Object storage)]
P_Reg --> P_RegDB
P_RegDB --> P_Obj
end
subgraph "Secondary site"
S_Rails[GitLab Rails]
S_Reg[Container registry]
S_RegDB[(Registry database)]
S_Obj[(Object storage)]
S_Reg --> S_RegDB
S_RegDB --> S_Obj
end
P_Reg -- "Notifications" --> P_Rails
P_Rails -- "Events" --> S_Rails
S_Rails --> S_Reg
classDef primary fill:#d1f7c4
classDef secondary fill:#b8d4ff
class P_Rails,P_Reg,P_RegDB,P_Obj primary
class S_Rails,S_Reg,S_RegDB,S_Obj secondary
```
Use separate database instances on each site because:
1. The main GitLab database is replicated to the secondary site as read-only.
1. This replication cannot be selectively disabled for the registry database.
1. The container registry requires write access to its database at both sites.
1. Homogeneous setups ensure the greatest parity between Geo sites.
## Revert to object storage metadata
You can revert your registry to use object storage metadata after completing a metadata import.
{{< alert type="warning" >}}
When you revert to object storage metadata, any container images, tags, or repositories
added or deleted between the import completion and this revert operation are not available.
{{< /alert >}}
To revert to object storage metadata:
1. Restore a [backup](../backup_restore/backup_gitlab.md#container-registry) taken before the migration.
1. Add the following configuration to your `/etc/gitlab/gitlab.rb` file:
```ruby
registry['database'] = {
'enabled' => false,
}
```
1. Save the file and [reconfigure GitLab](../restart_gitlab.md#reconfigure-a-linux-package-installation).
## Troubleshooting
### Error: `there are pending database migrations`
If the registry has been updated and there are pending schema migrations,
the registry fails to start with the following error message:
```shell
FATA[0000] configuring application: there are pending database migrations, use the 'registry database migrate' CLI command to check and apply them
```
To fix this issue, follow the steps to [apply database migrations](#apply-database-migrations).
### Error: `offline garbage collection is no longer possible`
If the registry uses the metadata database and you try to run
[offline garbage collection](container_registry.md#container-registry-garbage-collection),
the registry fails with the following error message:
```shell
ERRO[0000] this filesystem is managed by the metadata database, and offline garbage collection is no longer possible, if you are not using the database anymore, remove the file at the lock_path in this log message lock_path=/docker/registry/lockfiles/database-in-use
```
You must either:
- Stop using offline garbage collection.
- If you no longer use the metadata database, delete the indicated lock file at the `lock_path` shown in the error message.
For example, remove the `/docker/registry/lockfiles/database-in-use` file.
### Error: `cannot execute <STATEMENT> in a read-only transaction`
The registry could fail to [apply database migrations](#apply-database-migrations)
with the following error message:
```shell
err="ERROR: cannot execute CREATE TABLE in a read-only transaction (SQLSTATE 25006)"
```
Also, the registry could fail with the following error message if you try to run
[online garbage collection](container_registry.md#performing-garbage-collection-without-downtime):
```shell
error="processing task: fetching next GC blob task: scanning GC blob task: ERROR: cannot execute SELECT FOR UPDATE in a read-only transaction (SQLSTATE 25006)"
```
You must verify that read-only transactions are disabled by checking the values of
`default_transaction_read_only` and `transaction_read_only` in the PostgreSQL console.
For example:
```sql
# SHOW default_transaction_read_only;
default_transaction_read_only
-------------------------------
on
(1 row)
# SHOW transaction_read_only;
transaction_read_only
-----------------------
on
(1 row)
```
If either of these values is set to `on`, you must disable it:
1. Edit your `postgresql.conf` and set the following value:
```shell
default_transaction_read_only=off
```
1. Restart your PostgreSQL server to apply these settings.
1. Try to [apply database migrations](#apply-database-migrations) again, if applicable.
1. Restart the registry: `sudo gitlab-ctl restart registry`.
### Error: `cannot import all repositories while the tags table has entries`
If you try to [import existing registry metadata](#existing-registries) and encounter the following error:
```shell
ERRO[0000] cannot import all repositories while the tags table has entries, you must truncate the table manually before retrying,
see https://docs.gitlab.com/ee/administration/packages/container_registry_metadata_database.html#troubleshooting
common_blobs=true dry_run=false error="tags table is not empty"
```
This error happens when there are existing entries in the `tags` table of the registry database,
which can happen if you:
- Attempted the [one-step import](#one-step-import) and encountered errors.
- Attempted the [three-step import](#three-step-import) process and encountered errors.
- Stopped the import process on purpose.
- Tried to run the import again after any of the previous actions.
- Ran the import against the wrong configuration file.
To resolve this issue, you must delete the existing entries in the tags table.
You must truncate the table manually on your PostgreSQL instance:
1. Edit `/etc/gitlab/gitlab.rb` and ensure the metadata database is **disabled**:
```ruby
registry['database'] = {
'enabled' => false,
'host' => '<registry_database_host_placeholder_change_me>',
'port' => 5432, # Default, but set to the port of your database instance if it differs.
'user' => '<registry_database_username_placeholder_change_me>',
'password' => '<registry_database_placeholder_change_me>',
'dbname' => '<registry_database_name_placeholder_change_me>',
'sslmode' => 'require', # See the PostgreSQL documentation for additional information https://www.postgresql.org/docs/16/libpq-ssl.html.
'sslcert' => '</path/to/cert.pem>',
'sslkey' => '</path/to/private.key>',
'sslrootcert' => '</path/to/ca.pem>'
}
```
1. Connect to your registry database using a PostgreSQL client.
1. Truncate the `tags` table to remove all existing entries:
```sql
TRUNCATE TABLE tags RESTART IDENTITY CASCADE;
```
1. After truncating the `tags` table, try running the import process again.
### Error: `database-in-use lockfile exists`
If you try to [import existing registry metadata](#existing-registries) and encounter the following error:
```shell
| [0s] step two: import tags failed to import metadata: importing all repositories: 1 error occurred:
* could not restore lockfiles: database-in-use lockfile exists
```
This error means that you previously imported the registry, completed importing all
repository data (step two), and the `database-in-use` lock file exists in the registry file system.
You should not run the importer again if you encounter this issue.
If you must proceed, you must delete the `database-in-use` lock file manually from the file system.
The file is located at `/path/to/rootdirectory/docker/registry/lockfiles/database-in-use`.
### Error: `pre importing all repositories: AccessDenied:`
You might receive an `AccessDenied` error when [importing existing registries](#existing-registries)
and using AWS S3 as your storage backend:
```shell
/opt/gitlab/embedded/bin/registry database import --step-one /var/opt/gitlab/registry/config.yml
[0s] step one: import manifests
[0s] step one: import manifests failed to import metadata: pre importing all repositories: AccessDenied: Access Denied
```
Ensure that the user executing the command has the
correct [permission scopes](https://docker-docs.uclv.cu/registry/storage-drivers/s3/#s3-permission-scopes).
### Registry fails to start due to metadata management issues
The registry could fail to start with one of the following errors:
#### Error: `registry filesystem metadata in use, please import data before enabling the database`
This error happens when the database is enabled in your configuration (`registry['database'] = { 'enabled' => true }`),
but you have not [imported existing registry metadata](#existing-registries) to the metadata database yet.
#### Error: `registry metadata database in use, please enable the database`
This error happens when you have completed the [import of existing registry metadata](#existing-registries) to the metadata database,
but you have not enabled the database in your configuration.
#### Problems checking or creating the lock files
If you encounter any of the following errors:
- `could not check if filesystem metadata is locked`
- `could not check if database metadata is locked`
- `failed to mark filesystem for database only usage`
- `failed to mark filesystem only usage`
These errors mean the registry cannot access the configured `rootdirectory`. They are unlikely
to occur if you had a working registry previously. Review the error logs for any misconfiguration issues.
### Storage usage not decreasing after deleting tags
By default, the online garbage collector starts deleting unreferenced layers only 48 hours after
the last tag that referenced them was deleted. This delay ensures that the garbage collector does
not interfere with long-running or interrupted image pushes, as layers are pushed to the registry before
they are associated with an image and tag.
### Error: `permission denied for schema public (SQLSTATE 42501)`
During a registry migration, you might get one of the following errors:
- `ERROR: permission denied for schema public (SQLSTATE 42501)`
- `ERROR: relation "public.blobs" does not exist (SQLSTATE 42P01)`
These errors are due to a change in PostgreSQL 15 and later, which removed the default `CREATE` privilege on the `public` schema for security reasons.
By default, only database owners can create objects in the public schema in PostgreSQL 15+.
To resolve the error, run the following command to give a registry user owner privileges of the registry database:
```sql
ALTER DATABASE <registry_database_name> OWNER TO <registry_user>;
```
This gives the registry user the necessary permissions to create tables and run migrations successfully.
---
stage: Package
group: Container Registry
info: To determine the technical writer assigned to the Stage/Group associated with
this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Container registry metadata database
---
{{< details >}}
- Tier: Free, Premium, Ultimate
- Offering: GitLab Self-Managed
{{< /details >}}
{{< history >}}
- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/423459) in GitLab 16.4 as a [beta feature](../../policy/development_stages_support.md) for GitLab Self-Managed.
- [Generally available](https://gitlab.com/gitlab-org/gitlab/-/issues/423459) in GitLab 17.3.
{{< /history >}}
The metadata database provides several [enhancements](#enhancements) to the container registry
that improve performance and add new features.
The work on the GitLab Self-Managed release of the registry metadata database feature
is tracked in [epic 5521](https://gitlab.com/groups/gitlab-org/-/epics/5521).
By default, the container registry uses object storage to persist metadata
related to container images. This method of storing metadata limits how efficiently
the data can be accessed, especially data spanning multiple images, such as when listing tags.
By using a database to store this data, many new features are possible, including
[online garbage collection](https://gitlab.com/gitlab-org/container-registry/-/blob/master/docs/spec/gitlab/online-garbage-collection.md)
which removes old data automatically with zero downtime.
This database works in conjunction with the object storage already used by the registry, but does not replace object storage.
You must continue to maintain an object storage solution even after performing a metadata import to the metadata database.
For Helm Charts installations, see [Manage the container registry metadata database](https://docs.gitlab.com/charts/charts/registry/metadata_database.html#create-the-database)
in the Helm Charts documentation.
## Enhancements
The metadata database architecture supports performance improvements, bug fixes, and new features
that are not available with the object storage metadata architecture. These enhancements include:
- Automatic [online garbage collection](../../user/packages/container_registry/delete_container_registry_images.md#garbage-collection)
- [Storage usage visibility](../../user/packages/container_registry/reduce_container_registry_storage.md#view-container-registry-usage) for repositories, projects, and groups
- [Image signing](../../user/packages/container_registry/_index.md#container-image-signatures)
- [Moving and renaming repositories](../../user/packages/container_registry/_index.md#move-or-rename-container-registry-repositories)
- [Protected tags](../../user/packages/container_registry/protected_container_tags.md)
- Performance improvements for [cleanup policies](../../user/packages/container_registry/reduce_container_registry_storage.md#cleanup-policy), enabling successful cleanup of large repositories
- Performance improvements for listing repository tags
- Tracking and displaying tag publish timestamps (see [issue 290949](https://gitlab.com/gitlab-org/gitlab/-/issues/290949))
- Sorting repository tags by additional attributes beyond name
Due to technical constraints of the object storage metadata architecture, new features are only
implemented for the metadata database version. Non-security bug fixes might be limited to the
metadata database version.
## Known limitations
- Metadata import for existing registries requires a period of read-only time.
- Geo functionality is limited. Additional features are proposed in [epic 15325](https://gitlab.com/groups/gitlab-org/-/epics/15325).
- Registry regular schema and post-deployment database migrations must be run manually when upgrading versions.
- No guarantee for registry [zero downtime during upgrades](../../update/zero_downtime.md) on multi-node Linux package environments.
## Metadata database feature support
You can import metadata from existing registries to the metadata database, and use online garbage collection.
Some database-enabled features are enabled only for GitLab.com, and automatic provisioning of
the registry database is not available. Review the feature support table in the [feedback issue](https://gitlab.com/gitlab-org/gitlab/-/issues/423459#supported-feature-status)
for the status of features related to the container registry database.
## Enable the metadata database for Linux package installations
Prerequisites:
- GitLab 17.3 or later.
- PostgreSQL database [version 14 or later](../../install/requirements.md#postgresql). It must be accessible from the registry node.
Follow the instructions that match your situation:
- [New installations](#new-installations) or enabling the container registry for the first time.
- Import existing container image metadata to the metadata database:
- [One-step import](#one-step-import). Recommended only for relatively small registries, or when there is no requirement to avoid downtime.
- [Three-step import](#three-step-import). Recommended for larger container registries.
### Before you start
- All database connection values are placeholders. You must [create](../postgresql/external.md#container-registry-metadata-database), verify your ability to
connect to, and manage a new PostgreSQL database for the registry before completing any step.
- See the full [database configuration](https://gitlab.com/gitlab-org/container-registry/-/blob/master/docs/configuration.md?ref_type=heads#database).
- See [epic 17005](https://gitlab.com/groups/gitlab-org/-/epics/17005) for progress towards automatic registry database provisioning and management.
- After you enable the database, you must continue to use it. The database is
now the source of the registry metadata, disabling it after this point
causes the registry to lose visibility on all images written to it while
the database was active.
- Never run [offline garbage collection](container_registry.md#container-registry-garbage-collection) at any point
after the import step has been completed. That command is not compatible with registries using
the metadata database and may delete data associated with tagged images.
- Verify you have not automated offline garbage collection.
- You can first [reduce the storage of your registry](../../user/packages/container_registry/reduce_container_registry_storage.md)
to speed up the process.
- Back up [your container registry data](../backup_restore/backup_gitlab.md#container-registry)
if possible.
### New installations
Prerequisites:
- Create an [external database](../postgresql/external.md#container-registry-metadata-database).
To enable the database:
1. Edit `/etc/gitlab/gitlab.rb` by adding your database connection details, but start with the metadata database **disabled**:
```ruby
registry['database'] = {
'enabled' => false,
'host' => '<registry_database_host_placeholder_change_me>',
'port' => 5432, # Default, but set to the port of your database instance if it differs.
'user' => '<registry_database_username_placeholder_change_me>',
'password' => '<registry_database_placeholder_change_me>',
'dbname' => '<registry_database_name_placeholder_change_me>',
'sslmode' => 'require', # See the PostgreSQL documentation for additional information https://www.postgresql.org/docs/16/libpq-ssl.html.
'sslcert' => '</path/to/cert.pem>',
'sslkey' => '</path/to/private.key>',
'sslrootcert' => '</path/to/ca.pem>'
}
```
1. Save the file and [reconfigure GitLab](../restart_gitlab.md#reconfigure-a-linux-package-installation).
1. [Apply database migrations](#apply-database-migrations).
1. Enable the database by editing `/etc/gitlab/gitlab.rb` and setting `enabled` to `true`:
```ruby
registry['database'] = {
'enabled' => true,
'host' => '<registry_database_host_placeholder_change_me>',
'port' => 5432, # Default, but set to the port of your database instance if it differs.
'user' => '<registry_database_username_placeholder_change_me>',
'password' => '<registry_database_placeholder_change_me>',
'dbname' => '<registry_database_name_placeholder_change_me>',
'sslmode' => 'require', # See the PostgreSQL documentation for additional information https://www.postgresql.org/docs/16/libpq-ssl.html.
'sslcert' => '</path/to/cert.pem>',
'sslkey' => '</path/to/private.key>',
'sslrootcert' => '</path/to/ca.pem>'
}
```
1. Save the file and [reconfigure GitLab](../restart_gitlab.md#reconfigure-a-linux-package-installation).
### Existing registries
You can import your existing container registry metadata in one step or three steps.
A few factors affect the duration of the import:
- The size of your existing registry data.
- The specifications of your PostgreSQL instance.
- The number of registry instances running.
- Network latency between the registry, PostgreSQL, and your configured object storage.
{{< alert type="note" >}}
The metadata import only targets tagged images. Untagged and unreferenced manifests, and the layers
exclusively referenced by them, are left behind and become inaccessible. Untagged images
were never visible through the GitLab UI or API, but they can become "dangling" and
left behind in the backend. After import to the new registry, all images are subject
to continuous online garbage collection, by default deleting any untagged and unreferenced manifests
and layers that remain for longer than 24 hours.
{{< /alert >}}
Choose the one-step or three-step method according to your registry installation.
#### One-step import
Prerequisites:
- Create an [external database](../postgresql/external.md#container-registry-metadata-database).
{{< alert type="warning" >}}
The registry must be shut down or remain in `read-only` mode during the import.
Only choose this method if you do not need to write to the registry during the import
and your registry contains a relatively small amount of data.
{{< /alert >}}
1. Add the `database` section to your `/etc/gitlab/gitlab.rb` file, but start with the metadata database **disabled**:
```ruby
registry['database'] = {
'enabled' => false, # Must be false!
'host' => '<registry_database_host_placeholder_change_me>',
'port' => 5432, # Default, but set to the port of your database instance if it differs.
'user' => '<registry_database_username_placeholder_change_me>',
'password' => '<registry_database_placeholder_change_me>',
'dbname' => '<registry_database_name_placeholder_change_me>',
'sslmode' => 'require', # See the PostgreSQL documentation for additional information https://www.postgresql.org/docs/16/libpq-ssl.html.
'sslcert' => '</path/to/cert.pem>',
'sslkey' => '</path/to/private.key>',
'sslrootcert' => '</path/to/ca.pem>'
}
```
1. Ensure the registry is set to `read-only` mode.
Edit your `/etc/gitlab/gitlab.rb` and add the `maintenance` section to the `registry['storage']` configuration.
For example, for a `gcs` backed registry using a `gs://my-company-container-registry` bucket,
the configuration could be:
```ruby
## Object Storage - Container Registry
registry['storage'] = {
'gcs' => {
'bucket' => '<my-company-container-registry>',
'chunksize' => 5242880
},
'maintenance' => {
'readonly' => {
'enabled' => true # Must be set to true.
}
}
}
```
1. Save the file and [reconfigure GitLab](../restart_gitlab.md#reconfigure-a-linux-package-installation).
1. [Apply database migrations](#apply-database-migrations) if you have not done so.
1. Run the following command:
```shell
sudo gitlab-ctl registry-database import
```
1. If the command completed successfully, the registry is now fully imported. You
can now enable the database, turn off read-only mode in the configuration, and
start the registry service:
```ruby
registry['database'] = {
'enabled' => true, # Must be enabled now!
'host' => '<registry_database_host_placeholder_change_me>',
'port' => 5432, # Default, but set to the port of your database instance if it differs.
'user' => '<registry_database_username_placeholder_change_me>',
'password' => '<registry_database_placeholder_change_me>',
'dbname' => '<registry_database_name_placeholder_change_me>',
'sslmode' => 'require', # See the PostgreSQL documentation for additional information https://www.postgresql.org/docs/16/libpq-ssl.html.
'sslcert' => '</path/to/cert.pem>',
'sslkey' => '</path/to/private.key>',
'sslrootcert' => '</path/to/ca.pem>'
}
## Object Storage - Container Registry
registry['storage'] = {
'gcs' => {
'bucket' => '<my-company-container-registry>',
'chunksize' => 5242880
},
'maintenance' => {
'readonly' => {
'enabled' => false
}
}
}
```
1. Save the file and [reconfigure GitLab](../restart_gitlab.md#reconfigure-a-linux-package-installation).
You can now use the metadata database for all operations!
#### Three-step import
Prerequisites:
- Create an [external database](../postgresql/external.md#container-registry-metadata-database).
Follow this guide to import your existing container registry metadata.
This procedure is recommended for larger sets of metadata or if you are
trying to minimize downtime while completing the import.
{{< alert type="note" >}}
Users have reported step one completing at [rates of 2 to 4 TB per hour](https://gitlab.com/gitlab-org/gitlab/-/issues/423459).
At the slower speed, registries with over 100 TB of data could take longer than 48 hours.
{{< /alert >}}
##### Pre-import repositories (step one)
For larger instances, this command can take hours to days to complete, depending
on the size of your registry. You may continue to use the registry as normal while
step one is being completed.
{{< alert type="warning" >}}
It is [not yet possible](https://gitlab.com/gitlab-org/container-registry/-/issues/1162)
to resume the import, so it is important to let it run to completion.
If you must halt the operation, you have to restart this step.
{{< /alert >}}
1. Add the `database` section to your `/etc/gitlab/gitlab.rb` file, but start with the metadata database **disabled**:
```ruby
registry['database'] = {
'enabled' => false, # Must be false!
'host' => '<registry_database_host_placeholder_change_me>',
'port' => 5432, # Default, but set to the port of your database instance if it differs.
'user' => '<registry_database_username_placeholder_change_me>',
'password' => '<registry_database_placeholder_change_me>',
'dbname' => '<registry_database_name_placeholder_change_me>',
'sslmode' => 'require', # See the PostgreSQL documentation for additional information https://www.postgresql.org/docs/16/libpq-ssl.html.
'sslcert' => '</path/to/cert.pem>',
'sslkey' => '</path/to/private.key>',
'sslrootcert' => '</path/to/ca.pem>'
}
```
1. Save the file and [reconfigure GitLab](../restart_gitlab.md#reconfigure-a-linux-package-installation).
1. [Apply database migrations](#apply-database-migrations) if you have not done so.
1. Run the first step to begin the import:
```shell
sudo gitlab-ctl registry-database import --step-one
```
{{< alert type="note" >}}
You should schedule the following step as soon as possible
to reduce the amount of downtime required, ideally less than one week
after step one completes. Any new data written to the registry between steps one and two
increases the time step two takes.
{{< /alert >}}
##### Import all repository data (step two)
This step requires the registry to be shut down or set in `read-only` mode.
Allow enough time for downtime while step two is being executed.
1. Ensure the registry is set to `read-only` mode.
Edit your `/etc/gitlab/gitlab.rb` and add the `maintenance` section to the `registry['storage']`
configuration. For example, for a `gcs` backed registry using a `gs://my-company-container-registry`
bucket, the configuration could be:
```ruby
## Object Storage - Container Registry
registry['storage'] = {
'gcs' => {
'bucket' => '<my-company-container-registry>',
'chunksize' => 5242880
},
'maintenance' => {
'readonly' => {
'enabled' => true # Must be set to true.
}
}
}
```
1. Save the file and [reconfigure GitLab](../restart_gitlab.md#reconfigure-a-linux-package-installation).
1. Run step two of the import:
```shell
sudo gitlab-ctl registry-database import --step-two
```
1. If the command completed successfully, all images are now fully imported. You
can now enable the database, turn off read-only mode in the configuration, and
start the registry service:
```ruby
registry['database'] = {
'enabled' => true, # Must be set to true!
'host' => '<registry_database_host_placeholder_change_me>',
'port' => 5432, # Default, but set to the port of your database instance if it differs.
'user' => '<registry_database_username_placeholder_change_me>',
'password' => '<registry_database_placeholder_change_me>',
'dbname' => '<registry_database_name_placeholder_change_me>',
'sslmode' => 'require', # See the PostgreSQL documentation for additional information https://www.postgresql.org/docs/16/libpq-ssl.html.
'sslcert' => '</path/to/cert.pem>',
'sslkey' => '</path/to/private.key>',
'sslrootcert' => '</path/to/ca.pem>'
}
## Object Storage - Container Registry
registry['storage'] = {
'gcs' => {
'bucket' => '<my-company-container-registry>',
'chunksize' => 5242880
},
'maintenance' => { # This section can be removed.
'readonly' => {
'enabled' => false
}
}
}
```
1. Save the file and [reconfigure GitLab](../restart_gitlab.md#reconfigure-a-linux-package-installation).
You can now use the metadata database for all operations!
##### Import remaining data (step three)
Even though the registry is now fully using the database for its metadata, it
does not yet have access to any potentially unused layer blobs, preventing these
blobs from being removed by the online garbage collector.
To complete the process, run the final step of the migration:
```shell
sudo gitlab-ctl registry-database import --step-three
```
After that command exits successfully, registry metadata is fully imported to the database.
#### Post import
It may take approximately 48 hours after the import before your registry storage
decreases. This is a normal and expected part of online garbage collection, as this
delay ensures that online garbage collection does not interfere with image pushes.
See the [monitor online garbage collection](#online-garbage-collection-monitoring) section
to see how to monitor the progress and health of the online garbage collector.
## Database migrations
You must manually execute database migrations after each GitLab upgrade. Support to automate database migrations after upgrades is proposed in [issue 8670](https://gitlab.com/gitlab-org/omnibus-gitlab/-/issues/8670).
The container registry supports two types of migrations:
- **Regular schema migrations**: Changes to the database structure that must run before deploying new application code, also known as pre-deployment migrations. These should be fast (no more than a few minutes) to avoid deployment delays.
- **Post-deployment migrations**: Changes to the database structure that can run while the application is running. Used for longer operations like creating indexes on large tables, avoiding startup delays and extended upgrade downtime.
By default, the registry applies both regular schema and post-deployment migrations simultaneously.
To reduce downtime during upgrades, you can skip post-deployment migrations and apply them manually after the application starts.
### Apply database migrations
To apply both regular schema and post-deployment migrations before the application starts:
1. Run database migrations:
```shell
sudo gitlab-ctl registry-database migrate up
```
To skip post-deployment migrations:
1. Run regular schema migrations only:
```shell
sudo gitlab-ctl registry-database migrate up --skip-post-deployment
```
As an alternative to the `--skip-post-deployment` flag, you can also set the `SKIP_POST_DEPLOYMENT_MIGRATIONS` environment variable to `true`:
```shell
SKIP_POST_DEPLOYMENT_MIGRATIONS=true sudo gitlab-ctl registry-database migrate up
```
1. After starting the application, apply any pending post-deployment migrations:
```shell
sudo gitlab-ctl registry-database migrate up
```
{{< alert type="note" >}}
The `migrate up` command offers some extra flags that can be used to control how the migrations are applied.
Run `sudo gitlab-ctl registry-database migrate up --help` for details.
{{< /alert >}}
## Online garbage collection monitoring
The initial runs of online garbage collection following the import process vary
in duration based on the number of imported images. You should monitor the efficiency and
health of your online garbage collection during this period.
### Monitor database performance
After completing an import, expect the database to experience a period of high load as
the garbage collection queues drain. This high load is caused by a high number of individual database calls
from the online garbage collector processing the queued tasks.
Regularly check PostgreSQL and registry logs for any errors or warnings. In the registry logs,
pay special attention to logs filtered by `component=registry.gc.*`.
### Track metrics
Use monitoring tools like Prometheus and Grafana to visualize and track garbage collection metrics,
focusing on metrics with a prefix of `registry_gc_*`. These include the number of objects
marked for deletion, objects successfully deleted, run intervals, and durations.
See [enable the registry debug server](container_registry_troubleshooting.md#enable-the-registry-debug-server)
for how to enable Prometheus.
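For a quick look without a full monitoring stack, you can scrape the metrics endpoint directly. The debug address below is an assumption; use the `debug_addr` configured for your registry:

```shell
# List current garbage collection metric samples from the registry debug endpoint
# (assumes the debug server listens on localhost:5001).
curl --silent http://localhost:5001/metrics | grep '^registry_gc_'
```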
### Queue monitoring
Check the size of the queues by counting the rows in the `gc_blob_review_queue` and
`gc_manifest_review_queue` tables. Large queues are expected initially, with the number of rows
proportional to the number of imported blobs and manifests. The queues should reduce over time,
indicating that garbage collection is successfully reviewing jobs.
```sql
SELECT COUNT(*) FROM gc_blob_review_queue;
SELECT COUNT(*) FROM gc_manifest_review_queue;
```
Interpreting queue sizes:
- Shrinking queues: indicate garbage collection is successfully processing tasks.
- Near-zero `gc_manifest_review_queue`: most images flagged for potential deletion
  have been reviewed and classified either as still in use or removed.
- Overdue tasks: check for overdue GC tasks by running the following queries:
```sql
SELECT COUNT(*) FROM gc_blob_review_queue WHERE review_after < NOW();
SELECT COUNT(*) FROM gc_manifest_review_queue WHERE review_after < NOW();
```
Large queue sizes are not concerning as long as they are decreasing over time
and the number of overdue tasks stays close to zero. A persistently high number of
overdue tasks indicates a problem and should prompt an urgent inspection of the logs.
Check GC logs for messages indicating that blobs are still in use, for example `msg=the blob is not dangling`,
which implies they will not be deleted.
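If the queues are shrinking, two samples taken a known time apart give a rough estimate of how long draining will take. The counts below are hypothetical placeholders, not values from a real instance:

```shell
# Hypothetical gc_blob_review_queue counts sampled one hour apart.
prev=150000
curr=138000

# Tasks processed during that hour.
rate=$(( prev - curr ))

# Rough hours remaining at the current rate (integer division).
eta=$(( curr / rate ))

echo "Processed ${rate} tasks/hour, roughly ${eta} hours remaining"
```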
### Adjust blobs interval
If the size of your `gc_blob_review_queue` is high and you want the garbage collection
blob and manifest workers to run more frequently, reduce your interval configuration
from the default (`5s`) to `1s`:
```ruby
registry['gc'] = {
'blobs' => {
'interval' => '1s'
},
'manifests' => {
'interval' => '1s'
}
}
```
After the import load has been cleared, you should fine-tune these settings for the long term
to avoid unnecessary CPU load on the database and registry instances. You can gradually increase
the interval to a value that balances performance and resource usage.
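While tuning the interval, you can periodically sample the queue counts to confirm they keep shrinking. This is a sketch only: it assumes local `psql` access and a registry database named `registry`:

```shell
# Sample GC queue sizes every 5 minutes for an hour
# (assumes local psql access and a database named "registry").
for _ in $(seq 1 12); do
  psql --dbname=registry --tuples-only --command \
    "SELECT now(),
            (SELECT count(*) FROM gc_blob_review_queue)     AS blobs,
            (SELECT count(*) FROM gc_manifest_review_queue) AS manifests;"
  sleep 300
done
```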
### Validate data consistency
To ensure data consistency after the import, use the [`crane validate`](https://github.com/google/go-containerregistry/blob/main/cmd/crane/doc/crane_validate.md)
tool. This tool checks that all image layers and manifests in your container registry
are accessible and correctly linked. By running `crane validate`, you confirm that
the images in your registry are complete and accessible, ensuring a successful import.
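For example, to check a single image (the registry host and image path below are placeholders):

```shell
# Validate that the manifest and layers of one image are complete and pullable.
crane validate --remote registry.gitlab.example.com/<group>/<project>/<image>:<tag>
```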
### Review cleanup policies
If most of your images are tagged, garbage collection won't significantly reduce storage space
because it only deletes untagged images.
Implement cleanup policies to remove unneeded tags, which eventually causes images
to be removed through garbage collection and storage space being recovered.
## Backup with metadata database
{{< alert type="note" >}}
If you have configured your own database for container registry metadata,
you must manage backups manually. `gitlab-backup` does not backup the metadata database.
{{< /alert >}}
When the metadata database is enabled, backups must capture both the object storage
used by the registry, as before, but also the database. Backups of object storage
and the database should be coordinated to capture the state of the registry as close as possible
to each other. To restore the registry, you must apply both backups together.
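As an illustrative sketch only (the database name, bucket name, and tooling are assumptions, not GitLab-provided commands), a coordinated backup could dump the database and snapshot object storage as close together in time as possible:

```shell
# Sketch of a coordinated registry backup. All names are placeholders:
# adjust the database, bucket, and destinations to your environment.
STAMP=$(date +%Y%m%dT%H%M%S)

# 1. Dump the registry metadata database (assumes local pg_dump access).
pg_dump --format=custom --dbname=registry --file="registry-db-${STAMP}.dump"

# 2. Snapshot the registry object storage immediately afterwards
#    (assumes the AWS CLI and an S3 bucket named gitlab-registry-storage).
aws s3 sync s3://gitlab-registry-storage "registry-objects-${STAMP}/"
```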
## Downgrade a registry
To downgrade the registry to a previous version after the import is complete,
you must restore a backup taken with the desired version.
## Database architecture with Geo
When using GitLab Geo with the container registry, you must configure separate database and
object storage stacks for the registry at each site. Geo replication to the
container registry uses events generated from registry notifications,
rather than by database replication.
### Prerequisites
Each Geo site requires a separate, site-specific:
1. PostgreSQL instance for the container registry database.
1. Object storage instance for the container registry.
1. Container registry configured to use these site-specific resources.
This diagram illustrates the data flow and basic architecture:
```mermaid
flowchart TB
subgraph "Primary site"
P_Rails[GitLab Rails]
P_Reg[Container registry]
P_RegDB[(Registry database)]
P_Obj[(Object storage)]
P_Reg --> P_RegDB
P_RegDB --> P_Obj
end
subgraph "Secondary site"
S_Rails[GitLab Rails]
S_Reg[Container registry]
S_RegDB[(Registry database)]
S_Obj[(Object storage)]
S_Reg --> S_RegDB
S_RegDB --> S_Obj
end
P_Reg -- "Notifications" --> P_Rails
P_Rails -- "Events" --> S_Rails
S_Rails --> S_Reg
classDef primary fill:#d1f7c4
classDef secondary fill:#b8d4ff
class P_Rails,P_Reg,P_RegDB,P_Obj primary
class S_Rails,S_Reg,S_RegDB,S_Obj secondary
```
Use separate database instances on each site because:
1. The main GitLab database is replicated to the secondary site as read-only.
1. This replication cannot be selectively disabled for the registry database.
1. The container registry requires write access to its database at both sites.
1. Homogeneous setups ensure the greatest parity between Geo sites.
## Revert to object storage metadata
You can revert your registry to use object storage metadata after completing a metadata import.
{{< alert type="warning" >}}
When you revert to object storage metadata, any container images, tags, or repositories
added or deleted between the import completion and this revert operation are not available.
{{< /alert >}}
To revert to object storage metadata:
1. Restore a [backup](../backup_restore/backup_gitlab.md#container-registry) taken before the migration.
1. Add the following configuration to your `/etc/gitlab/gitlab.rb` file:
```ruby
registry['database'] = {
'enabled' => false,
}
```
1. Save the file and [reconfigure GitLab](../restart_gitlab.md#reconfigure-a-linux-package-installation).
## Troubleshooting
### Error: `there are pending database migrations`
If the registry has been updated and there are pending schema migrations,
the registry fails to start with the following error message:
```shell
FATA[0000] configuring application: there are pending database migrations, use the 'registry database migrate' CLI command to check and apply them
```
To fix this issue, follow the steps to [apply database migrations](#apply-database-migrations).
### Error: `offline garbage collection is no longer possible`
If the registry uses the metadata database and you try to run
[offline garbage collection](container_registry.md#container-registry-garbage-collection),
the registry fails with the following error message:
```shell
ERRO[0000] this filesystem is managed by the metadata database, and offline garbage collection is no longer possible, if you are not using the database anymore, remove the file at the lock_path in this log message lock_path=/docker/registry/lockfiles/database-in-use
```
You must either:
- Stop using offline garbage collection.
- If you no longer use the metadata database, delete the indicated lock file at the `lock_path` shown in the error message.
For example, remove the `/docker/registry/lockfiles/database-in-use` file.
### Error: `cannot execute <STATEMENT> in a read-only transaction`
The registry could fail to [apply database migrations](#apply-database-migrations)
with the following error message:
```shell
err="ERROR: cannot execute CREATE TABLE in a read-only transaction (SQLSTATE 25006)"
```
Also, the registry could fail with the following error message if you try to run
[online garbage collection](container_registry.md#performing-garbage-collection-without-downtime):
```shell
error="processing task: fetching next GC blob task: scanning GC blob task: ERROR: cannot execute SELECT FOR UPDATE in a read-only transaction (SQLSTATE 25006)"
```
You must verify that read-only transactions are disabled by checking the values of
`default_transaction_read_only` and `transaction_read_only` in the PostgreSQL console.
For example:
```sql
# SHOW default_transaction_read_only;
default_transaction_read_only
-------------------------------
on
(1 row)
# SHOW transaction_read_only;
transaction_read_only
-----------------------
on
(1 row)
```
If either of these values is set to `on`, you must disable it:
1. Edit your `postgresql.conf` and set the following value:
```shell
default_transaction_read_only=off
```
1. Restart your PostgreSQL server to apply these settings.
1. Try to [apply database migrations](#apply-database-migrations) again, if applicable.
1. Restart the registry: `sudo gitlab-ctl restart registry`.
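If you cannot edit `postgresql.conf` directly, `ALTER SYSTEM` is a standard PostgreSQL alternative. Because `default_transaction_read_only` takes effect for new sessions after a configuration reload, a full restart may not be required:

```sql
-- Persist the setting in postgresql.auto.conf, then reload the configuration.
ALTER SYSTEM SET default_transaction_read_only = off;
SELECT pg_reload_conf();
```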
### Error: `cannot import all repositories while the tags table has entries`
If you try to [import existing registry metadata](#existing-registries) and encounter the following error:
```shell
ERRO[0000] cannot import all repositories while the tags table has entries, you must truncate the table manually before retrying,
see https://docs.gitlab.com/ee/administration/packages/container_registry_metadata_database.html#troubleshooting
common_blobs=true dry_run=false error="tags table is not empty"
```
This error happens when there are existing entries in the `tags` table of the registry database,
which can happen if you:
- Attempted the [one step import](#one-step-import) and encountered errors.
- Attempted the [three-step import](#three-step-import) process and encountered errors.
- Stopped the import process on purpose.
- Tried to run the import again after any of the previous actions.
- Ran the import against the wrong configuration file.
To resolve this issue, you must delete the existing entries in the tags table.
You must truncate the table manually on your PostgreSQL instance:
1. Edit `/etc/gitlab/gitlab.rb` and ensure the metadata database is **disabled**:
```ruby
registry['database'] = {
'enabled' => false,
'host' => '<registry_database_host_placeholder_change_me>',
'port' => 5432, # Default, but set to the port of your database instance if it differs.
'user' => '<registry_database_username_placeholder_change_me>',
'password' => '<registry_database_placeholder_change_me>',
'dbname' => '<registry_database_name_placeholder_change_me>',
'sslmode' => 'require', # See the PostgreSQL documentation for additional information https://www.postgresql.org/docs/16/libpq-ssl.html.
'sslcert' => '</path/to/cert.pem>',
'sslkey' => '</path/to/private.key>',
'sslrootcert' => '</path/to/ca.pem>'
}
```
1. Connect to your registry database using a PostgreSQL client.
1. Truncate the `tags` table to remove all existing entries:
```sql
TRUNCATE TABLE tags RESTART IDENTITY CASCADE;
```
1. After truncating the `tags` table, try running the import process again.
### Error: `database-in-use lockfile exists`
If you try to [import existing registry metadata](#existing-registries) and encounter the following error:
```shell
| [0s] step two: import tags failed to import metadata: importing all repositories: 1 error occurred:
* could not restore lockfiles: database-in-use lockfile exists
```
This error means that you have previously run the importer and completed importing all
repository data (step two), so the `database-in-use` lock file exists in the registry file system.
You should not run the importer again if you encounter this issue.
If you must proceed, you must delete the `database-in-use` lock file manually from the file system.
The file is located at `/path/to/rootdirectory/docker/registry/lockfiles/database-in-use`.
### Error: `pre importing all repositories: AccessDenied:`
You might receive an `AccessDenied` error when [importing existing registries](#existing-registries)
and using AWS S3 as your storage backend:
```shell
/opt/gitlab/embedded/bin/registry database import --step-one /var/opt/gitlab/registry/config.yml
[0s] step one: import manifests
[0s] step one: import manifests failed to import metadata: pre importing all repositories: AccessDenied: Access Denied
```
Ensure that the user executing the command has the
correct [permission scopes](https://docker-docs.uclv.cu/registry/storage-drivers/s3/#s3-permission-scopes).
### Registry fails to start due to metadata management issues
The registry could fail to start with one of the following errors:
#### Error: `registry filesystem metadata in use, please import data before enabling the database`
This error happens when the database is enabled in your configuration (`registry['database'] = { 'enabled' => true }`),
but you have not yet [imported existing registry metadata](#existing-registries) into the metadata database.
#### Error: `registry metadata database in use, please enable the database`
This error happens when you have completed the [import of existing registry metadata](#existing-registries) to the metadata database,
but you have not enabled the database in your configuration.
#### Problems checking or creating the lock files
If you encounter any of the following errors:
- `could not check if filesystem metadata is locked`
- `could not check if database metadata is locked`
- `failed to mark filesystem for database only usage`
- `failed to mark filesystem only usage`
The registry cannot access the configured `rootdirectory`. This error is unlikely to happen if you
had a working registry previously. Review the error logs for any misconfiguration issues.
### Storage usage not decreasing after deleting tags
By default, the online garbage collector starts deleting unreferenced layers only 48 hours after
all tags they were associated with were deleted. This delay ensures that the garbage collector does
not interfere with long-running or interrupted image pushes, as layers are pushed to the registry before
they are associated with an image and tag.
### Error: `permission denied for schema public (SQLSTATE 42501)`
During a registry migration, you might get one of the following errors:
- `ERROR: permission denied for schema public (SQLSTATE 42501)`
- `ERROR: relation "public.blobs" does not exist (SQLSTATE 42P01)`
These types of errors are due to a change in PostgreSQL 15+, which removes the default CREATE privileges on the public schema for security reasons.
By default, only database owners can create objects in the public schema in PostgreSQL 15+.
To resolve the error, run the following command to give a registry user owner privileges of the registry database:
```sql
ALTER DATABASE <registry_database_name> OWNER TO <registry_user>;
```
This gives the registry user the necessary permissions to create tables and run migrations successfully.
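Alternatively, if you prefer not to change database ownership, granting `CREATE` on the `public` schema restores the pre-PostgreSQL-15 behavior for that user. This may resolve the permission error, though the ownership change above remains the more complete fix:

```sql
-- Allow the registry user to create objects in the public schema again.
GRANT CREATE ON SCHEMA public TO <registry_user>;
```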
---

# GitLab container registry administration

Source: `doc/administration/packages/container_registry.md` (Stage: Package, Group: Container Registry), published at <https://docs.gitlab.com/administration/container_registry>, repository copy at <https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/administration/packages/container_registry.md>, extracted 2025-08-13.
{{< details >}}
- Tier: Free, Premium, Ultimate
- Offering: GitLab Self-Managed
{{< /details >}}
{{< alert type="note" >}}
The [next-generation container registry](container_registry_metadata_database.md)
is now available for upgrade on GitLab Self-Managed instances.
This upgraded registry supports online garbage collection, and has significant performance
and reliability improvements.
{{< /alert >}}
With the GitLab container registry, every project can have its
own space to store Docker images.
For more details about the Distribution Registry:
- [Configuration](https://distribution.github.io/distribution/about/configuration/)
- [Storage drivers](https://distribution.github.io/distribution/storage-drivers/)
- [Deploy a registry server](https://distribution.github.io/distribution/about/deploying/)
This document is the administrator's guide. To learn how to use the GitLab Container
Registry, see the [user documentation](../../user/packages/container_registry/_index.md).
## Enable the container registry
The process for enabling the container registry depends on the type of installation you use.
### Linux package installations
If you installed GitLab by using the Linux package, the container registry
may or may not be available by default.
The container registry is automatically enabled and available on your GitLab domain, port 5050 if
you're using the built-in [Let's Encrypt integration](https://docs.gitlab.com/omnibus/settings/ssl/#enable-the-lets-encrypt-integration).
Otherwise, the container registry is not enabled. To enable it:
- You can configure it for your [GitLab domain](#configure-container-registry-under-an-existing-gitlab-domain), or
- You can configure it for [a different domain](#configure-container-registry-under-its-own-domain).
The container registry works under HTTPS by default. You can use HTTP
but it's not recommended and is beyond the scope of this document.
### Helm Charts installations
For Helm Charts installations, see [Using the container registry](https://docs.gitlab.com/charts/charts/registry/)
in the Helm Charts documentation.
### Self-compiled installations
If you self-compiled your GitLab installation:
1. You must deploy a registry using the image corresponding to the
version of GitLab you are installing
(for example: `registry.gitlab.com/gitlab-org/build/cng/gitlab-container-registry:v3.15.0-gitlab`)
1. After the installation is complete, to enable it, you must configure the Registry's
settings in `gitlab.yml`.
1. Use the sample NGINX configuration file from under
[`lib/support/nginx/registry-ssl`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/support/nginx/registry-ssl) and edit it to match the
`host`, `port`, and TLS certificate paths.
The contents of `gitlab.yml` are:
```yaml
registry:
enabled: true
host: <registry.gitlab.example.com>
port: <5005>
api_url: <http://localhost:5000/>
key: <config/registry.key>
path: <shared/registry>
issuer: <gitlab-issuer>
```
Where:
| Parameter | Description |
| --------- | ----------- |
| `enabled` | `true` or `false`. Enables the Registry in GitLab. By default this is `false`. |
| `host` | The host URL under which the Registry runs and which users can use. |
| `port` | The port the external Registry domain listens on. |
| `api_url` | The internal API URL under which the Registry is exposed. It defaults to `http://localhost:5000`. Do not change this unless you are setting up an [external Docker registry](#use-an-external-container-registry-with-gitlab-as-an-auth-endpoint). |
| `key` | The private key location that is a pair of Registry's `rootcertbundle`. |
| `path` | This should be the same directory as specified in Registry's `rootdirectory`. This path must be readable by the GitLab user, the web-server user, and the Registry user. |
| `issuer` | This should be the same value as configured in Registry's `issuer`. |
A Registry init file is not shipped with GitLab if you install it from source.
Hence, [restarting GitLab](../restart_gitlab.md#self-compiled-installations) does not restart the Registry should
you modify its settings. Read the upstream documentation on how to achieve that.
At the **absolute** minimum, make sure your Registry configuration
has `container_registry` as the service and `https://gitlab.example.com/jwt/auth`
as the realm:
```yaml
auth:
token:
realm: <https://gitlab.example.com/jwt/auth>
service: container_registry
issuer: gitlab-issuer
rootcertbundle: /root/certs/certbundle
```
{{< alert type="warning" >}}
If `auth` is not set up, users can pull Docker images without authentication.
{{< /alert >}}
## Container registry domain configuration
You can configure the Registry's external domain in either of these ways:
- [Use the existing GitLab domain](#configure-container-registry-under-an-existing-gitlab-domain).
The Registry listens on a port and reuses the TLS certificate from GitLab.
- [Use a completely separate domain](#configure-container-registry-under-its-own-domain) with a new TLS certificate
for that domain.
Because the container registry requires a TLS certificate, cost may be a factor.
Take this into consideration before configuring the container registry
for the first time.
### Configure container registry under an existing GitLab domain
If the container registry is configured to use the existing GitLab domain, you can
expose the container registry on a port. This way you can reuse the existing GitLab TLS
certificate.
If the GitLab domain is `https://gitlab.example.com` and the port to the outside world is `5050`,
to configure the container registry:
- Edit `gitlab.rb` if you are using a Linux package installation.
- Edit `gitlab.yml` if you are using a self-compiled installation.
Ensure you choose a port different than the one that Registry listens to (`5000` by default),
otherwise conflicts occur.
{{< alert type="note" >}}
Host and container firewall rules must be configured to allow traffic in through the port listed
under the `registry_external_url` line, rather than the port listed under
`gitlab_rails['registry_port']` (default `5000`).
{{< /alert >}}
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
1. Your `/etc/gitlab/gitlab.rb` should contain the Registry URL as well as the
path to the existing TLS certificate and key used by GitLab:
```ruby
registry_external_url '<https://gitlab.example.com:5050>'
```
The `registry_external_url` is listening on HTTPS under the
existing GitLab URL, but on a different port.
If your TLS certificate is not in `/etc/gitlab/ssl/gitlab.example.com.crt`
and key not in `/etc/gitlab/ssl/gitlab.example.com.key` uncomment the lines
below:
```ruby
registry_nginx['ssl_certificate'] = "</path/to/certificate.pem>"
registry_nginx['ssl_certificate_key'] = "</path/to/certificate.key>"
```
1. Save the file and [reconfigure GitLab](../restart_gitlab.md#reconfigure-a-linux-package-installation)
for the changes to take effect.
1. Validate using:
```shell
openssl s_client -showcerts -servername gitlab.example.com -connect gitlab.example.com:5050 > cacert.pem
```
If your certificate provider provides the CA Bundle certificates, append them to the TLS certificate file.
An administrator may want the container registry listening on an arbitrary port such as `5678`.
However, the registry and application server are behind an AWS application load balancer that only
listens on ports `80` and `443`. The administrator may remove the port number for
`registry_external_url`, so HTTP or HTTPS is assumed. Then, the rules apply that map the load
balancer to the registry from ports `80` or `443` to the arbitrary port. This is important if users
rely on the `docker login` example in the container registry. Here's an example:
```ruby
registry_external_url '<https://registry-gitlab.example.com>'
registry_nginx['redirect_http_to_https'] = true
registry_nginx['listen_port'] = 5678
```
{{< /tab >}}
{{< tab title="Self-compiled (source)" >}}
1. Open `/home/git/gitlab/config/gitlab.yml`, find the `registry` entry and
configure it with the following settings:
```yaml
registry:
enabled: true
host: <gitlab.example.com>
port: 5050
```
1. Save the file and [restart GitLab](../restart_gitlab.md#self-compiled-installations) for the changes to take effect.
1. Make the relevant changes in NGINX as well (domain, port, TLS certificates path).
{{< /tab >}}
{{< /tabs >}}
Users should now be able to sign in to the container registry with their GitLab
credentials using:
```shell
docker login <gitlab.example.com:5050>
```
### Configure container registry under its own domain
When the Registry is configured to use its own domain, you need a TLS
certificate for that specific domain (for example, `registry.example.com`). You might need
a wildcard certificate if hosted under a subdomain of your existing GitLab
domain. For example, `*.gitlab.example.com`, is a wildcard that matches `registry.gitlab.example.com`,
and is distinct from `*.example.com`.
As well as manually generated SSL certificates (explained here), certificates automatically
generated by Let's Encrypt are also [supported in Linux package installations](https://docs.gitlab.com/omnibus/settings/ssl/).
Let's assume that you want the container registry to be accessible at
`https://registry.gitlab.example.com`.
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
1. Place your TLS certificate and key in
`/etc/gitlab/ssl/<registry.gitlab.example.com>.crt` and
`/etc/gitlab/ssl/<registry.gitlab.example.com>.key` and make sure they have
correct permissions:
```shell
chmod 600 /etc/gitlab/ssl/<registry.gitlab.example.com>.*
```
1. After the TLS certificate is in place, edit `/etc/gitlab/gitlab.rb` with:
```ruby
registry_external_url '<https://registry.gitlab.example.com>'
```
The `registry_external_url` is listening on HTTPS.
1. Save the file and [reconfigure GitLab](../restart_gitlab.md#reconfigure-a-linux-package-installation) for the changes to take effect.
If you have a [wildcard certificate](https://en.wikipedia.org/wiki/Wildcard_certificate), you must specify the path to the
certificate in addition to the URL, in this case `/etc/gitlab/gitlab.rb`
looks like:
```ruby
registry_nginx['ssl_certificate'] = "/etc/gitlab/ssl/certificate.pem"
registry_nginx['ssl_certificate_key'] = "/etc/gitlab/ssl/certificate.key"
```
{{< /tab >}}
{{< tab title="Self-compiled (source)" >}}
1. Open `/home/git/gitlab/config/gitlab.yml`, find the `registry` entry and
configure it with the following settings:
```yaml
registry:
enabled: true
host: <registry.gitlab.example.com>
```
1. Save the file and [restart GitLab](../restart_gitlab.md#self-compiled-installations) for the changes to take effect.
1. Make the relevant changes in NGINX as well (domain, port, TLS certificates path).
{{< /tab >}}
{{< /tabs >}}
Users should now be able to sign in to the container registry using their GitLab
credentials:
```shell
docker login <registry.gitlab.example.com>
```
## Disable container registry site-wide
When you disable the Registry by following these steps, you do not
remove any existing Docker images. Docker image removal is handled by the
Registry application itself.
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
1. Open `/etc/gitlab/gitlab.rb` and set `registry['enable']` to `false`:
```ruby
registry['enable'] = false
```
1. Save the file and [reconfigure GitLab](../restart_gitlab.md#reconfigure-a-linux-package-installation) for the changes to take effect.
{{< /tab >}}
{{< tab title="Self-compiled (source)" >}}
1. Open `/home/git/gitlab/config/gitlab.yml`, find the `registry` entry and
set `enabled` to `false`:
```yaml
registry:
enabled: false
```
1. Save the file and [restart GitLab](../restart_gitlab.md#self-compiled-installations) for the changes to take effect.
{{< /tab >}}
{{< /tabs >}}
## Disable container registry for new projects site-wide
If the container registry is enabled, it is available on all new
projects by default. To disable this behavior and let project owners enable
the container registry themselves, follow the steps below.
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
1. Edit `/etc/gitlab/gitlab.rb` and add the following line:
```ruby
gitlab_rails['gitlab_default_projects_features_container_registry'] = false
```
1. Save the file and [reconfigure GitLab](../restart_gitlab.md#reconfigure-a-linux-package-installation) for the changes to take effect.
{{< /tab >}}
{{< tab title="Self-compiled (source)" >}}
1. Open `/home/git/gitlab/config/gitlab.yml`, find the `default_projects_features`
entry and configure it so that `container_registry` is set to `false`:
```yaml
## Default project features settings
default_projects_features:
issues: true
merge_requests: true
wiki: true
snippets: false
builds: true
container_registry: false
```
1. Save the file and [restart GitLab](../restart_gitlab.md#self-compiled-installations) for the changes to take effect.
{{< /tab >}}
{{< /tabs >}}
### Increase token duration
In GitLab, tokens for the container registry expire every five minutes.
To increase the token duration:
1. On the left sidebar, at the bottom, select **Admin**.
1. Select **Settings > CI/CD**.
1. Expand **Container Registry**.
1. For the **Authorization token duration (minutes)**, update the value.
1. Select **Save changes**.
## Configure storage for the container registry
{{< alert type="note" >}}
For storage backends that support it, you can use object versioning to preserve, retrieve, and
restore the non-current versions of every object stored in your buckets. However, this may result in
higher storage usage and costs. Due to how the registry operates, image uploads are first stored in
a temporary path and then transferred to a final location. For object storage backends, including S3
and GCS, this transfer is achieved with a copy followed by a delete. With object versioning enabled,
these deleted temporary upload artifacts are kept as non-current versions, therefore increasing the
storage bucket size. To ensure that non-current versions are deleted after a given amount of time,
you should configure an object lifecycle policy with your storage provider.
{{< /alert >}}
{{< alert type="warning" >}}
Do not directly modify the files or objects stored by the container registry. Anything other than the registry writing or deleting these entries can lead to instance-wide data consistency and instability issues from which recovery may not be possible.
{{< /alert >}}
You can configure the container registry to use various storage backends by
configuring a storage driver. By default the GitLab container registry
is configured to use the [file system driver](#use-file-system)
configuration.
The different supported drivers are:
| Driver | Description |
|--------------|--------------------------------------|
| `filesystem` | Uses a path on the local file system |
| `azure` | Microsoft Azure Blob Storage |
| `gcs` | Google Cloud Storage |
| `s3` | Amazon Simple Storage Service. Be sure to configure your storage bucket with the correct [S3 Permission Scopes](https://distribution.github.io/distribution/storage-drivers/s3/#s3-permission-scopes). |
Although most S3-compatible services (like [MinIO](https://min.io/)) should work with the container registry,
we only guarantee support for AWS S3. Because we cannot assert the correctness of third-party S3 implementations,
we can debug issues, but we cannot patch the registry unless an issue is reproducible against an AWS S3 bucket.
### Use file system
If you want to store your images on the file system, you can change the storage
path for the container registry, follow the steps below.
This path must be accessible to:
- The user running the container registry daemon.
- The user running GitLab.
- The web server user.
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
The default location where images are stored in Linux package installations is
`/var/opt/gitlab/gitlab-rails/shared/registry`. To change it:
1. Edit `/etc/gitlab/gitlab.rb`:
```ruby
gitlab_rails['registry_path'] = "</path/to/registry/storage>"
```
1. Save the file and [reconfigure GitLab](../restart_gitlab.md#reconfigure-a-linux-package-installation) for the changes to take effect.
{{< /tab >}}
{{< tab title="Self-compiled (source)" >}}
The default location where images are stored in self-compiled installations is
`/home/git/gitlab/shared/registry`. To change it:
1. Open `/home/git/gitlab/config/gitlab.yml`, find the `registry` entry and
change the `path` setting:
```yaml
registry:
path: shared/registry
```
1. Save the file and [restart GitLab](../restart_gitlab.md#self-compiled-installations) for the changes to take effect.
{{< /tab >}}
{{< /tabs >}}
### Use object storage
If you want to store your container registry images in object storage instead of the local file system,
you can configure one of the supported storage drivers.
For more information, see [Object storage](../object_storage.md).
{{< alert type="warning" >}}
GitLab does not back up Docker images that are not stored on the
file system. Enable backups with your object storage provider if
desired.
{{< /alert >}}
#### Configure object storage for Linux package installations
To configure object storage for your container registry:
1. Choose the storage driver you want to use.
1. Edit `/etc/gitlab/gitlab.rb` with the appropriate configuration.
1. Save the file and [reconfigure GitLab](../restart_gitlab.md#reconfigure-a-linux-package-installation) for the changes to take effect.
{{< tabs >}}
{{< tab title="S3" >}}
The S3 storage driver integrates with Amazon S3 or any S3-compatible object storage service.
<!--- start_remove The following content will be removed on remove_date: '2025-08-15' -->
{{< alert type="warning" >}}
The S3 storage driver that uses AWS SDK v1 was [deprecated](https://gitlab.com/gitlab-org/gitlab/-/issues/523095) in GitLab 17.10 and is planned for removal in GitLab 19.0.
Use the `s3_v2` driver (in Beta) instead when it becomes available in May 2025. This driver offers improved performance, reliability, and compatibility with AWS authentication requirements. While this is a breaking change, the new driver has been thoroughly tested and is designed to be a drop-in replacement for most configurations.
Make sure to test the new driver in non-production environments before deploying to production to ensure compatibility with your specific setup and usage patterns. This allows you to identify and address any edge cases unique to your environment.
Report any issues or feedback using [issue 525855](https://gitlab.com/gitlab-org/gitlab/-/issues/525855).
{{< /alert >}}
<!--- end_remove -->
The `s3_v2` driver (in Beta) uses AWS SDK v2 and only supports Signature Version 4 for authentication.
This driver improves performance and reliability while ensuring compatibility with AWS authentication requirements,
as support for older signature methods is deprecated. For more information, see [epic 16272](https://gitlab.com/groups/gitlab-org/-/epics/16272).
For a complete list of configuration parameters for each driver, see [`s3_v1`](https://gitlab.com/gitlab-org/container-registry/-/blob/f4ece8cdba4413b968c8a3fd20497a8186f23d26/docs/storage-drivers/s3_v1.md) and [`s3_v2`](https://gitlab.com/gitlab-org/container-registry/-/blob/f4ece8cdba4413b968c8a3fd20497a8186f23d26/docs/storage-drivers/s3_v2.md).
To configure the S3 storage driver, add one of the following configurations to your `/etc/gitlab/gitlab.rb` file:
```ruby
# Deprecated: Will be removed in GitLab 19.0
registry['storage'] = {
's3' => {
'accesskey' => '<s3-access-key>',
'secretkey' => '<s3-secret-key-for-access-key>',
'bucket' => '<your-s3-bucket>',
'region' => '<your-s3-region>',
'regionendpoint' => '<your-s3-regionendpoint>'
}
}
```
Or
```ruby
# Beta: s3_v2 driver
registry['storage'] = {
's3_v2' => {
'accesskey' => '<s3-access-key>',
'secretkey' => '<s3-secret-key-for-access-key>',
'bucket' => '<your-s3-bucket>',
'region' => '<your-s3-region>',
'regionendpoint' => '<your-s3-regionendpoint>'
}
}
```
For improved security, you can use an IAM role instead of static credentials by not including the `accesskey` and `secretkey` parameters.
To prevent storage cost increases, configure a lifecycle policy in your S3 bucket to purge incomplete multipart uploads.
The container registry does not automatically clean these up.
A three-day expiration policy for incomplete multipart uploads works well for most usage patterns.
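As a sketch of such a policy (assuming the `aws-sdk-s3` Ruby gem; the rule name is a hypothetical example), the three-day abort rule can be expressed as the hash you would pass to `Aws::S3::Client#put_bucket_lifecycle_configuration`:

```ruby
# Hedged sketch: a lifecycle rule that aborts incomplete multipart uploads
# after three days. The rule ID is a hypothetical example.
lifecycle_rules = {
  rules: [
    {
      id: "abort-incomplete-registry-uploads", # hypothetical name
      status: "Enabled",
      filter: {}, # apply to every object in the bucket
      abort_incomplete_multipart_upload: { days_after_initiation: 3 }
    }
  ]
}

# With the aws-sdk-s3 gem, you would then apply it with something like:
# Aws::S3::Client.new.put_bucket_lifecycle_configuration(
#   bucket: "<your-s3-bucket>", lifecycle_configuration: lifecycle_rules)
```

You can also create the equivalent rule in the AWS console or with the AWS CLI; only the three-day abort behavior matters, not the tooling.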
{{< alert type="note" >}}
`loglevel` settings differ between the [`s3_v1`](https://gitlab.com/gitlab-org/container-registry/-/blob/f4ece8cdba4413b968c8a3fd20497a8186f23d26/docs/storage-drivers/s3_v1.md#configuration-parameters) and [`s3_v2`](https://gitlab.com/gitlab-org/container-registry/-/blob/f4ece8cdba4413b968c8a3fd20497a8186f23d26/docs/storage-drivers/s3_v2.md#configuration-parameters) drivers.
If you set the `loglevel` for the wrong driver, it is ignored and a warning message is printed.
{{< /alert >}}
When using MinIO with the `s3_v2` driver, add the `checksum_disabled` parameter to disable AWS checksums:
```ruby
registry['storage'] = {
's3_v2' => {
'accesskey' => '<s3-access-key>',
'secretkey' => '<s3-secret-key-for-access-key>',
'bucket' => '<your-s3-bucket>',
'region' => '<your-s3-region>',
'regionendpoint' => '<your-s3-regionendpoint>',
'checksum_disabled' => true
}
}
```
For S3 VPC endpoints:
```ruby
registry['storage'] = {
's3_v2' => { # Beta driver
'accesskey' => '<s3-access-key>',
'secretkey' => '<s3-secret-key-for-access-key>',
'bucket' => '<your-s3-bucket>',
'region' => '<your-s3-region>',
'regionendpoint' => '<your-s3-vpc-endpoint>',
'pathstyle' => false
}
}
```
S3 configuration parameters:
- `<your-s3-bucket>`: The name of an existing bucket. Cannot include subdirectories.
- `regionendpoint`: Required only when using an S3-compatible service like MinIO or an AWS S3 VPC Endpoint.
- `pathstyle`: Controls URL formatting. Set to `true` for `host/bucket_name/object` (most S3-compatible services) or `false` for `bucket_name.host/object` (AWS S3).
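As a quick illustration of how `pathstyle` changes the request URL shape (all names below are hypothetical examples, not values you should configure):

```ruby
# Illustration only: the two S3 addressing styles.
host   = "s3.us-east-1.amazonaws.com"   # hypothetical endpoint
bucket = "my-registry-bucket"           # hypothetical bucket
object = "docker/registry/v2/blobs/data" # hypothetical object key

path_style    = "https://#{host}/#{bucket}/#{object}" # pathstyle: true
virtual_style = "https://#{bucket}.#{host}/#{object}" # pathstyle: false

puts path_style
puts virtual_style
```

Most S3-compatible services only resolve the path-style form, while AWS S3 prefers the virtual-hosted form.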
To avoid 503 errors from the S3 API, add the `maxrequestspersecond` parameter to set a rate limit on connections:
```ruby
registry['storage'] = {
's3' => {
'accesskey' => '<s3-access-key>',
'secretkey' => '<s3-secret-key-for-access-key>',
'bucket' => '<your-s3-bucket>',
'region' => '<your-s3-region>',
'regionendpoint' => '<your-s3-regionendpoint>',
'maxrequestspersecond' => 100
}
}
```
{{< /tab >}}
{{< tab title="Azure" >}}
The Azure storage driver integrates with Microsoft Azure Blob Storage.
{{< alert type="warning" >}}
The legacy Azure storage driver was [deprecated](https://gitlab.com/gitlab-org/gitlab/-/issues/523096) in GitLab 17.10 and is planned for removal in GitLab 19.0.
Use the `azure_v2` driver (in Beta) instead. This driver offers improved performance, reliability, and modern authentication methods. While this is a breaking change, the new driver has been extensively tested to ensure a smooth transition for most configurations.
Make sure to test the new driver in non-production environments before deploying to production to identify and address any edge cases specific to your environment and usage patterns.
Report any issues or feedback using [issue 525855](https://gitlab.com/gitlab-org/gitlab/-/issues/525855).
{{< /alert >}}
For a complete list of configuration parameters for each driver, see [`azure_v1`](https://gitlab.com/gitlab-org/container-registry/-/blob/7b1786d261481a3c69912ad3423225f47f7c8242/docs/storage-drivers/azure_v1.md) and [`azure_v2`](https://gitlab.com/gitlab-org/container-registry/-/blob/7b1786d261481a3c69912ad3423225f47f7c8242/docs/storage-drivers/azure_v2.md).
To configure the Azure storage driver, add one of the following configurations to your `/etc/gitlab/gitlab.rb` file:
```ruby
# Deprecated: Will be removed in GitLab 19.0
registry['storage'] = {
'azure' => {
'accountname' => '<your_storage_account_name>',
'accountkey' => '<base64_encoded_account_key>',
'container' => '<container_name>'
}
}
```
Or
```ruby
# Beta: azure_v2 driver
registry['storage'] = {
'azure_v2' => {
'credentials_type' => '<client_secret>',
'tenant_id' => '<your_tenant_id>',
'client_id' => '<your_client_id>',
'secret' => '<your_secret>',
'container' => '<your_container>',
'accountname' => '<your_account_name>'
}
}
```
By default, the Azure storage driver uses the `core.windows.net` realm. You can set another value for `realm` in the `azure` section (for example, `core.usgovcloudapi.net` for Azure Government Cloud).
{{< /tab >}}
{{< tab title="GCS" >}}
The GCS storage driver integrates with Google Cloud Storage.
```ruby
registry['storage'] = {
'gcs' => {
'bucket' => '<your_bucket_name>',
'keyfile' => '<path/to/keyfile>',
# If you have the bucket shared with other apps beyond the registry, uncomment the following:
# 'rootdirectory' => '/gcs/object/name/prefix'
}
}
```
GitLab supports all [available parameters](https://docs.docker.com/registry/storage-drivers/gcs/).
{{< /tab >}}
{{< /tabs >}}
#### Self-compiled installations
Configuring the storage driver is done in the registry configuration YAML file created
when you deployed your Docker registry.
`s3` storage driver example:
```yaml
storage:
s3:
accesskey: '<s3-access-key>' # Not needed if IAM role used
secretkey: '<s3-secret-key-for-access-key>' # Not needed if IAM role used
bucket: '<your-s3-bucket>'
region: '<your-s3-region>'
regionendpoint: '<your-s3-regionendpoint>'
cache:
blobdescriptor: inmemory
delete:
enabled: true
```
`<your-s3-bucket>` should be the name of a bucket that exists, and can't include subdirectories.
#### Migrate to object storage without downtime
{{< alert type="warning" >}}
Using [AWS DataSync](https://aws.amazon.com/datasync/)
to copy the registry data to or between S3 buckets creates invalid metadata objects in the bucket.
For additional details, see [Tags with an empty name](container_registry_troubleshooting.md#tags-with-an-empty-name).
To move data to and between S3 buckets, the AWS CLI `sync` operation is recommended.
{{< /alert >}}
To migrate storage without stopping the container registry, set the container registry
to read-only mode. On large instances, this may require the container registry
to be in read-only mode for a while. During this time,
you can pull from the container registry, but you cannot push.
1. Optional: To reduce the amount of data to be migrated, run the [garbage collection tool without downtime](#performing-garbage-collection-without-downtime).
1. This example uses the `aws` CLI. If you haven't configured the
CLI before, you have to configure your credentials by running `sudo aws configure`.
Because a non-administrator user likely can't access the container registry folder,
ensure you use `sudo`. To check your credential configuration, run
[`ls`](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3/ls.html) to list
all buckets.
```shell
sudo aws --endpoint-url <https://your-object-storage-backend.com> s3 ls
```
If you are using AWS as your back end, you do not need the [`--endpoint-url`](https://docs.aws.amazon.com/cli/latest/reference/#options).
1. Copy initial data to your S3 bucket, for example with the `aws` CLI
[`cp`](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3/cp.html)
or [`sync`](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3/sync.html)
command. Make sure to keep the `docker` folder as the top-level folder inside the bucket.
```shell
sudo aws --endpoint-url <https://your-object-storage-backend.com> s3 sync registry s3://mybucket
```
{{< alert type="note" >}}
If you have a lot of data, you may be able to improve performance by
[running parallel sync operations](https://repost.aws/knowledge-center/s3-improve-transfer-sync-command).
{{< /alert >}}
1. To perform the final data sync,
[put the container registry in `read-only` mode](#performing-garbage-collection-without-downtime) and
[reconfigure GitLab](../restart_gitlab.md#reconfigure-a-linux-package-installation).
1. Sync any changes dating from after the initial data load to your S3 bucket, and delete files that exist in the destination bucket but not in the source:
```shell
sudo aws --endpoint-url <https://your-object-storage-backend.com> s3 sync registry s3://mybucket --delete --dryrun
```
After verifying the command performs as expected, remove the
[`--dryrun`](https://docs.aws.amazon.com/cli/latest/reference/s3/sync.html)
flag and run the command.
{{< alert type="warning" >}}
The [`--delete`](https://docs.aws.amazon.com/cli/latest/reference/s3/sync.html)
flag deletes files that exist in the destination but not in the source.
If you swap the source and destination, all data in the Registry is deleted.
{{< /alert >}}
1. Verify all container registry files have been uploaded to object storage
by looking at the file count returned by these two commands:
```shell
sudo find registry -type f | wc -l
```
```shell
sudo aws --endpoint-url <https://your-object-storage-backend.com> s3 ls s3://<mybucket> --recursive | wc -l
```
The output of these commands should match, except for the content in the
`_uploads` directories and subdirectories.
1. Configure your registry to [use the S3 bucket for storage](#use-object-storage).
1. For the changes to take effect, set the Registry back to `read-write` mode and [reconfigure GitLab](../restart_gitlab.md#reconfigure-a-linux-package-installation).
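The local side of the file-count comparison in the steps above can also exclude the `_uploads` directories directly. A small Ruby sketch (the `registry_file_count` helper is hypothetical, not part of GitLab):

```ruby
require 'find'

# Count files under the local registry path, skipping the transient
# `_uploads` directories so the number is comparable with the bucket listing.
def registry_file_count(root)
  count = 0
  Find.find(root) do |path|
    if File.directory?(path) && File.basename(path) == "_uploads"
      Find.prune # skip transient upload state entirely
    elsif File.file?(path)
      count += 1
    end
  end
  count
end

# Example (path assumes the default Linux package registry location):
# puts registry_file_count("/var/opt/gitlab/gitlab-rails/shared/registry")
```

Run it as a user that can read the registry directory, for the same reason `sudo` is needed for the `find` command above.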
#### Moving to Azure Object Storage
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
```ruby
registry['storage'] = {
'azure' => {
'accountname' => '<your_storage_account_name>',
'accountkey' => '<base64_encoded_account_key>',
'container' => '<container_name>',
'trimlegacyrootprefix' => true
}
}
```
{{< /tab >}}
{{< tab title="Self-compiled (source)" >}}
```yaml
storage:
azure:
accountname: <your_storage_account_name>
accountkey: <base64_encoded_account_key>
container: <container_name>
trimlegacyrootprefix: true
```
{{< /tab >}}
{{< /tabs >}}
By default, Azure Storage Driver uses the `core.windows.net` realm. You can set another value for `realm` in the `azure` section (for example, `core.usgovcloudapi.net` for Azure Government Cloud).
### Disable redirect for storage driver
By default, users accessing a registry configured with a remote backend are redirected to the default backend for the storage driver. For example, registries can be configured using the `s3` storage driver, which redirects requests to a remote S3 bucket to alleviate load on the GitLab server.
However, this behavior is undesirable for registries used by internal hosts that usually can't access public servers. To disable redirects and enable [proxy download](../object_storage.md#proxy-download), set the `disable` flag to `true` as follows. This makes all traffic always go through the Registry service, which improves security (the storage backend is not publicly accessible, reducing the attack surface) at the cost of performance (all traffic is proxied through the service).
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
1. Edit `/etc/gitlab/gitlab.rb`:
```ruby
registry['storage'] = {
's3' => {
'accesskey' => '<s3_access_key>',
'secretkey' => '<s3_secret_key_for_access_key>',
'bucket' => '<your_s3_bucket>',
'region' => '<your_s3_region>',
'regionendpoint' => '<your_s3_regionendpoint>'
},
'redirect' => {
'disable' => true
}
}
```
1. Save the file and [reconfigure GitLab](../restart_gitlab.md#reconfigure-a-linux-package-installation) for the changes to take effect.
{{< /tab >}}
{{< tab title="Self-compiled (source)" >}}
1. Add the `redirect` flag to your registry configuration YAML file:
```yaml
storage:
s3:
accesskey: '<s3_access_key>'
secretkey: '<s3_secret_key_for_access_key>'
bucket: '<your_s3_bucket>'
region: '<your_s3_region>'
regionendpoint: '<your_s3_regionendpoint>'
redirect:
disable: true
cache:
blobdescriptor: inmemory
delete:
enabled: true
```
1. Save the file and [restart GitLab](../restart_gitlab.md#self-compiled-installations) for the changes to take effect.
{{< /tab >}}
{{< /tabs >}}
#### Encrypted S3 buckets
You can use server-side encryption with AWS KMS for S3 buckets that have
[SSE-S3 or SSE-KMS encryption enabled by default](https://docs.aws.amazon.com/kms/latest/developerguide/services-s3.html).
Customer master keys (CMKs) and SSE-C encryption aren't supported because this requires sending the
encryption keys in every request.
For SSE-S3, you must enable the `encrypt` option in the registry settings. How you do this depends
on how you installed GitLab. Follow the instructions here that match your installation method.
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
1. Edit `/etc/gitlab/gitlab.rb`:
```ruby
registry['storage'] = {
's3' => {
'accesskey' => '<s3_access_key>',
'secretkey' => '<s3_secret_key_for_access_key>',
'bucket' => '<your_s3_bucket>',
'region' => '<your_s3_region>',
'regionendpoint' => '<your_s3_regionendpoint>',
'encrypt' => true
}
}
```
1. Save the file and [reconfigure GitLab](../restart_gitlab.md#reconfigure-a-linux-package-installation)
for the changes to take effect.
{{< /tab >}}
{{< tab title="Self-compiled (source)" >}}
1. Edit your registry configuration YAML file:
```yaml
storage:
s3:
accesskey: '<s3_access_key>'
secretkey: '<s3_secret_key_for_access_key>'
bucket: '<your_s3_bucket>'
region: '<your_s3_region>'
regionendpoint: '<your_s3_regionendpoint>'
encrypt: true
```
1. Save the file and [restart GitLab](../restart_gitlab.md#self-compiled-installations)
for the changes to take effect.
{{< /tab >}}
{{< /tabs >}}
### Storage limitations
There is no storage limitation, which means a user can upload an
unlimited number of Docker images of arbitrary sizes. This setting should become
configurable in future releases.
## Change the registry's internal port
By default, the Registry server listens on `localhost` at port `5000`,
the address at which it accepts connections.
In the examples below, the Registry's port is set to `5010`.
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
1. Open `/etc/gitlab/gitlab.rb` and set `registry['registry_http_addr']`:
```ruby
registry['registry_http_addr'] = "localhost:5010"
```
1. Save the file and [reconfigure GitLab](../restart_gitlab.md#reconfigure-a-linux-package-installation) for the changes to take effect.
{{< /tab >}}
{{< tab title="Self-compiled (source)" >}}
1. Open the configuration file of your Registry server and edit the
[`http:addr`](https://distribution.github.io/distribution/about/configuration/#http) value:
```yaml
http:
addr: localhost:5010
```
1. Save the file and restart the Registry server.
{{< /tab >}}
{{< /tabs >}}
## Disable container registry per project
If Registry is enabled in your GitLab instance, but you don't need it for your
project, you can [disable it from your project's settings](../../user/project/settings/_index.md#configure-project-features-and-permissions).
## Use an external container registry with GitLab as an auth endpoint
{{< alert type="warning" >}}
Using third-party container registries in GitLab was [deprecated](https://gitlab.com/gitlab-org/gitlab/-/issues/376217)
in GitLab 15.8 and support ended in GitLab 16.0.
If you need to use third-party container registries instead of the GitLab container registry,
tell us about your use cases in [feedback issue 958](https://gitlab.com/gitlab-org/container-registry/-/issues/958).
{{< /alert >}}
If you use an external container registry, some features associated with the
container registry may be unavailable or have [inherent risks](../../user/packages/container_registry/reduce_container_registry_storage.md#use-with-external-container-registries).
For the integration to work, the external registry must be configured to
use a JSON Web Token to authenticate with GitLab. The
[external registry's runtime configuration](https://distribution.github.io/distribution/about/configuration/#token)
**must** have the following entries:
```yaml
auth:
token:
realm: https://<gitlab.example.com>/jwt/auth
service: container_registry
issuer: gitlab-issuer
rootcertbundle: /root/certs/certbundle
```
Without these entries, the registry logins cannot authenticate with GitLab.
GitLab also remains unaware of
[nested image names](../../user/packages/container_registry/_index.md#naming-convention-for-your-container-images)
under the project hierarchy, like
`registry.example.com/group/project/image-name:tag` or
`registry.example.com/group/project/my/image-name:tag`, and only recognizes
`registry.example.com/group/project:tag`.
### Linux package installations
You can use GitLab as an auth endpoint with an external container registry.
1. Open `/etc/gitlab/gitlab.rb` and set necessary configurations:
```ruby
gitlab_rails['registry_enabled'] = true
gitlab_rails['registry_api_url'] = "https://<external_registry_host>:5000"
gitlab_rails['registry_issuer'] = "gitlab-issuer"
```
- `gitlab_rails['registry_enabled'] = true` is needed to enable GitLab
container registry features and authentication endpoint. The GitLab bundled
container registry service does not start, even with this enabled.
   - `gitlab_rails['registry_api_url'] = "https://<external_registry_host>:5000"`
     must be changed to match the host where the Registry is installed.
     It must specify `https` if the external registry is
     configured to use TLS, and `http` otherwise.
1. A certificate-key pair is required for GitLab and the external container
registry to communicate securely. You need to create a certificate-key
pair, configuring the external container registry with the public
certificate (`rootcertbundle`) and configuring GitLab with the private key.
To do that, add the following to `/etc/gitlab/gitlab.rb`:
```ruby
# registry['internal_key'] should contain the contents of the custom key
# file. Line breaks in the key file should be marked using `\n` character
# Example:
registry['internal_key'] = "---BEGIN RSA PRIVATE KEY---\nMIIEpQIBAA\n"
# Optionally define a custom file for a Linux package installation to write the contents
# of registry['internal_key'] to.
gitlab_rails['registry_key_path'] = "/custom/path/to/registry-key.key"
```
Each time reconfigure is executed, the file specified at `registry_key_path`
gets populated with the content specified by `internal_key`. If
no file is specified, Linux package installations default it to
`/var/opt/gitlab/gitlab-rails/etc/gitlab-registry.key` and populates
it.
1. To change the container registry URL displayed in the GitLab Container
Registry pages, set the following configurations:
```ruby
gitlab_rails['registry_host'] = "<registry.gitlab.example.com>"
gitlab_rails['registry_port'] = "5005"
```
1. Save the file and [reconfigure GitLab](../restart_gitlab.md#reconfigure-a-linux-package-installation)
for the changes to take effect.
### Self-compiled installations
1. Open `/home/git/gitlab/config/gitlab.yml`, and edit the configuration settings under `registry`:
```yaml
## Container registry
registry:
enabled: true
host: "<registry.gitlab.example.com>"
port: "5005"
api_url: "https://<external_registry_host>:5000"
path: /var/lib/registry
key: </path/to/keyfile>
issuer: gitlab-issuer
```
[Read more](#enable-the-container-registry) about what these parameters mean.
1. Save the file and [restart GitLab](../restart_gitlab.md#self-compiled-installations) for the changes to take effect.
## Configure container registry notifications
You can configure the container registry to send webhook notifications in
response to events happening in the registry.
Read more about the container registry notifications configuration options in the
[Docker Registry notifications documentation](https://distribution.github.io/distribution/about/notifications/).
{{< alert type="warning" >}}
Support for the `threshold` parameter was [deprecated](https://gitlab.com/gitlab-org/container-registry/-/issues/1243)
in GitLab 17.0, and is planned for removal in 18.0. Use `maxretries` instead.
{{< /alert >}}
You can configure multiple endpoints for the container registry.
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
To configure a notification endpoint for a Linux package installation:
1. Edit `/etc/gitlab/gitlab.rb`:
```ruby
registry['notifications'] = [
{
'name' => '<test_endpoint>',
'url' => 'https://<gitlab.example.com>/api/v4/container_registry_event/events',
'timeout' => '500ms',
'threshold' => 5, # DEPRECATED: use `maxretries` instead.
'maxretries' => 5,
'backoff' => '1s',
'headers' => {
"Authorization" => ["<AUTHORIZATION_EXAMPLE_TOKEN>"]
}
}
]
gitlab_rails['registry_notification_secret'] = '<AUTHORIZATION_EXAMPLE_TOKEN>' # Must match the auth token in registry['notifications']
```
{{< alert type="note" >}}
Replace `<AUTHORIZATION_EXAMPLE_TOKEN>` with a case-sensitive alphanumeric string
that starts with a letter. You can generate one with `< /dev/urandom tr -dc _A-Z-a-z-0-9 | head -c 32 | sed "s/^[0-9]*//"; echo`
{{< /alert >}}
1. Save the file and [reconfigure GitLab](../restart_gitlab.md#reconfigure-a-linux-package-installation) for the changes to take effect.
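The token one-liner above can also be expressed in plain Ruby (a sketch; any case-sensitive alphanumeric string that starts with a letter works):

```ruby
require 'securerandom'

# Generate a 32-character alphanumeric token that starts with a letter,
# suitable for use as the registry notification secret.
token = SecureRandom.alphanumeric(32)
token = SecureRandom.alphanumeric(32) until token.match?(/\A[A-Za-z]/)

puts token
```

Use the same value for both the `Authorization` header and `gitlab_rails['registry_notification_secret']`.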
{{< /tab >}}
{{< tab title="Self-compiled (source)" >}}
Configuring the notification endpoint is done in your registry configuration YAML file created
when you deployed your Docker registry.
Example:
```yaml
notifications:
endpoints:
- name: <alistener>
disabled: false
url: https://<my.listener.com>/event
headers: <http.Header>
timeout: 500
threshold: 5 # DEPRECATED: use `maxretries` instead.
maxretries: 5
backoff: 1000
```
{{< /tab >}}
{{< /tabs >}}
## Run the cleanup policy
Prerequisites:
- If you use a distributed architecture where the container registry runs on a different node than Sidekiq, follow the steps in [Configure the container registry when using an external Sidekiq](../sidekiq/_index.md#configure-the-container-registry-when-using-an-external-sidekiq).
After you [create a cleanup policy](../../user/packages/container_registry/reduce_container_registry_storage.md#create-a-cleanup-policy), you can run it immediately to reduce the container registry storage space. You don't have to wait for the scheduled cleanup.
To reduce the amount of container registry disk space used by a given project, administrators can:
1. [Check disk space usage by project](#registry-disk-space-usage-by-project) to identify projects that need cleanup.
1. Run the cleanup policy using the GitLab Rails console to remove image tags.
1. [Run garbage collection](#container-registry-garbage-collection) to remove unreferenced layers and untagged manifests.
### Registry disk space usage by project
To find the disk space used by each project, run the following in the
[GitLab Rails console](../operations/rails_console.md#starting-a-rails-console-session):
```ruby
projects_and_size = [["project_id", "creator_id", "registry_size_bytes", "project path"]]
# You need to specify the projects that you want to look through. You can get these in any manner.
projects = Project.last(100)
registry_metadata_database = ContainerRegistry::GitlabApiClient.supports_gitlab_api?
if registry_metadata_database
projects.each do |project|
size = project.container_repositories_size
if size > 0
projects_and_size << [project.project_id, project.creator&.id, size, project.full_path]
end
end
else
projects.each do |project|
project_layers = {}
project.container_repositories.each do |repository|
repository.tags.each do |tag|
tag.layers.each do |layer|
project_layers[layer.digest] ||= layer.size
end
end
end
total_size = project_layers.values.compact.sum
if total_size > 0
projects_and_size << [project.project_id, project.creator&.id, total_size, project.full_path]
end
end
end
# print it as comma separated output
projects_and_size.each do |ps|
puts "%s,%s,%s,%s" % ps
end
```
{{< alert type="note" >}}
The script calculates size based on container image layers. Because layers can be shared across multiple projects, the results are approximate but give a good indication of relative disk usage between projects.
{{< /alert >}}
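To make the raw byte counts printed by the script easier to read, you can add a small formatting helper in the same console session (a sketch; `human_size` is not a GitLab method):

```ruby
# Format a raw byte count as a human-readable string, for example the
# registry size column printed by the script above.
def human_size(bytes)
  return "0 B" if bytes.zero?

  units = %w[B KiB MiB GiB TiB]
  exp = [(Math.log(bytes) / Math.log(1024)).floor, units.size - 1].min
  format("%.1f %s", bytes.to_f / 1024**exp, units[exp])
end

puts human_size(1536) # "1.5 KiB"
```

For example, replace the size value in `projects_and_size` with `human_size(size)` before printing.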
To remove image tags by running the cleanup policy, run the following commands in the
[GitLab Rails console](../operations/rails_console.md):
```ruby
# Numeric ID of the project whose container registry should be cleaned up
P = <project_id>
# Numeric ID of a user with Developer, Maintainer, or Owner role for the project
U = <user_id>
# Get required details / objects
user = User.find_by_id(U)
project = Project.find_by_id(P)
policy = ContainerExpirationPolicy.find_by(project_id: P)
# Loop through each container repository
project.container_repositories.find_each do |repo|
puts repo.attributes
# Start the tag cleanup
puts Projects::ContainerRepository::CleanupTagsService.new(container_repository: repo, current_user: user, params: policy.attributes.except("created_at", "updated_at")).execute
end
```
You can also [run cleanup on a schedule](../../user/packages/container_registry/reduce_container_registry_storage.md#cleanup-policy).
To enable cleanup policies for all projects instance-wide, you need to find all projects
with a container registry, but with the cleanup policy disabled:
```ruby
# Find all projects where Container registry is enabled, and cleanup policies disabled
projects = Project.find_by_sql("SELECT * FROM projects WHERE id IN (SELECT project_id FROM container_expiration_policies WHERE enabled=false AND id IN (SELECT project_id FROM container_repositories))")
# Loop through each project
projects.each do |p|
# Print project IDs and project full names
puts "#{p.id},#{p.full_name}"
end
```
## Container registry metadata database
{{< details >}}
- Tier: Free, Premium, Ultimate
- Offering: GitLab Self-Managed
{{< /details >}}
{{< history >}}
- [Generally available](https://gitlab.com/gitlab-org/gitlab/-/issues/423459) in GitLab 17.3.
{{< /history >}}
The metadata database enables many new registry features, including
online garbage collection, and increases the efficiency of many registry operations.
See the [Container registry metadata database](container_registry_metadata_database.md) page for details.
## Container registry garbage collection
Prerequisites:
- You must have installed GitLab by using a Linux package or the
[GitLab Helm chart](https://docs.gitlab.com/charts/charts/registry/#garbage-collection).
{{< alert type="note" >}}
Retention policies in your object storage provider, such as Amazon S3 Lifecycle, may prevent
objects from being properly deleted.
{{< /alert >}}
The container registry can use considerable amounts of storage space, and you might want to
[reduce storage usage](../../user/packages/container_registry/reduce_container_registry_storage.md).
Among the listed options, deleting tags is the most effective option. However, tag deletion
alone does not delete image layers, it only leaves the underlying image manifests untagged.
To more effectively free up space, the container registry has a garbage collector that can
delete unreferenced layers and (optionally) untagged manifests.
To start the garbage collector, run the following `gitlab-ctl` command:
```shell
sudo gitlab-ctl registry-garbage-collect
```
The time required to perform garbage collection is proportional to the container registry data size.
{{< alert type="warning" >}}
The `registry-garbage-collect` command shuts down the container registry prior to the garbage collection and
only starts it again after garbage collection completes. If you prefer to avoid downtime,
you can manually set the container registry to [read-only mode and bypass `gitlab-ctl`](#performing-garbage-collection-without-downtime).
This command proceeds only if the metadata is in object storage. This command does not proceed
if the [container registry metadata database](#container-registry-metadata-database) is enabled.
{{< /alert >}}
### Understanding the content-addressable layers
Consider the following example, where you first build the image:
```shell
# This builds an image with content of sha256:<111111...>
docker build -t <my.registry.com>/<my.group>/<my.project>:latest .
docker push <my.registry.com>/<my.group>/<my.project>:latest
```
Now, you overwrite `latest` with a new version:
```shell
# This builds an image with content of sha256:<222222...>
docker build -t <my.registry.com>/<my.group>/<my.project>:latest .
docker push <my.registry.com>/<my.group>/<my.project>:latest
```
Now, the `latest` tag points to the manifest of `sha256:<222222...>`.
Due to the architecture of the registry, the original data is still accessible when pulling the
image `<my.registry.com>/<my.group>/<my.project>@sha256:<111111...>`, though it is
no longer directly accessible via the `latest` tag.
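The content addressing behind this can be sketched in a few lines of Ruby (the manifest strings are toy examples, not real manifests):

```ruby
require 'digest'

# A manifest's address is the SHA-256 digest of its bytes, so pushing new
# content under the same tag yields a new digest while the old digest
# remains pullable.
old_manifest = '{"layers":["sha256:111111"]}' # toy stand-in for the first push
new_manifest = '{"layers":["sha256:222222"]}' # toy stand-in for the overwrite

old_digest = "sha256:#{Digest::SHA256.hexdigest(old_manifest)}"
new_digest = "sha256:#{Digest::SHA256.hexdigest(new_manifest)}"

# `latest` now resolves to new_digest; old_digest is still addressable.
puts old_digest
puts new_digest
```

This is why deleting or overwriting a tag does not by itself free any storage: the digest-addressed content survives until garbage collection removes it.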
### Remove unreferenced layers
Image layers are the bulk of the container registry storage. A layer is considered
unreferenced when no image manifest references it. Unreferenced layers are the
default target of the container registry garbage collector.
If you did not change the default location of the configuration file, run:
```shell
sudo gitlab-ctl registry-garbage-collect
```
If you changed the location of the container registry `config.yml`:
```shell
sudo gitlab-ctl registry-garbage-collect /path/to/config.yml
```
You can also [remove all untagged manifests and unreferenced layers](#removing-untagged-manifests-and-unreferenced-layers)
to recover additional space.
### Removing untagged manifests and unreferenced layers
By default, the container registry garbage collector ignores images that are untagged,
and users can keep pulling untagged images by digest. Users can also re-tag images
in the future, making them visible again in the GitLab UI and API.
If you do not care about untagged images and the layers exclusively referenced by these images,
you can delete them all. Use the `-m` flag on the `registry-garbage-collect` command:
```shell
sudo gitlab-ctl registry-garbage-collect -m
```
If you are unsure about deleting untagged images, back up your registry data before proceeding.
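A minimal backup sketch, assuming the default filesystem storage path and a `/var/backups` destination (both paths are assumptions for illustration):

```shell
# Stop the registry so the archive is consistent
sudo gitlab-ctl stop registry

# Archive the registry storage directory before a destructive garbage collection
sudo tar -czf /var/backups/registry-backup-$(date +%F).tar.gz /var/opt/gitlab/gitlab-rails/shared/registry

# Start the registry again
sudo gitlab-ctl start registry
```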
### Performing garbage collection without downtime
To do garbage collection while keeping the container registry online, put the registry
in read-only mode and bypass the built-in `gitlab-ctl registry-garbage-collect` command.
You can pull but not push images while the container registry is in read-only mode. The container
registry must remain in read-only for the full duration of the garbage collection.
By default, the [registry storage path](#configure-storage-for-the-container-registry)
is `/var/opt/gitlab/gitlab-rails/shared/registry`.
To enable the read-only mode:
1. In `/etc/gitlab/gitlab.rb`, specify the read-only mode:
```ruby
registry['storage'] = {
'filesystem' => {
'rootdirectory' => "<your_registry_storage_path>"
},
'maintenance' => {
'readonly' => {
'enabled' => true
}
}
}
```
1. Save and reconfigure GitLab:
```shell
sudo gitlab-ctl reconfigure
```
This puts the container registry into read-only mode.
1. Next, trigger one of the garbage collect commands:
```shell
# Remove unreferenced layers
sudo /opt/gitlab/embedded/bin/registry garbage-collect /var/opt/gitlab/registry/config.yml
# Remove untagged manifests and unreferenced layers
sudo /opt/gitlab/embedded/bin/registry garbage-collect -m /var/opt/gitlab/registry/config.yml
```
This command starts the garbage collection. The time to complete is proportional to the registry data size.
1. When done, change the registry back to read-write mode in `/etc/gitlab/gitlab.rb`:
```ruby
registry['storage'] = {
'filesystem' => {
'rootdirectory' => "<your_registry_storage_path>"
},
'maintenance' => {
'readonly' => {
'enabled' => false
}
}
}
```
1. Save and reconfigure GitLab:
```shell
sudo gitlab-ctl reconfigure
```
### Running the garbage collection on schedule
Ideally, run the registry garbage collection regularly, on a weekly basis,
at a time when the registry is not in use.
The simplest way is to add a crontab job that runs it
once a week.
Create a file under `/etc/cron.d/registry-garbage-collect`:
```shell
SHELL=/bin/sh
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
# Run every Sunday at 04:05am
5 4 * * 0 root gitlab-ctl registry-garbage-collect
```
You may want to add the `-m` flag to [remove untagged manifests and unreferenced layers](#removing-untagged-manifests-and-unreferenced-layers).
### Stop garbage collection
If you anticipate stopping garbage collection, you should manually run garbage collection as
described in [Performing garbage collection without downtime](#performing-garbage-collection-without-downtime).
You can then stop garbage collection by pressing <kbd>Control</kbd>+<kbd>C</kbd>.
Otherwise, interrupting `gitlab-ctl` could leave your registry service in a down state. In this
case, you must find the [garbage collection process](https://gitlab.com/gitlab-org/omnibus-gitlab/-/blob/master/files/gitlab-ctl-commands/registry_garbage_collect.rb#L26-35)
itself on the system so that the `gitlab-ctl` command can bring the registry service back up again.
Also, there's no way to save progress or results during the mark phase of the process. Only once
blobs start being deleted is anything permanent done.
### Continuous zero-downtime garbage collection
You can run garbage collection in the background without the need to schedule it or require read-only mode,
if you migrate to the [metadata database](container_registry_metadata_database.md).
## Scaling by component
This section outlines the potential performance bottlenecks as registry traffic increases by component.
Each subsection is roughly ordered by recommendations that benefit from smaller to larger registry workloads.
The registry is not included in the [reference architectures](../reference_architectures/_index.md),
and there are no scaling guides which target number of seats or requests per second.
### Database
1. **Move to a separate database**: As database load increases, scale vertically by moving the registry metadata database
to a separate physical database. A separate database can increase the amount of resources available
to the registry database while isolating the traffic produced by the registry.
1. **Move to a HA PostgreSQL third-party solution**: Similar to [Praefect](../reference_architectures/5k_users.md#praefect-ha-postgresql-third-party-solution),
moving to a reputable provider or solution enables HA and is suitable for multi-node registry deployments.
You must pick a provider that supports native Postgres partitioning, triggers, and functions,
as the registry makes heavy use of these.
### Registry server
1. **Move to a separate node**: A [separate node](#configure-gitlab-and-registry-on-separate-nodes-linux-package-installations)
is one way to scale vertically to increase the resources available to the container registry server process.
1. **Run multiple registry nodes behind a load balancer**: While the registry can handle
a high amount of traffic with a single large node, the registry is generally intended to
scale horizontally with multiple deployments. Configuring multiple smaller nodes
also enables techniques such as autoscaling.
### Redis Cache
Enabling the [Redis](https://gitlab.com/gitlab-org/container-registry/-/blob/master/docs/configuration.md?ref_type=heads#redis)
cache improves performance, but also enables features such as renaming repositories.
1. **Redis Server**: A single Redis instance is supported and is the simplest way
to access the benefits of the Redis caching.
1. **Redis Sentinel**: Redis Sentinel is also supported and enables the cache to be HA.
1. **Redis Cluster**: Redis Cluster can also be used for further scaling as deployments grow.
### Storage
1. **Local file system**: A local file system is the default and is relatively performant,
but not suitable for multi-node deployments or a large amount of registry data.
1. **Object storage**: [Use object storage](#use-object-storage) to enable the practical storage
of a larger amount of registry data. Object storage is also suitable for multi-node registry deployments.
### Online garbage collection
1. **Adjust defaults**: If online garbage collection is not reliably clearing the [review queues](container_registry_metadata_database.md#queue-monitoring),
you can adjust the `interval` settings in the `manifests` and `blobs` sections under the
[`gc`](https://gitlab.com/gitlab-org/container-registry/-/blob/master/docs/configuration.md?ref_type=heads#gc)
configuration section. The default is `5s`, and these can be configured with milliseconds as well,
for example `500ms`.
1. **Scale horizontally with the registry server**: If you are scaling the registry application horizontally
with multi-node deployments, online garbage collection automatically scales without
the need for configuration changes.
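The interval adjustment described above can be sketched in the registry `config.yml` (the values are illustrative, not recommendations):

```yaml
gc:
  blobs:
    interval: 500ms
  manifests:
    interval: 500ms
```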
## Configure GitLab and registry on separate nodes (Linux package installations)
By default, the GitLab package assumes both services run on the same node.
Running them on separate nodes requires separate configuration.
### Configuration options
The following configuration options should be set in `/etc/gitlab/gitlab.rb` on the respective nodes.
#### Registry node settings
| Option | Description |
| ------------------------------------------ | ----------- |
| `registry['registry_http_addr']` | Network address and port that the registry listens on. Must be reachable by the web server or load balancer. Default: [set programmatically](https://gitlab.com/gitlab-org/omnibus-gitlab/blob/10-3-stable/files/gitlab-cookbooks/gitlab/libraries/registry.rb#L50). |
| `registry['token_realm']` | Authentication endpoint URL, typically the GitLab instance URL. Must be reachable by users. Default: [set programmatically](https://gitlab.com/gitlab-org/omnibus-gitlab/blob/10-3-stable/files/gitlab-cookbooks/gitlab/libraries/registry.rb#L53). |
| `registry['http_secret']` | Security token used to protect against client-side tampering. Generated as a [random string](https://gitlab.com/gitlab-org/omnibus-gitlab/blob/10-3-stable/files/gitlab-cookbooks/gitlab/libraries/registry.rb#L32). |
| `registry['internal_key']` | Token-signing key, created on the registry server but used by GitLab. Default: [automatically generated](https://gitlab.com/gitlab-org/omnibus-gitlab/blob/10-3-stable/files/gitlab-cookbooks/gitlab/recipes/gitlab-rails.rb#L113-119). |
| `registry['internal_certificate']` | Certificate for token signing. Default: [automatically generated](https://gitlab.com/gitlab-org/omnibus-gitlab/blob/10-3-stable/files/gitlab-cookbooks/registry/recipes/enable.rb#L60-66). |
| `registry['rootcertbundle']` | File path where the `internal_certificate` is stored. Default: [set programmatically](https://gitlab.com/gitlab-org/omnibus-gitlab/blob/10-3-stable/files/gitlab-cookbooks/registry/recipes/enable.rb#L60). |
| `registry['health_storagedriver_enabled']` | Enables health monitoring of the storage driver. Default: [set programmatically](https://gitlab.com/gitlab-org/omnibus-gitlab/blob/10-7-stable/files/gitlab-cookbooks/gitlab/libraries/registry.rb#L88). |
| `gitlab_rails['registry_key_path']` | File path where the `internal_key` is stored. Default: [set programmatically](https://gitlab.com/gitlab-org/omnibus-gitlab/blob/10-3-stable/files/gitlab-cookbooks/gitlab/recipes/gitlab-rails.rb#L35). |
| `gitlab_rails['registry_issuer']` | Token issuer name. Must match between registry and GitLab configurations. Default: [set programmatically](https://gitlab.com/gitlab-org/omnibus-gitlab/blob/10-3-stable/files/gitlab-cookbooks/gitlab/attributes/default.rb#L153). |
<!--- start_remove The following content will be removed on remove_date: '2025/08/15' -->
{{< alert type="warning" >}}
Support for authenticating requests using Amazon S3 Signature Version 2 in the container registry is deprecated in GitLab 17.8 and is planned for removal in 18.0. Use Signature Version 4 instead. This is a breaking change. For more information, see [issue 1449](https://gitlab.com/gitlab-org/container-registry/-/issues/1449).
{{< /alert >}}
<!--- end_remove -->
#### GitLab node settings
| Option | Description |
| ----------------------------------- | ----------- |
| `gitlab_rails['registry_enabled']` | Enables the GitLab registry API integration. Must be set to `true`. |
| `gitlab_rails['registry_api_url']` | Internal registry URL used by GitLab (not visible to users). Uses `registry['registry_http_addr']` with scheme. Default: [set programmatically](https://gitlab.com/gitlab-org/omnibus-gitlab/blob/10-3-stable/files/gitlab-cookbooks/gitlab/libraries/registry.rb#L52). |
| `gitlab_rails['registry_host']` | Public registry hostname without scheme (example: `registry.gitlab.example`). This address is shown to users. |
| `gitlab_rails['registry_port']` | Public registry port number shown to users. |
| `gitlab_rails['registry_issuer']` | Token issuer name that must match the registry's configuration. |
| `gitlab_rails['registry_key_path']` | File path to the certificate key used by the registry. |
| `gitlab_rails['internal_key']` | Token-signing key content used by GitLab. |
### Set up the nodes
To configure GitLab and the container registry on separate nodes:
1. On the registry node, edit `/etc/gitlab/gitlab.rb` with the following settings:
```ruby
# Registry server details
# - IP address: 10.30.227.194
# - Domain: registry.example.com
# Disable unneeded services
gitlab_workhorse['enable'] = false
puma['enable'] = false
sidekiq['enable'] = false
postgresql['enable'] = false
redis['enable'] = false
gitlab_kas['enable'] = false
gitaly['enable'] = false
nginx['enable'] = false
# Configure registry settings
registry['enable'] = true
registry['registry_http_addr'] = '0.0.0.0:5000'
registry['token_realm'] = 'https://<gitlab.example.com>'
registry['http_secret'] = '<6b86b273ff34fce19d6b804eff5a3f5747ada4eaa22f1d49c01e52ddb7875b4b>'
# Configure GitLab Rails settings
gitlab_rails['registry_issuer'] = 'omnibus-gitlab-issuer'
gitlab_rails['registry_key_path'] = '/etc/gitlab/gitlab-registry.key'
```
1. On the GitLab node, edit `/etc/gitlab/gitlab.rb` with the following settings:
```ruby
# GitLab server details
# - IP address: 10.30.227.149
# - Domain: gitlab.example.com
# Configure GitLab URL
external_url 'https://<gitlab.example.com>'
# Configure registry settings
gitlab_rails['registry_enabled'] = true
gitlab_rails['registry_api_url'] = '<http://10.30.227.194:5000>'
gitlab_rails['registry_host'] = '<registry.example.com>'
gitlab_rails['registry_port'] = 5000
gitlab_rails['registry_issuer'] = 'omnibus-gitlab-issuer'
gitlab_rails['registry_key_path'] = '/etc/gitlab/gitlab-registry.key'
```
1. Synchronize the `/etc/gitlab/gitlab-secrets.json` file between both nodes:
1. Copy the file from the GitLab node to the registry node.
1. Ensure file permissions are correct.
1. Run `sudo gitlab-ctl reconfigure` on both nodes.
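The synchronization steps above can be sketched as follows, assuming SSH access as `root` from the GitLab node to the registry node at `10.30.227.194`:

```shell
# On the GitLab node: copy the secrets file to the registry node
scp /etc/gitlab/gitlab-secrets.json root@10.30.227.194:/etc/gitlab/gitlab-secrets.json

# On the registry node: restrict ownership and permissions
ssh root@10.30.227.194 'chown root:root /etc/gitlab/gitlab-secrets.json && chmod 0600 /etc/gitlab/gitlab-secrets.json'

# On both nodes: apply the configuration
sudo gitlab-ctl reconfigure
```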
## Container registry architecture
Users can store their own Docker images in the container registry. Because the registry
is client-facing, it is exposed directly
on the web server or load balancer (LB).
```mermaid
%%{init: { "fontFamily": "GitLab Sans" }}%%
flowchart LR
accTitle: Container registry authentication flow
accDescr: Shows how users authenticate with the container registry with GitLab API to push and pull Docker images
A[User] --->|1: Docker login on port 443| C{Frontend load balancer}
C --->|2: connection attempt without token fails| D[Container registry]
C --->|5: connect with token succeeds| D[Container registry]
C --->|3: Docker requests token| E[API frontend]
E --->|4: API returns signed token| C
linkStyle 1 stroke-width:4px,stroke:red
linkStyle 2 stroke-width:4px,stroke:green
```
The authentication flow includes these steps:
1. A user runs `docker login registry.gitlab.example` on their client. This request reaches the web server (or LB) on port 443.
1. The web server connects to the registry backend pool (port 5000 by default). Because the user does not have a valid token, the registry returns a `401 Unauthorized` HTTP code and a URL to get a token. The URL is defined by the [`token_realm`](#registry-node-settings) setting in the registry configuration and points to the GitLab API.
1. The Docker client connects to the GitLab API and obtains a token.
1. The API signs the token with the registry key and sends it to the Docker client.
1. The Docker client logs in again with the token received from the API. The authenticated client can now push and pull Docker images.
Reference: <https://distribution.github.io/distribution/spec/auth/token/>
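You can observe steps 2 through 4 manually. An unauthenticated request to the registry API returns a `401` with a `Www-Authenticate` challenge pointing at the token realm, which can then be queried for a token (hostnames, credentials, and the repository path are placeholders):

```shell
# Step 2: the registry rejects the request and advertises the token realm
curl -si https://registry.gitlab.example/v2/ | grep -i 'www-authenticate'

# Steps 3-4: request a signed pull token from the GitLab token endpoint
curl -s -u <username>:<personal_access_token> \
  "https://gitlab.example.com/jwt/auth?service=container_registry&scope=repository:<group>/<project>:pull"
```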
### Communication between GitLab and the container registry
The container registry cannot authenticate users internally, so it validates credentials through GitLab.
The connection between the registry and GitLab is
TLS encrypted.
GitLab uses the private key to sign tokens, and the registry uses the public key provided
by the certificate to validate the signature.
By default, a self-signed certificate key pair is generated
for all installations. You can override this behavior using the [`internal_key`](#registry-node-settings) setting in the registry configuration.
The following steps describe the communication flow:
1. GitLab interacts with the registry using the registry's private key. When a registry
request is sent, a short-lived (10 minutes), namespace-limited token is generated
and signed with the private key.
1. The registry verifies that the signature matches the registry certificate
specified in its configuration and allows the operation.
1. GitLab processes background jobs through Sidekiq, which also interacts with the registry.
These jobs communicate directly with the registry to handle image deletion.
## Migrate from a third-party registry
Using external container registries in GitLab was [deprecated](https://gitlab.com/gitlab-org/gitlab/-/issues/376217)
in GitLab 15.8 and the end of support occurred in GitLab 16.0. See the [deprecation notice](../../update/deprecations.md#use-of-third-party-container-registries-is-deprecated) for more details.
The integration is not disabled in GitLab 16.0, but support for debugging and fixing issues
is no longer provided. Additionally, the integration is no longer being developed or
enhanced with new features. Third-party registry functionality might be completely removed
after the new GitLab container registry version is available for GitLab Self-Managed (see epic [5521](https://gitlab.com/groups/gitlab-org/-/epics/5521)). Only the GitLab container registry is planned to be supported.
This section has guidance for administrators migrating from third-party registries
to the GitLab container registry. If the third-party container registry you are using is not listed here,
you can describe your use cases in [the feedback issue](https://gitlab.com/gitlab-org/container-registry/-/issues/958).
For all of the instructions provided below, you should try them first on a test environment.
Make sure everything continues to work as expected before replicating it in production.
### Docker Distribution Registry
The [Docker Distribution Registry](https://docs.docker.com/registry/) was donated to the CNCF
and is now known as the [Distribution Registry](https://distribution.github.io/distribution/).
This registry is the open source implementation that the GitLab container registry is based on.
The GitLab container registry is compatible with the basic functionality provided by the Distribution Registry,
including all the supported storage backends. To migrate to the GitLab container registry
you can follow the instructions on this page, and use the same storage backend as the Distribution Registry.
The GitLab container registry should accept the same configuration that you are using for the Distribution Registry.
## Max retries for deleting container images
{{< history >}}
- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/480652) in GitLab 17.5 [with a flag](../feature_flags/_index.md) named `set_delete_failed_container_repository`. Disabled by default.
- [Generally available](https://gitlab.com/gitlab-org/gitlab/-/issues/490354) in GitLab 17.6. Feature flag `set_delete_failed_container_repository` removed.
{{< /history >}}
Errors can happen when deleting container images, so deletions are retried to ensure
the error is not a transient issue. Deletion is retried up to 10 times, with a back-off delay
between retries. This delay gives transient errors more time to resolve between retries.
Setting a maximum number of retries also helps detect if there are any persistent errors
that haven't been solved in between retries. After a deletion fails the maximum number of retries,
the container repository `status` is set to `delete_failed`. With this status, the
repository no longer retries deletions.
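To find repositories in this state, a query like the following from the Rails console can help (a sketch; adjust the output to your needs):

```ruby
# List container repositories whose deletion exhausted all retries
ContainerRepository.where(status: :delete_failed).find_each do |repository|
  puts "#{repository.id}: #{repository.path}"
end
```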
You should investigate any container repositories with a `delete_failed` status and
try to resolve the issue. After the issue is resolved, you can set the repository status
back to `delete_scheduled` so images can start to be deleted again. To update the repository status,
from the rails console:
```ruby
container_repository = ContainerRepository.find(<id>)
container_repository.update(status: 'delete_scheduled')
```
---
stage: Package
group: Container Registry
info: To determine the technical writer assigned to the Stage/Group associated with
this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: GitLab container registry administration
---
{{< details >}}
- Tier: Free, Premium, Ultimate
- Offering: GitLab Self-Managed
{{< /details >}}
{{< alert type="note" >}}
The [next-generation container registry](container_registry_metadata_database.md)
is now available for upgrade on GitLab Self-Managed instances.
This upgraded registry supports online garbage collection, and has significant performance
and reliability improvements.
{{< /alert >}}
With the GitLab container registry, every project can have its
own space to store Docker images.
For more details about the Distribution Registry:
- [Configuration](https://distribution.github.io/distribution/about/configuration/)
- [Storage drivers](https://distribution.github.io/distribution/storage-drivers/)
- [Deploy a registry server](https://distribution.github.io/distribution/about/deploying/)
This document is the administrator's guide. To learn how to use the GitLab Container
Registry, see the [user documentation](../../user/packages/container_registry/_index.md).
## Enable the container registry
The process for enabling the container registry depends on the type of installation you use.
### Linux package installations
If you installed GitLab by using the Linux package, the container registry
may or may not be available by default.
The container registry is automatically enabled and available on your GitLab domain, on port 5050, if
you're using the built-in [Let's Encrypt integration](https://docs.gitlab.com/omnibus/settings/ssl/#enable-the-lets-encrypt-integration).
Otherwise, the container registry is not enabled. To enable it:
- You can configure it for your [GitLab domain](#configure-container-registry-under-an-existing-gitlab-domain), or
- You can configure it for [a different domain](#configure-container-registry-under-its-own-domain).
The container registry works under HTTPS by default. You can use HTTP
but it's not recommended and is beyond the scope of this document.
### Helm Charts installations
For Helm Charts installations, see [Using the container registry](https://docs.gitlab.com/charts/charts/registry/)
in the Helm Charts documentation.
### Self-compiled installations
If you self-compiled your GitLab installation:
1. You must deploy a registry using the image corresponding to the
version of GitLab you are installing
(for example: `registry.gitlab.com/gitlab-org/build/cng/gitlab-container-registry:v3.15.0-gitlab`)
1. After the installation is complete, to enable it, you must configure the Registry's
settings in `gitlab.yml`.
1. Use the sample NGINX configuration file from under
[`lib/support/nginx/registry-ssl`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/support/nginx/registry-ssl) and edit it to match the
`host`, `port`, and TLS certificate paths.
The contents of `gitlab.yml` are:
```yaml
registry:
enabled: true
host: <registry.gitlab.example.com>
port: <5005>
api_url: <http://localhost:5000/>
key: <config/registry.key>
path: <shared/registry>
issuer: <gitlab-issuer>
```
Where:
| Parameter | Description |
| --------- | ----------- |
| `enabled` | `true` or `false`. Enables the Registry in GitLab. By default this is `false`. |
| `host` | The host URL under which the Registry runs and that users can reach. |
| `port` | The port the external Registry domain listens on. |
| `api_url` | The internal API URL under which the Registry is exposed. It defaults to `http://localhost:5000`. Do not change this unless you are setting up an [external Docker registry](#use-an-external-container-registry-with-gitlab-as-an-auth-endpoint). |
| `key` | The private key location that is a pair of Registry's `rootcertbundle`. |
| `path` | The same directory as specified in the Registry's `rootdirectory`. This path must be readable by the GitLab user, the web-server user, and the Registry user. |
| `issuer` | The same value as configured in the Registry's `issuer`. |
A Registry init file is not shipped with GitLab if you install it from source.
Hence, [restarting GitLab](../restart_gitlab.md#self-compiled-installations) does not restart the Registry if
you modify its settings. Read the upstream documentation on how to restart it.
At the **absolute** minimum, make sure your Registry configuration
has `container_registry` as the service and `https://gitlab.example.com/jwt/auth`
as the realm:
```yaml
auth:
token:
realm: <https://gitlab.example.com/jwt/auth>
service: container_registry
issuer: gitlab-issuer
rootcertbundle: /root/certs/certbundle
```
{{< alert type="warning" >}}
If `auth` is not set up, users can pull Docker images without authentication.
{{< /alert >}}
## Container registry domain configuration
You can configure the Registry's external domain in either of these ways:
- [Use the existing GitLab domain](#configure-container-registry-under-an-existing-gitlab-domain).
The Registry listens on a port and reuses the TLS certificate from GitLab.
- [Use a completely separate domain](#configure-container-registry-under-its-own-domain) with a new TLS certificate
for that domain.
Because the container registry requires a TLS certificate, cost may be a factor.
Take this into consideration before configuring the container registry
for the first time.
### Configure container registry under an existing GitLab domain
If the container registry is configured to use the existing GitLab domain, you can
expose the container registry on a port. This way you can reuse the existing GitLab TLS
certificate.
If the GitLab domain is `https://gitlab.example.com` and the port to the outside world is `5050`,
to configure the container registry:
- Edit `gitlab.rb` if you are using a Linux package installation.
- Edit `gitlab.yml` if you are using a self-compiled installation.
Ensure you choose a port different from the one the Registry listens on (`5000` by default),
otherwise conflicts occur.
{{< alert type="note" >}}
Host and container firewall rules must be configured to allow traffic in through the port listed
under the `registry_external_url` line, rather than the port listed under
`gitlab_rails['registry_port']` (default `5000`).
{{< /alert >}}
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
1. Your `/etc/gitlab/gitlab.rb` should contain the Registry URL as well as the
path to the existing TLS certificate and key used by GitLab:
```ruby
registry_external_url '<https://gitlab.example.com:5050>'
```
The `registry_external_url` is listening on HTTPS under the
existing GitLab URL, but on a different port.
If your TLS certificate is not in `/etc/gitlab/ssl/gitlab.example.com.crt`
and your key is not in `/etc/gitlab/ssl/gitlab.example.com.key`, uncomment and set the lines
below:
```ruby
registry_nginx['ssl_certificate'] = "</path/to/certificate.pem>"
registry_nginx['ssl_certificate_key'] = "</path/to/certificate.key>"
```
1. Save the file and [reconfigure GitLab](../restart_gitlab.md#reconfigure-a-linux-package-installation)
for the changes to take effect.
1. Validate using:
```shell
openssl s_client -showcerts -servername gitlab.example.com -connect gitlab.example.com:5050 > cacert.pem
```
If your certificate provider provides the CA Bundle certificates, append them to the TLS certificate file.
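For example, if your provider ships a separate CA bundle, append it after the server certificate in the file the registry serves (file names are assumptions):

```shell
# Append the provider's CA bundle after the server certificate
cat /etc/gitlab/ssl/ca_bundle.pem >> /etc/gitlab/ssl/gitlab.example.com.crt

# Re-run the validation to confirm the full chain is served
openssl s_client -showcerts -servername gitlab.example.com -connect gitlab.example.com:5050 < /dev/null
```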
An administrator may want the container registry listening on an arbitrary port such as `5678`.
However, if the registry and application server are behind an AWS application load balancer that only
listens on ports `80` and `443`, the administrator may remove the port number from
`registry_external_url`, so that HTTP or HTTPS is assumed. Load balancer rules then map
ports `80` or `443` to the arbitrary registry port. This is important if users
rely on the `docker login` example in the container registry. Here's an example:
```ruby
registry_external_url '<https://registry-gitlab.example.com>'
registry_nginx['redirect_http_to_https'] = true
registry_nginx['listen_port'] = 5678
```
{{< /tab >}}
{{< tab title="Self-compiled (source)" >}}
1. Open `/home/git/gitlab/config/gitlab.yml`, find the `registry` entry and
configure it with the following settings:
```yaml
registry:
enabled: true
host: <gitlab.example.com>
port: 5050
```
1. Save the file and [restart GitLab](../restart_gitlab.md#self-compiled-installations) for the changes to take effect.
1. Make the relevant changes in NGINX as well (domain, port, TLS certificates path).
{{< /tab >}}
{{< /tabs >}}
Users should now be able to sign in to the container registry with their GitLab
credentials using:
```shell
docker login <gitlab.example.com:5050>
```
### Configure container registry under its own domain
When the Registry is configured to use its own domain, you need a TLS
certificate for that specific domain (for example, `registry.example.com`). You might need
a wildcard certificate if hosted under a subdomain of your existing GitLab
domain. For example, `*.gitlab.example.com`, is a wildcard that matches `registry.gitlab.example.com`,
and is distinct from `*.example.com`.
As well as manually generated SSL certificates (explained here), certificates automatically
generated by Let's Encrypt are also [supported in Linux package installations](https://docs.gitlab.com/omnibus/settings/ssl/).
Let's assume that you want the container registry to be accessible at
`https://registry.gitlab.example.com`.
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
1. Place your TLS certificate and key in
`/etc/gitlab/ssl/<registry.gitlab.example.com>.crt` and
`/etc/gitlab/ssl/<registry.gitlab.example.com>.key` and make sure they have
correct permissions:
```shell
chmod 600 /etc/gitlab/ssl/<registry.gitlab.example.com>.*
```
1. After the TLS certificate is in place, edit `/etc/gitlab/gitlab.rb` with:
```ruby
registry_external_url '<https://registry.gitlab.example.com>'
```
The `registry_external_url` is listening on HTTPS.
1. Save the file and [reconfigure GitLab](../restart_gitlab.md#reconfigure-a-linux-package-installation) for the changes to take effect.
If you have a [wildcard certificate](https://en.wikipedia.org/wiki/Wildcard_certificate), you must specify the path to the
certificate in addition to the URL, in this case `/etc/gitlab/gitlab.rb`
looks like:
```ruby
registry_nginx['ssl_certificate'] = "/etc/gitlab/ssl/certificate.pem"
registry_nginx['ssl_certificate_key'] = "/etc/gitlab/ssl/certificate.key"
```
{{< /tab >}}
{{< tab title="Self-compiled (source)" >}}
1. Open `/home/git/gitlab/config/gitlab.yml`, find the `registry` entry and
configure it with the following settings:
```yaml
registry:
enabled: true
host: <registry.gitlab.example.com>
```
1. Save the file and [restart GitLab](../restart_gitlab.md#self-compiled-installations) for the changes to take effect.
1. Make the relevant changes in NGINX as well (domain, port, TLS certificates path).
{{< /tab >}}
{{< /tabs >}}
Users should now be able to sign in to the container registry using their GitLab
credentials:
```shell
docker login <registry.gitlab.example.com>
```
## Disable container registry site-wide
When you disable the Registry by following these steps, you do not
remove any existing Docker images. Docker image removal is handled by the
Registry application itself.
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
1. Open `/etc/gitlab/gitlab.rb` and set `registry['enable']` to `false`:
```ruby
registry['enable'] = false
```
1. Save the file and [reconfigure GitLab](../restart_gitlab.md#reconfigure-a-linux-package-installation) for the changes to take effect.
{{< /tab >}}
{{< tab title="Self-compiled (source)" >}}
1. Open `/home/git/gitlab/config/gitlab.yml`, find the `registry` entry and
set `enabled` to `false`:
```yaml
registry:
enabled: false
```
1. Save the file and [restart GitLab](../restart_gitlab.md#self-compiled-installations) for the changes to take effect.
{{< /tab >}}
{{< /tabs >}}
## Disable container registry for new projects site-wide
If the container registry is enabled, it is available on all new
projects by default. To disable this default and let project owners enable
the container registry themselves, follow the steps below.
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
1. Edit `/etc/gitlab/gitlab.rb` and add the following line:
```ruby
gitlab_rails['gitlab_default_projects_features_container_registry'] = false
```
1. Save the file and [reconfigure GitLab](../restart_gitlab.md#reconfigure-a-linux-package-installation) for the changes to take effect.
{{< /tab >}}
{{< tab title="Self-compiled (source)" >}}
1. Open `/home/git/gitlab/config/gitlab.yml`, find the `default_projects_features`
entry and configure it so that `container_registry` is set to `false`:
```yaml
## Default project features settings
default_projects_features:
issues: true
merge_requests: true
wiki: true
snippets: false
builds: true
container_registry: false
```
1. Save the file and [restart GitLab](../restart_gitlab.md#self-compiled-installations) for the changes to take effect.
{{< /tab >}}
{{< /tabs >}}
## Increase token duration
In GitLab, tokens for the container registry expire every five minutes.
To increase the token duration:
1. On the left sidebar, at the bottom, select **Admin**.
1. Select **Settings > CI/CD**.
1. Expand **Container Registry**.
1. For the **Authorization token duration (minutes)**, update the value.
1. Select **Save changes**.
## Configure storage for the container registry
{{< alert type="note" >}}
For storage backends that support it, you can use object versioning to preserve, retrieve, and
restore the non-current versions of every object stored in your buckets. However, this may result in
higher storage usage and costs. Due to how the registry operates, image uploads are first stored in
a temporary path and then transferred to a final location. For object storage backends, including S3
and GCS, this transfer is achieved with a copy followed by a delete. With object versioning enabled,
these deleted temporary upload artifacts are kept as non-current versions, therefore increasing the
storage bucket size. To ensure that non-current versions are deleted after a given amount of time,
you should configure an object lifecycle policy with your storage provider.
{{< /alert >}}
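For example, on Amazon S3 you could apply a bucket lifecycle rule like the following sketch with `aws s3api put-bucket-lifecycle-configuration`. The rule ID and the seven-day window are illustrative; adjust them to your retention needs:

```json
{
  "Rules": [
    {
      "ID": "expire-noncurrent-versions",
      "Status": "Enabled",
      "Filter": {},
      "NoncurrentVersionExpiration": { "NoncurrentDays": 7 }
    }
  ]
}
```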
{{< alert type="warning" >}}
Do not directly modify the files or objects stored by the container registry. Anything other than the registry writing or deleting these entries can lead to instance-wide data consistency and instability issues from which recovery may not be possible.
{{< /alert >}}
You can configure the container registry to use various storage backends by
configuring a storage driver. By default, the GitLab container registry
is configured to use the [file system driver](#use-file-system).
The different supported drivers are:
| Driver | Description |
|--------------|--------------------------------------|
| `filesystem` | Uses a path on the local file system |
| `azure` | Microsoft Azure Blob Storage |
| `gcs` | Google Cloud Storage |
| `s3` | Amazon Simple Storage Service. Be sure to configure your storage bucket with the correct [S3 Permission Scopes](https://distribution.github.io/distribution/storage-drivers/s3/#s3-permission-scopes). |
Although most S3-compatible services (like [MinIO](https://min.io/)) should work with the container registry,
we only guarantee support for AWS S3. Because we cannot assert the correctness of third-party S3 implementations,
we can debug issues, but we cannot patch the registry unless an issue is reproducible against an AWS S3 bucket.
### Use file system
If you want to store your images on the file system, you can change the storage
path for the container registry by following the steps below.
This path must be accessible to:
- The user running the container registry daemon.
- The user running GitLab.
- The web server user.
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
The default location where images are stored in Linux package installations is
`/var/opt/gitlab/gitlab-rails/shared/registry`. To change it:
1. Edit `/etc/gitlab/gitlab.rb`:
```ruby
gitlab_rails['registry_path'] = "</path/to/registry/storage>"
```
1. Save the file and [reconfigure GitLab](../restart_gitlab.md#reconfigure-a-linux-package-installation) for the changes to take effect.
{{< /tab >}}
{{< tab title="Self-compiled (source)" >}}
The default location where images are stored in self-compiled installations is
`/home/git/gitlab/shared/registry`. To change it:
1. Open `/home/git/gitlab/config/gitlab.yml`, find the `registry` entry and
change the `path` setting:
```yaml
registry:
path: shared/registry
```
1. Save the file and [restart GitLab](../restart_gitlab.md#self-compiled-installations) for the changes to take effect.
{{< /tab >}}
{{< /tabs >}}
### Use object storage
If you want to store your container registry images in object storage instead of the local file system,
you can configure one of the supported storage drivers.
For more information, see [Object storage](../object_storage.md).
{{< alert type="warning" >}}
GitLab does not back up Docker images that are not stored on the
file system. Enable backups with your object storage provider if
desired.
{{< /alert >}}
#### Configure object storage for Linux package installations
To configure object storage for your container registry:
1. Choose the storage driver you want to use.
1. Edit `/etc/gitlab/gitlab.rb` with the appropriate configuration.
1. Save the file and [reconfigure GitLab](../restart_gitlab.md#reconfigure-a-linux-package-installation) for the changes to take effect.
{{< tabs >}}
{{< tab title="S3" >}}
The S3 storage driver integrates with Amazon S3 or any S3-compatible object storage service.
<!--- start_remove The following content will be removed on remove_date: '2025-08-15' -->
{{< alert type="warning" >}}
The S3 storage driver that uses AWS SDK v1 was [deprecated](https://gitlab.com/gitlab-org/gitlab/-/issues/523095) in GitLab 17.10 and is planned for removal in GitLab 19.0.
Use the `s3_v2` driver (in Beta) instead when it becomes available in May 2025. This driver offers improved performance, reliability, and compatibility with AWS authentication requirements. While this is a breaking change, the new driver has been thoroughly tested and is designed to be a drop-in replacement for most configurations.
Make sure to test the new driver in non-production environments before deploying to production to ensure compatibility with your specific setup and usage patterns. This allows you to identify and address any edge cases unique to your environment.
Report any issues or feedback using [issue 525855](https://gitlab.com/gitlab-org/gitlab/-/issues/525855).
{{< /alert >}}
<!--- end_remove -->
The `s3_v2` driver (in Beta) uses AWS SDK v2 and only supports Signature Version 4 for authentication.
This driver improves performance and reliability while ensuring compatibility with AWS authentication requirements,
as support for older signature methods is deprecated. For more information, see [epic 16272](https://gitlab.com/groups/gitlab-org/-/epics/16272).
For a complete list of configuration parameters for each driver, see [`s3_v1`](https://gitlab.com/gitlab-org/container-registry/-/blob/f4ece8cdba4413b968c8a3fd20497a8186f23d26/docs/storage-drivers/s3_v1.md) and [`s3_v2`](https://gitlab.com/gitlab-org/container-registry/-/blob/f4ece8cdba4413b968c8a3fd20497a8186f23d26/docs/storage-drivers/s3_v2.md).
To configure the S3 storage driver, add one of the following configurations to your `/etc/gitlab/gitlab.rb` file:
```ruby
# Deprecated: Will be removed in GitLab 19.0
registry['storage'] = {
's3' => {
'accesskey' => '<s3-access-key>',
'secretkey' => '<s3-secret-key-for-access-key>',
'bucket' => '<your-s3-bucket>',
'region' => '<your-s3-region>',
'regionendpoint' => '<your-s3-regionendpoint>'
}
}
```
Or
```ruby
# Beta: s3_v2 driver
registry['storage'] = {
's3_v2' => {
'accesskey' => '<s3-access-key>',
'secretkey' => '<s3-secret-key-for-access-key>',
'bucket' => '<your-s3-bucket>',
'region' => '<your-s3-region>',
'regionendpoint' => '<your-s3-regionendpoint>'
}
}
```
For improved security, you can use an IAM role instead of static credentials by not including the `accesskey` and `secretkey` parameters.
To prevent storage cost increases, configure a lifecycle policy in your S3 bucket to purge incomplete multipart uploads.
The container registry does not automatically clean these up.
A three-day expiration policy for incomplete multipart uploads works well for most usage patterns.
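Using the AWS lifecycle configuration format, such a policy could look like the following sketch (the rule ID is illustrative):

```json
{
  "Rules": [
    {
      "ID": "abort-incomplete-multipart-uploads",
      "Status": "Enabled",
      "Filter": {},
      "AbortIncompleteMultipartUpload": { "DaysAfterInitiation": 3 }
    }
  ]
}
```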
{{< alert type="note" >}}
`loglevel` settings differ between the [`s3_v1`](https://gitlab.com/gitlab-org/container-registry/-/blob/f4ece8cdba4413b968c8a3fd20497a8186f23d26/docs/storage-drivers/s3_v1.md#configuration-parameters) and [`s3_v2`](https://gitlab.com/gitlab-org/container-registry/-/blob/f4ece8cdba4413b968c8a3fd20497a8186f23d26/docs/storage-drivers/s3_v2.md#configuration-parameters) drivers.
If you set the `loglevel` for the wrong driver, it is ignored and a warning message is printed.
{{< /alert >}}
When using MinIO with the `s3_v2` driver, add the `checksum_disabled` parameter to disable AWS checksums:
```ruby
registry['storage'] = {
's3_v2' => {
'accesskey' => '<s3-access-key>',
'secretkey' => '<s3-secret-key-for-access-key>',
'bucket' => '<your-s3-bucket>',
'region' => '<your-s3-region>',
'regionendpoint' => '<your-s3-regionendpoint>',
'checksum_disabled' => true
}
}
```
For S3 VPC endpoints:
```ruby
registry['storage'] = {
's3_v2' => { # Beta driver
'accesskey' => '<s3-access-key>',
'secretkey' => '<s3-secret-key-for-access-key>',
'bucket' => '<your-s3-bucket>',
'region' => '<your-s3-region>',
'regionendpoint' => '<your-s3-vpc-endpoint>',
'pathstyle' => false
}
}
```
S3 configuration parameters:
- `<your-s3-bucket>`: The name of an existing bucket. Cannot include subdirectories.
- `regionendpoint`: Required only when using an S3-compatible service like MinIO or an AWS S3 VPC Endpoint.
- `pathstyle`: Controls URL formatting. Set to `true` for `host/bucket_name/object` (most S3-compatible services) or `false` for `bucket_name.host/object` (AWS S3).
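To illustrate the difference between the two URL styles, here is a minimal Ruby sketch. The hostnames, bucket, and object path are hypothetical:

```ruby
# Illustration only: how the `pathstyle` setting changes the URL used to
# address an object. Host, bucket, and object names are hypothetical.
def object_url(host:, bucket:, object:, pathstyle:)
  if pathstyle
    "https://#{host}/#{bucket}/#{object}"   # path style: host/bucket_name/object
  else
    "https://#{bucket}.#{host}/#{object}"   # virtual-hosted style: bucket_name.host/object
  end
end

puts object_url(host: "minio.example.com", bucket: "registry",
                object: "docker/registry/v2/blobs", pathstyle: true)
# https://minio.example.com/registry/docker/registry/v2/blobs
puts object_url(host: "s3.us-east-1.amazonaws.com", bucket: "registry",
                object: "docker/registry/v2/blobs", pathstyle: false)
# https://registry.s3.us-east-1.amazonaws.com/docker/registry/v2/blobs
```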
To avoid 503 errors from the S3 API, add the `maxrequestspersecond` parameter to set a rate limit on connections:
```ruby
registry['storage'] = {
's3' => {
'accesskey' => '<s3-access-key>',
'secretkey' => '<s3-secret-key-for-access-key>',
'bucket' => '<your-s3-bucket>',
'region' => '<your-s3-region>',
'regionendpoint' => '<your-s3-regionendpoint>',
'maxrequestspersecond' => 100
}
}
```
{{< /tab >}}
{{< tab title="Azure" >}}
The Azure storage driver integrates with Microsoft Azure Blob Storage.
{{< alert type="warning" >}}
The legacy Azure storage driver was [deprecated](https://gitlab.com/gitlab-org/gitlab/-/issues/523096) in GitLab 17.10 and is planned for removal in GitLab 19.0.
Use the `azure_v2` driver (in Beta) instead. This driver offers improved performance, reliability, and modern authentication methods. While this is a breaking change, the new driver has been extensively tested to ensure a smooth transition for most configurations.
Make sure to test the new driver in non-production environments before deploying to production to identify and address any edge cases specific to your environment and usage patterns.
Report any issues or feedback using [issue 525855](https://gitlab.com/gitlab-org/gitlab/-/issues/525855).
{{< /alert >}}
For a complete list of configuration parameters for each driver, see [`azure_v1`](https://gitlab.com/gitlab-org/container-registry/-/blob/7b1786d261481a3c69912ad3423225f47f7c8242/docs/storage-drivers/azure_v1.md) and [`azure_v2`](https://gitlab.com/gitlab-org/container-registry/-/blob/7b1786d261481a3c69912ad3423225f47f7c8242/docs/storage-drivers/azure_v2.md).
To configure the Azure storage driver, add one of the following configurations to your `/etc/gitlab/gitlab.rb` file:
```ruby
# Deprecated: Will be removed in GitLab 19.0
registry['storage'] = {
'azure' => {
'accountname' => '<your_storage_account_name>',
'accountkey' => '<base64_encoded_account_key>',
'container' => '<container_name>'
}
}
```
Or
```ruby
# Beta: azure_v2 driver
registry['storage'] = {
'azure_v2' => {
'credentials_type' => '<client_secret>',
'tenant_id' => '<your_tenant_id>',
'client_id' => '<your_client_id>',
'secret' => '<your_secret>',
'container' => '<your_container>',
'accountname' => '<your_account_name>'
}
}
```
By default, the Azure storage driver uses the `core.windows.net` realm. You can set another value for `realm` in the Azure section (for example, `core.usgovcloudapi.net` for Azure Government Cloud).
{{< /tab >}}
{{< tab title="GCS" >}}
The GCS storage driver integrates with Google Cloud Storage.
```ruby
registry['storage'] = {
'gcs' => {
'bucket' => '<your_bucket_name>',
'keyfile' => '<path/to/keyfile>',
# If you have the bucket shared with other apps beyond the registry, uncomment the following:
# 'rootdirectory' => '/gcs/object/name/prefix'
}
}
```
GitLab supports all [available parameters](https://docs.docker.com/registry/storage-drivers/gcs/).
{{< /tab >}}
{{< /tabs >}}
#### Self-compiled installations
Configuring the storage driver is done in the registry configuration YAML file created
when you deployed your Docker registry.
`s3` storage driver example:
```yaml
storage:
s3:
accesskey: '<s3-access-key>' # Not needed if IAM role used
secretkey: '<s3-secret-key-for-access-key>' # Not needed if IAM role used
bucket: '<your-s3-bucket>'
region: '<your-s3-region>'
regionendpoint: '<your-s3-regionendpoint>'
cache:
blobdescriptor: inmemory
delete:
enabled: true
```
`<your-s3-bucket>` should be the name of a bucket that exists, and can't include subdirectories.
#### Migrate to object storage without downtime
{{< alert type="warning" >}}
Using [AWS DataSync](https://aws.amazon.com/datasync/)
to copy the registry data to or between S3 buckets creates invalid metadata objects in the bucket.
For additional details, see [Tags with an empty name](container_registry_troubleshooting.md#tags-with-an-empty-name).
To move data to and between S3 buckets, the AWS CLI `sync` operation is recommended.
{{< /alert >}}
To migrate storage without stopping the container registry, set the container registry
to read-only mode. On large instances, this may require the container registry
to be in read-only mode for a while. During this time,
you can pull from the container registry, but you cannot push.
1. Optional: To reduce the amount of data to be migrated, run the [garbage collection tool without downtime](#performing-garbage-collection-without-downtime).
1. This example uses the `aws` CLI. If you haven't configured the
CLI before, you have to configure your credentials by running `sudo aws configure`.
Because a non-administrator user likely can't access the container registry folder,
ensure you use `sudo`. To check your credential configuration, run
[`ls`](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3/ls.html) to list
all buckets.
```shell
sudo aws --endpoint-url <https://your-object-storage-backend.com> s3 ls
```
If you are using AWS as your back end, you do not need the [`--endpoint-url`](https://docs.aws.amazon.com/cli/latest/reference/#options).
1. Copy initial data to your S3 bucket, for example with the `aws` CLI
[`cp`](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3/cp.html)
or [`sync`](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3/sync.html)
command. Make sure to keep the `docker` folder as the top-level folder inside the bucket.
```shell
sudo aws --endpoint-url <https://your-object-storage-backend.com> s3 sync registry s3://mybucket
```
{{< alert type="note" >}}
If you have a lot of data, you may be able to improve performance by
[running parallel sync operations](https://repost.aws/knowledge-center/s3-improve-transfer-sync-command).
{{< /alert >}}
1. To perform the final data sync,
[put the container registry in `read-only` mode](#performing-garbage-collection-without-downtime) and
[reconfigure GitLab](../restart_gitlab.md#reconfigure-a-linux-package-installation).
1. Sync any changes dating from after the initial data load to your S3 bucket, and delete files that exist in the destination bucket but not in the source:
```shell
sudo aws --endpoint-url <https://your-object-storage-backend.com> s3 sync registry s3://mybucket --delete --dryrun
```
After verifying the command performs as expected, remove the
[`--dryrun`](https://docs.aws.amazon.com/cli/latest/reference/s3/sync.html)
flag and run the command.
{{< alert type="warning" >}}
The [`--delete`](https://docs.aws.amazon.com/cli/latest/reference/s3/sync.html)
flag deletes files that exist in the destination but not in the source.
If you swap the source and destination, all data in the Registry is deleted.
{{< /alert >}}
1. Verify all container registry files have been uploaded to object storage
by looking at the file count returned by these two commands:
```shell
sudo find registry -type f | wc -l
```
```shell
sudo aws --endpoint-url <https://your-object-storage-backend.com> s3 ls s3://<mybucket> --recursive | wc -l
```
The output of these commands should match, except for the content in the
`_uploads` directories and subdirectories.
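If you want the local count to exclude `_uploads` content up front, `find` can filter it out. The following sketch builds a throwaway directory tree to demonstrate the filter; point `find` at your real registry path instead of `demo_registry`:

```shell
# Demonstration with a throwaway tree; `demo_registry` is a stand-in for
# your real registry storage path.
mkdir -p demo_registry/docker/registry/v2/blobs
mkdir -p demo_registry/docker/registry/v2/repositories/app/_uploads
touch demo_registry/docker/registry/v2/blobs/layer1
touch demo_registry/docker/registry/v2/repositories/app/_uploads/tmp1

# Count files, ignoring anything under an `_uploads` directory:
find demo_registry -type f -not -path '*/_uploads/*' | wc -l
```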
1. Configure your registry to [use the S3 bucket for storage](#use-object-storage).
1. For the changes to take effect, set the Registry back to `read-write` mode and [reconfigure GitLab](../restart_gitlab.md#reconfigure-a-linux-package-installation).
#### Migrate to Azure Object Storage
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
```ruby
registry['storage'] = {
'azure' => {
'accountname' => '<your_storage_account_name>',
'accountkey' => '<base64_encoded_account_key>',
'container' => '<container_name>',
'trimlegacyrootprefix' => true
}
}
```
{{< /tab >}}
{{< tab title="Self-compiled (source)" >}}
```yaml
storage:
azure:
accountname: <your_storage_account_name>
accountkey: <base64_encoded_account_key>
container: <container_name>
trimlegacyrootprefix: true
```
{{< /tab >}}
{{< /tabs >}}
By default, Azure Storage Driver uses the `core.windows.net` realm. You can set another value for `realm` in the `azure` section (for example, `core.usgovcloudapi.net` for Azure Government Cloud).
### Disable redirect for storage driver
By default, users accessing a registry configured with a remote backend are redirected to the default backend for the storage driver. For example, registries can be configured using the `s3` storage driver, which redirects requests to a remote S3 bucket to alleviate load on the GitLab server.
However, this behavior is undesirable for registries used by internal hosts that usually can't access public servers. To disable redirects and [proxy download](../object_storage.md#proxy-download), set the `disable` flag to `true` as follows. This makes all traffic always go through the Registry service, which improves security (the storage backend is not publicly accessible, so the attack surface is smaller) at the cost of performance (all traffic is proxied through the service).
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
1. Edit `/etc/gitlab/gitlab.rb`:
```ruby
registry['storage'] = {
's3' => {
'accesskey' => '<s3_access_key>',
'secretkey' => '<s3_secret_key_for_access_key>',
'bucket' => '<your_s3_bucket>',
'region' => '<your_s3_region>',
'regionendpoint' => '<your_s3_regionendpoint>'
},
'redirect' => {
'disable' => true
}
}
```
1. Save the file and [reconfigure GitLab](../restart_gitlab.md#reconfigure-a-linux-package-installation) for the changes to take effect.
{{< /tab >}}
{{< tab title="Self-compiled (source)" >}}
1. Add the `redirect` flag to your registry configuration YAML file:
```yaml
storage:
s3:
accesskey: '<s3_access_key>'
secretkey: '<s3_secret_key_for_access_key>'
bucket: '<your_s3_bucket>'
region: '<your_s3_region>'
regionendpoint: '<your_s3_regionendpoint>'
redirect:
disable: true
cache:
blobdescriptor: inmemory
delete:
enabled: true
```
1. Save the file and [restart GitLab](../restart_gitlab.md#self-compiled-installations) for the changes to take effect.
{{< /tab >}}
{{< /tabs >}}
#### Encrypted S3 buckets
You can use server-side encryption with AWS KMS for S3 buckets that have
[SSE-S3 or SSE-KMS encryption enabled by default](https://docs.aws.amazon.com/kms/latest/developerguide/services-s3.html).
Customer master keys (CMKs) and SSE-C encryption aren't supported because this requires sending the
encryption keys in every request.
For SSE-S3, you must enable the `encrypt` option in the registry settings. How you do this depends
on how you installed GitLab. Follow the instructions here that match your installation method.
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
1. Edit `/etc/gitlab/gitlab.rb`:
```ruby
registry['storage'] = {
's3' => {
'accesskey' => '<s3_access_key>',
'secretkey' => '<s3_secret_key_for_access_key>',
'bucket' => '<your_s3_bucket>',
'region' => '<your_s3_region>',
'regionendpoint' => '<your_s3_regionendpoint>',
'encrypt' => true
}
}
```
1. Save the file and [reconfigure GitLab](../restart_gitlab.md#reconfigure-a-linux-package-installation)
for the changes to take effect.
{{< /tab >}}
{{< tab title="Self-compiled (source)" >}}
1. Edit your registry configuration YAML file:
```yaml
storage:
s3:
accesskey: '<s3_access_key>'
secretkey: '<s3_secret_key_for_access_key>'
bucket: '<your_s3_bucket>'
region: '<your_s3_region>'
regionendpoint: '<your_s3_regionendpoint>'
encrypt: true
```
1. Save the file and [restart GitLab](../restart_gitlab.md#self-compiled-installations)
for the changes to take effect.
{{< /tab >}}
{{< /tabs >}}
### Storage limitations

There is no storage limitation, which means a user can upload an
unlimited number of Docker images of arbitrary size. This setting is expected to
become configurable in future releases.
## Change the registry's internal port
The Registry server listens on `localhost` at port `5000` by default. This is
the address at which the Registry server accepts connections.
In the following examples, the Registry's port is set to `5010`.
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
1. Open `/etc/gitlab/gitlab.rb` and set `registry['registry_http_addr']`:
```ruby
registry['registry_http_addr'] = "localhost:5010"
```
1. Save the file and [reconfigure GitLab](../restart_gitlab.md#reconfigure-a-linux-package-installation) for the changes to take effect.
{{< /tab >}}
{{< tab title="Self-compiled (source)" >}}
1. Open the configuration file of your Registry server and edit the
[`http:addr`](https://distribution.github.io/distribution/about/configuration/#http) value:
```yaml
http:
addr: localhost:5010
```
1. Save the file and restart the Registry server.
{{< /tab >}}
{{< /tabs >}}
## Disable container registry per project
If the Registry is enabled in your GitLab instance, but you don't need it for your
project, you can [disable it from your project's settings](../../user/project/settings/_index.md#configure-project-features-and-permissions).
## Use an external container registry with GitLab as an auth endpoint
{{< alert type="warning" >}}
Using third-party container registries in GitLab was [deprecated](https://gitlab.com/gitlab-org/gitlab/-/issues/376217)
in GitLab 15.8 and support ended in GitLab 16.0.
If you need to use third-party container registries instead of the GitLab container registry,
tell us about your use cases in [feedback issue 958](https://gitlab.com/gitlab-org/container-registry/-/issues/958).
{{< /alert >}}
If you use an external container registry, some features associated with the
container registry may be unavailable or have [inherent risks](../../user/packages/container_registry/reduce_container_registry_storage.md#use-with-external-container-registries).
For the integration to work, the external registry must be configured to
use a JSON Web Token to authenticate with GitLab. The
[external registry's runtime configuration](https://distribution.github.io/distribution/about/configuration/#token)
**must** have the following entries:
```yaml
auth:
token:
realm: https://<gitlab.example.com>/jwt/auth
service: container_registry
issuer: gitlab-issuer
rootcertbundle: /root/certs/certbundle
```
Without these entries, the registry logins cannot authenticate with GitLab.
GitLab also remains unaware of
[nested image names](../../user/packages/container_registry/_index.md#naming-convention-for-your-container-images)
under the project hierarchy, like
`registry.example.com/group/project/image-name:tag` or
`registry.example.com/group/project/my/image-name:tag`, and only recognizes
`registry.example.com/group/project:tag`.
### Linux package installations
You can use GitLab as an auth endpoint with an external container registry.
1. Open `/etc/gitlab/gitlab.rb` and set necessary configurations:
```ruby
gitlab_rails['registry_enabled'] = true
gitlab_rails['registry_api_url'] = "https://<external_registry_host>:5000"
gitlab_rails['registry_issuer'] = "gitlab-issuer"
```
- `gitlab_rails['registry_enabled'] = true` is needed to enable GitLab
container registry features and authentication endpoint. The GitLab bundled
container registry service does not start, even with this enabled.
- `gitlab_rails['registry_api_url'] = "https://<external_registry_host>:5000"`
must be changed to match the host where the Registry is installed.
It must specify `https` if the external registry is
configured to use TLS, and `http` otherwise.
1. A certificate-key pair is required for GitLab and the external container
registry to communicate securely. You need to create a certificate-key
pair, configuring the external container registry with the public
certificate (`rootcertbundle`) and configuring GitLab with the private key.
To do that, add the following to `/etc/gitlab/gitlab.rb`:
```ruby
# registry['internal_key'] should contain the contents of the custom key
# file. Line breaks in the key file should be marked using `\n` character
# Example:
registry['internal_key'] = "-----BEGIN RSA PRIVATE KEY-----\nMIIEpQIBAA\n"
# Optionally define a custom file for a Linux package installation to write the contents
# of registry['internal_key'] to.
gitlab_rails['registry_key_path'] = "/custom/path/to/registry-key.key"
```
Each time reconfigure is executed, the file specified at `registry_key_path`
is populated with the content specified by `internal_key`. If
no file is specified, Linux package installations default to
`/var/opt/gitlab/gitlab-rails/etc/gitlab-registry.key` and populate
it.
1. To change the container registry URL displayed in the GitLab Container
Registry pages, set the following configurations:
```ruby
gitlab_rails['registry_host'] = "<registry.gitlab.example.com>"
gitlab_rails['registry_port'] = "5005"
```
1. Save the file and [reconfigure GitLab](../restart_gitlab.md#reconfigure-a-linux-package-installation)
for the changes to take effect.
### Self-compiled installations
1. Open `/home/git/gitlab/config/gitlab.yml`, and edit the configuration settings under `registry`:
```yaml
## Container registry
registry:
enabled: true
host: "<registry.gitlab.example.com>"
port: "5005"
api_url: "https://<external_registry_host>:5000"
path: /var/lib/registry
key: </path/to/keyfile>
issuer: gitlab-issuer
```
[Read more](#enable-the-container-registry) about what these parameters mean.
1. Save the file and [restart GitLab](../restart_gitlab.md#self-compiled-installations) for the changes to take effect.
## Configure container registry notifications
You can configure the container registry to send webhook notifications in
response to events happening in the registry.
Read more about the container registry notifications configuration options in the
[Docker Registry notifications documentation](https://distribution.github.io/distribution/about/notifications/).
{{< alert type="warning" >}}
Support for the `threshold` parameter was [deprecated](https://gitlab.com/gitlab-org/container-registry/-/issues/1243)
in GitLab 17.0, and is planned for removal in 18.0. Use `maxretries` instead.
{{< /alert >}}
You can configure multiple endpoints for the container registry.
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
To configure a notification endpoint for a Linux package installation:
1. Edit `/etc/gitlab/gitlab.rb`:
```ruby
registry['notifications'] = [
{
'name' => '<test_endpoint>',
'url' => 'https://<gitlab.example.com>/api/v4/container_registry_event/events',
'timeout' => '500ms',
'threshold' => 5, # DEPRECATED: use `maxretries` instead.
'maxretries' => 5,
'backoff' => '1s',
'headers' => {
"Authorization" => ["<AUTHORIZATION_EXAMPLE_TOKEN>"]
}
}
]
gitlab_rails['registry_notification_secret'] = '<AUTHORIZATION_EXAMPLE_TOKEN>' # Must match the auth token in registry['notifications']
```
{{< alert type="note" >}}
Replace `<AUTHORIZATION_EXAMPLE_TOKEN>` with a case-sensitive alphanumeric string
that starts with a letter. You can generate one with `< /dev/urandom tr -dc _A-Z-a-z-0-9 | head -c 32 | sed "s/^[0-9]*//"; echo`
{{< /alert >}}
1. Save the file and [reconfigure GitLab](../restart_gitlab.md#reconfigure-a-linux-package-installation) for the changes to take effect.
{{< /tab >}}
{{< tab title="Self-compiled (source)" >}}
Configuring the notification endpoint is done in your registry configuration YAML file created
when you deployed your Docker registry.
Example:
```yaml
notifications:
endpoints:
- name: <alistener>
disabled: false
url: https://<my.listener.com>/event
headers: <http.Header>
timeout: 500ms
threshold: 5 # DEPRECATED: use `maxretries` instead.
maxretries: 5
backoff: 1s
```
{{< /tab >}}
{{< /tabs >}}
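For reference, a push event delivered to the endpoint is a JSON envelope along these lines. This is heavily trimmed and uses illustrative values; see the Docker Registry notifications documentation linked above for the full schema:

```json
{
  "events": [
    {
      "id": "b6c8d3f0-0000-4000-8000-000000000000",
      "timestamp": "2025-01-01T12:00:00Z",
      "action": "push",
      "target": {
        "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
        "repository": "group/project",
        "tag": "latest"
      },
      "request": {
        "host": "registry.gitlab.example.com",
        "method": "PUT"
      },
      "actor": { "name": "username" }
    }
  ]
}
```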
## Run the cleanup policy
Prerequisites:
- If you use a distributed architecture where the container registry runs on a different node than Sidekiq, follow the steps in [Configure the container registry when using an external Sidekiq](../sidekiq/_index.md#configure-the-container-registry-when-using-an-external-sidekiq).
After you [create a cleanup policy](../../user/packages/container_registry/reduce_container_registry_storage.md#create-a-cleanup-policy), you can run it immediately to reduce the container registry storage space. You don't have to wait for the scheduled cleanup.
To reduce the amount of container registry disk space used by a given project, administrators can:
1. [Check disk space usage by project](#registry-disk-space-usage-by-project) to identify projects that need cleanup.
1. Run the cleanup policy using the GitLab Rails console to remove image tags.
1. [Run garbage collection](#container-registry-garbage-collection) to remove unreferenced layers and untagged manifests.
### Registry disk space usage by project
To find the disk space used by each project, run the following in the
[GitLab Rails console](../operations/rails_console.md#starting-a-rails-console-session):
```ruby
projects_and_size = [["project_id", "creator_id", "registry_size_bytes", "project path"]]
# You need to specify the projects that you want to look through. You can get these in any manner.
projects = Project.last(100)
registry_metadata_database = ContainerRegistry::GitlabApiClient.supports_gitlab_api?
if registry_metadata_database
projects.each do |project|
size = project.container_repositories_size
if size > 0
projects_and_size << [project.project_id, project.creator&.id, size, project.full_path]
end
end
else
projects.each do |project|
project_layers = {}
project.container_repositories.each do |repository|
repository.tags.each do |tag|
tag.layers.each do |layer|
project_layers[layer.digest] ||= layer.size
end
end
end
total_size = project_layers.values.compact.sum
if total_size > 0
projects_and_size << [project.project_id, project.creator&.id, total_size, project.full_path]
end
end
end
# print it as comma separated output
projects_and_size.each do |ps|
puts "%s,%s,%s,%s" % ps
end
```
{{< alert type="note" >}}
The script calculates size based on container image layers. Because layers can be shared across multiple projects, the results are approximate but give a good indication of relative disk usage between projects.
{{< /alert >}}
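The `registry_size_bytes` column in the output is a raw byte count. A small standalone Ruby helper (not part of GitLab; you can run it anywhere Ruby is available) can convert those values into something easier to scan:

```ruby
# Convert a raw byte count into a human-readable string.
def human_size(bytes)
  units = %w[B KiB MiB GiB TiB]
  size = bytes.to_f
  unit = 0
  while size >= 1024 && unit < units.size - 1
    size /= 1024
    unit += 1
  end
  format("%.1f %s", size, units[unit])
end

puts human_size(3_221_225_472) # => "3.0 GiB"
```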
To remove image tags by running the cleanup policy, run the following commands in the
[GitLab Rails console](../operations/rails_console.md):
```ruby
# Numeric ID of the project whose container registry should be cleaned up
P = <project_id>
# Numeric ID of a user with Developer, Maintainer, or Owner role for the project
U = <user_id>
# Get required details / objects
user = User.find_by_id(U)
project = Project.find_by_id(P)
policy = ContainerExpirationPolicy.find_by(project_id: P)
# Loop through each container repository
project.container_repositories.find_each do |repo|
puts repo.attributes
# Start the tag cleanup
puts Projects::ContainerRepository::CleanupTagsService.new(container_repository: repo, current_user: user, params: policy.attributes.except("created_at", "updated_at")).execute
end
```
You can also [run cleanup on a schedule](../../user/packages/container_registry/reduce_container_registry_storage.md#cleanup-policy).
To enable cleanup policies for all projects instance-wide, you need to find all projects
with a container registry, but with the cleanup policy disabled:
```ruby
# Find all projects where Container registry is enabled, and cleanup policies disabled
projects = Project.find_by_sql("SELECT * FROM projects WHERE id IN (SELECT project_id FROM container_expiration_policies WHERE enabled=false AND id IN (SELECT project_id FROM container_repositories))")
# Loop through each project
projects.each do |p|
# Print project IDs and project full names
puts "#{p.id},#{p.full_name}"
end
```
## Container registry metadata database
{{< details >}}
- Tier: Free, Premium, Ultimate
- Offering: GitLab Self-Managed
{{< /details >}}
{{< history >}}
- [Generally available](https://gitlab.com/gitlab-org/gitlab/-/issues/423459) in GitLab 17.3.
{{< /history >}}
The metadata database enables many new registry features, including
online garbage collection, and increases the efficiency of many registry operations.
See the [Container registry metadata database](container_registry_metadata_database.md) page for details.
## Container registry garbage collection
Prerequisites:
- You must have installed GitLab by using a Linux package or the
[GitLab Helm chart](https://docs.gitlab.com/charts/charts/registry/#garbage-collection).
{{< alert type="note" >}}
Retention policies in your object storage provider, such as Amazon S3 Lifecycle, may prevent
objects from being properly deleted.
{{< /alert >}}
The container registry can use considerable amounts of storage space, and you might want to
[reduce storage usage](../../user/packages/container_registry/reduce_container_registry_storage.md).
Among the listed options, deleting tags is the most effective. However, tag deletion
alone does not delete image layers; it only leaves the underlying image manifests untagged.
To more effectively free up space, the container registry has a garbage collector that can
delete unreferenced layers and (optionally) untagged manifests.
To start the garbage collector, run the following `gitlab-ctl` command:
```shell
sudo gitlab-ctl registry-garbage-collect
```
The time required to perform garbage collection is proportional to the container registry data size.
{{< alert type="warning" >}}
The `registry-garbage-collect` command shuts down the container registry prior to the garbage collection and
only starts it again after garbage collection completes. If you prefer to avoid downtime,
you can manually set the container registry to [read-only mode and bypass `gitlab-ctl`](#performing-garbage-collection-without-downtime).
This command proceeds only if the metadata is in object storage. This command does not proceed
if the [container registry metadata database](#container-registry-metadata-database) is enabled.
{{< /alert >}}
### Understanding the content-addressable layers
Consider the following example, where you first build the image:
```shell
# This builds an image with content of sha256:<111111...>
docker build -t <my.registry.com>/<my.group>/<my.project>:latest .
docker push <my.registry.com>/<my.group>/<my.project>:latest
```
Now, you overwrite `latest` with a new version:
```shell
# This builds an image with content of sha256:<222222...>
docker build -t <my.registry.com>/<my.group>/<my.project>:latest .
docker push <my.registry.com>/<my.group>/<my.project>:latest
```
Now the `latest` tag points to the manifest of `sha256:<222222...>`.
Due to the content-addressable architecture of the registry, the first image is still
accessible when pulled as `<my.registry.com>/<my.group>/<my.project>@sha256:<111111...>`,
though it is no longer reachable through the `latest` tag.
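The `sha256:<...>` names above are content addresses: the digest is simply the SHA-256 hash of the manifest bytes, so identical content always yields the same name. A standalone Ruby illustration (not GitLab code):

```ruby
require "digest"

# Any change to the manifest bytes produces a completely different digest.
manifest = '{"schemaVersion": 2, "mediaType": "application/vnd.docker.distribution.manifest.v2+json"}'
digest = "sha256:#{Digest::SHA256.hexdigest(manifest)}"

puts digest # "sha256:" followed by 64 hex characters
```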
### Remove unreferenced layers
Image layers are the bulk of the container registry storage. A layer is considered
unreferenced when no image manifest references it. Unreferenced layers are the
default target of the container registry garbage collector.
If you did not change the default location of the configuration file, run:
```shell
sudo gitlab-ctl registry-garbage-collect
```
If you changed the location of the container registry `config.yml`:
```shell
sudo gitlab-ctl registry-garbage-collect /path/to/config.yml
```
You can also [remove all untagged manifests and unreferenced layers](#removing-untagged-manifests-and-unreferenced-layers)
to recover additional space.
### Removing untagged manifests and unreferenced layers
By default, the container registry garbage collector ignores untagged images,
and users can keep pulling them by digest. Users can also re-tag images
in the future, making them visible again in the GitLab UI and API.
If you do not care about untagged images and the layers exclusively referenced by these images,
you can delete them all. Use the `-m` flag on the `registry-garbage-collect` command:
```shell
sudo gitlab-ctl registry-garbage-collect -m
```
If you are unsure about deleting untagged images, back up your registry data before proceeding.
### Performing garbage collection without downtime
To do garbage collection while keeping the container registry online, put the registry
in read-only mode and bypass the built-in `gitlab-ctl registry-garbage-collect` command.
You can pull but not push images while the container registry is in read-only mode. The container
registry must remain in read-only mode for the full duration of the garbage collection.
By default, the [registry storage path](#configure-storage-for-the-container-registry)
is `/var/opt/gitlab/gitlab-rails/shared/registry`.
To enable the read-only mode:
1. In `/etc/gitlab/gitlab.rb`, specify the read-only mode:
```ruby
registry['storage'] = {
'filesystem' => {
'rootdirectory' => "<your_registry_storage_path>"
},
'maintenance' => {
'readonly' => {
'enabled' => true
}
}
}
```
1. Save and reconfigure GitLab:
```shell
sudo gitlab-ctl reconfigure
```
This command sets the container registry to read-only mode.
1. Next, trigger one of the garbage collect commands:
```shell
# Remove unreferenced layers
sudo /opt/gitlab/embedded/bin/registry garbage-collect /var/opt/gitlab/registry/config.yml
# Remove untagged manifests and unreferenced layers
sudo /opt/gitlab/embedded/bin/registry garbage-collect -m /var/opt/gitlab/registry/config.yml
```
This command starts the garbage collection. The time to complete is proportional to the registry data size.
1. When garbage collection is complete, change the registry back to read-write mode in `/etc/gitlab/gitlab.rb`:
```ruby
registry['storage'] = {
'filesystem' => {
'rootdirectory' => "<your_registry_storage_path>"
},
'maintenance' => {
'readonly' => {
'enabled' => false
}
}
}
```
1. Save and reconfigure GitLab:
```shell
sudo gitlab-ctl reconfigure
```
### Running the garbage collection on schedule
Ideally, you should run garbage collection regularly, on a weekly basis, at a
time when the registry is not in use.
The simplest way is to add a crontab job that runs it once a week.
Create a file under `/etc/cron.d/registry-garbage-collect`:
```shell
SHELL=/bin/sh
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
# Run every Sunday at 04:05am
5 4 * * 0 root gitlab-ctl registry-garbage-collect
```
You may want to add the `-m` flag to [remove untagged manifests and unreferenced layers](#removing-untagged-manifests-and-unreferenced-layers).
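For example, the same schedule with untagged manifests also removed (assuming the default crontab layout above):

```shell
# Run every Sunday at 04:05am, also removing untagged manifests
5 4 * * 0 root gitlab-ctl registry-garbage-collect -m
```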
### Stop garbage collection
If you anticipate stopping garbage collection, you should manually run garbage collection as
described in [Performing garbage collection without downtime](#performing-garbage-collection-without-downtime).
You can then stop garbage collection by pressing <kbd>Control</kbd>+<kbd>C</kbd>.
Otherwise, interrupting `gitlab-ctl` could leave your registry service in a down state. In this
case, you must find the [garbage collection process](https://gitlab.com/gitlab-org/omnibus-gitlab/-/blob/master/files/gitlab-ctl-commands/registry_garbage_collect.rb#L26-35)
itself on the system so that the `gitlab-ctl` command can bring the registry service back up again.
Also, there's no way to save progress or results during the mark phase of the process. Only once
blobs start being deleted is anything permanent done.
### Continuous zero-downtime garbage collection
You can run garbage collection in the background without the need to schedule it or require read-only mode,
if you migrate to the [metadata database](container_registry_metadata_database.md).
## Scaling by component
This section outlines the potential performance bottlenecks as registry traffic increases by component.
Each subsection is roughly ordered by recommendations that benefit from smaller to larger registry workloads.
The registry is not included in the [reference architectures](../reference_architectures/_index.md),
and there are no scaling guides which target number of seats or requests per second.
### Database
1. **Move to a separate database**: As database load increases, scale vertically by moving the registry metadata database
to a separate physical database. A separate database can increase the amount of resources available
to the registry database while isolating the traffic produced by the registry.
1. **Move to an HA PostgreSQL third-party solution**: Similar to [Praefect](../reference_architectures/5k_users.md#praefect-ha-postgresql-third-party-solution),
   moving to a reputable provider or solution enables HA and is suitable for multi-node registry deployments.
   You must pick a provider that supports native PostgreSQL partitioning, triggers, and functions,
   as the registry makes heavy use of these.
### Registry server
1. **Move to a separate node**: A [separate node](#configure-gitlab-and-registry-on-separate-nodes-linux-package-installations)
is one way to scale vertically to increase the resources available to the container registry server process.
1. **Run multiple registry nodes behind a load balancer**: While the registry can handle
a high amount of traffic with a single large node, the registry is generally intended to
scale horizontally with multiple deployments. Configuring multiple smaller nodes
also enables techniques such as autoscaling.
### Redis cache
Enabling the [Redis](https://gitlab.com/gitlab-org/container-registry/-/blob/master/docs/configuration.md?ref_type=heads#redis)
cache improves performance, but also enables features such as renaming repositories.
1. **Redis Server**: A single Redis instance is supported and is the simplest way
   to access the benefits of Redis caching.
1. **Redis Sentinel**: Redis Sentinel is also supported and enables the cache to be HA.
1. **Redis Cluster**: Redis Cluster can also be used for further scaling as deployments grow.
### Storage
1. **Local file system**: A local file system is the default and is relatively performant,
but not suitable for multi-node deployments or a large amount of registry data.
1. **Object storage**: [Use object storage](#use-object-storage) to enable the practical storage
of a larger amount of registry data. Object storage is also suitable for multi-node registry deployments.
### Online garbage collection
1. **Adjust defaults**: If online garbage collection is not reliably clearing the [review queues](container_registry_metadata_database.md#queue-monitoring),
you can adjust the `interval` settings in the `manifests` and `blobs` sections under the
[`gc`](https://gitlab.com/gitlab-org/container-registry/-/blob/master/docs/configuration.md?ref_type=heads#gc)
configuration section. The default is `5s`, and these can be configured with milliseconds as well,
for example `500ms`.
1. **Scale horizontally with the registry server**: If you are scaling the registry application horizontally
with multi-node deployments, online garbage collection automatically scales without
the need for configuration changes.
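As a sketch of the `interval` settings mentioned above (key names per the linked registry configuration reference; values are illustrative), the relevant section of the registry `config.yml` would look like:

```yaml
gc:
  blobs:
    interval: 500ms
  manifests:
    interval: 500ms
```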
## Configure GitLab and registry on separate nodes (Linux package installations)
By default, the GitLab package assumes both services run on the same node.
Running them on separate nodes requires separate configuration.
### Configuration options
The following configuration options should be set in `/etc/gitlab/gitlab.rb` on the respective nodes.
#### Registry node settings
| Option | Description |
| ------------------------------------------ | ----------- |
| `registry['registry_http_addr']` | Network address and port that the registry listens on. Must be reachable by the web server or load balancer. Default: [set programmatically](https://gitlab.com/gitlab-org/omnibus-gitlab/blob/10-3-stable/files/gitlab-cookbooks/gitlab/libraries/registry.rb#L50). |
| `registry['token_realm']` | Authentication endpoint URL, typically the GitLab instance URL. Must be reachable by users. Default: [set programmatically](https://gitlab.com/gitlab-org/omnibus-gitlab/blob/10-3-stable/files/gitlab-cookbooks/gitlab/libraries/registry.rb#L53). |
| `registry['http_secret']` | Security token used to protect against client-side tampering. Generated as a [random string](https://gitlab.com/gitlab-org/omnibus-gitlab/blob/10-3-stable/files/gitlab-cookbooks/gitlab/libraries/registry.rb#L32). |
| `registry['internal_key']` | Token-signing key, created on the registry server but used by GitLab. Default: [automatically generated](https://gitlab.com/gitlab-org/omnibus-gitlab/blob/10-3-stable/files/gitlab-cookbooks/gitlab/recipes/gitlab-rails.rb#L113-119). |
| `registry['internal_certificate']` | Certificate for token signing. Default: [automatically generated](https://gitlab.com/gitlab-org/omnibus-gitlab/blob/10-3-stable/files/gitlab-cookbooks/registry/recipes/enable.rb#L60-66). |
| `registry['rootcertbundle']` | File path where the `internal_certificate` is stored. Default: [set programmatically](https://gitlab.com/gitlab-org/omnibus-gitlab/blob/10-3-stable/files/gitlab-cookbooks/registry/recipes/enable.rb#L60). |
| `registry['health_storagedriver_enabled']` | Enables health monitoring of the storage driver. Default: [set programmatically](https://gitlab.com/gitlab-org/omnibus-gitlab/blob/10-7-stable/files/gitlab-cookbooks/gitlab/libraries/registry.rb#L88). |
| `gitlab_rails['registry_key_path']` | File path where the `internal_key` is stored. Default: [set programmatically](https://gitlab.com/gitlab-org/omnibus-gitlab/blob/10-3-stable/files/gitlab-cookbooks/gitlab/recipes/gitlab-rails.rb#L35). |
| `gitlab_rails['registry_issuer']` | Token issuer name. Must match between registry and GitLab configurations. Default: [set programmatically](https://gitlab.com/gitlab-org/omnibus-gitlab/blob/10-3-stable/files/gitlab-cookbooks/gitlab/attributes/default.rb#L153). |
<!--- start_remove The following content will be removed on remove_date: '2025/08/15' -->
{{< alert type="warning" >}}
Support for authenticating requests using Amazon S3 Signature Version 2 in the container registry is deprecated in GitLab 17.8 and is planned for removal in 18.0. Use Signature Version 4 instead. This is a breaking change. For more information, see [issue 1449](https://gitlab.com/gitlab-org/container-registry/-/issues/1449).
{{< /alert >}}
<!--- end_remove -->
#### GitLab node settings
| Option | Description |
| ----------------------------------- | ----------- |
| `gitlab_rails['registry_enabled']` | Enables the GitLab registry API integration. Must be set to `true`. |
| `gitlab_rails['registry_api_url']` | Internal registry URL used by GitLab (not visible to users). Uses `registry['registry_http_addr']` with scheme. Default: [set programmatically](https://gitlab.com/gitlab-org/omnibus-gitlab/blob/10-3-stable/files/gitlab-cookbooks/gitlab/libraries/registry.rb#L52). |
| `gitlab_rails['registry_host']` | Public registry hostname without scheme (example: `registry.gitlab.example`). This address is shown to users. |
| `gitlab_rails['registry_port']` | Public registry port number shown to users. |
| `gitlab_rails['registry_issuer']` | Token issuer name that must match the registry's configuration. |
| `gitlab_rails['registry_key_path']` | File path to the certificate key used by the registry. |
| `gitlab_rails['internal_key']` | Token-signing key content used by GitLab. |
### Set up the nodes
To configure GitLab and the container registry on separate nodes:
1. On the registry node, edit `/etc/gitlab/gitlab.rb` with the following settings:
```ruby
# Registry server details
# - IP address: 10.30.227.194
# - Domain: registry.example.com
# Disable unneeded services
gitlab_workhorse['enable'] = false
puma['enable'] = false
sidekiq['enable'] = false
postgresql['enable'] = false
redis['enable'] = false
gitlab_kas['enable'] = false
gitaly['enable'] = false
nginx['enable'] = false
# Configure registry settings
registry['enable'] = true
registry['registry_http_addr'] = '0.0.0.0:5000'
registry['token_realm'] = 'https://<gitlab.example.com>'
registry['http_secret'] = '<6b86b273ff34fce19d6b804eff5a3f5747ada4eaa22f1d49c01e52ddb7875b4b>'
# Configure GitLab Rails settings
gitlab_rails['registry_issuer'] = 'omnibus-gitlab-issuer'
gitlab_rails['registry_key_path'] = '/etc/gitlab/gitlab-registry.key'
```
1. On the GitLab node, edit `/etc/gitlab/gitlab.rb` with the following settings:
```ruby
# GitLab server details
# - IP address: 10.30.227.149
# - Domain: gitlab.example.com
# Configure GitLab URL
external_url 'https://<gitlab.example.com>'
# Configure registry settings
gitlab_rails['registry_enabled'] = true
gitlab_rails['registry_api_url'] = '<http://10.30.227.194:5000>'
gitlab_rails['registry_host'] = '<registry.example.com>'
gitlab_rails['registry_port'] = 5000
gitlab_rails['registry_issuer'] = 'omnibus-gitlab-issuer'
gitlab_rails['registry_key_path'] = '/etc/gitlab/gitlab-registry.key'
```
1. Synchronize the `/etc/gitlab/gitlab-secrets.json` file between both nodes:
1. Copy the file from the GitLab node to the registry node.
1. Ensure file permissions are correct.
1. Run `sudo gitlab-ctl reconfigure` on both nodes.
## Container registry architecture
Users can store their own Docker images in the container registry. Because the registry
is client-facing, it is exposed directly
on the web server or load balancer (LB).
```mermaid
%%{init: { "fontFamily": "GitLab Sans" }}%%
flowchart LR
accTitle: Container registry authentication flow
accDescr: Shows how users authenticate with the container registry with GitLab API to push and pull Docker images
    A[User] --->|1: Docker login<br>on port 443| C{Frontend load<br>balancer}
    C --->|2: connection attempt<br>without token fails| D[Container registry]
    C --->|5: connect with token succeeds| D[Container registry]
    C --->|3: Docker<br>requests token| E[API frontend]
    E --->|4: API returns<br>signed token| C
linkStyle 1 stroke-width:4px,stroke:red
linkStyle 2 stroke-width:4px,stroke:green
```
The authentication flow includes these steps:
1. A user runs `docker login registry.gitlab.example` on their client. This request reaches the web server (or LB) on port 443.
1. The web server connects to the registry backend pool (port 5000 by default). Because the user does not have a valid token, the registry returns a `401 Unauthorized` HTTP code and a URL to get a token. The URL is defined by the [`token_realm`](#registry-node-settings) setting in the registry configuration and points to the GitLab API.
1. The Docker client connects to the GitLab API and obtains a token.
1. The API signs the token with the registry key and sends it to the Docker client.
1. The Docker client logs in again with the token received from the API. The authenticated client can now push and pull Docker images.
Reference: <https://distribution.github.io/distribution/spec/auth/token/>
### Communication between GitLab and the container registry
The container registry cannot authenticate users internally, so it validates credentials through GitLab.
The connection between the registry and GitLab is
TLS encrypted.
GitLab uses the private key to sign tokens, and the registry uses the public key provided
by the certificate to validate the signature.
By default, a self-signed certificate key pair is generated
for all installations. You can override this behavior using the [`internal_key`](#registry-node-settings) setting in the registry configuration.
The following steps describe the communication flow:
1. GitLab interacts with the registry using the registry's private key. When a registry
request is sent, a short-lived (10 minutes), namespace-limited token is generated
and signed with the private key.
1. The registry verifies that the signature matches the registry certificate
specified in its configuration and allows the operation.
1. GitLab processes background jobs through Sidekiq, which also interacts with the registry.
These jobs communicate directly with the registry to handle image deletion.
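The private/public key relationship described above can be illustrated with plain OpenSSL in Ruby (a standalone sketch, not GitLab's actual token code):

```ruby
require "openssl"

# GitLab holds the private key and signs each short-lived token.
private_key = OpenSSL::PKey::RSA.new(2048)
token_payload = "registry-scoped-token-payload"
signature = private_key.sign(OpenSSL::Digest.new("SHA256"), token_payload)

# The registry only has the certificate's public key, which is enough to verify.
public_key = private_key.public_key
puts public_key.verify(OpenSSL::Digest.new("SHA256"), signature, token_payload) # => true

# A tampered payload fails verification.
puts public_key.verify(OpenSSL::Digest.new("SHA256"), signature, "tampered") # => false
```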
## Migrate from a third-party registry
Using external container registries in GitLab was [deprecated](https://gitlab.com/gitlab-org/gitlab/-/issues/376217)
in GitLab 15.8 and the end of support occurred in GitLab 16.0. See the [deprecation notice](../../update/deprecations.md#use-of-third-party-container-registries-is-deprecated) for more details.
The integration is not disabled in GitLab 16.0, but support for debugging and fixing issues
is no longer provided. Additionally, the integration is no longer being developed or
enhanced with new features. Third-party registry functionality might be completely removed
after the new GitLab container registry version is available for GitLab Self-Managed (see epic [5521](https://gitlab.com/groups/gitlab-org/-/epics/5521)). Only the GitLab container registry is planned to be supported.
This section has guidance for administrators migrating from third-party registries
to the GitLab container registry. If the third-party container registry you are using is not listed here,
you can describe your use cases in [the feedback issue](https://gitlab.com/gitlab-org/container-registry/-/issues/958).
Try all of the following instructions on a test environment first.
Make sure everything continues to work as expected before replicating the changes in production.
### Docker Distribution Registry
The [Docker Distribution Registry](https://docs.docker.com/registry/) was donated to the CNCF
and is now known as the [Distribution Registry](https://distribution.github.io/distribution/).
This registry is the open source implementation that the GitLab container registry is based on.
The GitLab container registry is compatible with the basic functionality provided by the Distribution Registry,
including all the supported storage backends. To migrate to the GitLab container registry
you can follow the instructions on this page, and use the same storage backend as the Distribution Registry.
The GitLab container registry should accept the same configuration that you are using for the Distribution Registry.
## Max retries for deleting container images
{{< history >}}
- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/480652) in GitLab 17.5 [with a flag](../feature_flags/_index.md) named `set_delete_failed_container_repository`. Disabled by default.
- [Generally available](https://gitlab.com/gitlab-org/gitlab/-/issues/490354) in GitLab 17.6. Feature flag `set_delete_failed_container_repository` removed.
{{< /history >}}
Errors can happen when deleting container images, so deletions are retried in case
the error is transient. Deletion is retried up to 10 times, with a back-off delay
between retries. This delay gives transient errors more time to resolve between retries.
Setting a maximum number of retries also helps detect if there are any persistent errors
that haven't been solved in between retries. After a deletion fails the maximum number of retries,
the container repository `status` is set to `delete_failed`. With this status, the
repository no longer retries deletions.
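As an illustration of the pattern (the exact delays GitLab uses are an internal implementation detail), an exponential back-off schedule looks like this standalone Ruby sketch:

```ruby
MAX_RETRIES = 10

# Delay in seconds before each retry attempt, growing exponentially (illustrative values).
def backoff_delays(base_seconds: 1, factor: 2, max_retries: MAX_RETRIES)
  (0...max_retries).map { |attempt| base_seconds * factor**attempt }
end

puts backoff_delays.first(4).inspect # => [1, 2, 4, 8]
```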
You should investigate any container repositories with a `delete_failed` status and
try to resolve the issue. After the issue is resolved, you can set the repository status
back to `delete_scheduled` so images can start to be deleted again. To update the repository status,
from the rails console:
```ruby
container_repository = ContainerRepository.find(<id>)
container_repository.update(status: 'delete_scheduled')
```
---
stage: GitLab Delivery
group: Self Managed
info: To determine the technical writer assigned to the Stage/Group associated with
  this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Rake tasks
---
{{< details >}}
- Tier: Free, Premium, Ultimate
- Offering: GitLab Self-Managed
{{< /details >}}
GitLab provides [Rake](https://ruby.github.io/rake/) tasks to assist you with common administration and operational
processes.
All Rake tasks must be run on a Rails node unless the documentation for a task specifically states otherwise.
You can perform GitLab Rake tasks by using:
- `gitlab-rake <raketask>` for [Linux package](https://docs.gitlab.com/omnibus/) and [GitLab Helm chart](https://docs.gitlab.com/charts/troubleshooting/kubernetes_cheat_sheet.html#gitlab-specific-kubernetes-information) installations.
- `bundle exec rake <raketask>` for [self-compiled](../../install/self_compiled/_index.md) installations.
## Available Rake tasks
The following Rake tasks are available for use with GitLab:
| Tasks | Description |
|:------------------------------------------------------------------------------------------------------|:------------|
| [Access token expiration tasks](tokens/_index.md) | Bulk extend or remove expiration dates for access tokens. |
| [Back up and restore](../backup_restore/_index.md) | Back up, restore, and migrate GitLab instances between servers. |
| [Clean up](cleanup.md) | Clean up unneeded items from GitLab instances. |
| Development | Tasks for GitLab contributors. For more information, see the development documentation. |
| [Elasticsearch](../../integration/advanced_search/elasticsearch.md#gitlab-advanced-search-rake-tasks) | Maintain Elasticsearch in a GitLab instance. |
| [General maintenance](maintenance.md) | General maintenance and self-check tasks. |
| [GitHub import](../../user/project/import/github.md) | Retrieve and import repositories from GitHub. |
| [Import large project exports](project_import_export.md#import-large-projects) | Import large GitLab [project exports](../../user/project/settings/import_export.md). |
| [Incoming email](incoming_email.md) | Incoming email-related tasks. |
| [Integrity checks](check.md) | Check the integrity of repositories, files, LDAP, and more. |
| [LDAP maintenance](ldap.md) | [LDAP](../auth/ldap/_index.md)-related tasks. |
| [Password](password.md) | Password management tasks. |
| [Praefect Rake tasks](praefect.md) | [Praefect](../gitaly/praefect/_index.md)-related tasks. |
| [Project import/export](project_import_export.md) | Prepare for [project exports and imports](../../user/project/settings/import_export.md). |
| [Sidekiq job migration](../sidekiq/sidekiq_job_migration.md) | Migrate Sidekiq jobs scheduled for future dates to a new queue. |
| [Service Desk email](service_desk_email.md) | Service Desk email-related tasks. |
| [SMTP maintenance](smtp.md) | SMTP-related tasks. |
| [SPDX license list import](spdx.md) | Import a local copy of the [SPDX license list](https://spdx.org/licenses/) for matching [License approval policies](../../user/compliance/license_approval_policies.md). |
| [Reset user passwords](../../security/reset_user_password.md#use-a-rake-task) | Reset user passwords using Rake. |
| [Uploads migrate](uploads/migrate.md) | Migrate uploads between local storage and object storage. |
| [Uploads sanitize](uploads/sanitize.md) | Remove EXIF data from images uploaded to earlier versions of GitLab. |
| Service Data | Generate and troubleshoot Service Ping. For more information, see Service Ping development documentation. |
| [User management](user_management.md) | Perform user management tasks. |
| [Webhook administration](web_hooks.md) | Maintain project webhooks. |
| [X.509 signatures](x509_signatures.md) | Update X.509 commit signatures, which can be useful if the certificate store changed. |
To list all available Rake tasks:
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
```shell
sudo gitlab-rake -vT
```
{{< /tab >}}
{{< tab title="Helm chart (Kubernetes)" >}}
```shell
gitlab-rake -vT
```
{{< /tab >}}
{{< tab title="Self-compiled (source)" >}}
```shell
cd /home/git/gitlab
sudo -u git -H bundle exec rake -vT RAILS_ENV=production
```
{{< /tab >}}
{{< /tabs >}}
```shell
cd /home/git/gitlab
sudo -u git -H bundle exec rake -vT RAILS_ENV=production
```
{{< /tab >}}
{{< /tabs >}}
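The full task list is long, so it can help to capture it once and filter locally. This is a minimal sketch: the saved output below is simulated, and the keyword is just an example; on a real node you would generate the file with the appropriate command from the tabs above.

```shell
# Simulated task list; on a Linux package node, generate it for real with:
#   sudo gitlab-rake -vT > /tmp/rake_tasks.txt
cat > /tmp/rake_tasks.txt <<'EOF'
rake gitlab:backup:create    # GitLab | Backup | Create a backup of the GitLab system
rake gitlab:check            # GitLab | Check the configuration of GitLab and its environment
rake gitlab:env:info         # GitLab | Env | Show information about GitLab and its environment
EOF

# Filter for the task you are after (the keyword is just an example).
grep -i backup /tmp/rake_tasks.txt
```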
---
stage: GitLab Delivery
group: Self Managed
info: To determine the technical writer assigned to the Stage/Group associated with
  this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Maintenance Rake tasks
breadcrumbs:
- doc
- administration
- raketasks
---
{{< details >}}
- Tier: Free, Premium, Ultimate
- Offering: GitLab Self-Managed
{{< /details >}}
GitLab provides Rake tasks for general maintenance.
## Gather GitLab and system information
This command gathers information about your GitLab installation and the system it runs on.
This information may be useful when asking for help or reporting issues. In a multi-node environment, run this command on nodes running GitLab Rails to avoid PostgreSQL socket errors.
- Linux package installations:
```shell
sudo gitlab-rake gitlab:env:info
```
- Self-compiled installations:
```shell
bundle exec rake gitlab:env:info RAILS_ENV=production
```
Example output:
```plaintext
System information
System: Ubuntu 20.04
Proxy: no
Current User: git
Using RVM: no
Ruby Version: 2.7.6p219
Gem Version: 3.1.6
Bundler Version:2.3.15
Rake Version: 13.0.6
Redis Version: 6.2.7
Sidekiq Version:6.4.2
Go Version: unknown
GitLab information
Version: 15.5.5-ee
Revision: 5f5109f142d
Directory: /opt/gitlab/embedded/service/gitlab-rails
DB Adapter: PostgreSQL
DB Version: 13.8
URL: https://app.gitaly.gcp.gitlabsandbox.net
HTTP Clone URL: https://app.gitaly.gcp.gitlabsandbox.net/some-group/some-project.git
SSH Clone URL: git@app.gitaly.gcp.gitlabsandbox.net:some-group/some-project.git
Elasticsearch: no
Geo: no
Using LDAP: no
Using Omniauth: yes
Omniauth Providers:
GitLab Shell
Version: 14.12.0
Repository storage paths:
- default: /var/opt/gitlab/git-data/repositories
- gitaly: /var/opt/gitlab/git-data/repositories
GitLab Shell path: /opt/gitlab/embedded/service/gitlab-shell
Gitaly
- default Address: unix:/var/opt/gitlab/gitaly/gitaly.socket
- default Version: 15.5.5
- default Git Version: 2.37.1.gl1
- gitaly Address: tcp://10.128.20.6:2305
- gitaly Version: 15.5.5
- gitaly Git Version: 2.37.1.gl1
```
## Show GitLab license information
{{< details >}}
- Tier: Premium, Ultimate
- Offering: GitLab Self-Managed
{{< /details >}}
This command shows information about your [GitLab license](../license.md) and
how many seats are used. It is only available on GitLab Enterprise
installations: a license cannot be installed into GitLab Community Edition.
This information may be useful when raising tickets with Support, or for programmatically
checking your license parameters.
- Linux package installations:
```shell
sudo gitlab-rake gitlab:license:info
```
- Self-compiled installations:
```shell
bundle exec rake gitlab:license:info RAILS_ENV=production
```
Example output:
```plaintext
Today's Date: 2020-02-29
Current User Count: 30
Max Historical Count: 30
Max Users in License: 40
License valid from: 2019-11-29 to 2020-11-28
Email associated with license: user@example.com
```
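When checking license parameters programmatically, the fields in this output can be parsed with standard tools. A minimal sketch, assuming the fields shown in the example output above (the license file here is simulated):

```shell
# Simulated license output; on a real node, generate it with:
#   sudo gitlab-rake gitlab:license:info > /tmp/license_info.txt
cat > /tmp/license_info.txt <<'EOF'
Today's Date: 2020-02-29
Current User Count: 30
Max Historical Count: 30
Max Users in License: 40
EOF

# Compute how many licensed seats remain unused.
current=$(awk -F': *' '/^Current User Count/ {print $2}' /tmp/license_info.txt)
licensed=$(awk -F': *' '/^Max Users in License/ {print $2}' /tmp/license_info.txt)
echo "Seats remaining: $((licensed - current))"
```

With the example values above, this prints `Seats remaining: 10`.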
## Check GitLab configuration
The `gitlab:check` Rake task runs the following Rake tasks:
- `gitlab:gitlab_shell:check`
- `gitlab:gitaly:check`
- `gitlab:sidekiq:check`
- `gitlab:incoming_email:check`
- `gitlab:ldap:check`
- `gitlab:app:check`
- `gitlab:geo:check` (only if you're running [Geo](../geo/replication/troubleshooting/common.md#health-check-rake-task))
It checks that each component was set up according to the installation guide and suggests fixes
for issues found. This command must be run from your application server and doesn't work correctly on
component servers like [Gitaly](../gitaly/configure_gitaly.md#run-gitaly-on-its-own-server).
You can also consult the troubleshooting guides for:
- [GitLab](../troubleshooting/_index.md).
- [Linux package installations](https://docs.gitlab.com/omnibus/#troubleshooting).
Additionally, you should [verify database values can be decrypted using the current secrets](check.md#verify-database-values-can-be-decrypted-using-the-current-secrets).
To run `gitlab:check`, run:
- Linux package installations:
```shell
sudo gitlab-rake gitlab:check
```
- Self-compiled installations:
```shell
bundle exec rake gitlab:check RAILS_ENV=production
```
- Kubernetes installations:
```shell
kubectl exec -it <toolbox-pod-name> -- sudo gitlab-rake gitlab:check
```
{{< alert type="note" >}}
Due to the specific architecture of Helm-based GitLab installations, the output may contain
false negatives for connectivity verification to `gitlab-shell`, Sidekiq, and `systemd`-related files.
These reported failures are expected and do not indicate actual issues; disregard them when reviewing diagnostic results.
{{< /alert >}}
Use `SANITIZE=true` for `gitlab:check` if you want to omit project names from the output.
Example output:
```plaintext
Checking Environment ...
Git configured for git user? ... yes
Has python2? ... yes
python2 is supported version? ... yes
Checking Environment ... Finished
Checking GitLab Shell ...
GitLab Shell version? ... OK (1.2.0)
Repo base directory exists? ... yes
Repo base directory is a symlink? ... no
Repo base owned by git:git? ... yes
Repo base access is drwxrws---? ... yes
post-receive hook up-to-date? ... yes
post-receive hooks in repos are links: ... yes
Checking GitLab Shell ... Finished
Checking Sidekiq ...
Running? ... yes
Checking Sidekiq ... Finished
Checking GitLab App...
Database config exists? ... yes
Database is SQLite ... no
All migrations up? ... yes
GitLab config exists? ... yes
GitLab config up to date? ... no
Cable config exists? ... yes
Resque config exists? ... yes
Log directory writable? ... yes
Tmp directory writable? ... yes
Init script exists? ... yes
Init script up-to-date? ... yes
Redis version >= 2.0.0? ... yes
Checking GitLab ... Finished
```
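Because some answers in this output (such as `no` for the symlink check) are passes rather than failures, simple keyword matching is unreliable; a more robust habit is to save each run and diff it against the previous one to spot regressions. A hedged sketch, with both runs simulated here:

```shell
# Simulated outputs from two runs; on a real node, generate each with:
#   sudo gitlab-rake gitlab:check SANITIZE=true > /tmp/check_$(date +%F).txt
cat > /tmp/check_old.txt <<'EOF'
GitLab config exists? ... yes
GitLab config up to date? ... yes
EOF
cat > /tmp/check_new.txt <<'EOF'
GitLab config exists? ... yes
GitLab config up to date? ... no
EOF

# Any changed answer since the previous run shows up in the diff.
diff /tmp/check_old.txt /tmp/check_new.txt || echo "check results changed since the previous run"
```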
## Rebuild `authorized_keys` file
In some cases it is necessary to rebuild the `authorized_keys` file,
for example, if after an upgrade you receive `Permission denied (publickey)` when pushing [via SSH](../../user/ssh.md)
and find `404 Key Not Found` errors in [the `gitlab-shell.log` file](../logs/_index.md#gitlab-shelllog).
To rebuild `authorized_keys`, run:
- Linux package installations:
```shell
sudo gitlab-rake gitlab:shell:setup
```
- Self-compiled installations:
```shell
cd /home/git/gitlab
sudo -u git -H bundle exec rake gitlab:shell:setup RAILS_ENV=production
```
Example output:
```plaintext
This will rebuild an authorized_keys file.
You will lose any data stored in authorized_keys file.
Do you want to continue (yes/no)? yes
```
## Clear Redis cache
If the dashboard displays the wrong information, you might want to
clear the Redis cache. To do this, run:
- Linux package installations:
```shell
sudo gitlab-rake cache:clear
```
- Self-compiled installations:
```shell
cd /home/git/gitlab
sudo -u git -H bundle exec rake cache:clear RAILS_ENV=production
```
## Precompile the assets
Sometimes during version upgrades you might end up with broken CSS or
missing icons. In that case, try precompiling the assets again.
This Rake task only applies to self-compiled installations. [Read more](../../update/package/package_troubleshooting.md#missing-asset-files)
about troubleshooting this problem when running the Linux package.
The guidance for the Linux package might also apply to Kubernetes and Docker
deployments of GitLab, though in general, container-based installations
don't have issues with missing assets.
- Self-compiled installations:
```shell
cd /home/git/gitlab
sudo -u git -H bundle exec rake gitlab:assets:compile RAILS_ENV=production
```
For Linux package installations, the unoptimized assets (JavaScript, CSS) are frozen at
the release of upstream GitLab. The Linux package installation includes optimized versions
of those assets. Unless you are modifying the JavaScript / CSS code on your
production machine after installing the package, there should be no reason to redo
`rake gitlab:assets:compile` on the production machine. If you suspect that assets
have been corrupted, you should reinstall the Linux package.
## Check TCP connectivity to a remote site
Sometimes you need to know if your GitLab installation can connect to a TCP
service on another machine (for example a PostgreSQL or web server)
to troubleshoot proxy issues.
A Rake task is included to help you with this.
- Linux package installations:
```shell
sudo gitlab-rake gitlab:tcp_check[example.com,80]
```
- Self-compiled installations:
```shell
cd /home/git/gitlab
sudo -u git -H bundle exec rake gitlab:tcp_check[example.com,80] RAILS_ENV=production
```
## Clear exclusive lease (DANGER)
GitLab uses a shared lock mechanism: `ExclusiveLease` to prevent simultaneous operations
in a shared resource. An example is running periodic garbage collection on repositories.
In very specific situations, an operation locked by an Exclusive Lease can fail without
releasing the lock. If you can't wait for it to expire, you can run this task to manually
clear it.
To clear all exclusive leases:
{{< alert type="warning" >}}
Don't run this task while GitLab or Sidekiq is running.
{{< /alert >}}
```shell
sudo gitlab-rake gitlab:exclusive_lease:clear
```
To clear leases of a specific `type`, or of a specific `type + id`, pass a scope:
```shell
# to clear all leases for repository garbage collection:
sudo gitlab-rake gitlab:exclusive_lease:clear[project_housekeeping:*]
# to clear a lease for repository garbage collection in a specific project: (id=4)
sudo gitlab-rake gitlab:exclusive_lease:clear[project_housekeeping:4]
```
## Display status of database migrations
See the [background migrations documentation](../../update/background_migrations.md)
for how to check that migrations are complete when upgrading GitLab.
To check the status of specific migrations, you can use the following Rake task:
```shell
sudo gitlab-rake db:migrate:status
```
To check the [tracking database on a Geo secondary site](../geo/setup/external_database.md#configure-the-tracking-database), you can use the following Rake task:
```shell
sudo gitlab-rake db:migrate:status:geo
```
This outputs a table with a `Status` of `up` or `down` for
each migration. Example:
```shell
database: gitlabhq_production
Status Migration ID Type Milestone Name
--------------------------------------------------
up 20240701074848 regular 17.2 AddGroupIdToPackagesDebianGroupComponents
up 20240701153843 regular 17.2 AddWorkItemsDatesSourcesSyncToIssuesTrigger
up 20240702072515 regular 17.2 AddGroupIdToPackagesDebianGroupArchitectures
up 20240702133021 regular 17.2 AddWorkspaceTerminationTimeoutsToRemoteDevelopmentAgentConfigs
up 20240604064938 post 17.2 FinalizeBackfillPartitionIdCiPipelineMessage
up 20240604111157 post 17.2 AddApprovalPolicyRulesFkOnApprovalGroupRules
```
Starting with GitLab 17.1, migrations are executed in an
order that conforms to the GitLab release cadence.
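To check for pending migrations in a script rather than by eye, you can count the `down` rows in the status output. A minimal sketch, with the status output simulated here:

```shell
# Simulated status output; on a real node, generate it with:
#   sudo gitlab-rake db:migrate:status > /tmp/migrate_status.txt
cat > /tmp/migrate_status.txt <<'EOF'
   up      20240701074848  regular  17.2  AddGroupIdToPackagesDebianGroupComponents
   down    20240702133021  regular  17.2  AddWorkspaceTerminationTimeoutsToRemoteDevelopmentAgentConfigs
EOF

# Count rows whose first column is "down" (pending migrations).
pending=$(awk '$1 == "down" { c++ } END { print c + 0 }' /tmp/migrate_status.txt)
echo "Pending migrations: $pending"
```

With the sample above, this prints `Pending migrations: 1`.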
## Run incomplete database migrations
Database migrations can be stuck in an incomplete state, with a `down`
status in the output of the `sudo gitlab-rake db:migrate:status` command.
1. To complete these migrations, use the following Rake task:
```shell
sudo gitlab-rake db:migrate
```
1. After the command completes, run `sudo gitlab-rake db:migrate:status` to check if all migrations are completed (have an `up` status).
1. Hot reload `puma` and `sidekiq` services:
```shell
sudo gitlab-ctl hup puma
sudo gitlab-ctl restart sidekiq
```
Starting with GitLab 17.1, migrations are executed in an
order that conforms to the GitLab release cadence.
## Rebuild database indexes
{{< history >}}
- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/42705) in GitLab 13.5 [with a flag](../../administration/feature_flags/_index.md) named `database_reindexing`. Disabled by default.
- [Enabled on GitLab.com](https://gitlab.com/groups/gitlab-org/-/epics/3989) in GitLab 13.9.
- [Enabled on GitLab Self-Managed and GitLab Dedicated](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/188548) in GitLab 18.0.
{{< /history >}}
{{< alert type="warning" >}}
Use with caution when running in a production environment, and run during off-peak times.
{{< /alert >}}
Database indexes can be rebuilt regularly to reclaim space and maintain healthy
levels of index bloat over time. Reindexing can also be run as a
[regular cron job](https://docs.gitlab.com/omnibus/settings/database.html#automatic-database-reindexing).
A "healthy" level of bloat is highly dependent on the specific index, but generally
should be below 30%.
Prerequisites:
- This feature requires PostgreSQL 12 or later.
- These index types are **not supported**: expression indexes and indexes used for constraint exclusion.
### Run reindexing
The following task rebuilds only the two indexes in each database with the highest bloat. To rebuild more than two indexes, run the task again until all desired indexes have been rebuilt.
1. Run the reindexing task:
```shell
sudo gitlab-rake gitlab:db:reindex
```
1. Check [application_json.log](../../administration/logs/_index.md#application_jsonlog) to verify execution or to troubleshoot.
### Customize reindexing settings
For smaller instances or to adjust reindexing behavior, you can modify these settings using the Rails console:
```shell
sudo gitlab-rails console
```
Then customize the configuration:
```ruby
# Lower minimum index size to 100 MB (default is 1 GB)
Gitlab::Database::Reindexing.minimum_index_size!(100.megabytes)
# Change minimum bloat threshold to 30% (default is 20%, there is no benefit from setting it lower)
Gitlab::Database::Reindexing.minimum_relative_bloat_size!(0.3)
```
### Automated reindexing
For larger instances with significant database size, automate database reindexing by scheduling it to run during periods of low activity.
#### Schedule with crontab
For packaged GitLab installations, use crontab:
1. Edit the crontab:
```shell
sudo crontab -e
```
1. Add an entry based on your preferred schedule:
1. Option 1: Run daily during quiet periods
```shell
# Run database reindexing every day at 21:12
# The log will be rotated by the packaged logrotate daemon
12 21 * * * /opt/gitlab/bin/gitlab-rake gitlab:db:reindex >> /var/log/gitlab/gitlab-rails/cron_reindex.log 2>&1
```
1. Option 2: Run on weekends only
```shell
# Run database reindexing at 01:00 AM on weekends
0 1 * * 0,6 /opt/gitlab/bin/gitlab-rake gitlab:db:reindex >> /var/log/gitlab/gitlab-rails/cron_reindex.log 2>&1
```
1. Option 3: Run frequently during low-traffic hours
```shell
# Run database reindexing every 3 hours during night hours (22:00-07:00)
0 22,1,4,7 * * * /opt/gitlab/bin/gitlab-rake gitlab:db:reindex >> /var/log/gitlab/gitlab-rails/cron_reindex.log 2>&1
```
For Kubernetes deployments, you can create a similar schedule using the CronJob resource to run the reindexing task.
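After scheduling reindexing, it is worth checking the cron log for failed runs. The log format below is illustrative, not the exact format GitLab emits, and the log file here is simulated; on a real node the scheduled entries above write to `/var/log/gitlab/gitlab-rails/cron_reindex.log`.

```shell
# Simulated cron log (the line format is an assumption for illustration).
cat > /tmp/cron_reindex.log <<'EOF'
I, [2025-01-05T21:12:03] INFO -- : Reindexing index_projects_on_name
E, [2025-01-05T21:40:19] ERROR -- : PG::LockNotAvailable: canceling statement
EOF

# Surface errors from scheduled runs.
grep -c 'ERROR' /tmp/cron_reindex.log
```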
### Notes
- Rebuilding database indexes is a disk-intensive task, so you should perform the
task during off-peak hours. Running the task during peak hours can lead to
increased bloat, and can also cause certain queries to perform slowly.
- The task requires free disk space for the index being restored. The created
indexes are appended with `_ccnew`. If the reindexing task fails, re-running the
task cleans up the temporary indexes.
- The time it takes for database index rebuilding to complete depends on the size
of the target database. It can take between several hours and several days.
- The task uses Redis locks, so it's safe to schedule it to run frequently.
  It's a no-op if another reindexing task is already running.
## Dump the database schema
In rare circumstances, the database schema can differ from what the application code expects
even if all database migrations are complete. If this does occur, it can lead to odd errors
in GitLab.
To dump the database schema:
```shell
SCHEMA=/tmp/structure.sql gitlab-rake db:schema:dump
```
The Rake task creates a `/tmp/structure.sql` file that contains the database schema dump.
To determine if there are any differences:
1. Go to the [`db/structure.sql`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/db/structure.sql) file in the [`gitlab`](https://gitlab.com/gitlab-org/gitlab) project.
Select the branch that matches your GitLab version. For example, the file for GitLab 16.2: <https://gitlab.com/gitlab-org/gitlab/-/blob/16-2-stable-ee/db/structure.sql>.
1. Compare `/tmp/structure.sql` with the `db/structure.sql` file for your version.
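The comparison in the steps above can be done with `diff`. A minimal sketch, with both schema files simulated here; in practice `/tmp/structure.sql` comes from the `db:schema:dump` task and the reference file is the `db/structure.sql` for your GitLab version:

```shell
# Simulated schema files (illustrative content only).
cat > /tmp/reference_structure.sql <<'EOF'
CREATE TABLE projects (id bigint NOT NULL, name text);
EOF
cat > /tmp/structure.sql <<'EOF'
CREATE TABLE projects (id bigint NOT NULL, name text);
CREATE TABLE stray_table (id bigint NOT NULL);
EOF

# A non-empty diff means your schema differs from the application's expectation.
diff -u /tmp/reference_structure.sql /tmp/structure.sql || echo "schema differs from the reference"
```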
## Check the database for schema inconsistencies
{{< history >}}
- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/390719) in GitLab 15.11.
{{< /history >}}
This Rake task checks the database schema for any inconsistencies and prints them in the terminal.
This task is a diagnostic tool to be used under the guidance of GitLab Support.
You should not use the task for routine checks as database inconsistencies might be expected.
```shell
gitlab-rake gitlab:db:schema_checker:run
```
## Collect information and statistics about the database
{{< history >}}
- [Introduced](https://gitlab.com/groups/gitlab-com/-/epics/2456) in GitLab 17.11.
{{< /history >}}
The `gitlab:db:sos` command gathers configuration, performance, and diagnostic data about your GitLab
database to help you troubleshoot issues. Where you run this command depends on your configuration. Make sure
to run this command relative to where GitLab is installed (`/gitlab`).
- **Scaled GitLab**: on your Puma or Sidekiq server.
- **Cloud native install**: on the toolbox pod.
- **All other configurations**: on your GitLab server.
Modify the command as needed:
- **Default path** - To run the command with the default file path (`/var/opt/gitlab/gitlab-rails/tmp/sos.zip`), run `gitlab-rake gitlab:db:sos`.
- **Custom path** - To change the file path, run `gitlab-rake gitlab:db:sos["/absolute/custom/path/to/file.zip"]`.
- **Zsh users** - If you have not modified your Zsh configuration, you must add quotation marks
around the entire command, like this: `gitlab-rake "gitlab:db:sos[/absolute/custom/path/to/file.zip]"`
The Rake task runs for five minutes. It creates a compressed folder in the path you specify.
The compressed folder contains a large number of files.
### Enable optional query statistics data
The `gitlab:db:sos` Rake task can also gather data for troubleshooting slow queries using the
[`pg_stat_statements` extension](https://www.postgresql.org/docs/16/pgstatstatements.html).
Enabling this extension is optional, and requires restarting PostgreSQL and GitLab. This data is
likely required for troubleshooting GitLab performance issues caused by slow database queries.
Prerequisites:
- You must be a PostgreSQL user with superuser privileges to enable or disable an extension.
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
1. Modify `/etc/gitlab/gitlab.rb` to add the following line:
```ruby
postgresql['shared_preload_libraries'] = 'pg_stat_statements'
```
1. Run reconfigure:
```shell
sudo gitlab-ctl reconfigure
```
1. PostgreSQL needs to restart to load this extension, requiring a GitLab restart as well:
```shell
sudo gitlab-ctl restart postgresql
sudo gitlab-ctl restart sidekiq
sudo gitlab-ctl restart puma
```
{{< /tab >}}
{{< tab title="Docker" >}}
1. Modify `/etc/gitlab/gitlab.rb` to add the following line:
```ruby
postgresql['shared_preload_libraries'] = 'pg_stat_statements'
```
1. Run reconfigure:
```shell
docker exec -it <container-id> gitlab-ctl reconfigure
```
1. PostgreSQL needs to restart to load this extension, requiring a GitLab restart as well:
```shell
docker exec -it <container-id> gitlab-ctl restart postgresql
docker exec -it <container-id> gitlab-ctl restart sidekiq
docker exec -it <container-id> gitlab-ctl restart puma
```
{{< /tab >}}
{{< tab title="External PostgreSQL service" >}}
1. Add or uncomment the following parameters in your `postgresql.conf` file:
```shell
shared_preload_libraries = 'pg_stat_statements'
pg_stat_statements.track = all
```
1. Restart PostgreSQL for the changes to take effect.
1. Restart GitLab: the web (Puma) and Sidekiq services should be restarted.
{{< /tab >}}
{{< /tabs >}}
1. On the [database console](../troubleshooting/postgresql.md) run:
```SQL
CREATE EXTENSION pg_stat_statements;
```
1. Check the extension is working:
```SQL
SELECT extname FROM pg_extension WHERE extname = 'pg_stat_statements';
SELECT * FROM pg_stat_statements LIMIT 10;
```
## Check the database for duplicate CI/CD tags
{{< history >}}
- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/518698) in GitLab 17.10.
{{< /history >}}
This Rake task checks the `ci` database for duplicate tags in the `tags` table.
This issue might affect instances that have undergone multiple major upgrades over an extended period.
Run the following command to search for duplicate tags and rewrite any tag assignments that
reference a duplicate tag to use the original tag instead.
```shell
sudo gitlab-rake gitlab:db:deduplicate_tags
```
To run this command in dry-run mode, set the environment variable `DRY_RUN=true`.
## Detect PostgreSQL collation version mismatches
{{< history >}}
- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/195450) in GitLab 18.2.
{{< /history >}}
The PostgreSQL collation checker detects collation version mismatches between the database and
operating system that can cause index corruption. PostgreSQL uses the operating
system's `glibc` library for string collation (sorting and comparison rules).
Run this task after operating system upgrades that change the underlying `glibc` library.
Prerequisites:
- PostgreSQL 13 or later.
To check for PostgreSQL collation mismatches in all databases:
```shell
sudo gitlab-rake gitlab:db:collation_checker
```
To check a specific database:
```shell
# Check main database
sudo gitlab-rake gitlab:db:collation_checker:main
# Check CI database
sudo gitlab-rake gitlab:db:collation_checker:ci
```
### Example output
When no issues are found:
```plaintext
Checking for PostgreSQL collation mismatches on main database...
No collation mismatches detected on main.
```
If mismatches are detected, the task provides remediation steps to fix the affected indexes.
Example output with mismatches:
```plaintext
Checking for PostgreSQL collation mismatches on main database...
⚠️ COLLATION MISMATCHES DETECTED on main database!
2 collation(s) have version mismatches:
- en_US.utf8: stored=428.1, actual=513.1
- es_ES.utf8: stored=428.1, actual=513.1
Affected indexes that need to be rebuilt:
- index_projects_on_name (btree) on table projects
• Affected columns: name
• Type: UNIQUE
REMEDIATION STEPS:
1. Put GitLab into maintenance mode
2. Run the following SQL commands:
# Step 1: Check for duplicate entries in unique indexes
SELECT name, COUNT(*), ARRAY_AGG(id) FROM projects GROUP BY name HAVING COUNT(*) > 1 LIMIT 1;
# If duplicates exist, you may need to use gitlab:db:deduplicate_tags or similar tasks
# to fix duplicate entries before rebuilding unique indexes.
# Step 2: Rebuild affected indexes
# Option A: Rebuild individual indexes with minimal downtime:
REINDEX INDEX CONCURRENTLY index_projects_on_name;
# Option B: Alternatively, rebuild all indexes at once (requires downtime):
REINDEX DATABASE main;
# Step 3: Refresh collation versions
ALTER COLLATION "en_US.utf8" REFRESH VERSION;
ALTER COLLATION "es_ES.utf8" REFRESH VERSION;
3. Take GitLab out of maintenance mode
```
For more information about PostgreSQL collation issues and how they affect database indexes, see the [PostgreSQL upgrading OS documentation](../postgresql/upgrading_os.md).
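Because collation mismatches are triggered by changes to the operating system's `glibc`, it can help to record the `glibc` version before and after an OS upgrade so you know whether to expect mismatches. A minimal sketch, assuming a Linux host:

```shell
# Record the current glibc version (the library PostgreSQL uses for collation).
glibc_version=$(getconf GNU_LIBC_VERSION 2>/dev/null || ldd --version 2>/dev/null | head -n 1)
echo "glibc: ${glibc_version:-unknown}"
```

Compare the recorded value across the upgrade; if it changed, run the collation checker above.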
## Repair corrupted database indexes
{{< history >}}
- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/196677) in GitLab 18.2.
{{< /history >}}
The index repair tool fixes corrupted or missing database indexes that can cause data integrity issues.
The tool addresses specific problematic indexes that are affected by
collation mismatches or other corruption issues. The tool:
- Deduplicates data when unique indexes are corrupted.
- Updates references to maintain data integrity.
- Rebuilds or creates indexes with correct configuration.
Before repairing indexes, run the tool in dry-run mode to analyze potential changes:
```shell
sudo DRY_RUN=true gitlab-rake gitlab:db:repair_index
```
The following example output shows the changes:
```shell
INFO -- : DRY RUN: Analysis only, no changes will be made.
INFO -- : Running Index repair on database main...
INFO -- : Processing index 'index_merge_request_diff_commit_users_on_name_and_email'...
INFO -- : Index is unique. Checking for duplicate data...
INFO -- : No duplicates found in 'merge_request_diff_commit_users' for columns: name,email.
INFO -- : Index exists. Reindexing...
INFO -- : Index reindexed successfully.
```
To repair all known problematic indexes in all databases:
```shell
sudo gitlab-rake gitlab:db:repair_index
```
The command processes each database and repairs the indexes. For example:
```shell
INFO -- : Running Index repair on database main...
INFO -- : Processing index 'index_merge_request_diff_commit_users_on_name_and_email'...
INFO -- : Index is unique. Checking for duplicate data...
INFO -- : No duplicates found in 'merge_request_diff_commit_users' for columns: name,email.
INFO -- : Index does not exist. Creating new index...
INFO -- : Index created successfully.
INFO -- : Index repair completed for database main.
```
To repair indexes in a specific database:
```shell
# Repair indexes in main database
sudo gitlab-rake gitlab:db:repair_index:main
# Repair indexes in CI database
sudo gitlab-rake gitlab:db:repair_index:ci
```
## Troubleshooting
### Advisory lock connection information
After running the `db:migrate` Rake task, you may see output like the following:
```shell
main: == [advisory_lock_connection] object_id: 173580, pg_backend_pid: 5532
main: == [advisory_lock_connection] object_id: 173580, pg_backend_pid: 5532
```
The messages returned are informational and can be ignored.
### PostgreSQL socket errors when executing the `gitlab:env:info` Rake task
After running `sudo gitlab-rake gitlab:env:info` on Gitaly or other non-Rails nodes, you might see the following error:
```plaintext
PG::ConnectionBad: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/opt/gitlab/postgresql/.s.PGSQL.5432"?
```
This is because, in a multi-node environment, the `gitlab:env:info` Rake task should only be executed on the nodes running **GitLab Rails**.
|
---
stage: GitLab Delivery
group: Self Managed
info: To determine the technical writer assigned to the Stage/Group associated with
this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Maintenance Rake tasks
breadcrumbs:
- doc
- administration
- raketasks
---
{{< details >}}
- Tier: Free, Premium, Ultimate
- Offering: GitLab Self-Managed
{{< /details >}}
GitLab provides Rake tasks for general maintenance.
## Gather GitLab and system information
This command gathers information about your GitLab installation and the system it runs on.
These may be useful when asking for help or reporting issues. In a multi-node environment, run this command on nodes running GitLab Rails to avoid PostgreSQL socket errors.
- Linux package installations:
```shell
sudo gitlab-rake gitlab:env:info
```
- Self-compiled installations:
```shell
bundle exec rake gitlab:env:info RAILS_ENV=production
```
Example output:
```plaintext
System information
System: Ubuntu 20.04
Proxy: no
Current User: git
Using RVM: no
Ruby Version: 2.7.6p219
Gem Version: 3.1.6
Bundler Version:2.3.15
Rake Version: 13.0.6
Redis Version: 6.2.7
Sidekiq Version:6.4.2
Go Version: unknown
GitLab information
Version: 15.5.5-ee
Revision: 5f5109f142d
Directory: /opt/gitlab/embedded/service/gitlab-rails
DB Adapter: PostgreSQL
DB Version: 13.8
URL: https://app.gitaly.gcp.gitlabsandbox.net
HTTP Clone URL: https://app.gitaly.gcp.gitlabsandbox.net/some-group/some-project.git
SSH Clone URL: git@app.gitaly.gcp.gitlabsandbox.net:some-group/some-project.git
Elasticsearch: no
Geo: no
Using LDAP: no
Using Omniauth: yes
Omniauth Providers:
GitLab Shell
Version: 14.12.0
Repository storage paths:
- default: /var/opt/gitlab/git-data/repositories
- gitaly: /var/opt/gitlab/git-data/repositories
GitLab Shell path: /opt/gitlab/embedded/service/gitlab-shell
Gitaly
- default Address: unix:/var/opt/gitlab/gitaly/gitaly.socket
- default Version: 15.5.5
- default Git Version: 2.37.1.gl1
- gitaly Address: tcp://10.128.20.6:2305
- gitaly Version: 15.5.5
- gitaly Git Version: 2.37.1.gl1
```
## Show GitLab license information
{{< details >}}
- Tier: Premium, Ultimate
- Offering: GitLab Self-Managed
{{< /details >}}
This command shows information about your [GitLab license](../license.md) and
how many seats are used. It is only available on GitLab Enterprise
installations: a license cannot be installed into GitLab Community Edition.
These may be useful when raising tickets with Support, or for programmatically
checking your license parameters.
- Linux package installations:
```shell
sudo gitlab-rake gitlab:license:info
```
- Self-compiled installations:
```shell
bundle exec rake gitlab:license:info RAILS_ENV=production
```
Example output:
```plaintext
Today's Date: 2020-02-29
Current User Count: 30
Max Historical Count: 30
Max Users in License: 40
License valid from: 2019-11-29 to 2020-11-28
Email associated with license: user@example.com
```
## Check GitLab configuration
The `gitlab:check` Rake task runs the following Rake tasks:
- `gitlab:gitlab_shell:check`
- `gitlab:gitaly:check`
- `gitlab:sidekiq:check`
- `gitlab:incoming_email:check`
- `gitlab:ldap:check`
- `gitlab:app:check`
- `gitlab:geo:check` (only if you're running [Geo](../geo/replication/troubleshooting/common.md#health-check-rake-task))
It checks that each component was set up according to the installation guide and suggest fixes
for issues found. This command must be run from your application server and doesn't work correctly on
component servers like [Gitaly](../gitaly/configure_gitaly.md#run-gitaly-on-its-own-server).
You may also have a look at our troubleshooting guides for:
- [GitLab](../troubleshooting/_index.md).
- [Linux package installations](https://docs.gitlab.com/omnibus/#troubleshooting).
Additionally, you should [verify database values can be decrypted using the current secrets](check.md#verify-database-values-can-be-decrypted-using-the-current-secrets).
To run `gitlab:check`, run:
- Linux package installations:
```shell
sudo gitlab-rake gitlab:check
```
- Self-compiled installations:
```shell
bundle exec rake gitlab:check RAILS_ENV=production
```
- Kubernetes installations:
```shell
kubectl exec -it <toolbox-pod-name> -- sudo gitlab-rake gitlab:check
```
{{< alert type="note" >}}
Due to the specific architecture of Helm-based GitLab installations, the output may contain
false negatives for connectivity verification to `gitlab-shell`, Sidekiq, and `systemd`-related files.
These reported failures are expected and do not indicate actual issues; disregard them when reviewing diagnostic results.
{{< /alert >}}
Use `SANITIZE=true` for `gitlab:check` if you want to omit project names from the output.
Example output:
```plaintext
Checking Environment ...
Git configured for git user? ... yes
Has python2? ... yes
python2 is supported version? ... yes
Checking Environment ... Finished
Checking GitLab Shell ...
GitLab Shell version? ... OK (1.2.0)
Repo base directory exists? ... yes
Repo base directory is a symlink? ... no
Repo base owned by git:git? ... yes
Repo base access is drwxrws---? ... yes
post-receive hook up-to-date? ... yes
post-receive hooks in repos are links: ... yes
Checking GitLab Shell ... Finished
Checking Sidekiq ...
Running? ... yes
Checking Sidekiq ... Finished
Checking GitLab App...
Database config exists? ... yes
Database is SQLite ... no
All migrations up? ... yes
GitLab config exists? ... yes
GitLab config up to date? ... no
Cable config exists? ... yes
Resque config exists? ... yes
Log directory writable? ... yes
Tmp directory writable? ... yes
Init script exists? ... yes
Init script up-to-date? ... yes
Redis version >= 2.0.0? ... yes
Checking GitLab ... Finished
```
## Rebuild `authorized_keys` file
In some cases it is necessary to rebuild the `authorized_keys` file,
for example, if after an upgrade you receive `Permission denied (publickey)` when pushing [via SSH](../../user/ssh.md)
and find `404 Key Not Found` errors in [the `gitlab-shell.log` file](../logs/_index.md#gitlab-shelllog).
To rebuild `authorized_keys`, run:
- Linux package installations:
```shell
sudo gitlab-rake gitlab:shell:setup
```
- Self-compiled installations:
```shell
cd /home/git/gitlab
sudo -u git -H bundle exec rake gitlab:shell:setup RAILS_ENV=production
```
Example output:
```plaintext
This will rebuild an authorized_keys file.
You will lose any data stored in authorized_keys file.
Do you want to continue (yes/no)? yes
```
## Clear Redis cache
If for some reason the dashboard displays the wrong information, you might want to
clear the Redis cache. To do this, run:
- Linux package installations:
```shell
sudo gitlab-rake cache:clear
```
- Self-compiled installations:
```shell
cd /home/git/gitlab
sudo -u git -H bundle exec rake cache:clear RAILS_ENV=production
```
## Precompile the assets
Sometimes during version upgrades you might end up with incorrect CSS or
missing icons. In that case, try precompiling the assets again.
This Rake task only applies to self-compiled installations. [Read more](../../update/package/package_troubleshooting.md#missing-asset-files)
about troubleshooting this problem when running the Linux package.
The guidance for Linux package installations might also apply to Kubernetes and Docker
deployments of GitLab, though in general, container-based installations
don't have issues with missing assets.
- Self-compiled installations:
```shell
cd /home/git/gitlab
sudo -u git -H bundle exec rake gitlab:assets:compile RAILS_ENV=production
```
For Linux package installations, the unoptimized assets (JavaScript, CSS) are frozen at
the release of upstream GitLab. The Linux package installation includes optimized versions
of those assets. Unless you are modifying the JavaScript / CSS code on your
production machine after installing the package, there should be no reason to redo
`rake gitlab:assets:compile` on the production machine. If you suspect that assets
have been corrupted, you should reinstall the Linux package.
## Check TCP connectivity to a remote site
Sometimes you need to know if your GitLab installation can connect to a TCP
service on another machine (for example a PostgreSQL or web server)
to troubleshoot proxy issues.
A Rake task is included to help you with this.
- Linux package installations:
```shell
sudo gitlab-rake gitlab:tcp_check[example.com,80]
```
- Self-compiled installations:
```shell
cd /home/git/gitlab
sudo -u git -H bundle exec rake gitlab:tcp_check[example.com,80] RAILS_ENV=production
```
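Under the hood, the task attempts a plain TCP connection. If you cannot run Rake on a node, an equivalent check can be sketched in Ruby with the standard library (`tcp_reachable?` is a hypothetical helper, not part of GitLab):

```ruby
require 'socket'

# Attempt a TCP connection with a timeout, mirroring what the
# gitlab:tcp_check task verifies. Returns true when the port is reachable.
def tcp_reachable?(host, port, timeout: 5)
  Socket.tcp(host, port, connect_timeout: timeout) { true }
rescue SystemCallError, SocketError
  false
end
```

For example, `tcp_reachable?('example.com', 80)` returns `true` only when the remote port accepts a connection before the timeout.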
## Clear exclusive lease (DANGER)
GitLab uses a shared lock mechanism, `ExclusiveLease`, to prevent simultaneous operations
on a shared resource. An example is running periodic garbage collection on repositories.
In very specific situations, an operation locked by an Exclusive Lease can fail without
releasing the lock. If you can't wait for it to expire, you can run this task to manually
clear it.
To clear all exclusive leases:
{{< alert type="warning" >}}
Don't run this task while GitLab or Sidekiq is running.
{{< /alert >}}
```shell
sudo gitlab-rake gitlab:exclusive_lease:clear
```
To specify a lease `type` or lease `type + id`, specify a scope:
```shell
# to clear all leases for repository garbage collection:
sudo gitlab-rake gitlab:exclusive_lease:clear[project_housekeeping:*]
# to clear a lease for repository garbage collection in a specific project: (id=4)
sudo gitlab-rake gitlab:exclusive_lease:clear[project_housekeeping:4]
```
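Conceptually, an exclusive lease is a key with an expiry: the first caller to claim the key holds the lock until it is released or its TTL passes. A minimal in-memory sketch of that behavior (GitLab's real implementation stores the lease key in Redis so all processes see it):

```ruby
# In-memory sketch of an exclusive lease with a TTL.
class Lease
  def initialize
    @held = {} # key => expiry time
  end

  # Returns true if the lease was obtained, false if it is already held
  # and has not yet expired.
  def try_obtain(key, ttl:)
    return false if @held[key] && @held[key] > Time.now

    @held[key] = Time.now + ttl
    true
  end

  # What the clear task does conceptually: drop the key without waiting
  # for the TTL to expire.
  def cancel(key)
    @held.delete(key)
  end
end
```

Cancelling the lease is what makes a stuck operation (one that failed without releasing its lock) immediately retryable.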
## Display status of database migrations
See the [background migrations documentation](../../update/background_migrations.md)
for how to check that migrations are complete when upgrading GitLab.
To check the status of specific migrations, you can use the following Rake task:
```shell
sudo gitlab-rake db:migrate:status
```
To check the [tracking database on a Geo secondary site](../geo/setup/external_database.md#configure-the-tracking-database), you can use the following Rake task:
```shell
sudo gitlab-rake db:migrate:status:geo
```
This outputs a table with a `Status` of `up` or `down` for
each migration. Example:
```shell
database: gitlabhq_production
Status Migration ID Type Milestone Name
--------------------------------------------------
up 20240701074848 regular 17.2 AddGroupIdToPackagesDebianGroupComponents
up 20240701153843 regular 17.2 AddWorkItemsDatesSourcesSyncToIssuesTrigger
up 20240702072515 regular 17.2 AddGroupIdToPackagesDebianGroupArchitectures
up 20240702133021 regular 17.2 AddWorkspaceTerminationTimeoutsToRemoteDevelopmentAgentConfigs
up 20240604064938 post 17.2 FinalizeBackfillPartitionIdCiPipelineMessage
up 20240604111157 post 17.2 AddApprovalPolicyRulesFkOnApprovalGroupRules
```
Starting with GitLab 17.1, migrations are executed in an
order that conforms to the GitLab release cadence.
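When scripting upgrade checks, the IDs of pending migrations can be pulled out of this output by filtering rows whose status is `down`. A small sketch, assuming the column layout shown in the example output above:

```ruby
# Extract the IDs of migrations reported as "down" from the output of
# `gitlab-rake db:migrate:status`.
def down_migration_ids(output)
  output.lines.filter_map do |line|
    id = line[/\A\s*down\s+(\d+)/, 1]
    id && Integer(id)
  end
end
```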
## Run incomplete database migrations
Database migrations can be stuck in an incomplete state, with a `down`
status in the output of the `sudo gitlab-rake db:migrate:status` command.
1. To complete these migrations, use the following Rake task:
```shell
sudo gitlab-rake db:migrate
```
1. After the command completes, run `sudo gitlab-rake db:migrate:status` to check if all migrations are completed (have an `up` status).
1. Hot reload `puma` and `sidekiq` services:
```shell
sudo gitlab-ctl hup puma
sudo gitlab-ctl restart sidekiq
```
Starting with GitLab 17.1, migrations are executed in an
order that conforms to the GitLab release cadence.
## Rebuild database indexes
{{< history >}}
- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/42705) in GitLab 13.5 [with a flag](../../administration/feature_flags/_index.md) named `database_reindexing`. Disabled by default.
- [Enabled on GitLab.com](https://gitlab.com/groups/gitlab-org/-/epics/3989) in GitLab 13.9.
- [Enabled on GitLab Self-Managed and GitLab Dedicated](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/188548) in GitLab 18.0.
{{< /history >}}
{{< alert type="warning" >}}
Use with caution when running in a production environment, and run during off-peak times.
{{< /alert >}}
Database indexes can be rebuilt regularly to reclaim space and maintain healthy
levels of index bloat over time. Reindexing can also be run as a
[regular cron job](https://docs.gitlab.com/omnibus/settings/database.html#automatic-database-reindexing).
A "healthy" level of bloat is highly dependent on the specific index, but generally
should be below 30%.
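Bloat is the share of an index's on-disk size that would be reclaimed by a rebuild. For example, an index occupying 1,000 MB whose freshly rebuilt size would be 700 MB is 30% bloated. A sketch of that calculation (`bloat_percent` is illustrative, not a GitLab API):

```ruby
# Bloat as a percentage: how much of the current index size would be
# reclaimed by a rebuild. Sizes can be in any consistent unit.
def bloat_percent(current_size, rebuilt_size)
  (100.0 * (current_size - rebuilt_size) / current_size).round(1)
end
```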
Prerequisites:
- This feature requires PostgreSQL 12 or later.
- These index types are **not supported**: expression indexes and indexes used for constraint exclusion.
### Run reindexing
The following task rebuilds only the two indexes in each database with the highest bloat. To rebuild more than two indexes, run the task again until all desired indexes have been rebuilt.
1. Run the reindexing task:
```shell
sudo gitlab-rake gitlab:db:reindex
```
1. Check [application_json.log](../../administration/logs/_index.md#application_jsonlog) to verify execution or to troubleshoot.
### Customize reindexing settings
For smaller instances or to adjust reindexing behavior, you can modify these settings using the Rails console:
```shell
sudo gitlab-rails console
```
Then customize the configuration:
```ruby
# Lower minimum index size to 100 MB (default is 1 GB)
Gitlab::Database::Reindexing.minimum_index_size!(100.megabytes)
# Change minimum bloat threshold to 30% (default is 20%, there is no benefit from setting it lower)
Gitlab::Database::Reindexing.minimum_relative_bloat_size!(0.3)
```
### Automated reindexing
For larger instances with significant database size, automate database reindexing by scheduling it to run during periods of low activity.
#### Schedule with crontab
For packaged GitLab installations, use crontab:
1. Edit the crontab:
```shell
sudo crontab -e
```
1. Add an entry based on your preferred schedule:
1. Option 1: Run daily during quiet periods
```shell
# Run database reindexing every day at 21:12
# The log will be rotated by the packaged logrotate daemon
12 21 * * * /opt/gitlab/bin/gitlab-rake gitlab:db:reindex >> /var/log/gitlab/gitlab-rails/cron_reindex.log 2>&1
```
1. Option 2: Run on weekends only
```shell
# Run database reindexing at 01:00 AM on weekends
0 1 * * 0,6 /opt/gitlab/bin/gitlab-rake gitlab:db:reindex >> /var/log/gitlab/gitlab-rails/cron_reindex.log 2>&1
```
1. Option 3: Run frequently during low-traffic hours
```shell
# Run database reindexing every 3 hours during night hours (22:00-07:00)
0 22,1,4,7 * * * /opt/gitlab/bin/gitlab-rake gitlab:db:reindex >> /var/log/gitlab/gitlab-rails/cron_reindex.log 2>&1
```
For Kubernetes deployments, you can create a similar schedule using the CronJob resource to run the reindexing task.
### Notes
- Rebuilding database indexes is a disk-intensive task, so you should perform the
task during off-peak hours. Running the task during peak hours can lead to
increased bloat, and can also cause certain queries to perform slowly.
- The task requires free disk space for the index being restored. The created
indexes are appended with `_ccnew`. If the reindexing task fails, re-running the
task cleans up the temporary indexes.
- The time it takes for database index rebuilding to complete depends on the size
of the target database. It can take between several hours and several days.
- The task uses Redis locks, so it's safe to schedule it to run frequently.
It's a no-op if another reindexing task is already running.
## Dump the database schema
In rare circumstances, the database schema can differ from what the application code expects
even if all database migrations are complete. If this does occur, it can lead to odd errors
in GitLab.
To dump the database schema:
```shell
SCHEMA=/tmp/structure.sql gitlab-rake db:schema:dump
```
The Rake task creates a `/tmp/structure.sql` file that contains the database schema dump.
To determine if there are any differences:
1. Go to the [`db/structure.sql`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/db/structure.sql) file in the [`gitlab`](https://gitlab.com/gitlab-org/gitlab) project.
Select the branch that matches your GitLab version. For example, the file for GitLab 16.2: <https://gitlab.com/gitlab-org/gitlab/-/blob/16-2-stable-ee/db/structure.sql>.
1. Compare `/tmp/structure.sql` with the `db/structure.sql` file for your version.
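To narrow down where the two files differ, comparing the set of table definitions is a quick first pass before a full line-by-line diff. A hypothetical helper, assuming standard `CREATE TABLE` statements in both dumps:

```ruby
# List table names defined in a structure.sql dump, so two dumps can be
# compared for missing or extra tables.
def table_names(sql)
  sql.scan(/^CREATE TABLE (\S+)/).flatten.map { |t| t.delete('"') }.sort
end

# Tables present in the expected schema but absent from the actual one.
def missing_tables(expected_sql, actual_sql)
  table_names(expected_sql) - table_names(actual_sql)
end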
## Check the database for schema inconsistencies
{{< history >}}
- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/390719) in GitLab 15.11.
{{< /history >}}
This Rake task checks the database schema for any inconsistencies and prints them in the terminal.
This task is a diagnostic tool to be used under the guidance of GitLab Support.
You should not use the task for routine checks as database inconsistencies might be expected.
```shell
gitlab-rake gitlab:db:schema_checker:run
```
## Collect information and statistics about the database
{{< history >}}
- [Introduced](https://gitlab.com/groups/gitlab-com/-/epics/2456) in GitLab 17.11.
{{< /history >}}
The `gitlab:db:sos` command gathers configuration, performance, and diagnostic data about your GitLab
database to help you troubleshoot issues. Where you run this command depends on your configuration. Make sure
to run this command relative to where GitLab is installed (`/gitlab`).
- **Scaled GitLab**: on your Puma or Sidekiq server.
- **Cloud native install**: on the toolbox pod.
- **All other configurations**: on your GitLab server.
Modify the command as needed:
- **Default path** - To run the command with the default file path (`/var/opt/gitlab/gitlab-rails/tmp/sos.zip`), run `gitlab-rake gitlab:db:sos`.
- **Custom path** - To change the file path, run `gitlab-rake gitlab:db:sos["/absolute/custom/path/to/file.zip"]`.
- **Zsh users** - If you have not modified your Zsh configuration, you must add quotation marks
around the entire command, like this: `gitlab-rake "gitlab:db:sos[/absolute/custom/path/to/file.zip]"`
The Rake task runs for five minutes. It creates a compressed archive at the path you specify.
The archive contains a large number of files.
### Enable optional query statistics data
The `gitlab:db:sos` Rake task can also gather data for troubleshooting slow queries using the
[`pg_stat_statements` extension](https://www.postgresql.org/docs/16/pgstatstatements.html).
Enabling this extension is optional, and requires restarting PostgreSQL and GitLab. This data is
likely required for troubleshooting GitLab performance issues caused by slow database queries.
Prerequisites:
- You must be a PostgreSQL user with superuser privileges to enable or disable an extension.
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
1. Modify `/etc/gitlab/gitlab.rb` to add the following line:
```ruby
postgresql['shared_preload_libraries'] = 'pg_stat_statements'
```
1. Run reconfigure:
```shell
sudo gitlab-ctl reconfigure
```
1. PostgreSQL needs to restart to load this extension, requiring a GitLab restart as well:
```shell
sudo gitlab-ctl restart postgresql
sudo gitlab-ctl restart sidekiq
sudo gitlab-ctl restart puma
```
{{< /tab >}}
{{< tab title="Docker" >}}
1. Modify `/etc/gitlab/gitlab.rb` to add the following line:
```ruby
postgresql['shared_preload_libraries'] = 'pg_stat_statements'
```
1. Run reconfigure:
```shell
docker exec -it <container-id> gitlab-ctl reconfigure
```
1. PostgreSQL needs to restart to load this extension, requiring a GitLab restart as well:
```shell
docker exec -it <container-id> gitlab-ctl restart postgresql
docker exec -it <container-id> gitlab-ctl restart sidekiq
docker exec -it <container-id> gitlab-ctl restart puma
```
{{< /tab >}}
{{< tab title="External PostgreSQL service" >}}
1. Add or uncomment the following parameters in your `postgresql.conf` file:
```shell
shared_preload_libraries = 'pg_stat_statements'
pg_stat_statements.track = all
```
1. Restart PostgreSQL for the changes to take effect.
1. Restart GitLab: the web (Puma) and Sidekiq services should be restarted.
{{< /tab >}}
{{< /tabs >}}
1. On the [database console](../troubleshooting/postgresql.md) run:
```SQL
CREATE EXTENSION pg_stat_statements;
```
1. Check the extension is working:
```SQL
SELECT extname FROM pg_extension WHERE extname = 'pg_stat_statements';
SELECT * FROM pg_stat_statements LIMIT 10;
```
## Check the database for duplicate CI/CD tags
{{< history >}}
- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/518698) in GitLab 17.10.
{{< /history >}}
This Rake task checks the `ci` database for duplicate tags in the `tags` table.
This issue might affect instances that have undergone multiple major upgrades over an extended period.
Run the following command to search for duplicate tags, then rewrite any tag assignments that
reference duplicate tags to use the original tag instead.
```shell
sudo gitlab-rake gitlab:db:deduplicate_tags
```
To run this command in dry-run mode, set the environment variable `DRY_RUN=true`.
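The deduplication follows a common pattern: pick one canonical row per duplicate name (here, the one with the lowest ID), repoint references to it, then drop the rest. An in-memory sketch of that mapping step — the real task does this in SQL:

```ruby
# Given duplicate tag rows, map every tag ID to the canonical (lowest) ID
# for its name, so tag assignments can be rewritten to the original tag.
def canonical_tag_ids(tags)
  mapping = {}
  tags.group_by { |t| t[:name] }.each_value do |group|
    canonical = group.map { |t| t[:id] }.min
    group.each { |t| mapping[t[:id]] = canonical }
  end
  mapping
end
```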
## Detect PostgreSQL collation version mismatches
{{< history >}}
- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/195450) in GitLab 18.2.
{{< /history >}}
The PostgreSQL collation checker detects collation version mismatches between the database and
operating system that can cause index corruption. PostgreSQL uses the operating
system's `glibc` library for string collation (sorting and comparison rules).
Run this task after operating system upgrades that change the underlying `glibc` library.
Prerequisites:
- PostgreSQL 13 or later.
To check for PostgreSQL collation mismatches in all databases:
```shell
sudo gitlab-rake gitlab:db:collation_checker
```
To check a specific database:
```shell
# Check main database
sudo gitlab-rake gitlab:db:collation_checker:main
# Check CI database
sudo gitlab-rake gitlab:db:collation_checker:ci
```
### Example output
When no issues are found:
```plaintext
Checking for PostgreSQL collation mismatches on main database...
No collation mismatches detected on main.
```
If mismatches are detected, the task provides remediation steps to fix the affected indexes.
Example output with mismatches:
```plaintext
Checking for PostgreSQL collation mismatches on main database...
⚠️ COLLATION MISMATCHES DETECTED on main database!
2 collation(s) have version mismatches:
- en_US.utf8: stored=428.1, actual=513.1
- es_ES.utf8: stored=428.1, actual=513.1
Affected indexes that need to be rebuilt:
- index_projects_on_name (btree) on table projects
• Affected columns: name
• Type: UNIQUE
REMEDIATION STEPS:
1. Put GitLab into maintenance mode
2. Run the following SQL commands:
# Step 1: Check for duplicate entries in unique indexes
SELECT name, COUNT(*), ARRAY_AGG(id) FROM projects GROUP BY name HAVING COUNT(*) > 1 LIMIT 1;
# If duplicates exist, you may need to use gitlab:db:deduplicate_tags or similar tasks
# to fix duplicate entries before rebuilding unique indexes.
# Step 2: Rebuild affected indexes
# Option A: Rebuild individual indexes with minimal downtime:
REINDEX INDEX CONCURRENTLY index_projects_on_name;
# Option B: Alternatively, rebuild all indexes at once (requires downtime):
REINDEX DATABASE main;
# Step 3: Refresh collation versions
ALTER COLLATION "en_US.utf8" REFRESH VERSION;
ALTER COLLATION "es_ES.utf8" REFRESH VERSION;
3. Take GitLab out of maintenance mode
```
For more information about PostgreSQL collation issues and how they affect database indexes, see the [PostgreSQL upgrading OS documentation](../postgresql/upgrading_os.md).
## Repair corrupted database indexes
{{< history >}}
- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/196677) in GitLab 18.2.
{{< /history >}}
The index repair tool fixes corrupted or missing database indexes that can cause data integrity issues.
The tool addresses specific problematic indexes that are affected by
collation mismatches or other corruption issues. The tool:
- Deduplicates data when unique indexes are corrupted.
- Updates references to maintain data integrity.
- Rebuilds or creates indexes with correct configuration.
Before repairing indexes, run the tool in dry-run mode to analyze potential changes:
```shell
sudo DRY_RUN=true gitlab-rake gitlab:db:repair_index
```
The following example output shows the changes:
```shell
INFO -- : DRY RUN: Analysis only, no changes will be made.
INFO -- : Running Index repair on database main...
INFO -- : Processing index 'index_merge_request_diff_commit_users_on_name_and_email'...
INFO -- : Index is unique. Checking for duplicate data...
INFO -- : No duplicates found in 'merge_request_diff_commit_users' for columns: name,email.
INFO -- : Index exists. Reindexing...
INFO -- : Index reindexed successfully.
```
To repair all known problematic indexes in all databases:
```shell
sudo gitlab-rake gitlab:db:repair_index
```
The command processes each database and repairs the indexes. For example:
```shell
INFO -- : Running Index repair on database main...
INFO -- : Processing index 'index_merge_request_diff_commit_users_on_name_and_email'...
INFO -- : Index is unique. Checking for duplicate data...
INFO -- : No duplicates found in 'merge_request_diff_commit_users' for columns: name,email.
INFO -- : Index does not exist. Creating new index...
INFO -- : Index created successfully.
INFO -- : Index repair completed for database main.
```
To repair indexes in a specific database:
```shell
# Repair indexes in main database
sudo gitlab-rake gitlab:db:repair_index:main
# Repair indexes in CI database
sudo gitlab-rake gitlab:db:repair_index:ci
```
## Troubleshooting
### Advisory lock connection information
After running the `db:migrate` Rake task, you may see output like the following:
```shell
main: == [advisory_lock_connection] object_id: 173580, pg_backend_pid: 5532
main: == [advisory_lock_connection] object_id: 173580, pg_backend_pid: 5532
```
The messages returned are informational and can be ignored.
### PostgreSQL socket errors when executing the `gitlab:env:info` Rake task
After running `sudo gitlab-rake gitlab:env:info` on Gitaly or other non-Rails nodes, you might see the following error:
```plaintext
PG::ConnectionBad: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/opt/gitlab/postgresql/.s.PGSQL.5432"?
```
This is because, in a multi-node environment, the `gitlab:env:info` Rake task should only be executed on the nodes running **GitLab Rails**.
---
url: https://docs.gitlab.com/administration/praefect
repo_url: https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/administration/praefect.md
date_extracted: 2025-08-13
stage: Data Access
group: Gitaly
info: To determine the technical writer assigned to the Stage/Group associated with
  this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Praefect Rake tasks
breadcrumbs:
- doc
- administration
- raketasks
---
{{< details >}}
- Tier: Free, Premium, Ultimate
- Offering: GitLab Self-Managed
{{< /details >}}
Rake tasks are available for projects that have been created on Praefect storage. See the
[Praefect documentation](../gitaly/praefect/_index.md) for information on configuring Praefect.
## Replica checksums
`gitlab:praefect:replicas` prints out checksums of the repository of a given `project_id` on:
- The primary Gitaly node.
- Secondary internal Gitaly nodes.
Run this Rake task on the node where GitLab is installed, not on the node where Praefect is installed.
- Linux package installations:
```shell
sudo gitlab-rake "gitlab:praefect:replicas[project_id]"
```
- Self-compiled installations:
```shell
sudo -u git -H bundle exec rake "gitlab:praefect:replicas[project_id]" RAILS_ENV=production
```
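The point of the output is to compare checksums across nodes: a replica whose checksum differs from the primary is out of sync. A sketch of that comparison, with hypothetical node names and checksum values:

```ruby
# Given the checksum reported for each node, list replicas that diverge
# from the primary. Node names and checksums are examples.
def diverged_replicas(checksums, primary:)
  reference = checksums.fetch(primary)
  checksums.reject { |node, sum| node == primary || sum == reference }.keys
end
```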
---
url: https://docs.gitlab.com/administration/import_export_rake_tasks_troubleshooting
repo_url: https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/administration/import_export_rake_tasks_troubleshooting.md
date_extracted: 2025-08-13
stage: GitLab Delivery
group: Self Managed
info: To determine the technical writer assigned to the Stage/Group associated with
  this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Troubleshooting project import and export
breadcrumbs:
- doc
- administration
- raketasks
---
If you are having trouble with import or export, use a Rake task to enable debug mode:
```shell
# Import
IMPORT_DEBUG=true gitlab-rake "gitlab:import_export:import[root, group/subgroup, testingprojectimport, /path/to/file_to_import.tar.gz]"
# Export
EXPORT_DEBUG=true gitlab-rake "gitlab:import_export:export[root, group/subgroup, projectnametoexport, /tmp/export_file.tar.gz]"
```
Then, review the following details on specific error messages.
## `Exception: undefined method 'name' for nil:NilClass`
The `username` is not valid.
## `Exception: undefined method 'full_path' for nil:NilClass`
The `namespace_path` does not exist.
For example, one of the groups or subgroups is mistyped or missing,
or you've specified the project name in the path.
The task only creates the project.
If you want to import it into a new group or subgroup, create the group first.
## `Exception: No such file or directory @ rb_sysopen - (filename)`
The specified project export file in `archive_path` is missing.
## `Exception: Permission denied @ rb_sysopen - (filename)`
The specified project export file cannot be accessed by the `git` user.
To fix the issue:
1. Set the file owner to `git:git`.
1. Change the file permissions to `0400`.
1. Move the file to a public folder (for example `/tmp/`).
## `Name can contain only letters, digits, emoji ...`
```plaintext
Name can contain only letters, digits, emoji, '_', '.', '+', dashes, or spaces. It must start with a letter,
digit, emoji, or '_', and Path can contain only letters, digits, '_', '-', or '.'. It cannot start
with '-', end in '.git', or end in '.atom'.
```
The project name specified in `project_path` is not valid for one of the specified reasons.
Only put the project name in `project_path`. For example, if you provide a path of subgroups
it fails with this error as `/` is not a valid character in a project name.
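The rules quoted in the error message can be approximated as a quick local check before running the import. This regex is illustrative only, not GitLab's exact validation:

```ruby
# Rough check of the path rules above: letters, digits, '_', '-', or '.';
# must not start with '-' or end in '.git' or '.atom'.
def plausible_project_path?(path)
  path.match?(/\A[A-Za-z0-9_.][A-Za-z0-9_.-]*\z/) &&
    !path.end_with?('.git', '.atom')
end
```

Note that `/` fails the check, which is why a path of subgroups cannot be passed as the project name.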
## `Name has already been taken and Path has already been taken`
A project with that name already exists.
## `Exception: Error importing repository into (namespace) - No space left on device`
The disk has insufficient space to complete the import.
During import, the tarball is cached in your configured `shared_path` directory. Verify the
disk has enough free space to accommodate both the cached tarball and the unpacked
project files on disk.
## Import succeeds with `Total number of not imported relations: XX` message
If you receive a `Total number of not imported relations: XX` message, and issues
aren't created during the import, check [exceptions_json.log](../logs/_index.md#exceptions_jsonlog).
You might see an error like `N is out of range for ActiveModel::Type::Integer with limit 4 bytes`,
where `N` is the integer exceeding the 4-byte integer limit. If that's the case, you
are likely hitting the issue with rebalancing of the `relative_position` field of issues.
```ruby
# Check the current maximum value of relative_position
Issue.where(project_id: Project.find(ID).root_namespace.all_projects).maximum(:relative_position)
# Run the rebalancing process and check if the maximum value of relative_position has changed
Issues::RelativePositionRebalancingService.new(Project.find(ID).root_namespace.all_projects).execute
Issue.where(project_id: Project.find(ID).root_namespace.all_projects).maximum(:relative_position)
```
Repeat the import attempt and check if the issues are imported successfully.
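The limit referenced in the error is PostgreSQL's 4-byte `integer` type. A quick way to check whether a `relative_position` value fits:

```ruby
# PostgreSQL's 4-byte integer holds values from -2**31 to 2**31 - 1.
# A relative_position outside this range triggers the error above.
PG_INT4_RANGE = (-2**31)..(2**31 - 1)

def fits_in_int4?(value)
  PG_INT4_RANGE.cover?(value)
end
```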
## Gitaly calls error when importing
If you're attempting to import a large project into a development environment, Gitaly might throw an error about too many calls or invocations. For example:
```plaintext
Error importing repository into qa-perf-testing/gitlabhq - GitalyClient#call called 31 times from single request. Potential n+1?
```
This error is due to an n+1 calls limit for development setups. To resolve this error, set `GITALY_DISABLE_REQUEST_LIMITS=1` as an environment variable. Then restart your development environment and import again.
|
---
stage: GitLab Delivery
group: Self Managed
info: To determine the technical writer assigned to the Stage/Group associated with
this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Troubleshooting project import and export
breadcrumbs:
- doc
- administration
- raketasks
---
If you are having trouble with import or export, use a Rake task to enable debug mode:
```shell
# Import
IMPORT_DEBUG=true gitlab-rake "gitlab:import_export:import[root, group/subgroup, testingprojectimport, /path/to/file_to_import.tar.gz]"
# Export
EXPORT_DEBUG=true gitlab-rake "gitlab:import_export:export[root, group/subgroup, projectnametoexport, /tmp/export_file.tar.gz]"
```
Then, review the following details on specific error messages.
## `Exception: undefined method 'name' for nil:NilClass`
The `username` is not valid.
## `Exception: undefined method 'full_path' for nil:NilClass`
The `namespace_path` does not exist.
For example, one of the groups or subgroups is mistyped or missing,
or you've specified the project name in the path.
The task only creates the project.
If you want to import it to a new group or subgroup, create it first.
## `Exception: No such file or directory @ rb_sysopen - (filename)`
The specified project export file in `archive_path` is missing.
## `Exception: Permission denied @ rb_sysopen - (filename)`
The specified project export file cannot be accessed by the `git` user.
To fix the issue:
1. Set the file owner to `git:git`.
1. Change the file permissions to `0400`.
1. Move the file to a public folder (for example `/tmp/`).
## `Name can contain only letters, digits, emoji ...`
```plaintext
Name can contain only letters, digits, emoji, '_', '.', '+', dashes, or spaces. It must start with a letter,
digit, emoji, or '_', and Path can contain only letters, digits, '_', '-', or '.'. It cannot start
with '-', end in '.git', or end in '.atom'.
```
The project name specified in `project_path` is not valid for one of the specified reasons.
Only put the project name in `project_path`. For example, if you provide a path of subgroups
it fails with this error as `/` is not a valid character in a project name.
## `Name has already been taken and Path has already been taken`
A project with that name already exists.
## `Exception: Error importing repository into (namespace) - No space left on device`
The disk has insufficient space to complete the import.
During import, the tarball is cached in your configured `shared_path` directory. Verify the
disk has enough free space to accommodate both the cached tarball and the unpacked
project files on disk.
## Import succeeds with `Total number of not imported relations: XX` message
If you receive a `Total number of not imported relations: XX` message, and issues
aren't created during the import, check [exceptions_json.log](../logs/_index.md#exceptions_jsonlog).
You might see an error like `N is out of range for ActiveModel::Type::Integer with limit 4 bytes`,
where `N` is the integer exceeding the 4-byte integer limit. If that's the case, you
are likely hitting an issue with the rebalancing of the `relative_position` field of issues.
```ruby
# Check the current maximum value of relative_position
Issue.where(project_id: Project.find(ID).root_namespace.all_projects).maximum(:relative_position)
# Run the rebalancing process and check if the maximum value of relative_position has changed
Issues::RelativePositionRebalancingService.new(Project.find(ID).root_namespace.all_projects).execute
Issue.where(project_id: Project.find(ID).root_namespace.all_projects).maximum(:relative_position)
```
Repeat the import attempt and check if the issues are imported successfully.
## Gitaly calls error when importing
If you're attempting to import a large project into a development environment, Gitaly might throw an error about too many calls or invocations. For example:
```plaintext
Error importing repository into qa-perf-testing/gitlabhq - GitalyClient#call called 31 times from single request. Potential n+1?
```
This error is due to an n+1 call limit enforced in development setups. To resolve this error, set `GITALY_DISABLE_REQUEST_LIMITS=1` as an environment variable, restart your development environment, and import again.
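For example, in a shell session (the restart command depends on your setup; `gdk restart` applies to the GitLab Development Kit):

```shell
# Disable Gitaly's n+1 request limit for commands run from this shell
export GITALY_DISABLE_REQUEST_LIMITS=1

# Restart your development environment so Gitaly picks up the variable,
# for example with the GitLab Development Kit:
#   gdk restart
# Then retry the import.
```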
---
stage: GitLab Delivery
group: Self Managed
info: To determine the technical writer assigned to the Stage/Group associated with
  this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Service Desk email Rake tasks
breadcrumbs:
- doc
- administration
- raketasks
---
{{< details >}}
- Tier: Free, Premium, Ultimate
- Offering: GitLab Self-Managed
{{< /details >}}
{{< history >}}
- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/108279) in GitLab 15.9.
{{< /history >}}
The following are Service Desk email-related Rake tasks.
## Secrets
GitLab can use [Service Desk email](../../user/project/service_desk/configure.md#configure-service-desk-alias-email) secrets read from an encrypted file instead of storing them in plaintext in the file system. The following Rake tasks are provided for updating the contents of the encrypted file.
### Show secret
Show the contents of the current Service Desk email secrets.
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
```shell
sudo gitlab-rake gitlab:service_desk_email:secret:show
```
{{< /tab >}}
{{< tab title="Helm chart (Kubernetes)" >}}
Use a Kubernetes secret to store the Service Desk email password. For more information,
read about [Helm IMAP secrets](https://docs.gitlab.com/charts/installation/secrets.html#imap-password-for-service-desk-emails).
{{< /tab >}}
{{< tab title="Docker" >}}
```shell
sudo docker exec -t <container name> gitlab-rake gitlab:service_desk_email:secret:show
```
{{< /tab >}}
{{< tab title="Self-compiled (source)" >}}
```shell
bundle exec rake gitlab:service_desk_email:secret:show RAILS_ENV=production
```
{{< /tab >}}
{{< /tabs >}}
#### Example output
```plaintext
password: 'examplepassword'
user: 'service-desk-email@mail.example.com'
```
### Edit secret
Opens the secret contents in your editor, and writes the resulting content to the encrypted secret file when you exit.
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
```shell
sudo gitlab-rake gitlab:service_desk_email:secret:edit EDITOR=vim
```
{{< /tab >}}
{{< tab title="Helm chart (Kubernetes)" >}}
Use a Kubernetes secret to store the Service Desk email password. For more information,
read about [Helm IMAP secrets](https://docs.gitlab.com/charts/installation/secrets.html#imap-password-for-service-desk-emails).
{{< /tab >}}
{{< tab title="Docker" >}}
```shell
sudo docker exec -t <container name> gitlab-rake gitlab:service_desk_email:secret:edit EDITOR=editor
```
{{< /tab >}}
{{< tab title="Self-compiled (source)" >}}
```shell
bundle exec rake gitlab:service_desk_email:secret:edit RAILS_ENV=production EDITOR=vim
```
{{< /tab >}}
{{< /tabs >}}
### Write raw secret
Write new secret content by providing it on `STDIN`.
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
```shell
echo -e "password: 'examplepassword'" | sudo gitlab-rake gitlab:service_desk_email:secret:write
```
{{< /tab >}}
{{< tab title="Helm chart (Kubernetes)" >}}
Use a Kubernetes secret to store the Service Desk email password. For more information,
read about [Helm IMAP secrets](https://docs.gitlab.com/charts/installation/secrets.html#imap-password-for-service-desk-emails).
{{< /tab >}}
{{< tab title="Docker" >}}
```shell
sudo docker exec -t <container name> /bin/bash
echo -e "password: 'examplepassword'" | gitlab-rake gitlab:service_desk_email:secret:write
```
{{< /tab >}}
{{< tab title="Self-compiled (source)" >}}
```shell
echo -e "password: 'examplepassword'" | bundle exec rake gitlab:service_desk_email:secret:write RAILS_ENV=production
```
{{< /tab >}}
{{< /tabs >}}
### Secrets examples
**Editor example**
The write task can be used in cases where the edit command does not work with your editor:
```shell
# Write the existing secret to a plaintext file
sudo gitlab-rake gitlab:service_desk_email:secret:show > service_desk_email.yaml
# Edit the service_desk_email file in your editor
...
# Re-encrypt the file
cat service_desk_email.yaml | sudo gitlab-rake gitlab:service_desk_email:secret:write
# Remove the plaintext file
rm service_desk_email.yaml
```
**KMS integration example**
It can also be used as a receiving application for content encrypted with a KMS:
```shell
gcloud kms decrypt --key my-key --keyring my-test-kms --plaintext-file=- --ciphertext-file=my-file --location=us-west1 | sudo gitlab-rake gitlab:service_desk_email:secret:write
```
**Google Cloud secret integration example**
It can also be used as a receiving application for secrets out of Google Cloud:
```shell
gcloud secrets versions access latest --secret="my-test-secret" | sudo gitlab-rake gitlab:service_desk_email:secret:write
```
---
stage: GitLab Delivery
group: Self Managed
info: To determine the technical writer assigned to the Stage/Group associated with
  this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Integrity check Rake task
breadcrumbs:
- doc
- administration
- raketasks
---
{{< details >}}
- Tier: Free, Premium, Ultimate
- Offering: GitLab Self-Managed
{{< /details >}}
GitLab provides Rake tasks to check the integrity of various components.
See also the [check GitLab configuration Rake task](maintenance.md#check-gitlab-configuration).
## Repository integrity
Even though Git is very resilient and tries to prevent data integrity issues,
there are times when things go wrong. The following Rake tasks are intended to
help GitLab administrators diagnose problem repositories so they can be fixed.
These Rake tasks use three different methods to determine the integrity of Git
repositories.
1. Git repository file system check ([`git fsck`](https://git-scm.com/docs/git-fsck)).
This step verifies the connectivity and validity of objects in the repository.
1. Check for `config.lock` in the repository directory.
1. Check for any branch/references lock files in `refs/heads`.
The existence of `config.lock` or reference locks
alone does not necessarily indicate a problem. Lock files are routinely created
and removed as Git and GitLab perform operations on the repository. They serve
to prevent data integrity issues. However, if a Git operation is interrupted, these
locks may not be cleaned up properly.
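To look for leftover lock files manually, you can search the repository storage directory. A sketch, assuming the default Linux package storage path (adjust `REPO_ROOT` for other installations):

```shell
# Default repository storage path for Linux package installations; override as needed
REPO_ROOT=${REPO_ROOT:-/var/opt/gitlab/git-data/repositories}

if [ -d "$REPO_ROOT" ]; then
  # List config.lock files and reference lock files under refs/heads
  find "$REPO_ROOT" \( -name 'config.lock' -o -path '*/refs/heads/*.lock' \)
else
  echo "repository root $REPO_ROOT not found"
fi
```

Finding lock files alone is not proof of a problem; check whether any Git operation was recently interrupted before removing anything.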
The following symptoms may indicate a problem with repository integrity. If users
experience these symptoms, you can use the Rake tasks described below to determine
exactly which repositories are causing the trouble.
- Receiving an error when trying to push code - `remote: error: cannot lock ref`
- A 500 error when viewing the GitLab dashboard or when accessing a specific project.
### Check all project code repositories
This task loops through the project code repositories and runs the integrity check
described previously. If a project uses a pool repository, that is also checked.
Other types of Git repositories [are not checked](https://gitlab.com/gitlab-org/gitaly/-/issues/3643).
To check project code repositories:
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
```shell
sudo gitlab-rake gitlab:git:fsck
```
{{< /tab >}}
{{< tab title="Self-compiled (source)" >}}
```shell
sudo -u git -H bundle exec rake gitlab:git:fsck RAILS_ENV=production
```
{{< /tab >}}
{{< /tabs >}}
### Check specific project code repositories
{{< history >}}
- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/197990) in GitLab 18.3.
{{< /history >}}
Limit the check to the repositories of projects with specific project IDs by setting the `PROJECT_IDS` environment
variable to a comma-separated list of project IDs.
For example, to check the repositories of projects with project IDs `1` and `3`:
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
```shell
sudo PROJECT_IDS="1,3" gitlab-rake gitlab:git:fsck
```
{{< /tab >}}
{{< tab title="Self-compiled (source)" >}}
```shell
sudo -u git -H PROJECT_IDS="1,3" bundle exec rake gitlab:git:fsck RAILS_ENV=production
```
{{< /tab >}}
{{< /tabs >}}
## Checksum of repository refs
One Git repository can be compared to another by checksumming all refs of each
repository. If both repositories have the same refs, and if both repositories
pass an integrity check, then we can be confident that both repositories are the
same.
For example, this can be used to compare a backup of a repository against the
source repository.
### Check all GitLab repositories
This task loops through all repositories on the GitLab server and outputs
checksums in the format `<PROJECT ID>,<CHECKSUM>`.
- If a repository doesn't exist, the project ID is output with a blank checksum.
- If a repository exists but is empty, the output checksum is `0000000000000000000000000000000000000000`.
- Projects which don't exist are skipped.
To check all GitLab repositories:
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
```shell
sudo gitlab-rake gitlab:git:checksum_projects
```
{{< /tab >}}
{{< tab title="Self-compiled (source)" >}}
```shell
sudo -u git -H bundle exec rake gitlab:git:checksum_projects RAILS_ENV=production
```
{{< /tab >}}
{{< /tabs >}}
For example, if:
- Project with ID#2 doesn't exist, it is skipped.
- Project with ID#4 doesn't have a repository, its checksum is blank.
- Project with ID#5 has an empty repository, its checksum is `0000000000000000000000000000000000000000`.
The output would then look something like:
```plaintext
1,cfa3f06ba235c13df0bb28e079bcea62c5848af2
3,3f3fb58a8106230e3a6c6b48adc2712fb3b6ef87
4,
5,0000000000000000000000000000000000000000
6,6c6b48adc2712fb3b6ef87cfa3f06ba235c13df0
```
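The listing can be post-processed to flag problem repositories. For example, this `awk` filter (illustrative; inline sample data stands in for saved task output) reports blank and all-zero checksums:

```shell
# In practice, capture the task output first:
#   sudo gitlab-rake gitlab:git:checksum_projects > checksums.txt
printf '1,cfa3f06ba235c13df0bb28e079bcea62c5848af2\n4,\n5,0000000000000000000000000000000000000000\n' > checksums.txt

# Flag repositories that are missing (blank checksum) or empty (all-zero checksum)
awk -F, '
  $2 == ""                                         { print $1 ": repository missing" }
  $2 == "0000000000000000000000000000000000000000" { print $1 ": repository empty" }
' checksums.txt
```

With the sample data, this prints `4: repository missing` and `5: repository empty`.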
### Check specific GitLab repositories
Optionally, specific project IDs can be checksummed by setting an environment
variable `CHECKSUM_PROJECT_IDS` with a list of comma-separated integers, for example:
```shell
sudo CHECKSUM_PROJECT_IDS="1,3" gitlab-rake gitlab:git:checksum_projects
```
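To compare a backup against its source, capture the checksum output on both instances and join the sorted listings. A sketch with illustrative file names and sample data standing in for the two captures:

```shell
# On each instance, capture the output first:
#   sudo gitlab-rake gitlab:git:checksum_projects > source_checksums.txt
#   sudo gitlab-rake gitlab:git:checksum_projects > backup_checksums.txt
printf '1,cfa3f06ba235c13df0bb28e079bcea62c5848af2\n3,3f3fb58a8106230e3a6c6b48adc2712fb3b6ef87\n' > source_checksums.txt
printf '1,cfa3f06ba235c13df0bb28e079bcea62c5848af2\n3,ffffffffffffffffffffffffffffffffffffffff\n' > backup_checksums.txt

# join pairs rows by project ID; awk reports rows whose checksums differ
sort -t, -o source_checksums.txt source_checksums.txt
sort -t, -o backup_checksums.txt backup_checksums.txt
join -t, source_checksums.txt backup_checksums.txt |
  awk -F, '$2 != $3 { print "project " $1 ": checksum mismatch" }'
```

If both repositories also pass the integrity check, matching checksums indicate identical refs.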
## Uploaded files integrity
Various types of files can be uploaded to a GitLab installation by users.
These integrity checks can detect missing files. Additionally, for locally
stored files, checksums are generated and stored in the database upon upload,
and these checks verify them against current files.
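Conceptually, the local-file check recomputes each file's checksum and compares it with the stored value. A shell illustration, not the task's actual implementation (the hash shown is the SHA256 of empty input, matching the `/dev/null` placeholder):

```shell
FILE=${FILE:-/dev/null}   # placeholder for an uploaded file on disk
# Checksum recorded at upload time (here: SHA256 of empty input)
stored="e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"

actual=$(sha256sum "$FILE" | awk '{ print $1 }')
if [ "$actual" = "$stored" ]; then
  echo "integrity OK: $FILE"
else
  echo "integrity failure: $FILE"
fi
```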
Integrity checks are supported for the following file types:
- CI artifacts
- LFS objects
- Project-level Secure Files (introduced in GitLab 16.1.0)
- User uploads
To check the integrity of uploaded files:
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
```shell
sudo gitlab-rake gitlab:artifacts:check
sudo gitlab-rake gitlab:ci_secure_files:check
sudo gitlab-rake gitlab:lfs:check
sudo gitlab-rake gitlab:uploads:check
```
{{< /tab >}}
{{< tab title="Self-compiled (source)" >}}
```shell
sudo -u git -H bundle exec rake gitlab:artifacts:check RAILS_ENV=production
sudo -u git -H bundle exec rake gitlab:ci_secure_files:check RAILS_ENV=production
sudo -u git -H bundle exec rake gitlab:lfs:check RAILS_ENV=production
sudo -u git -H bundle exec rake gitlab:uploads:check RAILS_ENV=production
```
{{< /tab >}}
{{< /tabs >}}
These tasks also accept some environment variables which you can use to override
certain values:
Variable | Type | Description
--------- | ------- | -----------
`BATCH` | integer | Specifies the size of the batch. Defaults to 200.
`ID_FROM` | integer | Specifies the ID to start from, inclusive of the value.
`ID_TO` | integer | Specifies the ID value to end at, inclusive of the value.
`VERBOSE` | boolean | Causes failures to be listed individually, rather than being summarized.
```shell
sudo gitlab-rake gitlab:artifacts:check BATCH=100 ID_FROM=50 ID_TO=250
sudo gitlab-rake gitlab:ci_secure_files:check BATCH=100 ID_FROM=50 ID_TO=250
sudo gitlab-rake gitlab:lfs:check BATCH=100 ID_FROM=50 ID_TO=250
sudo gitlab-rake gitlab:uploads:check BATCH=100 ID_FROM=50 ID_TO=250
```
Example output:
```shell
$ sudo gitlab-rake gitlab:uploads:check
Checking integrity of Uploads
- 1..1350: Failures: 0
- 1351..2743: Failures: 0
- 2745..4349: Failures: 2
- 4357..5762: Failures: 1
- 5764..7140: Failures: 2
- 7142..8651: Failures: 0
- 8653..10134: Failures: 0
- 10135..11773: Failures: 0
- 11777..13315: Failures: 0
Done!
```
Example verbose output:
```shell
$ sudo gitlab-rake gitlab:uploads:check VERBOSE=1
Checking integrity of Uploads
- 1..1350: Failures: 0
- 1351..2743: Failures: 0
- 2745..4349: Failures: 2
- Upload: 3573: #<Errno::ENOENT: No such file or directory @ rb_sysopen - /opt/gitlab/embedded/service/gitlab-rails/public/uploads/user-foo/project-bar/7a77cc52947bfe188adeff42f890bb77/image.png>
- Upload: 3580: #<Errno::ENOENT: No such file or directory @ rb_sysopen - /opt/gitlab/embedded/service/gitlab-rails/public/uploads/user-foo/project-bar/2840ba1ba3b2ecfa3478a7b161375f8a/pug.png>
- 4357..5762: Failures: 1
- Upload: 4636: #<Google::Apis::ServerError: Server error>
- 5764..7140: Failures: 2
- Upload: 5812: #<NoMethodError: undefined method `hashed_storage?' for nil:NilClass>
- Upload: 5837: #<NoMethodError: undefined method `hashed_storage?' for nil:NilClass>
- 7142..8651: Failures: 0
- 8653..10134: Failures: 0
- 10135..11773: Failures: 0
- 11777..13315: Failures: 0
Done!
```
## LDAP check
The LDAP check Rake task tests the bind DN and password credentials
(if configured) and lists a sample of LDAP users. This task is also
executed as part of the `gitlab:check` task, but can run independently.
See [LDAP Rake Tasks - LDAP Check](ldap.md#check) for details.
## Verify database values can be decrypted using the current secrets
This task runs through all possible encrypted values in the
database, verifying that they are decryptable using the current
secrets file (`gitlab-secrets.json`).
Automatic resolution is not yet implemented. If you have values that
cannot be decrypted, you can follow the steps to reset them described in the
documentation on what to do [when the secrets file is lost](../backup_restore/troubleshooting_backup_gitlab.md#when-the-secrets-file-is-lost).
This can take a very long time, depending on the size of your
database, as it checks all rows in all tables.
To verify database values can be decrypted using the current secrets:
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
```shell
sudo gitlab-rake gitlab:doctor:secrets
```
{{< /tab >}}
{{< tab title="Self-compiled (source)" >}}
```shell
bundle exec rake gitlab:doctor:secrets RAILS_ENV=production
```
{{< /tab >}}
{{< /tabs >}}
**Example output**
```plaintext
I, [2020-06-11T17:17:54.951815 #27148] INFO -- : Checking encrypted values in the database
I, [2020-06-11T17:18:12.677708 #27148] INFO -- : - ApplicationSetting failures: 0
I, [2020-06-11T17:18:12.823692 #27148] INFO -- : - User failures: 0
[...] other models possibly containing encrypted data
I, [2020-06-11T17:18:14.938335 #27148] INFO -- : - Group failures: 1
I, [2020-06-11T17:18:15.559162 #27148] INFO -- : - Operations::FeatureFlagsClient failures: 0
I, [2020-06-11T17:18:15.575533 #27148] INFO -- : - ScimOauthAccessToken failures: 0
I, [2020-06-11T17:18:15.575678 #27148] INFO -- : Total: 1 row(s) affected
I, [2020-06-11T17:18:15.575711 #27148] INFO -- : Done!
```
### Verbose mode
To get more detailed information about which rows and columns can't be
decrypted, you can pass a `VERBOSE` environment variable.
To verify database values can be decrypted using the current secrets with detailed information:
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
```shell
sudo gitlab-rake gitlab:doctor:secrets VERBOSE=1
```
{{< /tab >}}
{{< tab title="Self-compiled (source)" >}}
```shell
bundle exec rake gitlab:doctor:secrets RAILS_ENV=production VERBOSE=1
```
{{< /tab >}}
{{< /tabs >}}
**Example verbose output**
<!-- vale gitlab_base.SentenceSpacing = NO -->
```plaintext
I, [2020-06-11T17:17:54.951815 #27148] INFO -- : Checking encrypted values in the database
I, [2020-06-11T17:18:12.677708 #27148] INFO -- : - ApplicationSetting failures: 0
I, [2020-06-11T17:18:12.823692 #27148] INFO -- : - User failures: 0
[...] other models possibly containing encrypted data
D, [2020-06-11T17:19:53.224344 #27351] DEBUG -- : > Something went wrong for Group[10].runners_token: Validation failed: Route can't be blank
I, [2020-06-11T17:19:53.225178 #27351] INFO -- : - Group failures: 1
D, [2020-06-11T17:19:53.225267 #27351] DEBUG -- : - Group[10]: runners_token
I, [2020-06-11T17:18:15.559162 #27148] INFO -- : - Operations::FeatureFlagsClient failures: 0
I, [2020-06-11T17:18:15.575533 #27148] INFO -- : - ScimOauthAccessToken failures: 0
I, [2020-06-11T17:18:15.575678 #27148] INFO -- : Total: 1 row(s) affected
I, [2020-06-11T17:18:15.575711 #27148] INFO -- : Done!
```
<!-- vale gitlab_base.SentenceSpacing = YES -->
## Reset encrypted tokens when they can't be recovered
{{< history >}}
- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/131893) in GitLab 16.6.
{{< /history >}}
{{< alert type="warning" >}}
This operation is dangerous and can result in data loss. Proceed with extreme caution.
You must have knowledge about GitLab internals before you perform this operation.
{{< /alert >}}
In some cases, encrypted tokens can no longer be recovered and cause issues.
Most often, runner registration tokens for groups and projects might be broken on very large instances.
To reset broken tokens:
1. Identify the database models that have broken encrypted tokens. For example, it can be `Group` and `Project`.
1. Identify the broken tokens. For example `runners_token`.
1. To reset broken tokens, run `gitlab:doctor:reset_encrypted_tokens` with `VERBOSE=true MODEL_NAMES=Model1,Model2 TOKEN_NAMES=broken_token1,broken_token2`. For example:
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
```shell
VERBOSE=true MODEL_NAMES=Project,Group TOKEN_NAMES=runners_token gitlab-rake gitlab:doctor:reset_encrypted_tokens
```
{{< /tab >}}
{{< tab title="Self-compiled (source)" >}}
```shell
bundle exec rake gitlab:doctor:reset_encrypted_tokens RAILS_ENV=production VERBOSE=true MODEL_NAMES=Project,Group TOKEN_NAMES=runners_token
```
{{< /tab >}}
{{< /tabs >}}
You will see every action this task would try to perform:
```plaintext
I, [2023-09-26T16:20:23.230942 #88920] INFO -- : Resetting runners_token on Project, Group if they can not be read
I, [2023-09-26T16:20:23.230975 #88920] INFO -- : Executing in DRY RUN mode, no records will actually be updated
D, [2023-09-26T16:20:30.151585 #88920] DEBUG -- : > Fix Project[1].runners_token
I, [2023-09-26T16:20:30.151617 #88920] INFO -- : Checked 1/9 Projects
D, [2023-09-26T16:20:30.151873 #88920] DEBUG -- : > Fix Project[3].runners_token
D, [2023-09-26T16:20:30.152975 #88920] DEBUG -- : > Fix Project[10].runners_token
I, [2023-09-26T16:20:30.152992 #88920] INFO -- : Checked 11/29 Projects
I, [2023-09-26T16:20:30.153230 #88920] INFO -- : Checked 21/29 Projects
I, [2023-09-26T16:20:30.153882 #88920] INFO -- : Checked 29 Projects
D, [2023-09-26T16:20:30.195929 #88920] DEBUG -- : > Fix Group[22].runners_token
I, [2023-09-26T16:20:30.196125 #88920] INFO -- : Checked 1/19 Groups
D, [2023-09-26T16:20:30.196192 #88920] DEBUG -- : > Fix Group[25].runners_token
D, [2023-09-26T16:20:30.197557 #88920] DEBUG -- : > Fix Group[82].runners_token
I, [2023-09-26T16:20:30.197581 #88920] INFO -- : Checked 11/19 Groups
I, [2023-09-26T16:20:30.198455 #88920] INFO -- : Checked 19 Groups
I, [2023-09-26T16:20:30.198462 #88920] INFO -- : Done!
```
1. If you are confident that this operation resets the correct tokens, disable dry-run mode and run the operation again:
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
```shell
DRY_RUN=false VERBOSE=true MODEL_NAMES=Project,Group TOKEN_NAMES=runners_token gitlab-rake gitlab:doctor:reset_encrypted_tokens
```
{{< /tab >}}
{{< tab title="Self-compiled (source)" >}}
```shell
bundle exec rake gitlab:doctor:reset_encrypted_tokens RAILS_ENV=production DRY_RUN=false VERBOSE=true MODEL_NAMES=Project,Group TOKEN_NAMES=runners_token
```
{{< /tab >}}
{{< /tabs >}}
The `gitlab:doctor:reset_encrypted_tokens` task has the following limitations:
- Non-token attributes, for example `ApplicationSetting:ci_jwt_signing_key`, are not reset.
- The presence of more than one undecryptable attribute in a single model record causes the task
to fail with a `TypeError: no implicit conversion of nil into String ... block in aes256_gcm_decrypt` error.
## Troubleshooting
The following are solutions to problems you might discover using the Rake tasks
documented previously.
### Dangling objects
The `gitlab-rake gitlab:git:fsck` task can find dangling objects such as:
```plaintext
dangling blob a12...
dangling commit b34...
dangling tag c56...
dangling tree d78...
```
To delete them, try [running housekeeping](../housekeeping.md).
If the issue persists, try triggering garbage collection via the
[Rails Console](../operations/rails_console.md#starting-a-rails-console-session):
```ruby
p = Project.find_by_path("project-name")
Repositories::HousekeepingService.new(p, :gc).execute
```
If the dangling objects are younger than the default two-week grace period,
and you don't want to wait until they expire automatically, run:
```ruby
Repositories::HousekeepingService.new(p, :prune).execute
```
### Delete references to missing remote uploads
`gitlab-rake gitlab:uploads:check VERBOSE=1` detects remote objects that do not exist because they were
deleted externally but their references still exist in the GitLab database.
Example output with error message:
```shell
$ sudo gitlab-rake gitlab:uploads:check VERBOSE=1
Checking integrity of Uploads
- 100..434: Failures: 2
- Upload: 100: Remote object does not exist
- Upload: 101: Remote object does not exist
Done!
```
To delete these references to remote uploads that were deleted externally, open the [GitLab Rails Console](../operations/rails_console.md#starting-a-rails-console-session) and run:
```ruby
uploads_deleted=0
Upload.find_each do |upload|
next if upload.retrieve_uploader.file.exists?
uploads_deleted=uploads_deleted + 1
p upload ### allow verification before destroy
# p upload.destroy! ### uncomment to actually destroy
end
p "#{uploads_deleted} remote objects were destroyed."
```
### Delete references to missing artifacts
`gitlab-rake gitlab:artifacts:check VERBOSE=1` detects when artifacts (or `job.log` files):
- Are deleted outside of GitLab.
- Have references still in the GitLab database.
When this scenario is detected, the Rake task displays an error message. For example:
```shell
Checking integrity of Job artifacts
- 1..15: Failures: 2
- Job artifact: 9: #<Errno::ENOENT: No such file or directory @ rb_sysopen - /var/opt/gitlab/gitlab-rails/shared/artifacts/4b/22/4b227777d4dd1fc61c6f884f48641d02b4d121d3fd328cb08b5531fcacdabf8a/2022_06_30/8/9/job.log>
- Job artifact: 15: Remote object does not exist
Done!
```
To delete these references to missing local or remote artifacts (`job.log` files):
1. Open the [GitLab Rails Console](../operations/rails_console.md#starting-a-rails-console-session).
1. Run the following Ruby code:
```ruby
artifacts_deleted = 0
::Ci::JobArtifact.find_each do |artifact| ### Iterate artifacts
# next if artifact.file.filename != "job.log" ### Uncomment if only `job.log` files' references are to be processed
next if artifact.file.file.exists? ### Skip if the file reference is valid
artifacts_deleted += 1
puts "#{artifact.id} #{artifact.file.path} is missing." ### Allow verification before destroy
# artifact.destroy! ### Uncomment to actually destroy
end
puts "Count of identified/destroyed invalid references: #{artifacts_deleted}"
```
### Delete references to missing LFS objects
If `gitlab-rake gitlab:lfs:check VERBOSE=1` detects LFS objects that exist in the database
but not on disk, [follow the procedure in the LFS documentation](../lfs/_index.md#missing-lfs-objects)
to remove the database entries.
### Update dangling object storage references
If you have [migrated from object storage to local storage](../cicd/job_artifacts.md#migrating-from-object-storage-to-local-storage) and files were missing, then dangling database references remain.
This is visible in the migration logs with errors like the following:
```shell
W, [2022-11-28T13:14:09.283833 #10025] WARN -- : Failed to transfer Ci::JobArtifact ID 11 with error: undefined method `body' for nil:NilClass
W, [2022-11-28T13:14:09.296911 #10025] WARN -- : Failed to transfer Ci::JobArtifact ID 12 with error: undefined method `body' for nil:NilClass
```
Attempting to [delete references to missing artifacts](check.md#delete-references-to-missing-artifacts) after you have disabled object storage results in the following error:
```plaintext
RuntimeError (Object Storage is not enabled for JobArtifactUploader)
```
To update these references to point to local storage:
1. Open the [GitLab Rails Console](../operations/rails_console.md#starting-a-rails-console-session).
1. Run the following Ruby code:
```ruby
artifacts_updated = 0
::Ci::JobArtifact.find_each do |artifact| ### Iterate artifacts
next if artifact.file_store != 2 ### Skip if file_store already points to local storage
artifacts_updated += 1
# artifact.update(file_store: 1) ### Uncomment to actually update
end
puts "Updated file_store count: #{artifacts_updated}"
```
The script to [delete references to missing artifacts](check.md#delete-references-to-missing-artifacts) now functions correctly and cleans up the database.
### Delete references to missing secure files
`VERBOSE=1 gitlab-rake gitlab:ci_secure_files:check` detects when secure files:
- Are deleted outside of GitLab.
- Have references still in the GitLab database.
When this scenario is detected, the Rake task displays an error message. For example:
```shell
Checking integrity of CI Secure Files
- 1..15: Failures: 2
- Job SecureFile: 9: #<Errno::ENOENT: No such file or directory @ rb_sysopen - /var/opt/gitlab/gitlab-rails/shared/ci_secure_files/4b/22/4b227777d4dd1fc61c6f884f48641d02b4d121d3fd328cb08b5531fcacdabf8a/2022_06_30/8/9/distribution.cer>
- Job SecureFile: 15: Remote object does not exist
Done!
```
To delete these references to missing local or remote secure files:
1. Open the [GitLab Rails Console](../operations/rails_console.md#starting-a-rails-console-session).
1. Run the following Ruby code:
```ruby
secure_files_deleted = 0
::Ci::SecureFile.find_each do |secure_file| ### Iterate secure files
next if secure_file.file.file.exists? ### Skip if the file reference is valid
secure_files_deleted += 1
puts "#{secure_file.id} #{secure_file.file.path} is missing." ### Allow verification before destroy
# secure_file.destroy! ### Uncomment to actually destroy
end
puts "Count of identified/destroyed invalid references: #{secure_files_deleted}"
```
|
---
stage: GitLab Delivery
group: Self Managed
info: To determine the technical writer assigned to the Stage/Group associated with
this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Integrity check Rake task
breadcrumbs:
- doc
- administration
- raketasks
---
{{< details >}}
- Tier: Free, Premium, Ultimate
- Offering: GitLab Self-Managed
{{< /details >}}
GitLab provides Rake tasks to check the integrity of various components.
See also the [check GitLab configuration Rake task](maintenance.md#check-gitlab-configuration).
## Repository integrity
Even though Git is very resilient and tries to prevent data integrity issues,
there are times when things go wrong. The following Rake tasks are intended to
help GitLab administrators diagnose problem repositories so they can be fixed.
These Rake tasks use three different methods to determine the integrity of Git
repositories.
1. Git repository file system check ([`git fsck`](https://git-scm.com/docs/git-fsck)).
This step verifies the connectivity and validity of objects in the repository.
1. Check for `config.lock` in the repository directory.
1. Check for any branch/references lock files in `refs/heads`.
The existence of `config.lock` or reference locks
alone does not necessarily indicate a problem. Lock files are routinely created
and removed as Git and GitLab perform operations on the repository. They serve
to prevent data integrity issues. However, if a Git operation is interrupted these
locks may not be cleaned up properly.
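Steps 2 and 3 above can be sketched as a standalone Ruby helper. This is a hypothetical illustration, not GitLab code: it lists a `config.lock` file and any reference lock files left under a repository path.

```ruby
require "tmpdir"
require "fileutils"

# Hypothetical helper: list the lock files that steps 2 and 3 look for.
def lock_files(repo_path)
  locks = []
  config_lock = File.join(repo_path, "config.lock")
  locks << config_lock if File.exist?(config_lock)
  # `**` matches zero or more directories, so this also finds locks
  # directly under refs/heads.
  locks + Dir.glob(File.join(repo_path, "refs", "heads", "**", "*.lock"))
end

# Demonstration on a throwaway directory standing in for a bare repository.
Dir.mktmpdir do |repo|
  FileUtils.mkdir_p(File.join(repo, "refs", "heads"))
  FileUtils.touch(File.join(repo, "config.lock"))
  FileUtils.touch(File.join(repo, "refs", "heads", "main.lock"))
  puts lock_files(repo).length # two stale locks found
end
```

Remember that the presence of such files is only a symptom; step 1 (`git fsck`) is what verifies the objects themselves.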
The following symptoms may indicate a problem with repository integrity. If users
experience these symptoms, you can use the Rake tasks described below to determine
exactly which repositories are causing the trouble.
- Receiving an error when trying to push code - `remote: error: cannot lock ref`
- A 500 error when viewing the GitLab dashboard or when accessing a specific project.
### Check all project code repositories
This task loops through the project code repositories and runs the integrity check
described previously. If a project uses a pool repository, that is also checked.
Other types of Git repositories [are not checked](https://gitlab.com/gitlab-org/gitaly/-/issues/3643).
To check project code repositories:
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
```shell
sudo gitlab-rake gitlab:git:fsck
```
{{< /tab >}}
{{< tab title="Self-compiled (source)" >}}
```shell
sudo -u git -H bundle exec rake gitlab:git:fsck RAILS_ENV=production
```
{{< /tab >}}
{{< /tabs >}}
### Check specific project code repositories
{{< history >}}
- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/197990) in GitLab 18.3.
{{< /history >}}
Limit the check to the repositories of projects with specific project IDs by setting the `PROJECT_IDS` environment
variable to a comma-separated list of project IDs.
For example, to check the repositories of projects with project IDs `1` and `3`:
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
```shell
sudo PROJECT_IDS="1,3" gitlab-rake gitlab:git:fsck
```
{{< /tab >}}
{{< tab title="Self-compiled (source)" >}}
```shell
sudo -u git -H PROJECT_IDS="1,3" bundle exec rake gitlab:git:fsck RAILS_ENV=production
```
{{< /tab >}}
{{< /tabs >}}
## Checksum of repository refs
One Git repository can be compared to another by checksumming all refs of each
repository. If both repositories have the same refs, and if both repositories
pass an integrity check, then we can be confident that both repositories are the
same.
For example, this can be used to compare a backup of a repository against the
source repository.
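The comparison can be sketched in plain Ruby. The helper below is a hypothetical illustration, not Gitaly's actual checksum algorithm: it hashes a sorted list of `<sha> <refname>` lines, so two repositories with identical refs produce identical digests regardless of the order the refs are listed in.

```ruby
require "digest"

# Hypothetical helper: order-independent checksum over a repository's refs,
# where each ref is a "<sha> <refname>" line (as printed by `git show-ref`).
def refs_checksum(ref_lines)
  return "0" * 40 if ref_lines.empty? # sentinel for an empty repository
  Digest::SHA1.hexdigest(ref_lines.sort.join("\n"))
end

source = ["abc123 refs/heads/main", "def456 refs/tags/v1.0"]
backup = ["def456 refs/tags/v1.0", "abc123 refs/heads/main"]

puts refs_checksum(source) == refs_checksum(backup) # same refs, same checksum
```

Any added, removed, or retargeted ref changes the digest, which is what makes the checksum useful for comparing a backup against its source.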
### Check all GitLab repositories
This task loops through all repositories on the GitLab server and outputs
checksums in the format `<PROJECT ID>,<CHECKSUM>`.
- If a project doesn't have a repository, its checksum is blank.
- If a repository exists but is empty, the output checksum is `0000000000000000000000000000000000000000`.
- Projects which don't exist are skipped.
To check all GitLab repositories:
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
```shell
sudo gitlab-rake gitlab:git:checksum_projects
```
{{< /tab >}}
{{< tab title="Self-compiled (source)" >}}
```shell
sudo -u git -H bundle exec rake gitlab:git:checksum_projects RAILS_ENV=production
```
{{< /tab >}}
{{< /tabs >}}
For example, if:
- Project with ID#2 doesn't exist, it is skipped.
- Project with ID#4 doesn't have a repository, its checksum is blank.
- Project with ID#5 has an empty repository, its checksum is `0000000000000000000000000000000000000000`.
The output would then look something like:
```plaintext
1,cfa3f06ba235c13df0bb28e079bcea62c5848af2
3,3f3fb58a8106230e3a6c6b48adc2712fb3b6ef87
4,
5,0000000000000000000000000000000000000000
6,6c6b48adc2712fb3b6ef87cfa3f06ba235c13df0
```
### Check specific GitLab repositories
Optionally, specific project IDs can be checksummed by setting an environment
variable `CHECKSUM_PROJECT_IDS` with a list of comma-separated integers, for example:
```shell
sudo CHECKSUM_PROJECT_IDS="1,3" gitlab-rake gitlab:git:checksum_projects
```
## Uploaded files integrity
Various types of files can be uploaded to a GitLab installation by users.
These integrity checks can detect missing files. Additionally, for locally
stored files, checksums are generated and stored in the database upon upload,
and these checks verify them against current files.
Integrity checks are supported for the following types of file:
- CI artifacts
- LFS objects
- Project-level Secure Files (introduced in GitLab 16.1.0)
- User uploads
To check the integrity of uploaded files:
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
```shell
sudo gitlab-rake gitlab:artifacts:check
sudo gitlab-rake gitlab:ci_secure_files:check
sudo gitlab-rake gitlab:lfs:check
sudo gitlab-rake gitlab:uploads:check
```
{{< /tab >}}
{{< tab title="Self-compiled (source)" >}}
```shell
sudo -u git -H bundle exec rake gitlab:artifacts:check RAILS_ENV=production
sudo -u git -H bundle exec rake gitlab:ci_secure_files:check RAILS_ENV=production
sudo -u git -H bundle exec rake gitlab:lfs:check RAILS_ENV=production
sudo -u git -H bundle exec rake gitlab:uploads:check RAILS_ENV=production
```
{{< /tab >}}
{{< /tabs >}}
These tasks also accept some environment variables which you can use to override
certain values:
Variable | Type | Description
--------- | ------- | -----------
`BATCH` | integer | Specifies the size of the batch. Defaults to 200.
`ID_FROM` | integer | Specifies the ID to start from, inclusive of the value.
`ID_TO` | integer | Specifies the ID value to end at, inclusive of the value.
`VERBOSE` | boolean | Causes failures to be listed individually, rather than being summarized.
```shell
sudo gitlab-rake gitlab:artifacts:check BATCH=100 ID_FROM=50 ID_TO=250
sudo gitlab-rake gitlab:ci_secure_files:check BATCH=100 ID_FROM=50 ID_TO=250
sudo gitlab-rake gitlab:lfs:check BATCH=100 ID_FROM=50 ID_TO=250
sudo gitlab-rake gitlab:uploads:check BATCH=100 ID_FROM=50 ID_TO=250
```
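How `BATCH`, `ID_FROM`, and `ID_TO` interact can be illustrated with a small standalone Ruby sketch (a hypothetical helper, not the task's actual implementation) that walks an inclusive ID range in fixed-size batches:

```ruby
# Hypothetical sketch of ID-range batching: yields inclusive sub-ranges
# of at most `batch` IDs between `id_from` and `id_to`.
def each_batch(id_from:, id_to:, batch:)
  id_from.step(id_to, batch) do |start_id|
    yield(start_id..[start_id + batch - 1, id_to].min)
  end
end

each_batch(id_from: 50, id_to: 250, batch: 100) do |range|
  puts "- #{range.first}..#{range.last}"
end
```

With `BATCH=100 ID_FROM=50 ID_TO=250`, this walks the sub-ranges `50..149`, `150..249`, and `250..250`, similar to the range headers printed in the task output below.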
Example output:
```shell
$ sudo gitlab-rake gitlab:uploads:check
Checking integrity of Uploads
- 1..1350: Failures: 0
- 1351..2743: Failures: 0
- 2745..4349: Failures: 2
- 4357..5762: Failures: 1
- 5764..7140: Failures: 2
- 7142..8651: Failures: 0
- 8653..10134: Failures: 0
- 10135..11773: Failures: 0
- 11777..13315: Failures: 0
Done!
```
Example verbose output:
```shell
$ sudo gitlab-rake gitlab:uploads:check VERBOSE=1
Checking integrity of Uploads
- 1..1350: Failures: 0
- 1351..2743: Failures: 0
- 2745..4349: Failures: 2
- Upload: 3573: #<Errno::ENOENT: No such file or directory @ rb_sysopen - /opt/gitlab/embedded/service/gitlab-rails/public/uploads/user-foo/project-bar/7a77cc52947bfe188adeff42f890bb77/image.png>
- Upload: 3580: #<Errno::ENOENT: No such file or directory @ rb_sysopen - /opt/gitlab/embedded/service/gitlab-rails/public/uploads/user-foo/project-bar/2840ba1ba3b2ecfa3478a7b161375f8a/pug.png>
- 4357..5762: Failures: 1
- Upload: 4636: #<Google::Apis::ServerError: Server error>
- 5764..7140: Failures: 2
- Upload: 5812: #<NoMethodError: undefined method `hashed_storage?' for nil:NilClass>
- Upload: 5837: #<NoMethodError: undefined method `hashed_storage?' for nil:NilClass>
- 7142..8651: Failures: 0
- 8653..10134: Failures: 0
- 10135..11773: Failures: 0
- 11777..13315: Failures: 0
Done!
```
## LDAP check
The LDAP check Rake task tests the bind DN and password credentials
(if configured) and lists a sample of LDAP users. This task is also
executed as part of the `gitlab:check` task, but can run independently.
See [LDAP Rake Tasks - LDAP Check](ldap.md#check) for details.
## Verify database values can be decrypted using the current secrets
This task runs through all possible encrypted values in the
database, verifying that they are decryptable using the current
secrets file (`gitlab-secrets.json`).
Automatic resolution is not yet implemented. If you have values that
cannot be decrypted, follow the steps to reset them in the documentation on what
to do [when the secrets file is lost](../backup_restore/troubleshooting_backup_gitlab.md#when-the-secrets-file-is-lost).
This can take a very long time, depending on the size of your
database, as it checks all rows in all tables.
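Conceptually, the check visits every row holding an encrypted attribute, attempts to decrypt it, and counts the rows where decryption raises. A standalone sketch of that rescue-and-count pattern, using a made-up `decrypt` stand-in rather than GitLab's real models and secrets:

```ruby
# Stand-in records: one decryptable value, one broken ciphertext, one nil.
Record = Struct.new(:id, :ciphertext)

# Hypothetical decryptor: raises for values the current secret can't read.
def decrypt(ciphertext)
  raise ArgumentError, "undecryptable" if ciphertext.nil? || ciphertext.start_with?("garbage")
  ciphertext.reverse # placeholder for the real decryption
end

# Count rows whose encrypted attribute cannot be read.
def count_failures(records)
  failures = 0
  records.each do |record|
    begin
      decrypt(record.ciphertext)
    rescue ArgumentError
      failures += 1
    end
  end
  failures
end

records = [Record.new(1, "ok-value"), Record.new(2, "garbage-bytes"), Record.new(3, nil)]
puts "failures: #{count_failures(records)}" # two rows can't be decrypted
```

The real task reports these tallies per model, as shown in the example output below.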
To verify database values can be decrypted using the current secrets:
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
```shell
sudo gitlab-rake gitlab:doctor:secrets
```
{{< /tab >}}
{{< tab title="Self-compiled (source)" >}}
```shell
bundle exec rake gitlab:doctor:secrets RAILS_ENV=production
```
{{< /tab >}}
{{< /tabs >}}
**Example output**
```plaintext
I, [2020-06-11T17:17:54.951815 #27148] INFO -- : Checking encrypted values in the database
I, [2020-06-11T17:18:12.677708 #27148] INFO -- : - ApplicationSetting failures: 0
I, [2020-06-11T17:18:12.823692 #27148] INFO -- : - User failures: 0
[...] other models possibly containing encrypted data
I, [2020-06-11T17:18:14.938335 #27148] INFO -- : - Group failures: 1
I, [2020-06-11T17:18:15.559162 #27148] INFO -- : - Operations::FeatureFlagsClient failures: 0
I, [2020-06-11T17:18:15.575533 #27148] INFO -- : - ScimOauthAccessToken failures: 0
I, [2020-06-11T17:18:15.575678 #27148] INFO -- : Total: 1 row(s) affected
I, [2020-06-11T17:18:15.575711 #27148] INFO -- : Done!
```
### Verbose mode
To get more detailed information about which rows and columns can't be
decrypted, you can pass a `VERBOSE` environment variable.
To verify database values can be decrypted using the current secrets with detailed information:
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
```shell
sudo gitlab-rake gitlab:doctor:secrets VERBOSE=1
```
{{< /tab >}}
{{< tab title="Self-compiled (source)" >}}
```shell
bundle exec rake gitlab:doctor:secrets RAILS_ENV=production VERBOSE=1
```
{{< /tab >}}
{{< /tabs >}}
**Example verbose output**
<!-- vale gitlab_base.SentenceSpacing = NO -->
```plaintext
I, [2020-06-11T17:17:54.951815 #27148] INFO -- : Checking encrypted values in the database
I, [2020-06-11T17:18:12.677708 #27148] INFO -- : - ApplicationSetting failures: 0
I, [2020-06-11T17:18:12.823692 #27148] INFO -- : - User failures: 0
[...] other models possibly containing encrypted data
D, [2020-06-11T17:19:53.224344 #27351] DEBUG -- : > Something went wrong for Group[10].runners_token: Validation failed: Route can't be blank
I, [2020-06-11T17:19:53.225178 #27351] INFO -- : - Group failures: 1
D, [2020-06-11T17:19:53.225267 #27351] DEBUG -- : - Group[10]: runners_token
I, [2020-06-11T17:18:15.559162 #27148] INFO -- : - Operations::FeatureFlagsClient failures: 0
I, [2020-06-11T17:18:15.575533 #27148] INFO -- : - ScimOauthAccessToken failures: 0
I, [2020-06-11T17:18:15.575678 #27148] INFO -- : Total: 1 row(s) affected
I, [2020-06-11T17:18:15.575711 #27148] INFO -- : Done!
```
<!-- vale gitlab_base.SentenceSpacing = YES -->
## Reset encrypted tokens when they can't be recovered
{{< history >}}
- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/131893) in GitLab 16.6.
{{< /history >}}
{{< alert type="warning" >}}
This operation is dangerous and can result in data loss. Proceed with extreme caution.
You must have knowledge about GitLab internals before you perform this operation.
{{< /alert >}}
In some cases, encrypted tokens can no longer be recovered and cause issues.
Most often, runner registration tokens for groups and projects might be broken on very large instances.
To reset broken tokens:
1. Identify the database models that have broken encrypted tokens. For example, it can be `Group` and `Project`.
1. Identify the broken tokens. For example `runners_token`.
1. To reset broken tokens, run `gitlab:doctor:reset_encrypted_tokens` with `VERBOSE=true MODEL_NAMES=Model1,Model2 TOKEN_NAMES=broken_token1,broken_token2`. For example:
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
```shell
VERBOSE=true MODEL_NAMES=Project,Group TOKEN_NAMES=runners_token gitlab-rake gitlab:doctor:reset_encrypted_tokens
```
{{< /tab >}}
{{< tab title="Self-compiled (source)" >}}
```shell
bundle exec rake gitlab:doctor:reset_encrypted_tokens RAILS_ENV=production VERBOSE=true MODEL_NAMES=Project,Group TOKEN_NAMES=runners_token
```
{{< /tab >}}
{{< /tabs >}}
You will see every action this task would try to perform:
```plaintext
I, [2023-09-26T16:20:23.230942 #88920] INFO -- : Resetting runners_token on Project, Group if they can not be read
I, [2023-09-26T16:20:23.230975 #88920] INFO -- : Executing in DRY RUN mode, no records will actually be updated
D, [2023-09-26T16:20:30.151585 #88920] DEBUG -- : > Fix Project[1].runners_token
I, [2023-09-26T16:20:30.151617 #88920] INFO -- : Checked 1/9 Projects
D, [2023-09-26T16:20:30.151873 #88920] DEBUG -- : > Fix Project[3].runners_token
D, [2023-09-26T16:20:30.152975 #88920] DEBUG -- : > Fix Project[10].runners_token
I, [2023-09-26T16:20:30.152992 #88920] INFO -- : Checked 11/29 Projects
I, [2023-09-26T16:20:30.153230 #88920] INFO -- : Checked 21/29 Projects
I, [2023-09-26T16:20:30.153882 #88920] INFO -- : Checked 29 Projects
D, [2023-09-26T16:20:30.195929 #88920] DEBUG -- : > Fix Group[22].runners_token
I, [2023-09-26T16:20:30.196125 #88920] INFO -- : Checked 1/19 Groups
D, [2023-09-26T16:20:30.196192 #88920] DEBUG -- : > Fix Group[25].runners_token
D, [2023-09-26T16:20:30.197557 #88920] DEBUG -- : > Fix Group[82].runners_token
I, [2023-09-26T16:20:30.197581 #88920] INFO -- : Checked 11/19 Groups
I, [2023-09-26T16:20:30.198455 #88920] INFO -- : Checked 19 Groups
I, [2023-09-26T16:20:30.198462 #88920] INFO -- : Done!
```
1. If you are confident that this operation resets the correct tokens, disable dry-run mode and run the operation again:
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
```shell
DRY_RUN=false VERBOSE=true MODEL_NAMES=Project,Group TOKEN_NAMES=runners_token gitlab-rake gitlab:doctor:reset_encrypted_tokens
```
{{< /tab >}}
{{< tab title="Self-compiled (source)" >}}
```shell
bundle exec rake gitlab:doctor:reset_encrypted_tokens RAILS_ENV=production DRY_RUN=false VERBOSE=true MODEL_NAMES=Project,Group TOKEN_NAMES=runners_token
```
{{< /tab >}}
{{< /tabs >}}
The `gitlab:doctor:reset_encrypted_tokens` task has the following limitations:
- Non-token attributes, for example `ApplicationSetting:ci_jwt_signing_key`, are not reset.
- The presence of more than one undecryptable attribute in a single model record causes the task
to fail with a `TypeError: no implicit conversion of nil into String ... block in aes256_gcm_decrypt` error.
## Troubleshooting
The following are solutions to problems you might discover using the Rake tasks
documented previously.
### Dangling objects
The `gitlab-rake gitlab:git:fsck` task can find dangling objects such as:
```plaintext
dangling blob a12...
dangling commit b34...
dangling tag c56...
dangling tree d78...
```
To delete them, try [running housekeeping](../housekeeping.md).
If the issue persists, try triggering garbage collection via the
[Rails Console](../operations/rails_console.md#starting-a-rails-console-session):
```ruby
p = Project.find_by_path("project-name")
Repositories::HousekeepingService.new(p, :gc).execute
```
If the dangling objects are younger than the default two-week grace period,
and you don't want to wait until they expire automatically, run:
```ruby
Repositories::HousekeepingService.new(p, :prune).execute
```
### Delete references to missing remote uploads
`gitlab-rake gitlab:uploads:check VERBOSE=1` detects remote objects that do not exist because they were
deleted externally but their references still exist in the GitLab database.
Example output with error message:
```shell
$ sudo gitlab-rake gitlab:uploads:check VERBOSE=1
Checking integrity of Uploads
- 100..434: Failures: 2
- Upload: 100: Remote object does not exist
- Upload: 101: Remote object does not exist
Done!
```
To delete these references to remote uploads that were deleted externally, open the [GitLab Rails Console](../operations/rails_console.md#starting-a-rails-console-session) and run:
```ruby
uploads_deleted = 0
Upload.find_each do |upload|
  next if upload.retrieve_uploader.file.exists? ### Skip if the file reference is valid
  uploads_deleted += 1
  p upload ### Allow verification before destroy
  # upload.destroy! ### Uncomment to actually destroy
end
puts "#{uploads_deleted} remote objects were destroyed."
```
### Delete references to missing artifacts
`gitlab-rake gitlab:artifacts:check VERBOSE=1` detects when artifacts (or `job.log` files):
- Are deleted outside of GitLab.
- Have references still in the GitLab database.
When this scenario is detected, the Rake task displays an error message. For example:
```shell
Checking integrity of Job artifacts
- 1..15: Failures: 2
- Job artifact: 9: #<Errno::ENOENT: No such file or directory @ rb_sysopen - /var/opt/gitlab/gitlab-rails/shared/artifacts/4b/22/4b227777d4dd1fc61c6f884f48641d02b4d121d3fd328cb08b5531fcacdabf8a/2022_06_30/8/9/job.log>
- Job artifact: 15: Remote object does not exist
Done!
```
To delete these references to missing local or remote artifacts (`job.log` files):
1. Open the [GitLab Rails Console](../operations/rails_console.md#starting-a-rails-console-session).
1. Run the following Ruby code:
```ruby
artifacts_deleted = 0
::Ci::JobArtifact.find_each do |artifact| ### Iterate artifacts
  # next if artifact.file.filename != "job.log" ### Uncomment if only `job.log` files' references are to be processed
  next if artifact.file.file.exists? ### Skip if the file reference is valid
  artifacts_deleted += 1
  puts "#{artifact.id} #{artifact.file.path} is missing." ### Allow verification before destroy
  # artifact.destroy! ### Uncomment to actually destroy
end
puts "Count of identified/destroyed invalid references: #{artifacts_deleted}"
```
### Delete references to missing LFS objects
If `gitlab-rake gitlab:lfs:check VERBOSE=1` detects LFS objects that exist in the database
but not on disk, [follow the procedure in the LFS documentation](../lfs/_index.md#missing-lfs-objects)
to remove the database entries.
### Update dangling object storage references
If you have [migrated from object storage to local storage](../cicd/job_artifacts.md#migrating-from-object-storage-to-local-storage) and files were missing, then dangling database references remain.
This is visible in the migration logs with errors like the following:
```shell
W, [2022-11-28T13:14:09.283833 #10025] WARN -- : Failed to transfer Ci::JobArtifact ID 11 with error: undefined method `body' for nil:NilClass
W, [2022-11-28T13:14:09.296911 #10025] WARN -- : Failed to transfer Ci::JobArtifact ID 12 with error: undefined method `body' for nil:NilClass
```
Attempting to [delete references to missing artifacts](check.md#delete-references-to-missing-artifacts) after you have disabled object storage results in the following error:
```plaintext
RuntimeError (Object Storage is not enabled for JobArtifactUploader)
```
To update these references to point to local storage:
1. Open the [GitLab Rails Console](../operations/rails_console.md#starting-a-rails-console-session).
1. Run the following Ruby code:
```ruby
artifacts_updated = 0
::Ci::JobArtifact.find_each do |artifact| ### Iterate artifacts
  next if artifact.file_store != 2 ### Skip if file_store already points to local storage
  artifacts_updated += 1
  # artifact.update(file_store: 1) ### Uncomment to actually update
end
puts "Updated file_store count: #{artifacts_updated}"
```
The script to [delete references to missing artifacts](check.md#delete-references-to-missing-artifacts) now functions correctly and cleans up the database.
### Delete references to missing secure files
`VERBOSE=1 gitlab-rake gitlab:ci_secure_files:check` detects when secure files:
- Are deleted outside of GitLab.
- Have references still in the GitLab database.
When this scenario is detected, the Rake task displays an error message. For example:
```shell
Checking integrity of CI Secure Files
- 1..15: Failures: 2
- Job SecureFile: 9: #<Errno::ENOENT: No such file or directory @ rb_sysopen - /var/opt/gitlab/gitlab-rails/shared/ci_secure_files/4b/22/4b227777d4dd1fc61c6f884f48641d02b4d121d3fd328cb08b5531fcacdabf8a/2022_06_30/8/9/distribution.cer>
- Job SecureFile: 15: Remote object does not exist
Done!
```
To delete these references to missing local or remote secure files:
1. Open the [GitLab Rails Console](../operations/rails_console.md#starting-a-rails-console-session).
1. Run the following Ruby code:
```ruby
secure_files_deleted = 0
::Ci::SecureFile.find_each do |secure_file| ### Iterate secure files
  next if secure_file.file.file.exists? ### Skip if the file reference is valid
  secure_files_deleted += 1
  puts "#{secure_file.id} #{secure_file.file.path} is missing." ### Allow verification before destroy
  # secure_file.destroy! ### Uncomment to actually destroy
end
puts "Count of identified/destroyed invalid references: #{secure_files_deleted}"
```
---
stage: GitLab Delivery
group: Self Managed
info: To determine the technical writer assigned to the Stage/Group associated with
  this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Project import and export Rake tasks
breadcrumbs:
- doc
- administration
- raketasks
---
{{< details >}}
- Tier: Free, Premium, Ultimate
- Offering: GitLab Self-Managed
{{< /details >}}
GitLab provides Rake tasks for [project import and export](../../user/project/settings/import_export.md).
You can only import from a [compatible](../../user/project/settings/import_export.md#compatibility) GitLab instance.
## Import large projects
The [Rake task](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/tasks/gitlab/import_export/import.rake) is used for importing large GitLab project exports.
As part of this task, direct upload is also disabled. This avoids uploading a huge archive to GCS, which can cause idle transaction timeouts.
You can run this task from the terminal.
Parameters:
| Attribute | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `username` | string | yes | User name |
| `namespace_path` | string | yes | Namespace path |
| `project_path` | string | yes | Project path |
| `archive_path` | string | yes | Path to the exported project tarball you want to import |
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
```shell
gitlab-rake "gitlab:import_export:import[root, group/subgroup, testingprojectimport, /path/to/file.tar.gz]"
```
{{< /tab >}}
{{< tab title="Self-compiled (source)" >}}
```shell
bundle exec rake "gitlab:import_export:import[root, group/subgroup, testingprojectimport, /path/to/file.tar.gz]" RAILS_ENV=production
```
{{< /tab >}}
{{< /tabs >}}
## Export large projects
You can use a Rake task to export large projects.
Parameters:
| Attribute | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `username` | string | yes | User name |
| `namespace_path` | string | yes | Namespace path |
| `project_path` | string | yes | Project name |
| `archive_path` | string | yes | Path to file to store the export project tarball |
```shell
gitlab-rake "gitlab:import_export:export[username, namespace_path, project_path, archive_path]"
```
---
stage: GitLab Delivery
group: Self Managed
info: To determine the technical writer assigned to the Stage/Group associated with
  this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Incoming email Rake tasks
breadcrumbs:
- doc
- administration
- raketasks
---
{{< details >}}
- Tier: Free, Premium, Ultimate
- Offering: GitLab Self-Managed
{{< /details >}}
{{< history >}}
- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/108279) in GitLab 15.9.
{{< /history >}}
The following are Incoming email-related Rake tasks.
## Secrets
GitLab can use [Incoming email](../incoming_email.md) secrets read from an encrypted file instead of storing them in plaintext in the file system. The following Rake tasks are provided for updating the contents of the encrypted file.
### Show secret
Show the contents of the current Incoming email secrets.
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
```shell
sudo gitlab-rake gitlab:incoming_email:secret:show
```
{{< /tab >}}
{{< tab title="Helm chart (Kubernetes)" >}}
Use a Kubernetes secret to store the incoming email password. For more information,
read about [Helm IMAP secrets](https://docs.gitlab.com/charts/installation/secrets.html#imap-password-for-incoming-emails).
{{< /tab >}}
{{< tab title="Docker" >}}
```shell
sudo docker exec -t <container name> gitlab-rake gitlab:incoming_email:secret:show
```
{{< /tab >}}
{{< tab title="Self-compiled (source)" >}}
```shell
bundle exec rake gitlab:incoming_email:secret:show RAILS_ENV=production
```
{{< /tab >}}
{{< /tabs >}}
#### Example output
```plaintext
password: 'examplepassword'
user: 'incoming-email@mail.example.com'
```
### Edit secret
Opens the secret contents in your editor, and writes the resulting content to the encrypted secret file when you exit.
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
```shell
sudo gitlab-rake gitlab:incoming_email:secret:edit EDITOR=vim
```
{{< /tab >}}
{{< tab title="Helm chart (Kubernetes)" >}}
Use a Kubernetes secret to store the incoming email password. For more information,
read about [Helm IMAP secrets](https://docs.gitlab.com/charts/installation/secrets.html#imap-password-for-incoming-emails).
{{< /tab >}}
{{< tab title="Docker" >}}
```shell
sudo docker exec -t <container name> gitlab-rake gitlab:incoming_email:secret:edit EDITOR=editor
```
{{< /tab >}}
{{< tab title="Self-compiled (source)" >}}
```shell
bundle exec rake gitlab:incoming_email:secret:edit RAILS_ENV=production EDITOR=vim
```
{{< /tab >}}
{{< /tabs >}}
---
stage: GitLab Delivery
group: Self Managed
info: To determine the technical writer assigned to the Stage/Group associated with
this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Incoming email Rake tasks
breadcrumbs:
- doc
- administration
- raketasks
---
{{< details >}}
- Tier: Free, Premium, Ultimate
- Offering: GitLab Self-Managed
{{< /details >}}
{{< history >}}
- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/108279) in GitLab 15.9.
{{< /history >}}
The following are incoming email-related Rake tasks.
## Secrets
GitLab can use [Incoming email](../incoming_email.md) secrets read from an encrypted file instead of storing them in plaintext in the file system. The following Rake tasks are provided for updating the contents of the encrypted file.
### Show secret
Show the contents of the current Incoming email secrets.
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
```shell
sudo gitlab-rake gitlab:incoming_email:secret:show
```
{{< /tab >}}
{{< tab title="Helm chart (Kubernetes)" >}}
Use a Kubernetes secret to store the incoming email password. For more information,
read about [Helm IMAP secrets](https://docs.gitlab.com/charts/installation/secrets.html#imap-password-for-incoming-emails).
{{< /tab >}}
{{< tab title="Docker" >}}
```shell
sudo docker exec -t <container name> gitlab-rake gitlab:incoming_email:secret:show
```
{{< /tab >}}
{{< tab title="Self-compiled (source)" >}}
```shell
bundle exec rake gitlab:incoming_email:secret:show RAILS_ENV=production
```
{{< /tab >}}
{{< /tabs >}}
#### Example output
```plaintext
password: 'examplepassword'
user: 'incoming-email@mail.example.com'
```
### Edit secret
Opens the secret contents in your editor, and writes the resulting content to the encrypted secret file when you exit.
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
```shell
sudo gitlab-rake gitlab:incoming_email:secret:edit EDITOR=vim
```
{{< /tab >}}
{{< tab title="Helm chart (Kubernetes)" >}}
Use a Kubernetes secret to store the incoming email password. For more information,
read about [Helm IMAP secrets](https://docs.gitlab.com/charts/installation/secrets.html#imap-password-for-incoming-emails).
{{< /tab >}}
{{< tab title="Docker" >}}
```shell
sudo docker exec -t <container name> gitlab-rake gitlab:incoming_email:secret:edit EDITOR=editor
```
{{< /tab >}}
{{< tab title="Self-compiled (source)" >}}
```shell
bundle exec rake gitlab:incoming_email:secret:edit RAILS_ENV=production EDITOR=vim
```
{{< /tab >}}
{{< /tabs >}}
### Write raw secret
Write new secret content by providing it on `STDIN`.
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
```shell
echo -e "password: 'examplepassword'" | sudo gitlab-rake gitlab:incoming_email:secret:write
```
{{< /tab >}}
{{< tab title="Helm chart (Kubernetes)" >}}
Use a Kubernetes secret to store the incoming email password. For more information,
read about [Helm IMAP secrets](https://docs.gitlab.com/charts/installation/secrets.html#imap-password-for-incoming-emails).
{{< /tab >}}
{{< tab title="Docker" >}}
```shell
sudo docker exec -it <container name> /bin/bash
echo -e "password: 'examplepassword'" | gitlab-rake gitlab:incoming_email:secret:write
```
{{< /tab >}}
{{< tab title="Self-compiled (source)" >}}
```shell
echo -e "password: 'examplepassword'" | bundle exec rake gitlab:incoming_email:secret:write RAILS_ENV=production
```
{{< /tab >}}
{{< /tabs >}}
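Because the write task overwrites the encrypted file with whatever arrives on `STDIN`, include every key in a single document. A minimal sketch, composing the two keys from the example output above (the `gitlab-rake` call is left commented out because it needs a running GitLab host):

```shell
# Compose the whole secrets document before piping it to the write task.
# Values are the examples from this page; substitute your own.
secret_yaml=$(cat <<'YAML'
password: 'examplepassword'
user: 'incoming-email@mail.example.com'
YAML
)

# Pipe it in one go, for example:
#   printf '%s\n' "$secret_yaml" | sudo gitlab-rake gitlab:incoming_email:secret:write
printf '%s\n' "$secret_yaml"
```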
### Secrets examples
**Editor example**
The write task can be used in cases where the edit command does not work with your editor:
```shell
# Write the existing secret to a plaintext file
sudo gitlab-rake gitlab:incoming_email:secret:show > incoming_email.yaml
# Edit the incoming_email file in your editor
...
# Re-encrypt the file
cat incoming_email.yaml | sudo gitlab-rake gitlab:incoming_email:secret:write
# Remove the plaintext file
rm incoming_email.yaml
```
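The plaintext file in the round trip above briefly exposes the secret on disk. A variant that keeps it in a private temporary file and removes it afterwards can be sketched as follows (the `gitlab-rake` calls are commented out because they need a running GitLab host; the `echo` stands in for the show task):

```shell
# Round-trip the secret through a mode-0600 temporary file.
tmpfile=$(mktemp)
chmod 600 "$tmpfile"
# sudo gitlab-rake gitlab:incoming_email:secret:show > "$tmpfile"
echo "password: 'examplepassword'" > "$tmpfile"    # stand-in for the show task
# ${EDITOR:-vi} "$tmpfile"
# sudo gitlab-rake gitlab:incoming_email:secret:write < "$tmpfile"
rm -f "$tmpfile"
```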
**KMS integration example**
It can also be used as a receiving application for content encrypted with a KMS:
```shell
gcloud kms decrypt --key my-key --keyring my-test-kms --plaintext-file=- --ciphertext-file=my-file --location=us-west1 | sudo gitlab-rake gitlab:incoming_email:secret:write
```
**Google Cloud secret integration example**
It can also be used as a receiving application for secrets stored in Google Cloud:
```shell
gcloud secrets versions access latest --secret="my-test-secret" | sudo gitlab-rake gitlab:incoming_email:secret:write
```
---
stage: Create
group: Source Code
info: To determine the technical writer assigned to the Stage/Group associated with
this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: X.509 signatures Rake task
breadcrumbs:
- doc
- administration
- raketasks
---
{{< details >}}
- Tier: Free, Premium, Ultimate
- Offering: GitLab Self-Managed
{{< /details >}}
When [signing commits with X.509](../../user/project/repository/signed_commits/x509.md),
the trust anchor might change and the signatures stored in the database must be updated.
## Update all X.509 signatures
This task:
- Iterates through all X.509-signed commits.
- Updates their verification status based on the current certificate store.
- Modifies only the database entries for the signatures.
- Leaves the commits unchanged.
To update all X.509 signatures, run:
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
```shell
sudo gitlab-rake gitlab:x509:update_signatures
```
{{< /tab >}}
{{< tab title="Self-compiled (source)" >}}
```shell
sudo -u git -H bundle exec rake gitlab:x509:update_signatures RAILS_ENV=production
```
{{< /tab >}}
{{< /tabs >}}
## Troubleshooting
When working with X.509 certificates, you might encounter the following issues.
### Error: `GRPC::DeadlineExceeded` during signature updates
You might get an error that states `GRPC::DeadlineExceeded` when updating X.509 signatures.
This issue occurs when network timeouts or connectivity problems prevent the task from
completing.
The task automatically retries each signature up to 5 times by default, which usually
resolves transient failures. You can customize the retry limit by setting the
`GRPC_DEADLINE_EXCEEDED_RETRY_LIMIT` environment variable:
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
```shell
GRPC_DEADLINE_EXCEEDED_RETRY_LIMIT=2 sudo gitlab-rake gitlab:x509:update_signatures
```
{{< /tab >}}
{{< tab title="Self-compiled (source)" >}}
```shell
GRPC_DEADLINE_EXCEEDED_RETRY_LIMIT=2 sudo -u git -H bundle exec rake gitlab:x509:update_signatures RAILS_ENV=production
```
{{< /tab >}}
{{< /tabs >}}
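The retry behavior can be pictured as a plain loop: attempt a command up to a limit, stopping at the first success. This is only an illustration of the idea; the Rake task performs the retries internally per signature, and `retry` below is not a GitLab command:

```shell
# Try "$@" up to $1 times; succeed on the first success, fail after the limit.
retry() {
  limit=$1
  shift
  n=1
  until "$@"; do
    if [ "$n" -ge "$limit" ]; then
      return 1
    fi
    n=$((n + 1))
  done
}

retry 3 true   # succeeds on the first attempt
```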
---
stage: GitLab Delivery
group: Self Managed
info: To determine the technical writer assigned to the Stage/Group associated with
this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Clean up Rake tasks
breadcrumbs:
- doc
- administration
- raketasks
---
{{< details >}}
- Tier: Free, Premium, Ultimate
- Offering: GitLab Self-Managed
{{< /details >}}
GitLab provides Rake tasks for cleaning up GitLab instances.
## Remove unreferenced LFS files
{{< alert type="warning" >}}
Do not run this task within 12 hours of a GitLab upgrade, to ensure that all background
migrations have finished. Running it earlier may lead to data loss.
{{< /alert >}}
When you remove LFS files from a repository's history, they become orphaned and continue to consume
disk space. With this Rake task, you can remove invalid references from the database, which
allows garbage collection of LFS files. For example:
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
```shell
sudo gitlab-rake gitlab:cleanup:orphan_lfs_file_references PROJECT_PATH="gitlab-org/gitlab-foss"
```
{{< /tab >}}
{{< tab title="Self-compiled (source)" >}}
```shell
bundle exec rake gitlab:cleanup:orphan_lfs_file_references RAILS_ENV=production PROJECT_PATH="gitlab-org/gitlab-foss"
```
{{< /tab >}}
{{< /tabs >}}
You can also specify the project with `PROJECT_ID` instead of `PROJECT_PATH`.
For example:
```shell
$ sudo gitlab-rake gitlab:cleanup:orphan_lfs_file_references PROJECT_ID="13083"
I, [2019-12-13T16:35:31.764962 #82356] INFO -- : Looking for orphan LFS files for project GitLab Org / GitLab Foss
I, [2019-12-13T16:35:31.923659 #82356] INFO -- : Removed invalid references: 12
```
By default, this task does not delete anything but shows how many file references it can
delete. Run the command with `DRY_RUN=false` if you actually want to
delete the references. You can also use the `LIMIT={number}` parameter to limit the number of deleted references.
This Rake task only removes the references to LFS files. Unreferenced LFS files are garbage-collected
later (once a day). If you need to garbage collect them immediately, run
`rake gitlab:cleanup:orphan_lfs_files` described below.
### Remove unreferenced LFS files immediately
Unreferenced LFS files are removed once a day, but you can remove them sooner if
you need to. To remove unreferenced LFS files immediately:
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
```shell
sudo gitlab-rake gitlab:cleanup:orphan_lfs_files
```
{{< /tab >}}
{{< tab title="Self-compiled (source)" >}}
```shell
bundle exec rake gitlab:cleanup:orphan_lfs_files
```
{{< /tab >}}
{{< /tabs >}}
Example output:
```shell
$ sudo gitlab-rake gitlab:cleanup:orphan_lfs_files
I, [2020-01-08T20:51:17.148765 #43765] INFO -- : Removed unreferenced LFS files: 12
```
## Clean up project upload files
Clean up project upload files if they don't exist in the GitLab database.
### Clean up project upload files from file system
Clean up local project upload files if they don't exist in the GitLab database. The
task attempts to fix a file if it can find its project; otherwise, it moves the
file to a lost and found directory. To clean up project upload files from the file system:
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
```shell
sudo gitlab-rake gitlab:cleanup:project_uploads
```
{{< /tab >}}
{{< tab title="Self-compiled (source)" >}}
```shell
bundle exec rake gitlab:cleanup:project_uploads RAILS_ENV=production
```
{{< /tab >}}
{{< /tabs >}}
Example output:
```shell
$ sudo gitlab-rake gitlab:cleanup:project_uploads
I, [2018-07-27T12:08:27.671559 #89817] INFO -- : Looking for orphaned project uploads to clean up. Dry run...
D, [2018-07-27T12:08:28.293568 #89817] DEBUG -- : Processing batch of 500 project upload file paths, starting with /opt/gitlab/embedded/service/gitlab-rails/public/uploads/test.out
I, [2018-07-27T12:08:28.689869 #89817] INFO -- : Can move to lost and found /opt/gitlab/embedded/service/gitlab-rails/public/uploads/test.out -> /opt/gitlab/embedded/service/gitlab-rails/public/uploads/-/project-lost-found/test.out
I, [2018-07-27T12:08:28.755624 #89817] INFO -- : Can fix /opt/gitlab/embedded/service/gitlab-rails/public/uploads/foo/bar/89a0f7b0b97008a4a18cedccfdcd93fb/foo.txt -> /opt/gitlab/embedded/service/gitlab-rails/public/uploads/qux/foo/bar/89a0f7b0b97008a4a18cedccfdcd93fb/foo.txt
I, [2018-07-27T12:08:28.760257 #89817] INFO -- : Can move to lost and found /opt/gitlab/embedded/service/gitlab-rails/public/uploads/foo/bar/1dd6f0f7eefd2acc4c2233f89a0f7b0b/image.png -> /opt/gitlab/embedded/service/gitlab-rails/public/uploads/-/project-lost-found/foo/bar/1dd6f0f7eefd2acc4c2233f89a0f7b0b/image.png
I, [2018-07-27T12:08:28.764470 #89817] INFO -- : To cleanup these files run this command with DRY_RUN=false
$ sudo gitlab-rake gitlab:cleanup:project_uploads DRY_RUN=false
I, [2018-07-27T12:08:32.944414 #89936] INFO -- : Looking for orphaned project uploads to clean up...
D, [2018-07-27T12:08:33.293568 #89817] DEBUG -- : Processing batch of 500 project upload file paths, starting with /opt/gitlab/embedded/service/gitlab-rails/public/uploads/test.out
I, [2018-07-27T12:08:33.689869 #89817] INFO -- : Did move to lost and found /opt/gitlab/embedded/service/gitlab-rails/public/uploads/test.out -> /opt/gitlab/embedded/service/gitlab-rails/public/uploads/-/project-lost-found/test.out
I, [2018-07-27T12:08:33.755624 #89817] INFO -- : Did fix /opt/gitlab/embedded/service/gitlab-rails/public/uploads/foo/bar/89a0f7b0b97008a4a18cedccfdcd93fb/foo.txt -> /opt/gitlab/embedded/service/gitlab-rails/public/uploads/qux/foo/bar/89a0f7b0b97008a4a18cedccfdcd93fb/foo.txt
I, [2018-07-27T12:08:33.760257 #89817] INFO -- : Did move to lost and found /opt/gitlab/embedded/service/gitlab-rails/public/uploads/foo/bar/1dd6f0f7eefd2acc4c2233f89a0f7b0b/image.png -> /opt/gitlab/embedded/service/gitlab-rails/public/uploads/-/project-lost-found/foo/bar/1dd6f0f7eefd2acc4c2233f89a0f7b0b/image.png
```
If using object storage, run the [All-in-one Rake task](uploads/migrate.md#all-in-one-rake-task) to ensure
all uploads are migrated to object storage and there are no files on disk in the uploads folder.
### Clean up project upload files from object storage
Move object store upload files to a lost and found directory if they don't exist in the GitLab database.
To clean up project upload files from object storage:
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
```shell
sudo gitlab-rake gitlab:cleanup:remote_upload_files
```
{{< /tab >}}
{{< tab title="Self-compiled (source)" >}}
```shell
bundle exec rake gitlab:cleanup:remote_upload_files RAILS_ENV=production
```
{{< /tab >}}
{{< /tabs >}}
Example output:
```shell
$ sudo gitlab-rake gitlab:cleanup:remote_upload_files
I, [2018-08-02T10:26:13.995978 #45011] INFO -- : Looking for orphaned remote uploads to remove. Dry run...
I, [2018-08-02T10:26:14.120400 #45011] INFO -- : Can be moved to lost and found: @hashed/6b/DSC_6152.JPG
I, [2018-08-02T10:26:14.120482 #45011] INFO -- : Can be moved to lost and found: @hashed/79/02/7902699be42c8a8e46fbbb4501726517e86b22c56a189f7625a6da49081b2451/711491b29d3eb08837798c4909e2aa4d/DSC00314.jpg
I, [2018-08-02T10:26:14.120634 #45011] INFO -- : To cleanup these files run this command with DRY_RUN=false
```
```shell
$ sudo gitlab-rake gitlab:cleanup:remote_upload_files DRY_RUN=false
I, [2018-08-02T10:26:47.598424 #45087] INFO -- : Looking for orphaned remote uploads to remove...
I, [2018-08-02T10:26:47.753131 #45087] INFO -- : Moved to lost and found: @hashed/6b/DSC_6152.JPG -> lost_and_found/@hashed/6b/DSC_6152.JPG
I, [2018-08-02T10:26:47.764356 #45087] INFO -- : Moved to lost and found: @hashed/79/02/7902699be42c8a8e46fbbb4501726517e86b22c56a189f7625a6da49081b2451/711491b29d3eb08837798c4909e2aa4d/DSC00314.jpg -> lost_and_found/@hashed/79/02/7902699be42c8a8e46fbbb4501726517e86b22c56a189f7625a6da49081b2451/711491b29d3eb08837798c4909e2aa4d/DSC00314.jpg
```
## Remove orphan artifact files
{{< alert type="note" >}}
These commands don't work for artifacts stored on
[object storage](../object_storage.md).
{{< /alert >}}
If you notice there are more job artifact files or directories on disk than there
should be, you can run:
```shell
sudo gitlab-rake gitlab:cleanup:orphan_job_artifact_files
```
This command:
- Scans through the entire artifacts folder.
- Checks which files still have a record in the database.
- Deletes the file and its directory from disk if no database record is found.
By default, this task does not delete anything but shows what it can
delete. Run the command with `DRY_RUN=false` if you actually want to
delete the files:
```shell
sudo gitlab-rake gitlab:cleanup:orphan_job_artifact_files DRY_RUN=false
```
You can also limit the number of files to delete with `LIMIT` (default `100`):
```shell
sudo gitlab-rake gitlab:cleanup:orphan_job_artifact_files LIMIT=100
```
This deletes only up to 100 files from disk. You can use this to delete a small
set for testing purposes.
Providing `DEBUG=1` displays the full path of every file that
is detected as being an orphan.
If `ionice` is installed, the task uses it to ensure the command does not cause
too much load on the disk. You can configure the niceness
level with `NICENESS`. The valid levels are listed below, but consult
`man 1 ionice` to be sure.
- `0` or `None`
- `1` or `Realtime`
- `2` or `Best-effort` (default)
- `3` or `Idle`
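The task invokes `ionice` on its own, but the same idea is useful for other long-running maintenance commands. A small illustrative wrapper (assuming `run_niced` as a hypothetical helper, not part of GitLab) that applies the default `Best-effort` class when `ionice` is available:

```shell
# Run a command under best-effort I/O scheduling (class 2) when possible.
run_niced() {
  if command -v ionice >/dev/null 2>&1; then
    ionice -c 2 "$@"
  else
    "$@"    # fall back to a plain run when ionice is absent
  fi
}

run_niced echo "cleanup placeholder"
```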
## Remove expired ActiveSession lookup keys
To remove expired ActiveSession lookup keys:
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
```shell
sudo gitlab-rake gitlab:cleanup:sessions:active_sessions_lookup_keys
```
{{< /tab >}}
{{< tab title="Self-compiled (source)" >}}
```shell
bundle exec rake gitlab:cleanup:sessions:active_sessions_lookup_keys RAILS_ENV=production
```
{{< /tab >}}
{{< /tabs >}}
## Container registry garbage collection
The Container Registry can use considerable amounts of disk space. To clear up
unused layers, the registry includes a [garbage collect command](../packages/container_registry.md#container-registry-garbage-collection).
---
stage: GitLab Delivery
group: Self Managed
info: To determine the technical writer assigned to the Stage/Group associated with
this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: SMTP Rake tasks
breadcrumbs:
- doc
- administration
- raketasks
---
{{< details >}}
- Tier: Free, Premium, Ultimate
- Offering: GitLab Self-Managed
{{< /details >}}
The following are SMTP-related Rake tasks.
## Secrets
GitLab can read SMTP configuration secrets from an encrypted file. The following Rake tasks are provided for updating the contents of that encrypted file.
### Show secret
Show the contents of the current SMTP secrets.
- Linux package installations:
```shell
sudo gitlab-rake gitlab:smtp:secret:show
```
- Self-compiled installations:
```shell
bundle exec rake gitlab:smtp:secret:show RAILS_ENV=production
```
**Example output**:
```plaintext
password: '123'
user_name: 'gitlab-inst'
```
### Edit secret
Opens the secret contents in your editor, and writes the resulting content to the encrypted secret file when you exit.
- Linux package installations:
```shell
sudo gitlab-rake gitlab:smtp:secret:edit EDITOR=vim
```
- Self-compiled installations:
```shell
bundle exec rake gitlab:smtp:secret:edit RAILS_ENV=production EDITOR=vim
```
### Write raw secret
Write new secret content by providing it on `STDIN`.
- Linux package installations:
```shell
echo -e "password: '123'" | sudo gitlab-rake gitlab:smtp:secret:write
```
- Self-compiled installations:
```shell
echo -e "password: '123'" | bundle exec rake gitlab:smtp:secret:write RAILS_ENV=production
```
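`echo -e` is not portable: in some shells, `echo` prints `-e` literally instead of interpreting the escapes. As a sketch, `printf` builds the same one-key-per-line YAML predictably; pipe its output to the write task exactly as above (the values shown are examples):

```shell
# Build the secret YAML with printf; '123' and 'gitlab-inst' are example values
printf "password: '%s'\nuser_name: '%s'\n" '123' 'gitlab-inst'
```

For example: `printf "password: '%s'\n" '123' | sudo gitlab-rake gitlab:smtp:secret:write`.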
### Secrets examples
**Editor example**
The write task can be used in cases where the edit command does not work with your editor:
```shell
# Write the existing secret to a plaintext file
sudo gitlab-rake gitlab:smtp:secret:show > smtp.yaml
# Edit the smtp file in your editor
...
# Re-encrypt the file
cat smtp.yaml | sudo gitlab-rake gitlab:smtp:secret:write
# Remove the plaintext file
rm smtp.yaml
```
**KMS integration example**
It can also be used as a receiving application for content encrypted with a KMS:
```shell
gcloud kms decrypt --key my-key --keyring my-test-kms --plaintext-file=- --ciphertext-file=my-file --location=us-west1 | sudo gitlab-rake gitlab:smtp:secret:write
```
**Google Cloud secret integration example**
It can also be used as a receiving application for secrets stored in Google Cloud Secret Manager:
```shell
gcloud secrets versions access latest --secret="my-test-secret" | sudo gitlab-rake gitlab:smtp:secret:write
```
```
---
stage: GitLab Delivery
group: Self Managed
info: To determine the technical writer assigned to the Stage/Group associated with
this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Password maintenance Rake tasks
breadcrumbs:
- doc
- administration
- raketasks
---
{{< details >}}
- Tier: Free, Premium, Ultimate
- Offering: GitLab Self-Managed
{{< /details >}}
GitLab provides Rake tasks for managing passwords.
## Reset passwords
To reset a password using a Rake task, see [reset user passwords](../../security/reset_user_password.md#use-a-rake-task).
## Check password salt length
Starting with GitLab 17.11, the salts of password hashes on FIPS instances
are increased when a user signs in.
You can check how many users need this migration:
```shell
# omnibus-gitlab
sudo gitlab-rake gitlab:password:fips_check_salts[true]
# installation from source
bundle exec rake gitlab:password:fips_check_salts[true] RAILS_ENV=production
```
---
stage: Application Security Testing
group: Composition Analysis
info: To determine the technical writer assigned to the Stage/Group associated with
this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: SPDX license list import Rake task
breadcrumbs:
- doc
- administration
- raketasks
---
{{< details >}}
- Tier: Ultimate
- Offering: GitLab Self-Managed
{{< /details >}}
GitLab provides a Rake task for uploading a fresh copy of the [SPDX license list](https://spdx.org/licenses/)
to a GitLab instance. This list is needed for matching the names of [License approval policies](../../user/compliance/license_approval_policies.md).
To import a fresh copy of the SPDX license list, run:
```shell
# omnibus-gitlab
sudo gitlab-rake gitlab:spdx:import
# source installations
bundle exec rake gitlab:spdx:import RAILS_ENV=production
```
To perform this task in an [offline environment](../../user/application_security/offline_deployments/_index.md#defining-offline-environments),
an outbound connection to [`licenses.json`](https://spdx.org/licenses/licenses.json) should be
allowed.
---
stage: Software Supply Chain Security
group: Authentication
info: To determine the technical writer assigned to the Stage/Group associated with
this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
gitlab_dedicated: false
title: LDAP Rake tasks
breadcrumbs:
- doc
- administration
- raketasks
---
{{< details >}}
- Tier: Free, Premium, Ultimate
- Offering: GitLab Self-Managed
{{< /details >}}
The following are LDAP-related Rake tasks.
## Check
The LDAP check Rake task tests the `bind_dn` and `password` credentials
(if configured) and lists a sample of LDAP users. This task is also
executed as part of the `gitlab:check` task, but can run independently
using the command below.
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
```shell
sudo gitlab-rake gitlab:ldap:check
```
{{< /tab >}}
{{< tab title="Self-compiled (source)" >}}
```shell
sudo RAILS_ENV=production -u git -H bundle exec rake gitlab:ldap:check
```
{{< /tab >}}
{{< /tabs >}}
By default, the task returns a sample of 100 LDAP users. Change this
limit by passing a number to the check task:
```shell
rake gitlab:ldap:check[50]
```
## Run a group sync
{{< details >}}
- Tier: Premium, Ultimate
- Offering: GitLab Self-Managed
{{< /details >}}
The following task runs a [group sync](../auth/ldap/ldap_synchronization.md#group-sync) immediately.
This is valuable when you'd like to update all configured group memberships against LDAP without
waiting for the next scheduled group sync to be run.
{{< alert type="note" >}}
If you'd like to change the frequency at which a group sync is performed,
[adjust the cron schedule](../auth/ldap/ldap_synchronization.md#adjust-ldap-group-sync-schedule)
instead.
{{< /alert >}}
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
```shell
sudo gitlab-rake gitlab:ldap:group_sync
```
{{< /tab >}}
{{< tab title="Self-compiled (source)" >}}
```shell
sudo RAILS_ENV=production -u git -H bundle exec rake gitlab:ldap:group_sync
```
{{< /tab >}}
{{< /tabs >}}
## Rename a provider
If you change the LDAP server ID in `gitlab.yml` or `gitlab.rb` you need
to update all user identities or users aren't able to sign in. Input the
old and new provider and this task updates all matching identities in the
database.
`old_provider` and `new_provider` are derived from the prefix `ldap` plus the
LDAP server ID from the configuration file. For example, in `gitlab.yml` or
`gitlab.rb` you may see LDAP configuration like this:
```yaml
main:
label: 'LDAP'
host: '_your_ldap_server'
port: 389
uid: 'sAMAccountName'
# ...
```
`main` is the LDAP server ID. Together, the unique provider is `ldapmain`.
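The provider name is purely mechanical: the literal prefix `ldap` concatenated with the server ID. A minimal shell sketch of the derivation (`mycompany` is a hypothetical server ID):

```shell
server_id='mycompany'         # the key under your LDAP servers configuration
provider="ldap${server_id}"   # prefix `ldap` + server ID
echo "$provider"              # → ldapmycompany
```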
{{< alert type="warning" >}}
If you input an incorrect new provider, users cannot sign in. If this happens,
run the task again with the incorrect provider as the `old_provider` and the
correct provider as the `new_provider`.
{{< /alert >}}
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
```shell
sudo gitlab-rake gitlab:ldap:rename_provider[old_provider,new_provider]
```
{{< /tab >}}
{{< tab title="Self-compiled (source)" >}}
```shell
sudo RAILS_ENV=production -u git -H bundle exec rake gitlab:ldap:rename_provider[old_provider,new_provider]
```
{{< /tab >}}
{{< /tabs >}}
### Example
Consider beginning with the default server ID `main` (full provider `ldapmain`).
If we change `main` to `mycompany`, the `new_provider` is `ldapmycompany`.
To rename all user identities run the following command:
```shell
sudo gitlab-rake gitlab:ldap:rename_provider[ldapmain,ldapmycompany]
```
Example output:
```plaintext
100 users with provider 'ldapmain' will be updated to 'ldapmycompany'.
If the new provider is incorrect, users will be unable to sign in.
Do you want to continue (yes/no)? yes
User identities were successfully updated
```
### Other options
If you do not specify an `old_provider` and `new_provider` the task prompts you
for them:
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
```shell
sudo gitlab-rake gitlab:ldap:rename_provider
```
{{< /tab >}}
{{< tab title="Self-compiled (source)" >}}
```shell
sudo RAILS_ENV=production -u git -H bundle exec rake gitlab:ldap:rename_provider
```
{{< /tab >}}
{{< /tabs >}}
**Example output**:
```plaintext
What is the old provider? Ex. 'ldapmain': ldapmain
What is the new provider? Ex. 'ldapcustom': ldapmycompany
```
This task also accepts the `force` environment variable, which skips the
confirmation dialog:
```shell
sudo gitlab-rake gitlab:ldap:rename_provider[old_provider,new_provider] force=yes
```
## Secrets
GitLab can read [LDAP configuration secrets](../auth/ldap/_index.md#use-encrypted-credentials) from an encrypted file.
The following Rake tasks are provided for updating the contents of the encrypted file.
### Show secret
Show the contents of the current LDAP secrets.
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
```shell
sudo gitlab-rake gitlab:ldap:secret:show
```
{{< /tab >}}
{{< tab title="Self-compiled (source)" >}}
```shell
sudo RAILS_ENV=production -u git -H bundle exec rake gitlab:ldap:secret:show
```
{{< /tab >}}
{{< /tabs >}}
**Example output**:
```plaintext
main:
password: '123'
bind_dn: 'gitlab-adm'
```
### Edit secret
Opens the secret contents in your editor, and writes the resulting content to the encrypted secret file when you exit.
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
```shell
sudo gitlab-rake gitlab:ldap:secret:edit EDITOR=vim
```
{{< /tab >}}
{{< tab title="Self-compiled (source)" >}}
```shell
sudo RAILS_ENV=production EDITOR=vim -u git -H bundle exec rake gitlab:ldap:secret:edit
```
{{< /tab >}}
{{< /tabs >}}
### Write raw secret
Write new secret content by providing it on `STDIN`.
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
```shell
echo -e "main:\n password: '123'" | sudo gitlab-rake gitlab:ldap:secret:write
```
{{< /tab >}}
{{< tab title="Self-compiled (source)" >}}
```shell
echo -e "main:\n password: '123'" | sudo RAILS_ENV=production -u git -H bundle exec rake gitlab:ldap:secret:write
```
{{< /tab >}}
{{< /tabs >}}
### Secrets examples
**Editor example**
The write task can be used in cases where the edit command does not work with your editor:
```shell
# Write the existing secret to a plaintext file
sudo gitlab-rake gitlab:ldap:secret:show > ldap.yaml
# Edit the ldap file in your editor
...
# Re-encrypt the file
cat ldap.yaml | sudo gitlab-rake gitlab:ldap:secret:write
# Remove the plaintext file
rm ldap.yaml
```
**KMS integration example**
It can also be used as a receiving application for content encrypted with a KMS:
```shell
gcloud kms decrypt --key my-key --keyring my-test-kms --plaintext-file=- --ciphertext-file=my-file --location=us-west1 | sudo gitlab-rake gitlab:ldap:secret:write
```
**Google Cloud secret integration example**
It can also be used as a receiving application for secrets stored in Google Cloud Secret Manager:
```shell
gcloud secrets versions access latest --secret="my-test-secret" | sudo gitlab-rake gitlab:ldap:secret:write
```
```
---
stage: GitLab Delivery
group: Self Managed
info: To determine the technical writer assigned to the Stage/Group associated with
  this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: User management Rake tasks
breadcrumbs:
- doc
- administration
- raketasks
---
{{< details >}}
- Tier: Free, Premium, Ultimate
- Offering: GitLab Self-Managed
{{< /details >}}
GitLab provides Rake tasks for managing users. Administrators can also use the **Admin** area to
[manage users](../admin_area.md#administering-users).
## Add user as a developer to all projects
To add a user as a developer to all projects, run:
```shell
# omnibus-gitlab
sudo gitlab-rake gitlab:import:user_to_projects[username@domain.tld]
# installation from source
bundle exec rake gitlab:import:user_to_projects[username@domain.tld] RAILS_ENV=production
```
## Add all users to all projects
To add all users to all projects, run:
```shell
# omnibus-gitlab
sudo gitlab-rake gitlab:import:all_users_to_all_projects
# installation from source
bundle exec rake gitlab:import:all_users_to_all_projects RAILS_ENV=production
```
Administrators are added as maintainers and all other users are added as developers.
## Add user as a developer to all groups
To add a user as a developer to all groups, run:
```shell
# omnibus-gitlab
sudo gitlab-rake gitlab:import:user_to_groups[username@domain.tld]
# installation from source
bundle exec rake gitlab:import:user_to_groups[username@domain.tld] RAILS_ENV=production
```
## Add all users to all groups
To add all users to all groups, run:
```shell
# omnibus-gitlab
sudo gitlab-rake gitlab:import:all_users_to_all_groups
# installation from source
bundle exec rake gitlab:import:all_users_to_all_groups RAILS_ENV=production
```
Administrators are added as owners so they can add additional users to the group.
## Update all users in a given group to `project_limit: 0` and `can_create_group: false`
To update all users in a given group to `project_limit: 0` and `can_create_group: false`, run:
```shell
# omnibus-gitlab
sudo gitlab-rake gitlab:user_management:disable_project_and_group_creation\[:group_id\]
# installation from source
bundle exec rake gitlab:user_management:disable_project_and_group_creation\[:group_id\] RAILS_ENV=production
```
This task updates all users in the given group, its subgroups, and the projects in that group's namespace with the noted limits.
## Control the number of billable users
Enable this setting to keep new users blocked until they have been cleared by the administrator.
Defaults to `false`:
```plaintext
block_auto_created_users: false
```
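The key lives in `gitlab.yml` under the sign-in method that creates the users. A sketch for a self-compiled installation, assuming the users are created through OmniAuth (the LDAP server sections accept the same key):

```yaml
# config/gitlab.yml — keep newly auto-created OmniAuth users blocked
# until an administrator approves them
omniauth:
  block_auto_created_users: true
```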
## Disable two-factor authentication for all users
This task disables two-factor authentication (2FA) for all users that have it enabled. This can be
useful if the GitLab `config/secrets.yml` file has been lost and users are unable
to sign in, for example.
To disable two-factor authentication for all users, run:
```shell
# omnibus-gitlab
sudo gitlab-rake gitlab:two_factor:disable_for_all_users
# installation from source
bundle exec rake gitlab:two_factor:disable_for_all_users RAILS_ENV=production
```
## Rotate two-factor authentication encryption key
GitLab stores the secret data required for two-factor authentication (2FA) in an encrypted
database column. The encryption key for this data is known as `otp_key_base`, and is
stored in `config/secrets.yml`.
If that file is leaked, but the individual 2FA secrets have not, it's possible
to re-encrypt those secrets with a new encryption key. This allows you to change
the leaked key without forcing all users to change their 2FA details.
To rotate the two-factor authentication encryption key:
1. Look up the old key in the `config/secrets.yml` file, but **make sure you're working
with the production section**. The line you're interested in looks like this:
```yaml
production:
otp_key_base: fffffffffffffffffffffffffffffffffffffffffffffff
```
1. Generate a new secret:
```shell
# omnibus-gitlab
sudo gitlab-rake secret
# installation from source
bundle exec rake secret RAILS_ENV=production
```
1. Stop the GitLab server, back up the existing secrets file, and update the database:
```shell
# omnibus-gitlab
sudo gitlab-ctl stop
sudo cp config/secrets.yml config/secrets.yml.bak
sudo gitlab-rake gitlab:two_factor:rotate_key:apply filename=backup.csv old_key=<old key> new_key=<new key>
# installation from source
sudo /etc/init.d/gitlab stop
cp config/secrets.yml config/secrets.yml.bak
bundle exec rake gitlab:two_factor:rotate_key:apply filename=backup.csv old_key=<old key> new_key=<new key> RAILS_ENV=production
```
The `<old key>` value can be read from `config/secrets.yml` (`<new key>` was
generated earlier). The **encrypted** values for the user 2FA secrets are
written to the specified `filename`. You can use this to rollback in case of
error.
1. Change `config/secrets.yml` to set `otp_key_base` to `<new key>` and restart. Again, make sure
you're operating in the **production** section.
```shell
# omnibus-gitlab
sudo gitlab-ctl start
# installation from source
sudo /etc/init.d/gitlab start
```
If there are any problems (perhaps using the wrong value for `old_key`), you can
restore your backup of `config/secrets.yml` and rollback the changes:
```shell
# omnibus-gitlab
sudo gitlab-ctl stop
sudo gitlab-rake gitlab:two_factor:rotate_key:rollback filename=backup.csv
sudo cp config/secrets.yml.bak config/secrets.yml
sudo gitlab-ctl start
# installation from source
sudo /etc/init.d/gitlab stop
bundle exec rake gitlab:two_factor:rotate_key:rollback filename=backup.csv RAILS_ENV=production
cp config/secrets.yml.bak config/secrets.yml
sudo /etc/init.d/gitlab start
```
## Bulk assign users to GitLab Duo
You can assign users in bulk to GitLab Duo using a CSV file with the names of the users.
The CSV file must have a header named `username`, followed by the usernames on each subsequent row.
```plaintext
username
user1
user2
user3
user4
```
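As a sketch, such a file can be generated from an existing list of usernames (`duo_users.csv` and the usernames are hypothetical):

```shell
# Write the required `username` header, then one username per row
printf 'username\n' > duo_users.csv
printf '%s\n' user1 user2 user3 >> duo_users.csv
head -n 1 duo_users.csv   # → username
```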
### GitLab Duo Pro
{{< details >}}
- Tier: Premium, Ultimate
- Offering: GitLab Self-Managed
{{< /details >}}
{{< history >}}
- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/142189) in GitLab 16.9.
{{< /history >}}
To perform bulk user assignment for GitLab Duo Pro, you can use the following Rake task:
```shell
bundle exec rake duo_pro:bulk_user_assignment DUO_PRO_BULK_USER_FILE_PATH=path/to/your/file.csv
```
If you prefer to use square brackets in the file path, you can escape them or use double quotes:
```shell
bundle exec rake duo_pro:bulk_user_assignment\['path/to/your/file.csv'\]
# or
bundle exec rake "duo_pro:bulk_user_assignment[path/to/your/file.csv]"
```
### GitLab Duo Pro and Enterprise
{{< details >}}
- Tier: Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed
{{< /details >}}
{{< history >}}
- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/187230) in GitLab 18.0.
{{< /history >}}
#### GitLab Self-Managed
This Rake task bulk assigns GitLab Duo Pro or Enterprise seats at the instance level to a list of users from a
CSV file, based on the available purchased add-on.
To perform bulk user assignment for a GitLab Self-Managed instance:
```shell
bundle exec rake gitlab_subscriptions:duo:bulk_user_assignment DUO_BULK_USER_FILE_PATH=path/to/your/file.csv
```
If you prefer to use square brackets in the file path, you can escape them or use double quotes:
```shell
bundle exec rake gitlab_subscriptions:duo:bulk_user_assignment\['path/to/your/file.csv'\]
# or
bundle exec rake "gitlab_subscriptions:duo:bulk_user_assignment[path/to/your/file.csv]"
```
#### GitLab.com
GitLab.com administrators can also use this Rake task to bulk assign GitLab Duo Pro or Enterprise seats for GitLab.com
groups, based on the available purchased add-on for that group.
To perform bulk user assignment for a GitLab.com group:
```shell
bundle exec rake gitlab_subscriptions:duo:bulk_user_assignment DUO_BULK_USER_FILE_PATH=path/to/your/file.csv NAMESPACE_ID=<namespace_id>
```
If you prefer to use square brackets in the file path, you can escape them or use double quotes:
```shell
bundle exec rake gitlab_subscriptions:duo:bulk_user_assignment\['path/to/your/file.csv','<namespace_id>'\]
# or
bundle exec rake "gitlab_subscriptions:duo:bulk_user_assignment[path/to/your/file.csv,<namespace_id>]"
```
## Troubleshooting
### Errors during bulk user assignment
When using the Rake task for bulk user assignment, you might encounter the following errors:
- `User is not found`: The specified user was not found. Ensure the provided username matches an existing user.
- `ERROR_NO_SEATS_AVAILABLE`: No more seats are available for user assignment. See how to [view assigned GitLab Duo users](../../subscriptions/subscription-add-ons.md#view-assigned-gitlab-duo-users) to check current seat assignments.
- `ERROR_INVALID_USER_MEMBERSHIP`: The user is not eligible for assignment because they are inactive, a bot, or a ghost. Ensure the user is active and, on GitLab.com, a member of the provided namespace.
## Related topics
- [Reset user passwords](../../security/reset_user_password.md#use-a-rake-task)
---
stage: GitLab Delivery
group: Self Managed
info: To determine the technical writer assigned to the Stage/Group associated with
  this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Webhook administration Rake tasks
breadcrumbs:
- doc
- administration
- raketasks
---
{{< details >}}
- Tier: Free, Premium, Ultimate
- Offering: GitLab Self-Managed
{{< /details >}}
GitLab provides Rake tasks for webhook management.
Requests to the [local network by webhooks](../../security/webhooks.md) can be allowed or blocked by an
administrator.
## Add a webhook to all projects
To add a webhook to all projects, run:
```shell
# omnibus-gitlab
sudo gitlab-rake gitlab:web_hook:add URL="http://example.com/hook"
# source installations
bundle exec rake gitlab:web_hook:add URL="http://example.com/hook" RAILS_ENV=production
```
## Add a webhook to projects in a namespace
To add a webhook to all projects in a specific namespace, run:
```shell
# omnibus-gitlab
sudo gitlab-rake gitlab:web_hook:add URL="http://example.com/hook" NAMESPACE=<namespace>
# source installations
bundle exec rake gitlab:web_hook:add URL="http://example.com/hook" NAMESPACE=<namespace> RAILS_ENV=production
```
## Remove a webhook from projects
To remove a webhook from all projects, run:
```shell
# omnibus-gitlab
sudo gitlab-rake gitlab:web_hook:rm URL="http://example.com/hook"
# source installations
bundle exec rake gitlab:web_hook:rm URL="http://example.com/hook" RAILS_ENV=production
```
## Remove a webhook from projects in a namespace
To remove a webhook from projects in a specific namespace, run:
```shell
# omnibus-gitlab
sudo gitlab-rake gitlab:web_hook:rm URL="http://example.com/hook" NAMESPACE=<namespace>
# source installations
bundle exec rake gitlab:web_hook:rm URL="http://example.com/hook" NAMESPACE=<namespace> RAILS_ENV=production
```
## List all webhooks
To list all webhooks, run:
```shell
# omnibus-gitlab
sudo gitlab-rake gitlab:web_hook:list
# source installations
bundle exec rake gitlab:web_hook:list RAILS_ENV=production
```
## List webhooks for projects in a namespace
To list all webhooks for projects in a specific namespace, run:
```shell
# omnibus-gitlab
sudo gitlab-rake gitlab:web_hook:list NAMESPACE=<namespace>
# source installations
bundle exec rake gitlab:web_hook:list NAMESPACE=<namespace> RAILS_ENV=production
```
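If you need the same webhook on several namespaces, the per-namespace task can be wrapped in a plain shell loop. A minimal sketch, assuming hypothetical namespace names; it only prints the commands, so you can review them and then pipe the loop to `sh` to execute:

```shell
# Hypothetical wrapper: print one `web_hook:add` command per namespace.
HOOK_URL="http://example.com/hook"
for ns in group-a group-b group-c; do
  echo "sudo gitlab-rake gitlab:web_hook:add URL=\"${HOOK_URL}\" NAMESPACE=\"${ns}\""
done
```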
---
stage: GitLab Delivery
group: Self Managed
info: To determine the technical writer assigned to the Stage/Group associated with
  this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Uploads sanitize Rake tasks
breadcrumbs:
- doc
- administration
- raketasks
- uploads
---
{{< details >}}
- Tier: Free, Premium, Ultimate
- Offering: GitLab Self-Managed
{{< /details >}}
EXIF data is automatically stripped from JPG or TIFF image uploads.
EXIF data may contain sensitive information (for example, GPS location), so you
can remove EXIF data from existing images that were uploaded to an earlier version of GitLab.
## Prerequisite
To run this Rake task, you need `exiftool` installed on your system. If you installed GitLab:
- By using the Linux package, you're all set.
- By using the self-compiled installation, make sure `exiftool` is installed:
```shell
# Debian/Ubuntu
sudo apt-get install libimage-exiftool-perl
# RHEL/CentOS
sudo yum install perl-Image-ExifTool
```
## Remove EXIF data from existing uploads
To remove EXIF data from existing uploads, run the following command:
```shell
sudo RAILS_ENV=production -u git -H bundle exec rake gitlab:uploads:sanitize:remove_exif
```
By default, this command runs in "dry run" mode and doesn't remove EXIF data. It can be used for
checking if (and how many) images should be sanitized.
The Rake task accepts the following parameters:
| Parameter | Type | Description |
|:-------------|:--------|:----------------------------------------------------------------------------------------------------------------------------|
| `start_id` | integer | Only uploads with equal or greater ID are processed |
| `stop_id` | integer | Only uploads with equal or smaller ID are processed |
| `dry_run` | boolean | Do not remove EXIF data, only check if EXIF data are present or not. Defaults to `true` |
| `sleep_time` | float | Pause for number of seconds after processing each image. Defaults to 0.3 seconds |
| `uploader` | string | Run sanitization only for uploads of the given uploader: `FileUploader`, `PersonalFileUploader`, or `NamespaceFileUploader` |
| `since` | date | Run sanitization only for uploads newer than given date. For example, `2019-05-01` |
If you have a large number of uploads, you can speed up sanitization by:
- Setting `sleep_time` to a lower value.
- Running multiple Rake tasks in parallel, each with a separate range of upload IDs (by setting
`start_id` and `stop_id`).
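The parallel approach can be sketched as a small command generator; the total ID range and job count here are placeholders. It prints one `remove_exif` invocation per range so you can review the commands before running them:

```shell
# Hypothetical: split upload IDs 1..TOTAL into JOBS equal ranges and print
# one background `remove_exif` invocation per range.
TOTAL=50000
JOBS=2
STEP=$(( TOTAL / JOBS ))
for i in $(seq 0 $(( JOBS - 1 ))); do
  start=$(( i * STEP + 1 ))
  stop=$(( (i + 1) * STEP ))
  echo "sudo RAILS_ENV=production -u git -H bundle exec rake gitlab:uploads:sanitize:remove_exif[${start},${stop},false,0.1] 2>&1 | tee exif-${i}.log &"
done
```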
To remove EXIF data from all uploads, use:
```shell
sudo RAILS_ENV=production -u git -H bundle exec rake gitlab:uploads:sanitize:remove_exif[,,false,] 2>&1 | tee exif.log
```
To remove EXIF data on uploads with an ID between 100 and 5000 and pause for 0.1 second after each file, use:
```shell
sudo RAILS_ENV=production -u git -H bundle exec rake gitlab:uploads:sanitize:remove_exif[100,5000,false,0.1] 2>&1 | tee exif.log
```
The output is written into an `exif.log` file because it is often long.
If sanitization fails for an upload, an error message appears in the output of the Rake task.
Typical reasons are that the file is missing from storage or is not a valid image.
[Report](https://gitlab.com/gitlab-org/gitlab/-/issues/new) any issues, prefix the issue title
with 'EXIF', and include the error output and (if possible) the image.
---
stage: GitLab Delivery
group: Self Managed
info: To determine the technical writer assigned to the Stage/Group associated with
  this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Uploads migrate Rake tasks
breadcrumbs:
- doc
- administration
- raketasks
- uploads
---
{{< details >}}
- Tier: Free, Premium, Ultimate
- Offering: GitLab Self-Managed
{{< /details >}}
GitLab provides a Rake task for migrating uploads between storage types:
- To migrate all uploads, use [`gitlab:uploads:migrate:all`](#all-in-one-rake-task).
- To migrate only specific upload types, use [`gitlab:uploads:migrate`](#individual-rake-tasks).
## Migrate to object storage
After [configuring the object storage](../../uploads.md#using-object-storage) for uploads
to GitLab, use this task to migrate existing uploads from the local storage to the remote storage.
All of the processing is done in a background worker and requires **no downtime**.
Read more about using [object storage with GitLab](../../object_storage.md).
### All-in-one Rake task
GitLab provides a wrapper Rake task that migrates all uploaded files (for example, avatars, logos,
attachments, and favicon) to object storage in one step. The wrapper task invokes individual Rake
tasks to migrate files falling under each of these categories one by one.
These [individual Rake tasks](#individual-rake-tasks) are described in the next section.
To migrate all uploads from local storage to object storage, run:
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
```shell
gitlab-rake "gitlab:uploads:migrate:all"
```
{{< /tab >}}
{{< tab title="Self-compiled (source)" >}}
```shell
sudo RAILS_ENV=production -u git -H bundle exec rake gitlab:uploads:migrate:all
```
{{< /tab >}}
{{< /tabs >}}
You can optionally track progress and verify that all uploads migrated successfully using the
[PostgreSQL console](https://docs.gitlab.com/omnibus/settings/database.html#connecting-to-the-bundled-postgresql-database):
- `sudo gitlab-rails dbconsole --database main` for Linux package installations.
- `sudo -u git -H psql -d gitlabhq_production` for self-compiled installations.
Verify that `objectstg` below (where `store = 2`) has the count of all uploads:
```shell
gitlabhq_production=# SELECT count(*) AS total, sum(case when store = '1' then 1 else 0 end) AS filesystem, sum(case when store = '2' then 1 else 0 end) AS objectstg FROM uploads;
total | filesystem | objectstg
------+------------+-----------
2409 | 0 | 2409
```
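The same check can be scripted; a minimal sketch that parses the three counts. The query output is simulated with a here-document here; on a real instance, feed it the single-row result of the `SELECT` above instead:

```shell
# Hypothetical: decide whether migration is complete from the
# `total | filesystem | objectstg` counts (simulated input below).
read -r total filesystem objectstg <<'EOF'
2409 0 2409
EOF

if [ "${total}" = "${objectstg}" ] && [ "${filesystem}" = "0" ]; then
  echo "all uploads migrated"
else
  echo "migration incomplete: ${filesystem} uploads still on the file system"
fi
```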
Verify that there are no files on disk in the `uploads` folder:
```shell
sudo find /var/opt/gitlab/gitlab-rails/uploads -type f | grep -v tmp | wc -l
```
### Individual Rake tasks
If you already ran the [all-in-one Rake task](#all-in-one-rake-task), there is no need to run these
individual tasks.
The Rake task uses three parameters to find uploads to migrate:
| Parameter | Type | Description |
|:-----------------|:--------------|:-------------------------------------------------------|
| `uploader_class` | string | Type of the uploader to migrate from. |
| `model_class` | string | Type of the model to migrate from. |
| `mount_point` | string/symbol | Name of the model's column the uploader is mounted on. |
{{< alert type="note" >}}
These parameters are mainly internal to the structure of GitLab; you may want to refer to the task
list below instead. After running these individual tasks, we recommend that you run the
[all-in-one Rake task](#all-in-one-rake-task) to migrate any uploads not included in the listed types.
{{< /alert >}}
This task also accepts an environment variable which you can use to override
the default batch size:
| Variable | Type | Description |
|:---------|:--------|:--------------------------------------------------|
| `BATCH` | integer | Specifies the size of the batch. Defaults to 200. |
The following shows how to run `gitlab:uploads:migrate` for individual types of uploads.
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
```shell
# gitlab-rake gitlab:uploads:migrate[uploader_class, model_class, mount_point]
# Avatars
gitlab-rake "gitlab:uploads:migrate[AvatarUploader, Project, :avatar]"
gitlab-rake "gitlab:uploads:migrate[AvatarUploader, Group, :avatar]"
gitlab-rake "gitlab:uploads:migrate[AvatarUploader, User, :avatar]"
# Attachments
gitlab-rake "gitlab:uploads:migrate[AttachmentUploader, Appearance, :logo]"
gitlab-rake "gitlab:uploads:migrate[AttachmentUploader, Appearance, :header_logo]"
# Favicon
gitlab-rake "gitlab:uploads:migrate[FaviconUploader, Appearance, :favicon]"
# Markdown
gitlab-rake "gitlab:uploads:migrate[FileUploader, Project]"
gitlab-rake "gitlab:uploads:migrate[PersonalFileUploader, Snippet]"
gitlab-rake "gitlab:uploads:migrate[NamespaceFileUploader, Snippet]"
gitlab-rake "gitlab:uploads:migrate[FileUploader, MergeRequest]"
# Design Management design thumbnails
gitlab-rake "gitlab:uploads:migrate[DesignManagement::DesignV432x230Uploader, DesignManagement::Action, :image_v432x230]"
```
{{< /tab >}}
{{< tab title="Self-compiled (source)" >}}
Use `RAILS_ENV=production` for every task.
```shell
# sudo -u git -H bundle exec rake gitlab:uploads:migrate
# Avatars
sudo -u git -H bundle exec rake "gitlab:uploads:migrate[AvatarUploader, Project, :avatar]"
sudo -u git -H bundle exec rake "gitlab:uploads:migrate[AvatarUploader, Group, :avatar]"
sudo -u git -H bundle exec rake "gitlab:uploads:migrate[AvatarUploader, User, :avatar]"
# Attachments
sudo -u git -H bundle exec rake "gitlab:uploads:migrate[AttachmentUploader, Appearance, :logo]"
sudo -u git -H bundle exec rake "gitlab:uploads:migrate[AttachmentUploader, Appearance, :header_logo]"
# Favicon
sudo -u git -H bundle exec rake "gitlab:uploads:migrate[FaviconUploader, Appearance, :favicon]"
# Markdown
sudo -u git -H bundle exec rake "gitlab:uploads:migrate[FileUploader, Project]"
sudo -u git -H bundle exec rake "gitlab:uploads:migrate[PersonalFileUploader, Snippet]"
sudo -u git -H bundle exec rake "gitlab:uploads:migrate[NamespaceFileUploader, Snippet]"
sudo -u git -H bundle exec rake "gitlab:uploads:migrate[FileUploader, MergeRequest]"
# Design Management design thumbnails
sudo -u git -H bundle exec rake "gitlab:uploads:migrate[DesignManagement::DesignV432x230Uploader, DesignManagement::Action]"
```
{{< /tab >}}
{{< /tabs >}}
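The individual invocations above can be generated in a loop. A sketch for the Markdown uploader types with a larger batch size; it only prints the commands, so review them and then pipe the loop to `sh` to execute:

```shell
# Hypothetical: print one `migrate` invocation per Markdown uploader type,
# each with an overridden batch size.
BATCH=500
for args in "FileUploader, Project" "PersonalFileUploader, Snippet" \
            "NamespaceFileUploader, Snippet" "FileUploader, MergeRequest"; do
  echo "sudo BATCH=${BATCH} gitlab-rake \"gitlab:uploads:migrate[${args}]\""
done
```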
## Migrate to local storage
If you need to disable [object storage](../../object_storage.md) for any reason, you must first
migrate your data out of object storage and back into your local storage.
{{< alert type="warning" >}}
**Extended downtime is required** so no new files are created in object storage during
the migration. A configuration setting to allow migrating
from object storage to local files with only a brief moment of downtime for configuration changes
is tracked [in this issue](https://gitlab.com/gitlab-org/gitlab/-/issues/30979).
{{< /alert >}}
### All-in-one Rake task
GitLab provides a wrapper Rake task that migrates all uploaded files (for example, avatars, logos,
attachments, and favicon) to local storage in one step. The wrapper task invokes individual Rake
tasks to migrate files falling under each of these categories one by one.
For details on these Rake tasks, refer to [Individual Rake tasks](#individual-rake-tasks).
Keep in mind the task name in this case is `gitlab:uploads:migrate_to_local`.
To migrate uploads from object storage to local storage:
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
```shell
gitlab-rake "gitlab:uploads:migrate_to_local:all"
```
{{< /tab >}}
{{< tab title="Self-compiled (source)" >}}
```shell
sudo RAILS_ENV=production -u git -H bundle exec rake gitlab:uploads:migrate_to_local:all
```
{{< /tab >}}
{{< /tabs >}}
After running the Rake task, you can disable object storage by undoing the changes described
in the instructions to [configure object storage](../../uploads.md#using-object-storage).
## Migrate to local storage
If you need to disable [object storage](../../object_storage.md) for any reason, you must first
migrate your data out of object storage and back into your local storage.
{{< alert type="warning" >}}
**Extended downtime is required** so no new files are created in object storage during
the migration. A configuration setting to allow migrating
from object storage to local files with only a brief moment of downtime for configuration changes
is tracked [in this issue](https://gitlab.com/gitlab-org/gitlab/-/issues/30979).
{{< /alert >}}
### All-in-one Rake task
GitLab provides a wrapper Rake task that migrates all uploaded files (for example, avatars, logos,
attachments, and favicon) to local storage in one step. The wrapper task invokes individual Rake
tasks to migrate files falling under each of these categories one by one.
For details on these Rake tasks, refer to [Individual Rake tasks](#individual-rake-tasks).
Keep in mind the task name in this case is `gitlab:uploads:migrate_to_local`.
To migrate uploads from object storage to local storage:
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
```shell
gitlab-rake "gitlab:uploads:migrate_to_local:all"
```
{{< /tab >}}
{{< tab title="Self-compiled (source)" >}}
```shell
sudo RAILS_ENV=production -u git -H bundle exec rake gitlab:uploads:migrate_to_local:all
```
{{< /tab >}}
{{< /tabs >}}
After running the Rake task, you can disable object storage by undoing the changes described
in the instructions to [configure object storage](../../uploads.md#using-object-storage).
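To confirm the migration back to local storage, the same query used in the object storage section should now show the counts reversed: all rows in filesystem storage (`store = '1'`) and none left in object storage (`store = '2'`).

```sql
-- Run in the gitlabhq_production database (for example, through gitlab-psql).
-- Expect filesystem = total and objectstg = 0 after the migration.
SELECT count(*) AS total,
       sum(CASE WHEN store = '1' THEN 1 ELSE 0 END) AS filesystem,
       sum(CASE WHEN store = '2' THEN 1 ELSE 0 END) AS objectstg
FROM uploads;
```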
---
stage: Software Supply Chain Security
group: Authentication
title: Access token Rake tasks
---
{{< details >}}
- Tier: Free, Premium, Ultimate
- Offering: GitLab Self-Managed
{{< /details >}}
{{< history >}}
- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/467416) in GitLab 17.2.
{{< /history >}}
## Analyze token expiration dates
In GitLab 16.0, a [background migration](https://gitlab.com/gitlab-org/gitlab/-/issues/369123)
gave all non-expiring personal, project, and group access tokens an expiration date set at one
year after those tokens were created.
To identify which tokens might have been affected by this migration, you can run a
Rake task that analyzes all access tokens and displays the top ten most common expiration dates:
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
```shell
gitlab-rake gitlab:tokens:analyze
```
{{< /tab >}}
{{< tab title="Helm chart (Kubernetes)" >}}
```shell
# Find the toolbox pod
kubectl --namespace <namespace> get pods -lapp=toolbox
kubectl exec -it <toolbox-pod-name> -- sh -c 'cd /srv/gitlab && bin/rake gitlab:tokens:analyze'
```
{{< /tab >}}
{{< tab title="Docker" >}}
```shell
sudo docker exec -it <container_name> /bin/bash
gitlab-rake gitlab:tokens:analyze
```
{{< /tab >}}
{{< tab title="Self-compiled (source)" >}}
```shell
sudo RAILS_ENV=production -u git -H bundle exec rake gitlab:tokens:analyze
```
{{< /tab >}}
{{< /tabs >}}
This task analyzes all the access tokens and groups them by expiration date.
The left column shows the expiration date, and the right column shows how many tokens
have that expiration date. Example output:
```plaintext
======= Personal/Project/Group Access Token Expiration Migration =======
Started at: 2023-06-15 10:20:35 +0000
Finished : 2023-06-15 10:23:01 +0000
===== Top 10 Personal/Project/Group Access Token Expiration Dates =====
| Expiration Date | Count |
|-----------------|-------|
| 2024-06-15 | 1565353 |
| 2017-12-31 | 2508 |
| 2018-01-01 | 1008 |
| 2016-12-31 | 833 |
| 2017-08-31 | 705 |
| 2017-06-30 | 596 |
| 2018-12-31 | 548 |
| 2017-05-31 | 523 |
| 2017-09-30 | 520 |
| 2017-07-31 | 494 |
========================================================================
```
In this example, you can see that over 1.5 million access tokens have an
expiration date of 2024-06-15, one year after the migration was run
on 2023-06-15. This suggests that most of these tokens were assigned by
the migration. However, there is no way to know for sure whether other
tokens were created manually with the same date.
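The one-year offset can be reproduced with GNU `date` (a sketch; assumes GNU coreutils):

```shell
# The migration ran on 2023-06-15 and set non-expiring tokens to expire
# one year later, which matches the dominant bucket in the output above.
migration_date="2023-06-15"
date -d "$migration_date + 1 year" +%F
```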
## Update expiration dates in bulk
Prerequisites:

- You must be an administrator.
- You must have an interactive terminal.
To extend or remove token expiration dates in bulk:

1. Run the tool:
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
```shell
gitlab-rake gitlab:tokens:edit
```
{{< /tab >}}
{{< tab title="Helm chart (Kubernetes)" >}}
```shell
# Find the toolbox pod
kubectl --namespace <namespace> get pods -lapp=toolbox
kubectl exec -it <toolbox-pod-name> -- sh -c 'cd /srv/gitlab && bin/rake gitlab:tokens:edit'
```
{{< /tab >}}
{{< tab title="Docker" >}}
```shell
sudo docker exec -it <container_name> /bin/bash
gitlab-rake gitlab:tokens:edit
```
{{< /tab >}}
{{< tab title="Self-compiled (source)" >}}
```shell
sudo RAILS_ENV=production -u git -H bundle exec rake gitlab:tokens:edit
```
{{< /tab >}}
{{< /tabs >}}
After the tool starts, it shows the output from the [analyze step](#analyze-token-expiration-dates)
plus an additional prompt about modifying the expiration dates:
```plaintext
======= Personal/Project/Group Access Token Expiration Migration =======
Started at: 2023-06-15 10:20:35 +0000
Finished : 2023-06-15 10:23:01 +0000
===== Top 10 Personal/Project/Group Access Token Expiration Dates =====
| Expiration Date | Count |
|-----------------|-------|
| 2024-05-14 | 1565353 |
| 2017-12-31 | 2508 |
| 2018-01-01 | 1008 |
| 2016-12-31 | 833 |
| 2017-08-31 | 705 |
| 2017-06-30 | 596 |
| 2018-12-31 | 548 |
| 2017-05-31 | 523 |
| 2017-09-30 | 520 |
| 2017-07-31 | 494 |
========================================================================
What do you want to do? (Press ↑/↓ arrow or 1-3 number to move and Enter to select)
‣ 1. Extend expiration date
2. Remove expiration date
3. Quit
```
### Extend expiration dates
To extend expiration dates on all tokens matching a given expiration date:
1. Select option 1, `Extend expiration date`:
```plaintext
What do you want to do?
‣ 1. Extend expiration date
2. Remove expiration date
3. Quit
```
1. The tool asks you to select one of the expiration dates listed. For example:
```plaintext
Select an expiration date (Press ↑/↓/←/→ arrow to move and Enter to select)
‣ 2024-05-14
2017-12-31
2018-01-01
2016-12-31
2017-08-31
2017-06-30
```
Use the arrow keys on your keyboard to select a date. To abort,
scroll all the way down and select `--> Abort`. Press <kbd>Enter</kbd> to confirm
your selection:
```plaintext
Select an expiration date
2017-06-30
2018-12-31
2017-05-31
2017-09-30
2017-07-31
‣ --> Abort
```
If you select a date, the tool prompts you for a new expiration date:
```plaintext
What would you like the new expiration date to be? (2025-05-14) 2024-05-14
```
The default is one year from the selected date. Press <kbd>Enter</kbd>
to use the default, or manually enter a date in `YYYY-MM-DD` format.
1. After you have entered a valid date, the tool asks one more time for confirmation:
```plaintext
Old expiration date: 2024-05-14
New expiration date: 2025-05-14
WARNING: This will now update 1565353 token(s). Are you sure? (y/N)
```
If you enter `y`, the tool extends the expiration date
for all the tokens with the selected expiration date.
If you enter `N`, the tool aborts the update task and returns to the
original analyze output.
### Remove expiration dates
To remove expiration dates on all tokens matching
a given expiration date:
1. Select option 2, `Remove expiration date`:
```plaintext
What do you want to do?
1. Extend expiration date
‣ 2. Remove expiration date
3. Quit
```
1. The tool asks you to select the expiration date from the table. For example:
```plaintext
Select an expiration date (Press ↑/↓/←/→ arrow to move and Enter to select)
‣ 2024-05-14
2017-12-31
2018-01-01
2016-12-31
2017-08-31
2017-06-30
```
Use the arrow keys on your keyboard to select a date. To abort,
scroll all the way down and select `--> Abort`. Press <kbd>Enter</kbd> to confirm
your selection:
```plaintext
Select an expiration date
2017-06-30
2018-12-31
2017-05-31
2017-09-30
2017-07-31
‣ --> Abort
```
1. After selecting a date, the tool prompts you to confirm the selection:
```plaintext
WARNING: This will remove the expiration for tokens that expire on 2024-05-14.
This will affect 1565353 tokens. Are you sure? (y/N)
```
If you enter `y`, the tool removes the expiration date for all the
tokens with the selected expiration date.
If you enter `N`, the tool aborts the update task and returns to the first menu.
## Validate custom issuer URL configuration for CI/CD ID Tokens
If you configure a non-public GitLab instance with [OpenID Connect in AWS to retrieve temporary credentials](../../../ci/cloud_services/aws/_index.md#configure-a-non-public-gitlab-instance),
use the `ci:validate_id_token_configuration` Rake task to validate the token configuration:
```shell
bundle exec rake ci:validate_id_token_configuration
```
---
stage: Deploy
group: Environments
title: Web terminals (deprecated)
description: Information about Web terminals.
---
{{< details >}}
- Tier: Free, Premium, Ultimate
- Offering: GitLab Self-Managed
{{< /details >}}
{{< history >}}
- [Disabled on GitLab Self-Managed](https://gitlab.com/gitlab-org/gitlab/-/issues/353410) in GitLab 15.0.
{{< /history >}}
{{< alert type="warning" >}}
This feature was [deprecated](https://gitlab.com/groups/gitlab-org/configure/-/epics/8) in GitLab 14.5.
{{< /alert >}}
{{< alert type="flag" >}}
On GitLab Self-Managed, by default this feature is not available. To make it available, an administrator can [enable the feature flag](../feature_flags/_index.md) named `certificate_based_clusters`.
{{< /alert >}}
- Read more about the non-deprecated [Web Terminals accessible through the Web IDE](../../user/project/web_ide/_index.md).
- Read more about the non-deprecated [Web Terminals accessible from a running CI job](../../ci/interactive_web_terminal/_index.md).
---
With the introduction of the [Kubernetes integration](../../user/infrastructure/clusters/_index.md),
GitLab can store and use credentials for a Kubernetes cluster.
GitLab uses these credentials to provide access to
[web terminals](../../ci/environments/_index.md#web-terminals-deprecated) for environments.
{{< alert type="note" >}}
Only users with at least the [Maintainer role](../../user/permissions.md) for the project can access web terminals.
{{< /alert >}}
## How web terminals work
A detailed overview of the architecture of web terminals and how they work
can be found in [this document](https://gitlab.com/gitlab-org/gitlab-workhorse/blob/master/doc/channel.md).
In brief:
- GitLab relies on the user to provide their own Kubernetes credentials, and to
appropriately label the pods they create when deploying.
- When a user goes to the terminal page for an environment, they are served
a JavaScript application that opens a WebSocket connection back to GitLab.
- The WebSocket is handled in [Workhorse](https://gitlab.com/gitlab-org/gitlab-workhorse),
rather than the Rails application server.
- Workhorse queries Rails for connection details and user permissions. Rails
queries Kubernetes for them in the background using [Sidekiq](../sidekiq/sidekiq_troubleshooting.md).
- Workhorse acts as a proxy server between the user's browser and the Kubernetes
API, passing WebSocket frames between the two.
- Workhorse regularly polls Rails, terminating the WebSocket connection if the
user no longer has permission to access the terminal, or if the connection
details have changed.
## Security
GitLab and [GitLab Runner](https://docs.gitlab.com/runner/) take several
precautions to keep interactive web terminal data encrypted in transit between
them, and everything protected with authorization guards:
- Interactive web terminals are completely disabled unless [`[session_server]`](https://docs.gitlab.com/runner/configuration/advanced-configuration.html#the-session_server-section) is configured.
- Every time the runner starts, it generates an `x509` certificate that is used for a `wss` (WebSocket Secure) connection.
- For every created job, a random URL is generated which is discarded at the end of the job. This URL is used to establish a WebSocket connection. The URL for the session is in the format `(IP|HOST):PORT/session/$SOME_HASH`, where the `IP/HOST` and `PORT` are the configured [`listen_address`](https://docs.gitlab.com/runner/configuration/advanced-configuration.html#the-session_server-section).
- Every session URL that is created has an authorization header that needs to be sent, to establish a `wss` connection.
- The session URL is not exposed to the users in any way. GitLab holds all the state internally and proxies accordingly.
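As an illustration of the session URL shape, the following sketch composes one from a configured `listen_address` and a per-job hash (the values are made up; the real hash is random and discarded with the job):

```shell
# Illustrative only: how a session URL is composed.
listen_address="192.168.0.1:8093"   # from the [session_server] section
session_hash="2d2e33cf1111223344"   # random, regenerated for every job
echo "${listen_address}/session/${session_hash}"
```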
## Enabling and disabling terminal support
{{< alert type="note" >}}
AWS Classic Load Balancers do not support WebSockets.
If you want web terminals to work, use AWS Network Load Balancers.
Read [AWS Elastic Load Balancing Product Comparison](https://aws.amazon.com/elasticloadbalancing/features/#compare)
for more information.
{{< /alert >}}
As web terminals use WebSockets, every HTTP/HTTPS reverse proxy in front of
Workhorse must be configured to pass the `Connection` and `Upgrade` headers
to the next one in the chain. GitLab is configured by default to do so.
However, if you run a [load balancer](../load_balancer.md) in
front of GitLab, you may need to make some changes to your configuration. These
guides document the necessary steps for a selection of popular reverse proxies:
- [Apache](https://httpd.apache.org/docs/2.4/mod/mod_proxy_wstunnel.html)
- [NGINX](https://www.f5.com/company/blog/nginx/websocket-nginx/)
- [HAProxy](https://www.haproxy.com/blog/websockets-load-balancing-with-haproxy)
- [Varnish](https://varnish-cache.org/docs/4.1/users-guide/vcl-example-websockets.html)
Workhorse doesn't let WebSocket requests through to non-WebSocket endpoints, so
it's safe to enable support for these headers globally. If you prefer a
narrower set of rules, you can restrict it to URLs ending with `/terminal.ws`.
This approach may still result in a few false positives.
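If you run your own NGINX in front of Workhorse and prefer the narrower rule, a location block along these lines passes the upgrade headers only for terminal endpoints (a sketch; the upstream name `gitlab-workhorse` is an assumption and must match your configuration):

```nginx
# Pass WebSocket upgrade headers only for URLs ending in /terminal.ws.
location ~ /terminal\.ws$ {
    proxy_pass http://gitlab-workhorse;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}
```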
If you self-compiled your installation, you may need to make some changes to your configuration. Read
[Upgrading Community Edition and Enterprise Edition from source](../../update/upgrading_from_source.md#new-configuration-for-nginx-or-apache)
for more details.
To disable web terminal support in GitLab, stop passing
the `Connection` and `Upgrade` hop-by-hop headers in the first HTTP reverse
proxy in the chain. For most users, this is the NGINX server bundled with
Linux package installations. In this case, you need to:
- Find the `nginx['proxy_set_headers']` section of your `gitlab.rb` file
- Ensure the whole block is uncommented, and then comment out or remove the
`Connection` and `Upgrade` lines.
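For Linux package installations, the relevant `gitlab.rb` block looks roughly like the following (the values shown follow the defaults shipped in `gitlab.rb`; check them against your own file):

```ruby
# Uncomment the whole block, then comment out or remove the two
# WebSocket-related headers to disable web terminal support.
nginx['proxy_set_headers'] = {
  "Host" => "$http_host_with_default",
  "X-Real-IP" => "$remote_addr",
  "X-Forwarded-For" => "$proxy_add_x_forwarded_for",
  "X-Forwarded-Proto" => "https",
  "X-Forwarded-Ssl" => "on",
  # "Upgrade" => "$http_upgrade",
  # "Connection" => "$connection_upgrade"
}
```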
For your own load balancer, just reverse the configuration changes recommended
by the previously listed guides.
When these headers are not passed through, Workhorse returns a
`400 Bad Request` response to users attempting to use a web terminal. In turn,
they receive a `Connection failed` message.
## Limiting WebSocket connection time
By default, terminal sessions do not expire. To limit the terminal session
lifetime in your GitLab instance:
1. On the left sidebar, at the bottom, select **Admin**.
1. Select **Settings > Web terminal**.
1. Set a **Max session time**.
|
---
stage: Deploy
group: Environments
info: To determine the technical writer assigned to the Stage/Group associated with
this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Web terminals (deprecated)
description: Information about Web terminals.
breadcrumbs:
- doc
- administration
- integration
---
{{< details >}}
- Tier: Free, Premium, Ultimate
- Offering: GitLab Self-Managed
{{< /details >}}
{{< history >}}
- [Disabled on GitLab Self-Managed](https://gitlab.com/gitlab-org/gitlab/-/issues/353410) in GitLab 15.0.
{{< /history >}}
{{< alert type="warning" >}}
This feature was [deprecated](https://gitlab.com/groups/gitlab-org/configure/-/epics/8) in GitLab 14.5.
{{< /alert >}}
{{< alert type="flag" >}}
On GitLab Self-Managed, by default this feature is not available. To make it available, an administrator can [enable the feature flag](../feature_flags/_index.md) named `certificate_based_clusters`.
{{< /alert >}}
- Read more about the non-deprecated [Web Terminals accessible through the Web IDE](../../user/project/web_ide/_index.md).
- Read more about the non-deprecated [Web Terminals accessible from a running CI job](../../ci/interactive_web_terminal/_index.md).
---
With the introduction of the [Kubernetes integration](../../user/infrastructure/clusters/_index.md),
GitLab can store and use credentials for a Kubernetes cluster.
GitLab uses these credentials to provide access to
[web terminals](../../ci/environments/_index.md#web-terminals-deprecated) for environments.
{{< alert type="note" >}}
Only users with at least the [Maintainer role](../../user/permissions.md) for the project access web terminals.
{{< /alert >}}
## How web terminals work
A detailed overview of the architecture of web terminals and how they work
can be found in [this document](https://gitlab.com/gitlab-org/gitlab-workhorse/blob/master/doc/channel.md).
In brief:
- GitLab relies on the user to provide their own Kubernetes credentials, and to
appropriately label the pods they create when deploying.
- When a user goes to the terminal page for an environment, they are served
a JavaScript application that opens a WebSocket connection back to GitLab.
- The WebSocket is handled in [Workhorse](https://gitlab.com/gitlab-org/gitlab-workhorse),
rather than the Rails application server.
- Workhorse queries Rails for connection details and user permissions. Rails
queries Kubernetes for them in the background using [Sidekiq](../sidekiq/sidekiq_troubleshooting.md).
- Workhorse acts as a proxy server between the user's browser and the Kubernetes
API, passing WebSocket frames between the two.
- Workhorse regularly polls Rails, terminating the WebSocket connection if the
user no longer has permission to access the terminal, or if the connection
details have changed.
## Security
GitLab and [GitLab Runner](https://docs.gitlab.com/runner/) take some
precautions to keep interactive web terminal data encrypted between them, and
everything protected with authorization guards. This is described in more
detail below.
- Interactive web terminals are completely disabled unless [`[session_server]`](https://docs.gitlab.com/runner/configuration/advanced-configuration.html#the-session_server-section) is configured.
- Every time the runner starts, it generates an `x509` certificate that is used for a `wss` (Web Socket Secure) connection.
- For every created job, a random URL is generated which is discarded at the end of the job. This URL is used to establish a web socket connection. The URL for the session is in the format `(IP|HOST):PORT/session/$SOME_HASH`, where the `IP/HOST` and `PORT` are the configured [`listen_address`](https://docs.gitlab.com/runner/configuration/advanced-configuration.html#the-session_server-section).
- Every session URL that is created has an authorization header that needs to be sent, to establish a `wss` connection.
- The session URL is not exposed to the users in any way. GitLab holds all the state internally and proxies accordingly.
## Enabling and disabling terminal support
{{< alert type="note" >}}
AWS Classic Load Balancers do not support web sockets.
If you want web terminals to work, use AWS Network Load Balancers.
Read [AWS Elastic Load Balancing Product Comparison](https://aws.amazon.com/elasticloadbalancing/features/#compare)
for more information.
{{< /alert >}}
As web terminals use WebSockets, every HTTP/HTTPS reverse proxy in front of
Workhorse must be configured to pass the `Connection` and `Upgrade` headers
to the next one in the chain. GitLab is configured by default to do so.
However, if you run a [load balancer](../load_balancer.md) in
front of GitLab, you may need to make some changes to your configuration. These
guides document the necessary steps for a selection of popular reverse proxies:
- [Apache](https://httpd.apache.org/docs/2.4/mod/mod_proxy_wstunnel.html)
- [NGINX](https://www.f5.com/company/blog/nginx/websocket-nginx/)
- [HAProxy](https://www.haproxy.com/blog/websockets-load-balancing-with-haproxy)
- [Varnish](https://varnish-cache.org/docs/4.1/users-guide/vcl-example-websockets.html)
Workhorse doesn't let WebSocket requests through to non-WebSocket endpoints, so
it's safe to enable support for these headers globally. If you prefer a
narrower set of rules, you can restrict it to URLs ending with `/terminal.ws`.
This approach may still result in a few false positives.
If you self-compiled your installation, you may need to make some changes to your configuration. Read
[Upgrading Community Edition and Enterprise Edition from source](../../update/upgrading_from_source.md#new-configuration-for-nginx-or-apache)
for more details.
To disable web terminal support in GitLab, stop passing
the `Connection` and `Upgrade` hop-by-hop headers in the first HTTP reverse
proxy in the chain. For most users, this is the NGINX server bundled with
Linux package installations. In this case, you need to:
- Find the `nginx['proxy_set_headers']` section of your `gitlab.rb` file
- Ensure the whole block is uncommented, and then comment out or remove the
`Connection` and `Upgrade` lines.
If you run your own load balancer, reverse the configuration changes recommended
by the guides listed previously.
When these headers are not passed through, Workhorse returns a
`400 Bad Request` response to users attempting to use a web terminal. In turn,
they receive a `Connection failed` message.
## Limiting WebSocket connection time
By default, terminal sessions do not expire. To limit the terminal session
lifetime in your GitLab instance:
1. On the left sidebar, at the bottom, select **Admin**.
1. Select **Settings > Web terminal**.
1. Set a **Max session time**.
---
stage: Create
group: Source Code
info: To determine the technical writer assigned to the Stage/Group associated with
  this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
description: Configure a Diagrams.net integration for GitLab.
title: Diagrams.net
---
{{< details >}}
- Tier: Free, Premium, Ultimate
- Offering: GitLab Self-Managed, GitLab Dedicated
{{< /details >}}
{{< history >}}
- Offline environment support [introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/116281) in GitLab 16.1.
{{< /history >}}
Use the [diagrams.net](https://www.drawio.com/) integration to create and embed SVG diagrams in wikis.
The diagram editor is available in both the plain text editor and the rich text editor.
GitLab.com enables this integration for all SaaS users. No additional configuration is required.
For GitLab Self-Managed and GitLab Dedicated, integrate with either the free [diagrams.net](https://www.drawio.com/)
website, or host your own diagrams.net site in offline environments.
To set up the integration:
1. Choose to integrate with the free diagrams.net website or
[configure your diagrams.net server](#configure-your-diagramsnet-server).
1. [Enable the integration](#enable-diagramsnet-integration).
After completing the integration, the diagrams.net editor opens with the URL you provided.
## Configure your diagrams.net server
You can set up your own diagrams.net server to generate the diagrams.
It's a required step for users on an offline installation of GitLab Self-Managed.
For example, to run a diagrams.net container in Docker, run the following command:
```shell
docker run -it --rm --name="draw" -p 8080:8080 -p 8443:8443 jgraph/drawio
```
Make note of the hostname of the server running the container, to be used as the diagrams.net URL
when you enable the integration.
For more information, see [Run your own diagrams.net server with Docker](https://www.drawio.com/blog/diagrams-docker-app).
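If you also run GitLab itself in Docker, a Docker Compose sketch can manage both containers together. The service names, ports, and image tags here are illustrative; adjust them to your setup:

```yaml
version: "3"
services:
  gitlab:
    image: 'gitlab/gitlab-ee:latest'
    ports:
      - "80:80"
      - "443:443"
  drawio:
    image: 'jgraph/drawio'
    container_name: drawio
    ports:
      - "8080:8080"
      - "8443:8443"
```

Because the diagrams.net editor is loaded by users' browsers, the URL you configure must be reachable from those browsers, not only from the GitLab server, for example `http://your-host:8080`.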
## Enable Diagrams.net integration
1. Sign in to GitLab as an [Administrator](../../user/permissions.md) user.
1. On the left sidebar, at the bottom, select **Admin**.
1. Select **Settings** > **General**.
1. Expand **Diagrams.net**.
1. Select the **Enable Diagrams.net** checkbox.
1. Enter the Diagrams.net URL. To connect to:
- The free public instance: enter `https://embed.diagrams.net`.
- A locally hosted diagrams.net instance: enter the URL you [configured earlier](#configure-your-diagramsnet-server).
1. Select **Save changes**.
---
stage: Create
group: Source Code
info: To determine the technical writer assigned to the Stage/Group associated with
  this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
description: Configure PlantUML integration with GitLab Self-Managed.
title: PlantUML
---
{{< details >}}
- Tier: Free, Premium, Ultimate
- Offering: GitLab Self-Managed, GitLab Dedicated
{{< /details >}}
Use the [PlantUML](https://plantuml.com) integration to create diagrams in snippets, wikis, and repositories.
GitLab.com integrates with PlantUML for all users and requires no additional configuration.
To set up the integration on your GitLab Self-Managed instance, you must [configure your PlantUML server](#configure-your-plantuml-server).
After completing the integration, PlantUML converts `plantuml`
blocks to an HTML image tag, with the source pointing to the PlantUML instance. The PlantUML
diagram delimiters `@startuml`/`@enduml` aren't required because they are replaced
by the `plantuml` block:
- Markdown files with the extension `.md`:
````markdown
```plantuml
Bob -> Alice : hello
Alice -> Bob : hi
```
````
For additional acceptable extensions, review the
[`languages.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/vendor/languages.yml#L3174) file.
- AsciiDoc files with the extension `.asciidoc`, `.adoc`, or `.asc`:
```plaintext
[plantuml, format="png", id="myDiagram", width="200px"]
----
Bob->Alice : hello
Alice -> Bob : hi
----
```
- reStructuredText:
```plaintext
.. plantuml::
:caption: Caption with **bold** and *italic*
Bob -> Alice: hello
Alice -> Bob: hi
```
Although you can use the `uml::` directive for compatibility with
[`sphinxcontrib-plantuml`](https://pypi.org/project/sphinxcontrib-plantuml/),
GitLab supports only the `caption` option.
If the PlantUML server is configured correctly, these examples should render a
diagram instead of the code block:
```plantuml
Bob -> Alice : hello
Alice -> Bob : hi
```
Inside blocks, add any of the diagrams PlantUML supports, such as:
- [Activity](https://plantuml.com/activity-diagram-legacy)
- [Class](https://plantuml.com/class-diagram)
- [Component](https://plantuml.com/component-diagram)
- [Object](https://plantuml.com/object-diagram)
- [Sequence](https://plantuml.com/sequence-diagram)
- [State](https://plantuml.com/state-diagram)
- [Use Case](https://plantuml.com/use-case-diagram)
Add parameters to block definitions:
- `id`: A CSS ID added to the diagram HTML tag.
- `width`: Width attribute added to the image tag.
- `height`: Height attribute added to the image tag.
Markdown does not support any parameters, and always uses PNG format.
## Include diagram files
To include or embed a PlantUML diagram from separate files in the repository, use
the `include` directive. Use this to maintain complex diagrams in dedicated files, or to
reuse diagrams. For example:
- Markdown:
````markdown
```plantuml
::include{file=diagram.puml}
```
````
- AsciiDoc:
```plaintext
[plantuml, format="png", id="myDiagram", width="200px"]
----
include::diagram.puml[]
----
```
## Configure your PlantUML server
Before you can enable PlantUML in GitLab, set up your own PlantUML
server to generate the diagrams:
- [Docker](#docker) (recommended)
- [Debian/Ubuntu](#debianubuntu)
### Docker
To run a PlantUML container in Docker, run this command:
```shell
docker run -d --name plantuml -p 8005:8080 plantuml/plantuml-server:tomcat
```
The **PlantUML URL** is the hostname of the server running the container.
When running GitLab in Docker, it must have access to the PlantUML container.
To achieve that, use [Docker Compose](https://docs.docker.com/compose/).
In this basic `docker-compose.yml` file, PlantUML is accessible to GitLab at the URL
`http://plantuml:8005/`:
```yaml
version: "3"
services:
gitlab:
image: 'gitlab/gitlab-ee:17.9.1-ee.0'
environment:
GITLAB_OMNIBUS_CONFIG: |
nginx['custom_gitlab_server_config'] = "location /-/plantuml/ { \n rewrite ^/-/plantuml/(.*) /$1 break;\n proxy_cache off; \n proxy_pass http://plantuml:8005/; \n}\n"
plantuml:
image: 'plantuml/plantuml-server:tomcat'
container_name: plantuml
ports:
- "8005:8080"
```
Next, you can:
1. [Configure local PlantUML access](#configure-local-plantuml-access)
1. [Verify that the PlantUML installation](#verify-the-plantuml-installation) succeeded
### Debian/Ubuntu
You can install and configure a PlantUML server in Debian/Ubuntu distributions
using Tomcat or Jetty. The instructions below are for Tomcat.
Prerequisites:
- JRE/JDK version 11 or later.
- (Recommended) Jetty version 11 or later.
- (Recommended) Tomcat version 10 or later.
#### Installation
PlantUML recommends installing Tomcat 10.1 or later. The scope of this page only
includes setting up a basic Tomcat server. For more production-ready configurations,
see the [Tomcat Documentation](https://tomcat.apache.org/tomcat-10.1-doc/index.html).
1. Install JDK/JRE 11:
```shell
sudo apt update
sudo apt install default-jre-headless graphviz git
```
1. Add a user for Tomcat:
```shell
sudo useradd -m -d /opt/tomcat -U -s /bin/false tomcat
```
1. Install and configure Tomcat 10.1:
```shell
wget https://dlcdn.apache.org/tomcat/tomcat-10/v10.1.33/bin/apache-tomcat-10.1.33.tar.gz -P /tmp
sudo tar xzvf /tmp/apache-tomcat-10*tar.gz -C /opt/tomcat --strip-components=1
sudo chown -R tomcat:tomcat /opt/tomcat/
sudo chmod -R u+x /opt/tomcat/bin
```
1. Create a systemd service. Edit the `/etc/systemd/system/tomcat.service` file and add:
```shell
[Unit]
Description=Tomcat
After=network.target
[Service]
Type=forking
User=tomcat
Group=tomcat
Environment="JAVA_HOME=/usr/lib/jvm/java-1.11.0-openjdk-amd64"
Environment="JAVA_OPTS=-Djava.security.egd=file:///dev/urandom"
Environment="CATALINA_BASE=/opt/tomcat"
Environment="CATALINA_HOME=/opt/tomcat"
Environment="CATALINA_PID=/opt/tomcat/temp/tomcat.pid"
Environment="CATALINA_OPTS=-Xms512M -Xmx1024M -server -XX:+UseParallelGC"
ExecStart=/opt/tomcat/bin/startup.sh
ExecStop=/opt/tomcat/bin/shutdown.sh
RestartSec=10
Restart=always
[Install]
WantedBy=multi-user.target
```
`JAVA_HOME` should be the same path as seen in `sudo update-java-alternatives -l`.
1. To configure ports, edit your `/opt/tomcat/conf/server.xml` and choose your
ports. Recommended:
- Change the Tomcat shutdown port from `8005` to `8006`
- Use port `8005` for the Tomcat HTTP endpoint. The default port `8080` should be avoided,
because [Puma](../operations/puma.md) listens on port `8080` for metrics.
```diff
- <Server port="8005" shutdown="SHUTDOWN">
+ <Server port="8006" shutdown="SHUTDOWN">
- <Connector port="8080" protocol="HTTP/1.1"
+ <Connector port="8005" protocol="HTTP/1.1"
```
1. Reload and start Tomcat:
```shell
sudo systemctl daemon-reload
sudo systemctl start tomcat
sudo systemctl status tomcat
sudo systemctl enable tomcat
```
The Java process should be listening on these ports:
```shell
root@gitlab-omnibus:/plantuml-server# ss -plnt | grep java
LISTEN 0 1 [::ffff:127.0.0.1]:8006 *:* users:(("java",pid=27338,fd=52))
LISTEN 0 100 *:8005 *:* users:(("java",pid=27338,fd=43))
```
1. Install PlantUML and copy the `.war` file:
Use the [latest release](https://github.com/plantuml/plantuml-server/releases) of `plantuml-jsp`
(for example: `plantuml-jsp-v1.2024.8.war`).
For context, see [issue 265](https://github.com/plantuml/plantuml-server/issues/265).
```shell
wget -P /tmp https://github.com/plantuml/plantuml-server/releases/download/v1.2024.8/plantuml-jsp-v1.2024.8.war
sudo cp /tmp/plantuml-jsp-v1.2024.8.war /opt/tomcat/webapps/plantuml.war
sudo chown tomcat:tomcat /opt/tomcat/webapps/plantuml.war
sudo systemctl restart tomcat
```
The Tomcat service should restart. After the restart is complete, the
PlantUML integration is ready and listening for requests on port `8005`:
`http://localhost:8005/plantuml`.
To change the Tomcat defaults, edit the `/opt/tomcat/conf/server.xml` file.
{{< alert type="note" >}}
The default URL is different when using this approach. The Docker-based image
makes the service available at the root URL, with no relative path. Adjust
the configuration below accordingly.
{{< /alert >}}
Next, you can:
1. [Configure local PlantUML access](#configure-local-plantuml-access). Ensure the `proxy_pass` port
configured in the link matches the Connector port in `server.xml`.
1. [Verify that the PlantUML installation](#verify-the-plantuml-installation) succeeded.
### Configure local PlantUML access
The PlantUML server runs locally on your server, so it can't be accessed
externally by default. Your server must catch external PlantUML
calls to `https://gitlab.example.com/-/plantuml/` and redirect them to the
local PlantUML server. Depending on your setup, the URL is either of the
following:
- `http://plantuml:8080/`
- `http://localhost:8080/plantuml/`
- `http://plantuml:8005/`
- `http://localhost:8005/plantuml/`
If you're running [GitLab with TLS](https://docs.gitlab.com/omnibus/settings/ssl/)
you must configure this redirection, because PlantUML uses the insecure HTTP protocol.
Newer browsers, such as [Google Chrome 86+](https://www.chromestatus.com/feature/4926989725073408),
don't load insecure HTTP resources on pages served over HTTPS.
#### Use bundled GitLab NGINX
If you can modify `/etc/gitlab/gitlab.rb`, configure the bundled NGINX to handle the redirection:
1. Add the following line in `/etc/gitlab/gitlab.rb`, depending on your setup method:
```ruby
# Docker install
nginx['custom_gitlab_server_config'] = "location /-/plantuml/ { \n rewrite ^/-/plantuml/(.*) /$1 break;\n proxy_cache off; \n proxy_pass http://plantuml:8005/; \n}\n"
# Debian/Ubuntu install
nginx['custom_gitlab_server_config'] = "location /-/plantuml/ { \n rewrite ^/-/plantuml/(.*) /$1 break;\n proxy_cache off; \n proxy_pass http://localhost:8005/plantuml; \n}\n"
```
1. To activate the changes, run the following command:
```shell
sudo gitlab-ctl reconfigure
```
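For readability, the escaped string in the Docker variant of the first step expands to this NGINX configuration:

```nginx
location /-/plantuml/ {
  rewrite ^/-/plantuml/(.*) /$1 break;
  proxy_cache off;
  proxy_pass http://plantuml:8005/;
}
```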
#### Use HTTPS PlantUML server
If you cannot modify the `gitlab.rb` file, configure your PlantUML server to use
HTTPS directly. This method is recommended for GitLab Dedicated instances.
This setup uses NGINX to handle SSL termination and proxy requests to the PlantUML
container. You can also use cloud-based load balancers like AWS Application Load Balancer (ALB) for
SSL termination.
1. Create an `nginx.conf` file:
```nginx
events {
worker_connections 1024;
}
http {
server {
listen 443 ssl;
server_name _;
ssl_certificate /etc/nginx/ssl/plantuml.crt;
ssl_certificate_key /etc/nginx/ssl/plantuml.key;
location / {
proxy_pass http://plantuml:8080;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
}
}
```
1. Add the `plantuml.crt` and `plantuml.key` files to an `ssl` directory.
1. Configure the `docker-compose.yml` file:
```yaml
version: '3.8'
services:
plantuml:
image: plantuml/plantuml-server:tomcat
container_name: plantuml
networks:
- plantuml-net
plantuml-ssl:
image: nginx
container_name: plantuml-ssl
ports:
- "8443:443"
volumes:
- ./nginx.conf:/etc/nginx/nginx.conf:ro
- ./ssl:/etc/nginx/ssl:ro
depends_on:
- plantuml
networks:
- plantuml-net
networks:
plantuml-net:
driver: bridge
```
1. Start your PlantUML server with `docker-compose up`.
1. [Enable PlantUML integration](#enable-plantuml-integration) with the URL
`https://your-server:8443`.
### Verify the PlantUML installation
To verify the installation was successful:
1. Test the PlantUML server directly:
```shell
# Docker install
curl --location --verbose "http://localhost:8005/svg/SyfFKj2rKt3CoKnELR1Io4ZDoSa70000"
# Debian/Ubuntu install
curl --location --verbose "http://localhost:8005/plantuml/svg/SyfFKj2rKt3CoKnELR1Io4ZDoSa70000"
```
You should receive SVG output containing the text `hello`.
1. Test that GitLab can access PlantUML through NGINX by visiting:
```plaintext
http://gitlab.example.com/-/plantuml/svg/SyfFKj2rKt3CoKnELR1Io4ZDoSa70000
```
Replace `gitlab.example.com` with your GitLab instance URL. You should see a rendered
PlantUML diagram displaying `hello`. The encoded string in these URLs represents this diagram source:
```plaintext
Bob -> Alice : hello
```
### Configure PlantUML security
PlantUML has features that allow fetching network resources. If you self-host the
PlantUML server, put network controls in place to isolate it.
For example, make use of PlantUML's [security profiles](https://plantuml.com/security)
to block diagrams like this one, which attempts to fetch a network resource with `!include`:
```plaintext
@startuml
start
' ...
!include http://localhost/
stop;
@enduml
```
#### Secure PlantUML SVG diagram output
When generating PlantUML diagrams in SVG format, configure your server for enhanced security.
Disable the SVG output route in your NGINX configuration to prevent potential security issues.
To disable the SVG output route, add this configuration to your NGINX server hosting
the PlantUML service:
```nginx
location ~ ^/-/plantuml/svg/ {
return 403;
}
```
This configuration prevents potentially malicious diagram code from executing in browsers.
## Enable PlantUML integration
After configuring your local PlantUML server, you're ready to enable the PlantUML integration:
1. Sign in to GitLab as an [Administrator](../../user/permissions.md) user.
1. On the left sidebar, at the bottom, select **Admin**.
1. On the left sidebar, go to **Settings > General** and expand the **PlantUML** section.
1. Select the **Enable PlantUML** checkbox.
1. Set the PlantUML instance as `https://gitlab.example.com/-/plantuml/`,
and select **Save changes**.
Depending on your PlantUML and GitLab version numbers, you may also need to take
these steps:
- For PlantUML servers running v1.2020.9 and later, such as [plantuml.com](https://plantuml.com),
you must set the `PLANTUML_ENCODING` environment variable to enable the `deflate`
compression. In Linux package installations, you can set this value in `/etc/gitlab/gitlab.rb` with
this command:
```ruby
gitlab_rails['env'] = { 'PLANTUML_ENCODING' => 'deflate' }
```
In GitLab Helm chart, you can set it by adding a variable to the
[global.extraEnv](https://gitlab.com/gitlab-org/charts/gitlab/blob/master/doc/charts/globals.md#extraenv)
section, like this:
```yaml
global:
extraEnv:
PLANTUML_ENCODING: deflate
```
- `deflate` is the default encoding type for PlantUML. To use a different encoding type, PlantUML integration
[requires a header prefix in the URL](https://plantuml.com/text-encoding)
to distinguish different encoding types.
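The `deflate` encoding used in diagram URLs is a raw DEFLATE stream followed by a base64 variant that uses the alphabet `0-9A-Za-z-_`. This sketch, which assumes Python 3 is available, encodes and decodes a diagram key the same way:

```shell
python3 - <<'PYEOF'
import base64, string, zlib

STD = string.ascii_uppercase + string.ascii_lowercase + string.digits + "+/"
PUML = string.digits + string.ascii_uppercase + string.ascii_lowercase + "-_"

def encode(text):
    # Raw DEFLATE stream: strip the 2-byte zlib header and 4-byte checksum.
    raw = zlib.compress(text.encode("utf-8"), 9)[2:-4]
    b64 = base64.b64encode(raw).decode().rstrip("=")
    # Remap the standard base64 alphabet onto PlantUML's alphabet.
    return b64.translate(str.maketrans(STD, PUML))

def decode(key):
    b64 = key.translate(str.maketrans(PUML, STD))
    b64 += "=" * (-len(b64) % 4)
    # Negative wbits tells zlib to expect a raw DEFLATE stream.
    return zlib.decompress(base64.b64decode(b64), -15).decode("utf-8")

key = encode("Bob -> Alice : hello")
print(key)
assert decode(key) == "Bob -> Alice : hello"    # round-trip check
PYEOF
```

The printed key can be appended to a `.../svg/` URL as in the verification examples above, although the exact characters may differ from the sample key because DEFLATE output depends on the compressor.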
## Troubleshooting
### Rendered diagram URL remains the same after update
Rendered diagrams are cached. To see the updates, try these steps:
- If the diagram is in a Markdown file, make a small change to the Markdown file, and commit it. This triggers a re-render.
- [Invalidate the Markdown cache](../invalidate_markdown_cache.md#invalidate-the-cache) to force any cached Markdown
in the database or Redis to be cleared.
If you're still not seeing the updated URL, check the following:
- Ensure the PlantUML server is accessible from your GitLab instance.
- Verify that the PlantUML integration is enabled in your GitLab settings.
- Check the GitLab logs for errors related to PlantUML rendering.
- [Clear your GitLab Redis cache](../raketasks/maintenance.md#clear-redis-cache).
### `404` error when opening the PlantUML page in the browser
You might get a `404` error when visiting `https://gitlab.example.com/-/plantuml/`, when the PlantUML
server is set up [in Debian or Ubuntu](#debianubuntu).
This can happen even when the integration is working.
It does not necessarily indicate a problem with your PlantUML server or configuration.
To confirm if PlantUML is working correctly, you can [verify the PlantUML installation](#verify-the-plantuml-installation).
|
---
stage: Create
group: Source Code
info: To determine the technical writer assigned to the Stage/Group associated with
this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
gitlab_dedicated: false
description: Configure PlantUML integration with GitLab Self-Managed.
title: PlantUML
breadcrumbs:
- doc
- administration
- integration
---
{{< details >}}
- Tier: Free, Premium, Ultimate
- Offering: GitLab Self-Managed, GitLab Dedicated
{{< /details >}}
Use the [PlantUML](https://plantuml.com) integration, to create diagrams in snippets, wikis, and repositories.
GitLab.com integrates with PlantUML for all users, and requires no additional configuration.
To set up the integration on your GitLab Self-Managed instance, you must [configure your PlantUML server](#configure-your-plantuml-server).
After completing the integration, PlantUML converts `plantuml`
blocks to an HTML image tag, with the source pointing to the PlantUML instance. The PlantUML
diagram delimiters `@startuml`/`@enduml` aren't required because they are replaced
by the `plantuml` block:
- Markdown files with the extension `.md`:
````markdown
```plantuml
Bob -> Alice : hello
Alice -> Bob : hi
```
````
For additional acceptable extensions, review the
[`languages.yaml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/vendor/languages.yml#L3174) file.
- AsciiDoc files with the extension `.asciidoc`, `.adoc`, or `.asc`:
```plaintext
[plantuml, format="png", id="myDiagram", width="200px"]
----
Bob->Alice : hello
Alice -> Bob : hi
----
```
- reStructuredText:
```plaintext
.. plantuml::
:caption: Caption with **bold** and *italic*
Bob -> Alice: hello
Alice -> Bob: hi
```
Although you can use the `uml::` directive for compatibility with
[`sphinxcontrib-plantuml`](https://pypi.org/project/sphinxcontrib-plantuml/),
GitLab supports only the `caption` option.
If the PlantUML server is configured correctly, these examples should render a
diagram instead of the code block:
```plantuml
Bob -> Alice : hello
Alice -> Bob : hi
```
Inside blocks, add any of the diagrams PlantUML supports, such as:
- [Activity](https://plantuml.com/activity-diagram-legacy)
- [Class](https://plantuml.com/class-diagram)
- [Component](https://plantuml.com/component-diagram)
- [Object](https://plantuml.com/object-diagram)
- [Sequence](https://plantuml.com/sequence-diagram)
- [State](https://plantuml.com/state-diagram)
- [Use Case](https://plantuml.com/use-case-diagram)
Add parameters to block definitions:
- `id`: A CSS ID added to the diagram HTML tag.
- `width`: Width attribute added to the image tag.
- `height`: Height attribute added to the image tag.
Markdown does not support any parameters, and always uses PNG format.
## Include diagram files
To include or embed a PlantUML diagram from separate files in the repository, use
the `include` directive. Use this to maintain complex diagrams in dedicated files, or to
reuse diagrams. For example:
- Markdown:
````markdown
```plantuml
::include{file=diagram.puml}
```
````
- AsciiDoc:
```plaintext
[plantuml, format="png", id="myDiagram", width="200px"]
----
include::diagram.puml[]
----
```
## Configure your PlantUML server
Before you can enable PlantUML in GitLab, set up your own PlantUML
server to generate the diagrams:
- [Docker](#docker) (recommended)
- [Debian/Ubuntu](#debianubuntu)
### Docker
To run a PlantUML container in Docker, run this command:
```shell
docker run -d --name plantuml -p 8005:8080 plantuml/plantuml-server:tomcat
```
The **PlantUML URL** is the hostname of the server running the container.
When running GitLab in Docker, it must have access to the PlantUML container.
To achieve that, use [Docker Compose](https://docs.docker.com/compose/).
In this basic `docker-compose.yml` file, PlantUML is accessible to GitLab at the URL
`http://plantuml:8005/`:
```yaml
version: "3"
services:
gitlab:
image: 'gitlab/gitlab-ee:17.9.1-ee.0'
environment:
GITLAB_OMNIBUS_CONFIG: |
nginx['custom_gitlab_server_config'] = "location /-/plantuml/ { \n rewrite ^/-/plantuml/(.*) /$1 break;\n proxy_cache off; \n proxy_pass http://plantuml:8005/; \n}\n"
plantuml:
image: 'plantuml/plantuml-server:tomcat'
container_name: plantuml
ports:
- "8005:8080"
```
Next, you can:
1. [Configure local PlantUML access](#configure-local-plantuml-access)
1. [Verify that the PlantUML installation](#verify-the-plantuml-installation) succeeded
### Debian/Ubuntu
You can install and configure a PlantUML server in Debian/Ubuntu distributions
using Tomcat or Jetty. The instructions below are for Tomcat.
Prerequisites:
- JRE/JDK version 11 or later.
- (Recommended) Jetty version 11 or later.
- (Recommended) Tomcat version 10 or later.
#### Installation
PlantUML recommends to install Tomcat 10.1 or later. The scope of this page only
includes setting up a basic Tomcat server. For more production-ready configurations,
see the [Tomcat Documentation](https://tomcat.apache.org/tomcat-10.1-doc/index.html).
1. Install JDK/JRE 11:
```shell
sudo apt update
sudo apt install default-jre-headless graphviz git
```
1. Add a user for Tomcat:
```shell
sudo useradd -m -d /opt/tomcat -U -s /bin/false tomcat
```
1. Install and configure Tomcat 10.1:
```shell
wget https://dlcdn.apache.org/tomcat/tomcat-10/v10.1.33/bin/apache-tomcat-10.1.33.tar.gz -P /tmp
sudo tar xzvf /tmp/apache-tomcat-10*tar.gz -C /opt/tomcat --strip-components=1
sudo chown -R tomcat:tomcat /opt/tomcat/
sudo chmod -R u+x /opt/tomcat/bin
```
1. Create a systemd service. Edit the `/etc/systemd/system/tomcat.service` file and add:
```shell
[Unit]
Description=Tomcat
After=network.target
[Service]
Type=forking
User=tomcat
Group=tomcat
Environment="JAVA_HOME=/usr/lib/jvm/java-1.11.0-openjdk-amd64"
Environment="JAVA_OPTS=-Djava.security.egd=file:///dev/urandom"
Environment="CATALINA_BASE=/opt/tomcat"
Environment="CATALINA_HOME=/opt/tomcat"
Environment="CATALINA_PID=/opt/tomcat/temp/tomcat.pid"
Environment="CATALINA_OPTS=-Xms512M -Xmx1024M -server -XX:+UseParallelGC"
ExecStart=/opt/tomcat/bin/startup.sh
ExecStop=/opt/tomcat/bin/shutdown.sh
RestartSec=10
Restart=always
[Install]
WantedBy=multi-user.target
```
`JAVA_HOME` should be the same path as seen in `sudo update-java-alternatives -l`.
1. To configure ports, edit your `/opt/tomcat/conf/server.xml` and choose your
ports. Recommended:
- Change the Tomcat shutdown port from `8005` to `8006`
- Use port `8005` for the Tomcat HTTP endpoint. The default port `8080` should be avoided,
because [Puma](../operations/puma.md) listens on port `8080` for metrics.
```diff
- <Server port="8006" shutdown="SHUTDOWN">
+ <Server port="8005" shutdown="SHUTDOWN">
- <Connector port="8005" protocol="HTTP/1.1"
+ <Connector port="8080" protocol="HTTP/1.1"
```
1. Reload and start Tomcat:
```shell
sudo systemctl daemon-reload
sudo systemctl start tomcat
sudo systemctl status tomcat
sudo systemctl enable tomcat
```
The Java process should be listening on these ports:
```shell
root@gitlab-omnibus:/plantuml-server# ❯ ss -plnt | grep java
LISTEN 0 1 [::ffff:127.0.0.1]:8006 *:* users:(("java",pid=27338,fd=52))
LISTEN 0 100 *:8005 *:* users:(("java",pid=27338,fd=43))
```
1. Install PlantUML and copy the `.war` file:
Use the [latest release](https://github.com/plantuml/plantuml-server/releases) of `plantuml-jsp`
(for example: `plantuml-jsp-v1.2024.8.war`).
For context, see [issue 265](https://github.com/plantuml/plantuml-server/issues/265).
```shell
wget -P /tmp https://github.com/plantuml/plantuml-server/releases/download/v1.2024.8/plantuml-jsp-v1.2024.8.war
sudo cp /tmp/plantuml-jsp-v1.2024.8.war /opt/tomcat/webapps/plantuml.war
sudo chown tomcat:tomcat /opt/tomcat/webapps/plantuml.war
sudo systemctl restart tomcat
```
The Tomcat service should restart. After the restart is complete, the
PlantUML integration is ready and listening for requests on port `8005`:
`http://localhost:8005/plantuml`.
To change the Tomcat defaults, edit the `/opt/tomcat/conf/server.xml` file.
{{< alert type="note" >}}
The default URL is different when using this approach. The Docker-based image
makes the service available at the root URL, with no relative path. Adjust
the configuration below accordingly.
{{< /alert >}}
Next, you can:
1. [Configure local PlantUML access](#configure-local-plantuml-access). Ensure the `proxy_pass` port
configured in the link matches the Connector port in `server.xml`.
1. [Verify that the PlantUML installation](#verify-the-plantuml-installation) succeeded.
### Configure local PlantUML access
The PlantUML server runs locally on your server, so it can't be accessed
externally by default. Your server must catch external PlantUML
calls to `https://gitlab.example.com/-/plantuml/` and redirect them to the
local PlantUML server. Depending on your setup, the URL is either of the
following:
- `http://plantuml:8080/`
- `http://localhost:8080/plantuml/`
- `http://plantuml:8005/`
- `http://localhost:8005/plantuml/`
If you're running [GitLab with TLS](https://docs.gitlab.com/omnibus/settings/ssl/)
you must configure this redirection, because PlantUML uses the insecure HTTP protocol.
Newer browsers, such as [Google Chrome 86+](https://www.chromestatus.com/feature/4926989725073408),
don't load insecure HTTP resources on pages served over HTTPS.
#### Use bundled GitLab NGINX
If you can modify `/etc/gitlab/gitlab.rb`, configure the bundled NGINX to handle the redirection:
1. Add the following line in `/etc/gitlab/gitlab.rb`, depending on your setup method:
```ruby
# Docker install
nginx['custom_gitlab_server_config'] = "location /-/plantuml/ { \n rewrite ^/-/plantuml/(.*) /$1 break;\n proxy_cache off; \n proxy_pass http://plantuml:8005/; \n}\n"
# Debian/Ubuntu install
nginx['custom_gitlab_server_config'] = "location /-/plantuml/ { \n rewrite ^/-/plantuml/(.*) /$1 break;\n proxy_cache off; \n proxy_pass http://localhost:8005/plantuml; \n}\n"
```
1. To activate the changes, run the following command:
```shell
sudo gitlab-ctl reconfigure
```
#### Use HTTPS PlantUML server
If you cannot modify the `gitlab.rb` file, configure your PlantUML server to use
HTTPS directly. This method is recommended for GitLab Dedicated instances.
This setup uses NGINX to handle SSL termination and proxy requests to the PlantUML
container. You can also use cloud-based load balancers like AWS Application Load Balancer (ALB) for
SSL termination.
1. Create an `nginx.conf` file:
```nginx
events {
worker_connections 1024;
}
http {
server {
listen 443 ssl;
server_name _;
ssl_certificate /etc/nginx/ssl/plantuml.crt;
ssl_certificate_key /etc/nginx/ssl/plantuml.key;
location / {
proxy_pass http://plantuml:8080;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
}
}
```
1. Add the `plantuml.crt` and `plantuml.key` files to an `ssl` directory.
1. Configure the `docker-compose.yml` file:
```yaml
version: '3.8'
services:
plantuml:
image: plantuml/plantuml-server:tomcat
container_name: plantuml
networks:
- plantuml-net
plantuml-ssl:
image: nginx
container_name: plantuml-ssl
ports:
- "8443:443"
volumes:
- ./nginx.conf:/etc/nginx/nginx.conf:ro
- ./ssl:/etc/nginx/ssl:ro
depends_on:
- plantuml
networks:
- plantuml-net
networks:
plantuml-net:
driver: bridge
```
1. Start your PlantUML server with `docker-compose up`.
1. [Enable PlantUML integration](#enable-plantuml-integration) with the URL
`https://your-server:8443`.
### Verify the PlantUML installation
To verify the installation was successful:
1. Test the PlantUML server directly:
```shell
# Docker install
curl --location --verbose "http://localhost:8005/svg/SyfFKj2rKt3CoKnELR1Io4ZDoSa70000"
# Debian/Ubuntu install
curl --location --verbose "http://localhost:8005/plantuml/svg/SyfFKj2rKt3CoKnELR1Io4ZDoSa70000"
```
You should receive SVG output containing the text `hello`.
1. Test that GitLab can access PlantUML through NGINX by visiting:
```plaintext
http://gitlab.example.com/-/plantuml/svg/SyfFKj2rKt3CoKnELR1Io4ZDoSa70000
```
   Replace `gitlab.example.com` with your GitLab instance URL. You should see a rendered
   PlantUML diagram displaying `hello`. The encoded segment of the URL corresponds to this
   diagram source:

   ```plaintext
   Bob -> Alice : hello
   ```
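The long string in these test URLs (`SyfFKj2rKt3CoKnELR1Io4ZDoSa70000`) is the diagram source compressed with raw deflate and encoded with PlantUML's 64-character alphabet. The following is a minimal Ruby sketch of that round trip; the helper names are ours, and PlantUML's own encoder pads the deflated bytes to 3-byte groups, so its exact output can differ from this sketch by trailing `0` characters:

```ruby
require 'zlib'

# PlantUML's 64-character alphabet (not standard Base64).
PLANTUML_ALPHABET = [*'0'..'9', *'A'..'Z', *'a'..'z', '-', '_'].join

def plantuml_encode(text)
  # Raw deflate stream (negative window bits = no zlib header).
  deflater = Zlib::Deflate.new(Zlib::BEST_COMPRESSION, -Zlib::MAX_WBITS)
  data = deflater.deflate(text, Zlib::FINISH)
  deflater.close
  bits = data.unpack1('B*')
  bits << '0' * (-bits.length % 6) # pad to a multiple of 6 bits
  bits.scan(/.{6}/).map { |b| PLANTUML_ALPHABET[b.to_i(2)] }.join
end

def plantuml_decode(encoded)
  bits = encoded.chars.map { |c| format('%06b', PLANTUML_ALPHABET.index(c)) }.join
  bytes = bits.scan(/.{8}/).map { |b| b.to_i(2) }.pack('C*')
  Zlib::Inflate.new(-Zlib::MAX_WBITS).inflate(bytes)
end

puts plantuml_encode('Bob -> Alice : hello')
```

You can use `plantuml_encode` to build test URLs for any diagram source, not just the `hello` example.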
### Configure PlantUML security
PlantUML has features that allow diagrams to fetch network resources, such as the
`!include` directive in the following example. If you self-host the PlantUML server,
put network controls in place to isolate it, for example by using PlantUML's
[security profiles](https://plantuml.com/security).
```plaintext
@startuml
start
' ...
!include http://localhost/
stop;
@enduml
```
#### Secure PlantUML SVG diagram output
SVG images can embed scripts and other active content, so when you generate PlantUML
diagrams in SVG format, harden your server configuration. To disable the SVG output
route, add this configuration to the NGINX server hosting the PlantUML service:
```nginx
location ~ ^/-/plantuml/svg/ {
return 403;
}
```
This configuration prevents potentially malicious diagram code from executing in browsers.
## Enable PlantUML integration
After configuring your local PlantUML server, you're ready to enable the PlantUML integration:
1. Sign in to GitLab as an [Administrator](../../user/permissions.md) user.
1. On the left sidebar, at the bottom, select **Admin**.
1. On the left sidebar, go to **Settings > General** and expand the **PlantUML** section.
1. Select the **Enable PlantUML** checkbox.
1. Set the PlantUML instance as `https://gitlab.example.com/-/plantuml/`,
and select **Save changes**.
Depending on your PlantUML and GitLab version numbers, you may also need to take
these steps:
- For PlantUML servers running v1.2020.9 and later, such as [plantuml.com](https://plantuml.com),
you must set the `PLANTUML_ENCODING` environment variable to enable the `deflate`
compression. In Linux package installations, you can set this value in `/etc/gitlab/gitlab.rb` with
this command:
```ruby
gitlab_rails['env'] = { 'PLANTUML_ENCODING' => 'deflate' }
```
In GitLab Helm chart, you can set it by adding a variable to the
[global.extraEnv](https://gitlab.com/gitlab-org/charts/gitlab/blob/master/doc/charts/globals.md#extraenv)
section, like this:
```yaml
global:
extraEnv:
PLANTUML_ENCODING: deflate
```
- `deflate` is the default encoding type for PlantUML. To use a different encoding type, PlantUML integration
[requires a header prefix in the URL](https://plantuml.com/text-encoding)
to distinguish different encoding types.
## Troubleshooting
### Rendered diagram URL remains the same after update
Rendered diagrams are cached. To see the updates, try these steps:
- If the diagram is in a Markdown file, make a small change to the Markdown file, and commit it. This triggers a re-render.
- [Invalidate the Markdown cache](../invalidate_markdown_cache.md#invalidate-the-cache) to force any cached Markdown
in the database or Redis to be cleared.
If you're still not seeing the updated URL, check the following:
- Ensure the PlantUML server is accessible from your GitLab instance.
- Verify that the PlantUML integration is enabled in your GitLab settings.
- Check the GitLab logs for errors related to PlantUML rendering.
- [Clear your GitLab Redis cache](../raketasks/maintenance.md#clear-redis-cache).
### `404` error when opening the PlantUML page in the browser
You might get a `404` error when visiting `https://gitlab.example.com/-/plantuml/`, when the PlantUML
server is set up [in Debian or Ubuntu](#debianubuntu).
This can happen even when the integration is working.
It does not necessarily indicate a problem with your PlantUML server or configuration.
To confirm if PlantUML is working correctly, you can [verify the PlantUML installation](#verify-the-plantuml-installation).
---
title: Mailgun
stage: Plan
group: Project Management
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
gitlab_dedicated: false
url: https://docs.gitlab.com/administration/mailgun
repo_url: https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/administration/mailgun.md
date_extracted: 2025-08-13
---
{{< details >}}
- Tier: Free, Premium, Ultimate
- Offering: GitLab Self-Managed
{{< /details >}}
If you use [Mailgun](https://www.mailgun.com/) to send emails for your GitLab instance,
you can enable the Mailgun integration in GitLab to receive webhooks that track
delivery failures. To set up the integration, you must:
1. [Configure your Mailgun domain](#configure-your-mailgun-domain).
1. [Enable Mailgun integration](#enable-mailgun-integration).
After completing the integration, Mailgun `temporary_failure` and `permanent_failure` webhooks are sent to your GitLab instance.
## Configure your Mailgun domain
{{< history >}}
- [Deprecated](https://gitlab.com/gitlab-org/gitlab/-/issues/359113) the `/-/members/mailgun/permanent_failures` URL in GitLab 15.0.
- [Changed](https://gitlab.com/gitlab-org/gitlab/-/issues/359113) the URL to handle both temporary and permanent failures in GitLab 15.0.
{{< /history >}}
Before you can enable Mailgun in GitLab, set up your own Mailgun endpoints to receive the webhooks.
Using the [Mailgun webhook guide](https://www.mailgun.com/blog/product/a-guide-to-using-mailguns-webhooks/):
1. Add a webhook with the **Event type** set to **Permanent Failure**.
1. Enter the URL of your instance and include the `/-/mailgun/webhooks` path.
For example:
```plaintext
https://myinstance.gitlab.com/-/mailgun/webhooks
```
1. Add another webhook with the **Event type** set to **Temporary Failure**.
1. Enter the URL of your instance and use the same `/-/mailgun/webhooks` path.
## Enable Mailgun integration
After configuring your Mailgun domain for the webhook endpoints,
you're ready to enable the Mailgun integration:
1. Sign in to GitLab as an [Administrator](../../user/permissions.md) user.
1. On the left sidebar, at the bottom, select **Admin**.
1. On the left sidebar, go to **Settings** > **General** and expand the **Mailgun** section.
1. Select the **Enable Mailgun** checkbox.
1. Enter the Mailgun HTTP webhook signing key as described in
[the Mailgun documentation](https://documentation.mailgun.com/docs/mailgun/user-manual/get-started/) and
shown in the API security (`https://app.mailgun.com/app/account/security/api_keys`) section for your Mailgun account.
1. Select **Save changes**.
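The signing key lets the receiver verify that a webhook really came from Mailgun: each webhook payload carries a `timestamp`, a `token`, and a `signature` that is the HMAC-SHA256 of the timestamp concatenated with the token, keyed with the signing key. A minimal Ruby sketch of that check (the helper name and sample values below are ours, for illustration only):

```ruby
require 'openssl'

# Mailgun signs each webhook with HMAC-SHA256 over timestamp + token,
# keyed with the HTTP webhook signing key.
def valid_mailgun_signature?(signing_key, timestamp, token, signature)
  expected = OpenSSL::HMAC.hexdigest('SHA256', signing_key, "#{timestamp}#{token}")
  # In production, prefer a constant-time comparison.
  expected == signature
end

signing_key = 'example-signing-key'
timestamp   = '1623861600'
token       = 'abc123'
signature   = OpenSSL::HMAC.hexdigest('SHA256', signing_key, "#{timestamp}#{token}")

puts valid_mailgun_signature?(signing_key, timestamp, token, signature) # => true
```

GitLab performs this verification for you once the signing key is configured; the sketch is only useful if you want to inspect or debug the webhooks independently.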
---
title: Kroki
stage: Plan
group: Project Management
gitlab_dedicated: true
url: https://docs.gitlab.com/administration/kroki
repo_url: https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/administration/kroki.md
date_extracted: 2025-08-13
---
{{< details >}}
- Tier: Free, Premium, Ultimate
- Offering: GitLab Self-Managed, GitLab Dedicated
{{< /details >}}
With the [Kroki](https://kroki.io) integration,
you can create diagrams-as-code within AsciiDoc, Markdown, reStructuredText, and Textile.
## Enable Kroki in GitLab
Enable the Kroki integration from the **Admin** area settings.
Sign in with an administrator account, then:
1. On the left sidebar, at the bottom, select **Admin**.
1. Go to **Settings** > **General**.
1. Expand the **Kroki** section.
1. Select the **Enable Kroki** checkbox.
1. Enter the **Kroki URL**, for example, `https://kroki.io`.
## Kroki server
When you enable Kroki, GitLab sends diagrams to an instance of Kroki to display them as images.
You can use the free public cloud instance `https://kroki.io` or you can [install Kroki](https://docs.kroki.io/kroki/setup/install/)
on your own infrastructure.
After you've installed Kroki, make sure to update the **Kroki URL** in the settings to point to your instance.
{{< alert type="note" >}}
Kroki diagrams are not stored on GitLab, so standard GitLab access controls and other user permission restrictions are not in force.
{{< /alert >}}
### Docker
With Docker, run a container like this:
```shell
docker run -d --name kroki -p 8080:8000 yuzutech/kroki
```
The **Kroki URL** is the hostname of the server running the container.
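Before wiring the server into GitLab, you can check that it responds by building a diagram URL by hand. Kroki documents its GET URL scheme as the diagram source compressed with zlib deflate and encoded with URL-safe Base64; a minimal Ruby sketch (the helper name is ours):

```ruby
require 'zlib'
require 'base64'

# Build a Kroki GET URL: /{diagram-type}/{output-format}/{encoded-source},
# where the source is zlib-deflated and Base64-URL encoded.
def kroki_url(server, diagram_type, output_format, source)
  encoded = Base64.urlsafe_encode64(Zlib::Deflate.deflate(source, Zlib::BEST_COMPRESSION))
  "#{server}/#{diagram_type}/#{output_format}/#{encoded}"
end

puts kroki_url('http://localhost:8080', 'plantuml', 'svg', 'Bob -> Alice : hello')
```

Opening the printed URL in a browser should return a rendered SVG if the container is running and the port mapping is correct.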
The [`yuzutech/kroki`](https://hub.docker.com/r/yuzutech/kroki) Docker image supports most diagram
types out of the box. For a complete list, see the [Kroki installation docs](https://docs.kroki.io/kroki/setup/install/#_the_kroki_container).
Supported diagram types include:
<!-- vale gitlab_base.Spelling = NO -->
- [Bytefield](https://bytefield-svg.deepsymmetry.org/bytefield-svg/intro.html)
- [D2](https://d2lang.com/tour/intro/)
- [DBML](https://dbml.dbdiagram.io/home/)
- [Ditaa](https://ditaa.sourceforge.net)
- [Erd](https://github.com/BurntSushi/erd)
- [GraphViz](https://www.graphviz.org/)
- [Nomnoml](https://github.com/skanaar/nomnoml)
- [PlantUML](https://github.com/plantuml/plantuml)
- [C4 model](https://github.com/RicardoNiepel/C4-PlantUML) (with PlantUML)
- [Structurizr](https://structurizr.com/) (great for C4 Model diagrams)
- [Svgbob](https://github.com/ivanceras/svgbob)
- [UMlet](https://github.com/umlet/umlet)
- [Vega](https://github.com/vega/vega)
- [Vega-Lite](https://github.com/vega/vega-lite)
- [WaveDrom](https://wavedrom.com/)
<!-- vale gitlab_base.Spelling = YES -->
If you want to use additional diagram libraries,
read the [Kroki installation](https://docs.kroki.io/kroki/setup/install/#_images) to learn how to start Kroki companion containers.
## Create diagrams
With Kroki integration enabled and configured, you can start adding diagrams to
your AsciiDoc or Markdown documentation using delimited blocks:
- **Markdown**
````markdown
```plantuml
Bob -> Alice : hello
Alice -> Bob : hi
```
````
- **AsciiDoc**
```plaintext
[plantuml]
....
Bob->Alice : hello
Alice -> Bob : hi
....
```
- **reStructuredText**
```plaintext
.. code-block:: plantuml
Bob->Alice : hello
Alice -> Bob : hi
```
- **Textile**
```plaintext
bc[plantuml]. Bob->Alice : hello
Alice -> Bob : hi
```
The delimited blocks are converted to an HTML image tag whose source points to the
Kroki instance. If the Kroki server is correctly configured, a rendered diagram
appears in place of the block:

Kroki supports more than a dozen diagram libraries. Here are a few examples for AsciiDoc:
**GraphViz**
```plaintext
[graphviz]
....
digraph finite_state_machine {
rankdir=LR;
node [shape = doublecircle]; LR_0 LR_3 LR_4 LR_8;
node [shape = circle];
LR_0 -> LR_2 [ label = "SS(B)" ];
LR_0 -> LR_1 [ label = "SS(S)" ];
LR_1 -> LR_3 [ label = "S($end)" ];
LR_2 -> LR_6 [ label = "SS(b)" ];
LR_2 -> LR_5 [ label = "SS(a)" ];
LR_2 -> LR_4 [ label = "S(A)" ];
LR_5 -> LR_7 [ label = "S(b)" ];
LR_5 -> LR_5 [ label = "S(a)" ];
LR_6 -> LR_6 [ label = "S(b)" ];
LR_6 -> LR_5 [ label = "S(a)" ];
LR_7 -> LR_8 [ label = "S(b)" ];
LR_7 -> LR_5 [ label = "S(a)" ];
LR_8 -> LR_6 [ label = "S(b)" ];
LR_8 -> LR_5 [ label = "S(a)" ];
}
....
```

**C4 (based on PlantUML)**
```plaintext
[c4plantuml]
....
@startuml
!include C4_Context.puml
title System Context diagram for Internet Banking System
Person(customer, "Banking Customer", "A customer of the bank, with personal bank accounts.")
System(banking_system, "Internet Banking System", "Allows customers to check their accounts.")
System_Ext(mail_system, "E-mail system", "The internal Microsoft Exchange e-mail system.")
System_Ext(mainframe, "Mainframe Banking System", "Stores all of the core banking information.")
Rel(customer, banking_system, "Uses")
Rel_Back(customer, mail_system, "Sends e-mails to")
Rel_Neighbor(banking_system, mail_system, "Sends e-mails", "SMTP")
Rel(banking_system, mainframe, "Uses")
@enduml
....
```

<!-- vale gitlab_base.Spelling = NO -->
**Nomnoml**
<!-- vale gitlab_base.Spelling = YES -->
```plaintext
[nomnoml]
....
[Pirate|eyeCount: Int|raid();pillage()|
[beard]--[parrot]
[beard]-:>[foul mouth]
]
[<abstract>Marauder]<:--[Pirate]
[Pirate]- 0..7[mischief]
[jollyness]->[Pirate]
[jollyness]->[rum]
[jollyness]->[singing]
[Pirate]-> *[rum|tastiness: Int|swig()]
[Pirate]->[singing]
[singing]<->[rum]
....
```

---
title: Troubleshooting GitLab backups
stage: Data Access
group: Durability
url: https://docs.gitlab.com/administration/troubleshooting_backup_gitlab
repo_url: https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/administration/troubleshooting_backup_gitlab.md
date_extracted: 2025-08-13
---
When you back up GitLab, you might encounter the following issues.
## When the secrets file is lost
If you didn't [back up the secrets file](backup_gitlab.md#storing-configuration-files), you
must complete several steps to get GitLab working properly again.
The secrets file is responsible for storing the encryption key for the columns
that contain required, sensitive information. If the key is lost, GitLab can't
decrypt those columns, preventing access to the following items:
- [CI/CD variables](../../ci/variables/_index.md)
- [Kubernetes / GCP integration](../../user/infrastructure/clusters/_index.md)
- [Custom Pages domains](../../user/project/pages/custom_domains_ssl_tls_certification/_index.md)
- [Project error tracking](../../operations/error_tracking.md)
- [Runner authentication](../../ci/runners/_index.md)
- [Project mirroring](../../user/project/repository/mirror/_index.md)
- [Integrations](../../user/project/integrations/_index.md)
- [Web hooks](../../user/project/integrations/webhooks.md)
- [Deploy tokens](../../user/project/deploy_tokens/_index.md)
In cases like CI/CD variables and runner authentication, you can experience
unexpected behaviors, such as:
- Stuck jobs.
- 500 errors.
In this case, you must reset all the tokens for CI/CD variables and
runner authentication, which is described in more detail in the following
sections. After resetting the tokens, you should be able to visit your project
and the jobs begin running again.
{{< alert type="warning" >}}
The steps in this section can potentially lead to data loss on the previously listed items.
Consider opening a [Support Request](https://support.gitlab.com/hc/en-us/requests/new) if you're a Premium or Ultimate customer.
{{< /alert >}}
### Verify that all values can be decrypted
You can determine if your database contains values that can't be decrypted by using a
[Rake task](../raketasks/check.md#verify-database-values-can-be-decrypted-using-the-current-secrets).
### Take a backup
You must directly modify GitLab data to work around your lost secrets file.
{{< alert type="warning" >}}
Be sure to create a full database backup before attempting any changes.
{{< /alert >}}
### Disable user two-factor authentication (2FA)
Users with 2FA enabled can't sign in to GitLab. In that case, you must
[disable 2FA for everyone](../../security/two_factor_authentication.md#for-all-users),
after which users must reactivate 2FA.
### Reset CI/CD variables
1. Enter the database console:
For the Linux package (Omnibus):
```shell
sudo gitlab-rails dbconsole --database main
```
For self-compiled installations:
```shell
sudo -u git -H bundle exec rails dbconsole -e production --database main
```
1. Examine the `ci_group_variables` and `ci_variables` tables:
```sql
SELECT * FROM public."ci_group_variables";
SELECT * FROM public."ci_variables";
```
These are the variables that you need to delete.
1. Delete all variables:
```sql
DELETE FROM ci_group_variables;
DELETE FROM ci_variables;
```
1. If you know the specific group or project from which you wish to delete variables, you can include a `WHERE` statement to specify that in your `DELETE`:
```sql
DELETE FROM ci_group_variables WHERE group_id = <GROUPID>;
DELETE FROM ci_variables WHERE project_id = <PROJECTID>;
```
You may need to reconfigure or restart GitLab for the changes to take effect.
### Reset runner registration tokens
1. Enter the database console:
For the Linux package (Omnibus):
```shell
sudo gitlab-rails dbconsole --database main
```
For self-compiled installations:
```shell
sudo -u git -H bundle exec rails dbconsole -e production --database main
```
1. Clear all tokens for projects, groups, and the entire instance:
{{< alert type="warning" >}}
The final `UPDATE` operation stops the runners from being able to pick
up new jobs. You must register new runners.
{{< /alert >}}
```sql
-- Clear project tokens
UPDATE projects SET runners_token = null, runners_token_encrypted = null;
-- Clear group tokens
UPDATE namespaces SET runners_token = null, runners_token_encrypted = null;
-- Clear instance tokens
UPDATE application_settings SET runners_registration_token_encrypted = null;
-- Clear key used for JWT authentication
-- This may break the $CI_JWT_TOKEN job variable:
-- https://gitlab.com/gitlab-org/gitlab/-/issues/325965
UPDATE application_settings SET encrypted_ci_jwt_signing_key = null;
-- Clear runner tokens
UPDATE ci_runners SET token = null, token_encrypted = null;
```
### Reset pending pipeline jobs
1. Enter the database console:
For the Linux package (Omnibus):
```shell
sudo gitlab-rails dbconsole --database main
```
For self-compiled installations:
```shell
sudo -u git -H bundle exec rails dbconsole -e production --database main
```
1. Clear all the tokens for pending jobs:
For GitLab 15.3 and earlier:
```sql
-- Clear build tokens
UPDATE ci_builds SET token = null, token_encrypted = null;
```
For GitLab 15.4 and later:
```sql
-- Clear build tokens
UPDATE ci_builds SET token_encrypted = null;
```
A similar strategy can be employed for the remaining features. By removing the
data that can't be decrypted, GitLab can be returned to operation, and the
lost data can be manually replaced.
### Fix integrations and webhooks
If you've lost your secrets, the [integrations settings](../../user/project/integrations/_index.md)
and [webhooks settings](../../user/project/integrations/webhooks.md) pages might display `500` error messages. Lost secrets might also produce `500` errors when you try to access a repository in a project with a previously configured integration or webhook.
The fix is to truncate the affected tables (those containing encrypted columns).
This deletes all your configured integrations, webhooks, and related metadata.
You should verify that the secrets are the root cause before deleting any data.
1. Enter the database console:
For the Linux package (Omnibus):
```shell
sudo gitlab-rails dbconsole --database main
```
For self-compiled installations:
```shell
sudo -u git -H bundle exec rails dbconsole -e production --database main
```
1. Truncate the following tables:
```sql
-- truncate web_hooks table
TRUNCATE integrations, chat_names, issue_tracker_data, jira_tracker_data, slack_integrations, web_hooks, zentao_tracker_data, web_hook_logs CASCADE;
```
## Container registry is not restored
If you restore a backup from an environment that uses the [container registry](../../user/packages/container_registry/_index.md)
to a newly installed environment where the container registry is not enabled, the container registry is not restored.
To also restore the container registry, you need to [enable it](../packages/container_registry.md#enable-the-container-registry) in the new
environment before you restore the backup.
## Container registry push failures after restoring from a backup
If you use the [container registry](../../user/packages/container_registry/_index.md),
pushes to the registry may fail after you restore a backup, including the registry data,
on a Linux package (Omnibus) instance.
These failures mention permission issues in the registry logs, similar to:
```plaintext
level=error
msg="response completed with error"
err.code=unknown
err.detail="filesystem: mkdir /var/opt/gitlab/gitlab-rails/shared/registry/docker/registry/v2/repositories/...: permission denied"
err.message="unknown error"
```
This issue is caused by the restore running as the unprivileged user `git`,
which is unable to assign the correct ownership to the registry files during
the restore process ([issue #62759](https://gitlab.com/gitlab-org/gitlab-foss/-/issues/62759 "Incorrect permissions on registry filesystem after restore")).
To get your registry working again:
```shell
sudo chown -R registry:registry /var/opt/gitlab/gitlab-rails/shared/registry/docker
```
If you changed the default file system location for the registry, run `chown`
against your custom location, instead of `/var/opt/gitlab/gitlab-rails/shared/registry/docker`.
## Backup fails to complete with Gzip error
When running the backup, you may receive a Gzip error message:
```shell
sudo /opt/gitlab/bin/gitlab-backup create
...
Dumping ...
...
gzip: stdout: Input/output error
Backup failed
```
If this happens, examine the following:
- Confirm there is sufficient disk space for the Gzip operation. It's not uncommon for backups that
use the [default strategy](backup_gitlab.md#backup-strategy-option) to require half the instance size
in free disk space during backup creation.
- If NFS is being used, check if the mount option `timeout` is set. The
default is `600`, and changing this to smaller values results in this error.
## Backup fails with `File name too long` error
During backup, you can get the `File name too long` error ([issue #354984](https://gitlab.com/gitlab-org/gitlab/-/issues/354984)). For example:
```plaintext
Problem: <class 'OSError: [Errno 36] File name too long:
```
This error stops the backup script from completing. To fix it, you must truncate the offending filenames. A maximum of 246 characters, including the file extension, is permitted.
{{< alert type="warning" >}}
The steps in this section can potentially lead to data loss. All steps must be followed strictly in the order given.
Consider opening a [Support Request](https://support.gitlab.com/hc/en-us/requests/new) if you're a Premium or Ultimate customer.
{{< /alert >}}
Truncating filenames to resolve the error involves:
- Cleaning up remote uploaded files that aren't tracked in the database.
- Truncating the filenames in the database.
- Rerunning the backup task.
### Clean up remote uploaded files
A [known issue](https://gitlab.com/gitlab-org/gitlab-foss/-/issues/45425) caused object store uploads to remain after a parent resource was deleted. This issue was [resolved](https://gitlab.com/gitlab-org/gitlab-foss/-/merge_requests/18698).
To fix these files, you must clean up all remote uploaded files that are in the storage but not tracked in the `uploads` database table.
1. List all the object store upload files that can be moved to a lost and found directory if they don't exist in the GitLab database:
```shell
bundle exec rake gitlab:cleanup:remote_upload_files RAILS_ENV=production
```
1. If you are sure you want to delete these files and remove all non-referenced uploaded files, run:
{{< alert type="warning" >}}
The following action is irreversible.
{{< /alert >}}
```shell
bundle exec rake gitlab:cleanup:remote_upload_files RAILS_ENV=production DRY_RUN=false
```
### Truncate the filenames referenced by the database
You must truncate the files referenced by the database that are causing the problem. The filenames referenced by the database are stored:
- In the `uploads` table.
- In the references found. Any reference found from other database tables and columns.
- On the file system.
Truncate the filenames in the `uploads` table:
1. Enter the database console:
For the Linux package (Omnibus):
```shell
sudo gitlab-rails dbconsole --database main
```
For self-compiled installations:
```shell
sudo -u git -H bundle exec rails dbconsole -e production --database main
```
1. Search the `uploads` table for filenames longer than 246 characters:
The following query selects the `uploads` records with filenames longer than 246 characters in batches of 0 to 10000. This improves the performance on large GitLab instances with tables having thousand of records.
```sql
CREATE TEMP TABLE uploads_with_long_filenames AS
SELECT ROW_NUMBER() OVER(ORDER BY id) row_id, id, path
FROM uploads AS u
WHERE LENGTH((regexp_match(u.path, '[^\\/:*?"<>|\r\n]+$'))[1]) > 246;
CREATE INDEX ON uploads_with_long_filenames(row_id);
SELECT
u.id,
u.path,
-- Current filename
(regexp_match(u.path, '[^\\/:*?"<>|\r\n]+$'))[1] AS current_filename,
-- New filename
CONCAT(
LEFT(SPLIT_PART((regexp_match(u.path, '[^\\/:*?"<>|\r\n]+$'))[1], '.', 1), 242),
COALESCE(SUBSTRING((regexp_match(u.path, '[^\\/:*?"<>|\r\n]+$'))[1] FROM '\.(?:.(?!\.))+$'))
) AS new_filename,
-- New path
CONCAT(
COALESCE((regexp_match(u.path, '(.*\/).*'))[1], ''),
CONCAT(
LEFT(SPLIT_PART((regexp_match(u.path, '[^\\/:*?"<>|\r\n]+$'))[1], '.', 1), 242),
COALESCE(SUBSTRING((regexp_match(u.path, '[^\\/:*?"<>|\r\n]+$'))[1] FROM '\.(?:.(?!\.))+$'))
)
) AS new_path
FROM uploads_with_long_filenames AS u
WHERE u.row_id > 0 AND u.row_id <= 10000;
```
Output example:
```postgresql
-[ RECORD 1 ]----+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
id | 34
path | public/@hashed/loremipsumdolorsitametconsecteturadipiscingelitseddoeiusmodtemporincididuntutlaboreetdoloremagnaaliquaauctorelitsedvulputatemisitloremipsumdolorsitametconsecteturadipiscingelitseddoeiusmodtemporincididuntutlaboreetdoloremagnaaliquaauctorelitsedvulputatemisit.txt
current_filename | loremipsumdolorsitametconsecteturadipiscingelitseddoeiusmodtemporincididuntutlaboreetdoloremagnaaliquaauctorelitsedvulputatemisitloremipsumdolorsitametconsecteturadipiscingelitseddoeiusmodtemporincididuntutlaboreetdoloremagnaaliquaauctorelitsedvulputatemisit.txt
new_filename | loremipsumdolorsitametconsecteturadipiscingelitseddoeiusmodtemporincididuntutlaboreetdoloremagnaaliquaauctorelitsedvulputatemisitloremipsumdolorsitametconsecteturadipiscingelitseddoeiusmodtemporincididuntutlaboreetdoloremagnaaliquaauctorelits.txt
new_path | public/@hashed/loremipsumdolorsitametconsecteturadipiscingelitseddoeiusmodtemporincididuntutlaboreetdoloremagnaaliquaauctorelitsedvulputatemisitloremipsumdolorsitametconsecteturadipiscingelitseddoeiusmodtemporincididuntutlaboreetdoloremagnaaliquaauctorelits.txt
```
Where:
- `current_filename`: a filename that is more than 246 characters long.
- `new_filename`: a filename that has been truncated to 246 characters maximum.
- `new_path`: new path considering the `new_filename` (truncated).
After you validate the batch results, you must change the batch size (`row_id`) using the following sequence of numbers (10000 to 20000). Repeat this process until you reach the last record in the `uploads` table.
1. Rename the files found in the `uploads` table from long filenames to new truncated filenames. The following query rolls back the update so you can check the results safely in a transaction wrapper:
```sql
CREATE TEMP TABLE uploads_with_long_filenames AS
SELECT ROW_NUMBER() OVER(ORDER BY id) row_id, path, id
FROM uploads AS u
WHERE LENGTH((regexp_match(u.path, '[^\\/:*?"<>|\r\n]+$'))[1]) > 246;
CREATE INDEX ON uploads_with_long_filenames(row_id);
BEGIN;
WITH updated_uploads AS (
UPDATE uploads
SET
path =
CONCAT(
COALESCE((regexp_match(updatable_uploads.path, '(.*\/).*'))[1], ''),
CONCAT(
LEFT(SPLIT_PART((regexp_match(updatable_uploads.path, '[^\\/:*?"<>|\r\n]+$'))[1], '.', 1), 242),
COALESCE(SUBSTRING((regexp_match(updatable_uploads.path, '[^\\/:*?"<>|\r\n]+$'))[1] FROM '\.(?:.(?!\.))+$'))
)
)
FROM
uploads_with_long_filenames AS updatable_uploads
WHERE
uploads.id = updatable_uploads.id
AND updatable_uploads.row_id > 0 AND updatable_uploads.row_id <= 10000
RETURNING uploads.*
)
SELECT id, path FROM updated_uploads;
ROLLBACK;
```
After you validate the batch update results, you must change the batch size (`row_id`) using the following sequence of numbers (10000 to 20000). Repeat this process until you reach the last record in the `uploads` table.
1. Validate that the new filenames from the previous query are the expected ones. If you are sure you want to truncate the records found in the previous step to 246 characters, run the following:
{{< alert type="warning" >}}
The following action is irreversible.
{{< /alert >}}
```sql
CREATE TEMP TABLE uploads_with_long_filenames AS
SELECT ROW_NUMBER() OVER(ORDER BY id) row_id, path, id
FROM uploads AS u
WHERE LENGTH((regexp_match(u.path, '[^\\/:*?"<>|\r\n]+$'))[1]) > 246;
CREATE INDEX ON uploads_with_long_filenames(row_id);
UPDATE uploads
SET
path =
CONCAT(
COALESCE((regexp_match(updatable_uploads.path, '(.*\/).*'))[1], ''),
CONCAT(
LEFT(SPLIT_PART((regexp_match(updatable_uploads.path, '[^\\/:*?"<>|\r\n]+$'))[1], '.', 1), 242),
COALESCE(SUBSTRING((regexp_match(updatable_uploads.path, '[^\\/:*?"<>|\r\n]+$'))[1] FROM '\.(?:.(?!\.))+$'))
)
)
FROM
uploads_with_long_filenames AS updatable_uploads
WHERE
uploads.id = updatable_uploads.id
AND updatable_uploads.row_id > 0 AND updatable_uploads.row_id <= 10000;
```
After you finish the batch update, you must change the batch size (`updatable_uploads.row_id`) using the following sequence of numbers (10000 to 20000). Repeat this process until you reach the last record in the `uploads` table.
Truncate the filenames in the references found:
1. Check if those records are referenced somewhere. One way to do this is to dump the database and search for the parent directory name and filename:
1. To dump your database, you can use the following command as an example:
```shell
pg_dump -h /var/opt/gitlab/postgresql/ -d gitlabhq_production > gitlab-dump.tmp
```
1. Then you can search for the references using the `grep` command. Combining the parent directory and the filename can be a good idea. For example:
```shell
grep public/alongfilenamehere.txt gitlab-dump.tmp
```
1. Replace those long filenames using the new filenames obtained from querying the `uploads` table.
Truncate the filenames on the file system. You must manually rename the files in your file system to the new filenames obtained from querying the `uploads` table.
### Re-run the backup task
After following all the previous steps, re-run the backup task.
## Restoring database backup fails when `pg_stat_statements` was previously enabled
The GitLab backup of the PostgreSQL database includes all SQL statements required to enable extensions that were
previously enabled in the database.
The `pg_stat_statements` extension can only be enabled or disabled by a PostgreSQL user with `superuser` role.
As the restore process uses a database user with limited permissions, it can't execute the following SQL statements:
```sql
DROP EXTENSION IF EXISTS pg_stat_statements;
CREATE EXTENSION IF NOT EXISTS pg_stat_statements WITH SCHEMA public;
```
When trying to restore the backup in a PostgreSQL instance that doesn't have the `pg_stats_statements` extension,
the following error message is displayed:
```plaintext
ERROR: permission denied to create extension "pg_stat_statements"
HINT: Must be superuser to create this extension.
ERROR: extension "pg_stat_statements" does not exist
```
When trying to restore in an instance that has the `pg_stats_statements` extension enabled, the cleaning up step
fails with an error message similar to the following:
```plaintext
rake aborted!
ActiveRecord::StatementInvalid: PG::InsufficientPrivilege: ERROR: must be owner of view pg_stat_statements
/opt/gitlab/embedded/service/gitlab-rails/lib/tasks/gitlab/db.rake:42:in `block (4 levels) in <top (required)>'
/opt/gitlab/embedded/service/gitlab-rails/lib/tasks/gitlab/db.rake:41:in `each'
/opt/gitlab/embedded/service/gitlab-rails/lib/tasks/gitlab/db.rake:41:in `block (3 levels) in <top (required)>'
/opt/gitlab/embedded/service/gitlab-rails/lib/tasks/gitlab/backup.rake:71:in `block (3 levels) in <top (required)>'
/opt/gitlab/embedded/bin/bundle:23:in `load'
/opt/gitlab/embedded/bin/bundle:23:in `<main>'
Caused by:
PG::InsufficientPrivilege: ERROR: must be owner of view pg_stat_statements
/opt/gitlab/embedded/service/gitlab-rails/lib/tasks/gitlab/db.rake:42:in `block (4 levels) in <top (required)>'
/opt/gitlab/embedded/service/gitlab-rails/lib/tasks/gitlab/db.rake:41:in `each'
/opt/gitlab/embedded/service/gitlab-rails/lib/tasks/gitlab/db.rake:41:in `block (3 levels) in <top (required)>'
/opt/gitlab/embedded/service/gitlab-rails/lib/tasks/gitlab/backup.rake:71:in `block (3 levels) in <top (required)>'
/opt/gitlab/embedded/bin/bundle:23:in `load'
/opt/gitlab/embedded/bin/bundle:23:in `<main>'
Tasks: TOP => gitlab:db:drop_tables
(See full trace by running task with --trace)
```
### Prevent the dump file to include `pg_stat_statements`
To prevent the inclusion of the extension in the PostgreSQL dump file that is part of the backup bundle,
enable the extension in any schema except the `public` schema:
```sql
CREATE SCHEMA adm;
CREATE EXTENSION pg_stat_statements SCHEMA adm;
```
If the extension was previously enabled in the `public` schema, move it to a new one:
```sql
CREATE SCHEMA adm;
ALTER EXTENSION pg_stat_statements SET SCHEMA adm;
```
To query the `pg_stat_statements` data after changing the schema, prefix the view name with the new schema:
```sql
SELECT * FROM adm.pg_stat_statements limit 0;
```
To make it compatible with third-party monitoring solutions that expect it to be enabled in the `public` schema,
you need to include it in the `search_path`:
```sql
set search_path to public,adm;
```
### Fix an existing dump file to remove references to `pg_stat_statements`
To fix an existing backup file, do the following changes:
1. Extract from the backup the following file: `db/database.sql.gz`.
1. Decompress the file or use an editor that is capable of handling it compressed.
1. Remove the following lines, or similar ones:
```sql
CREATE EXTENSION IF NOT EXISTS pg_stat_statements WITH SCHEMA public;
```
```sql
COMMENT ON EXTENSION pg_stat_statements IS 'track planning and execution statistics of all SQL statements executed';
```
1. Save the changes and recompress the file.
1. Update the backup file with the modified `db/database.sql.gz`.
|
---
stage: Data Access
group: Durability
info: To determine the technical writer assigned to the Stage/Group associated with
this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Troubleshooting GitLab backups
---
When you back up GitLab, you might encounter the following issues.
## When the secrets file is lost
If you didn't [back up the secrets file](backup_gitlab.md#storing-configuration-files), you
must complete several steps to get GitLab working properly again.
The secrets file is responsible for storing the encryption key for the columns
that contain required, sensitive information. If the key is lost, GitLab can't
decrypt those columns, preventing access to the following items:
- [CI/CD variables](../../ci/variables/_index.md)
- [Kubernetes / GCP integration](../../user/infrastructure/clusters/_index.md)
- [Custom Pages domains](../../user/project/pages/custom_domains_ssl_tls_certification/_index.md)
- [Project error tracking](../../operations/error_tracking.md)
- [Runner authentication](../../ci/runners/_index.md)
- [Project mirroring](../../user/project/repository/mirror/_index.md)
- [Integrations](../../user/project/integrations/_index.md)
- [Web hooks](../../user/project/integrations/webhooks.md)
- [Deploy tokens](../../user/project/deploy_tokens/_index.md)
In cases like CI/CD variables and runner authentication, you can experience
unexpected behaviors, such as:
- Stuck jobs.
- 500 errors.
In this case, you must reset all the tokens for CI/CD variables and
runner authentication, as described in more detail in the following
sections. After you reset the tokens, visit your project; jobs should begin
running again.
{{< alert type="warning" >}}
The steps in this section can potentially lead to data loss on the previously listed items.
Consider opening a [Support Request](https://support.gitlab.com/hc/en-us/requests/new) if you're a Premium or Ultimate customer.
{{< /alert >}}
### Verify that all values can be decrypted
You can determine if your database contains values that can't be decrypted by using a
[Rake task](../raketasks/check.md#verify-database-values-can-be-decrypted-using-the-current-secrets).
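For the Linux package, that Rake task is typically invoked as follows (for self-compiled installations, run the task with `bundle exec rake` as the `git` user instead):

```shell
sudo gitlab-rake gitlab:doctor:secrets
```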
### Take a backup
You must directly modify GitLab data to work around your lost secrets file.
{{< alert type="warning" >}}
Be sure to create a full database backup before attempting any changes.
{{< /alert >}}
### Disable user two-factor authentication (2FA)
Users with 2FA enabled can't sign in to GitLab. In that case, you must
[disable 2FA for everyone](../../security/two_factor_authentication.md#for-all-users),
after which users must reactivate 2FA.
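On the Linux package, this can typically be done with the corresponding Rake task:

```shell
sudo gitlab-rake gitlab:two_factor:disable_for_all_users
```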
### Reset CI/CD variables
1. Enter the database console:
For the Linux package (Omnibus):
```shell
sudo gitlab-rails dbconsole --database main
```
For self-compiled installations:
```shell
sudo -u git -H bundle exec rails dbconsole -e production --database main
```
1. Examine the `ci_group_variables` and `ci_variables` tables:
```sql
SELECT * FROM public."ci_group_variables";
SELECT * FROM public."ci_variables";
```
These are the variables that you need to delete.
1. Delete all variables:
```sql
DELETE FROM ci_group_variables;
DELETE FROM ci_variables;
```
1. If you know the specific group or project from which you wish to delete variables, you can include a `WHERE` statement to specify that in your `DELETE`:
```sql
DELETE FROM ci_group_variables WHERE group_id = <GROUPID>;
DELETE FROM ci_variables WHERE project_id = <PROJECTID>;
```
You may need to reconfigure or restart GitLab for the changes to take effect.
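For the Linux package, for example:

```shell
sudo gitlab-ctl reconfigure
# or, if no configuration was changed:
sudo gitlab-ctl restart
```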
### Reset runner registration tokens
1. Enter the database console:
For the Linux package (Omnibus):
```shell
sudo gitlab-rails dbconsole --database main
```
For self-compiled installations:
```shell
sudo -u git -H bundle exec rails dbconsole -e production --database main
```
1. Clear all tokens for projects, groups, and the entire instance:
{{< alert type="warning" >}}
The final `UPDATE` operation stops the runners from being able to pick
up new jobs. You must register new runners.
{{< /alert >}}
```sql
-- Clear project tokens
UPDATE projects SET runners_token = null, runners_token_encrypted = null;
-- Clear group tokens
UPDATE namespaces SET runners_token = null, runners_token_encrypted = null;
-- Clear instance tokens
UPDATE application_settings SET runners_registration_token_encrypted = null;
-- Clear key used for JWT authentication
-- This may break the $CI_JWT_TOKEN job variable:
-- https://gitlab.com/gitlab-org/gitlab/-/issues/325965
UPDATE application_settings SET encrypted_ci_jwt_signing_key = null;
-- Clear runner tokens
UPDATE ci_runners SET token = null, token_encrypted = null;
```
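Runners must then be registered again. A minimal example, assuming the registration-token flow, with placeholder values for the URL and token:

```shell
sudo gitlab-runner register \
  --url "https://gitlab.example.com/" \
  --registration-token "<new-registration-token>"
```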
### Reset pending pipeline jobs
1. Enter the database console:
For the Linux package (Omnibus):
```shell
sudo gitlab-rails dbconsole --database main
```
For self-compiled installations:
```shell
sudo -u git -H bundle exec rails dbconsole -e production --database main
```
1. Clear all the tokens for pending jobs:
For GitLab 15.3 and earlier:
```sql
-- Clear build tokens
UPDATE ci_builds SET token = null, token_encrypted = null;
```
For GitLab 15.4 and later:
```sql
-- Clear build tokens
UPDATE ci_builds SET token_encrypted = null;
```
A similar strategy can be employed for the remaining features. By removing the
data that can't be decrypted, GitLab can be returned to operation, and the
lost data can be manually replaced.
### Fix integrations and webhooks
If you've lost your secrets, the [integrations settings](../../user/project/integrations/_index.md)
and [webhooks settings](../../user/project/integrations/webhooks.md) pages might display `500` error messages. Lost secrets might also produce `500` errors when you try to access a repository in a project with a previously configured integration or webhook.
The fix is to truncate the affected tables (those containing encrypted columns).
This deletes all your configured integrations, webhooks, and related metadata.
You should verify that the secrets are the root cause before deleting any data.
1. Enter the database console:
For the Linux package (Omnibus):
```shell
sudo gitlab-rails dbconsole --database main
```
For self-compiled installations:
```shell
sudo -u git -H bundle exec rails dbconsole -e production --database main
```
1. Truncate the following tables:
```sql
-- truncate web_hooks table
TRUNCATE integrations, chat_names, issue_tracker_data, jira_tracker_data, slack_integrations, web_hooks, zentao_tracker_data, web_hook_logs CASCADE;
```
## Container registry is not restored
If you restore a backup from an environment that uses the [container registry](../../user/packages/container_registry/_index.md)
to a newly installed environment where the container registry is not enabled, the container registry is not restored.
To also restore the container registry, you need to [enable it](../packages/container_registry.md#enable-the-container-registry) in the new
environment before you restore the backup.
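For the Linux package, enabling the registry usually means setting its URL in `/etc/gitlab/gitlab.rb` and then running `sudo gitlab-ctl reconfigure`. The hostname below is a placeholder:

```ruby
# /etc/gitlab/gitlab.rb
registry_external_url 'https://registry.gitlab.example.com'
```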
## Container registry push failures after restoring from a backup
If you use the [container registry](../../user/packages/container_registry/_index.md),
pushes to the registry may fail after you restore your backup, including the
registry data, on a Linux package (Omnibus) instance.
These failures mention permission issues in the registry logs, similar to:
```plaintext
level=error
msg="response completed with error"
err.code=unknown
err.detail="filesystem: mkdir /var/opt/gitlab/gitlab-rails/shared/registry/docker/registry/v2/repositories/...: permission denied"
err.message="unknown error"
```
This issue is caused by the restore running as the unprivileged user `git`,
which is unable to assign the correct ownership to the registry files during
the restore process ([issue #62759](https://gitlab.com/gitlab-org/gitlab-foss/-/issues/62759 "Incorrect permissions on registry filesystem after restore")).
To get your registry working again:
```shell
sudo chown -R registry:registry /var/opt/gitlab/gitlab-rails/shared/registry/docker
```
If you changed the default file system location for the registry, run `chown`
against your custom location, instead of `/var/opt/gitlab/gitlab-rails/shared/registry/docker`.
## Backup fails to complete with Gzip error
When running the backup, you may receive a Gzip error message:
```shell
sudo /opt/gitlab/bin/gitlab-backup create
...
Dumping ...
...
gzip: stdout: Input/output error
Backup failed
```
If this happens, examine the following:
- Confirm there is sufficient disk space for the Gzip operation. It's not uncommon for backups that
use the [default strategy](backup_gitlab.md#backup-strategy-option) to require half the instance size
in free disk space during backup creation.
- If NFS is being used, check if the mount option `timeout` is set. The
  default is `600`, and changing this to a smaller value can result in this error.
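Both conditions can be checked from the shell. For example:

```shell
# Check free disk space; the default backup strategy can need roughly half
# the instance size in free space while the backup is created.
df -h
# If backups are written to NFS, inspect the mount options for a timeout value.
mount | grep -i nfs || true
```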
## Backup fails with `File name too long` error
During backup, you can get the `File name too long` error ([issue #354984](https://gitlab.com/gitlab-org/gitlab/-/issues/354984)). For example:
```plaintext
Problem: <class 'OSError: [Errno 36] File name too long:
```
This error stops the backup script from completing. To fix it, you must truncate the filenames that cause the error. A maximum of 246 characters, including the file extension, is permitted.
{{< alert type="warning" >}}
The steps in this section can potentially lead to data loss. All steps must be followed strictly in the order given.
Consider opening a [Support Request](https://support.gitlab.com/hc/en-us/requests/new) if you're a Premium or Ultimate customer.
{{< /alert >}}
Truncating filenames to resolve the error involves:
- Cleaning up remote uploaded files that aren't tracked in the database.
- Truncating the filenames in the database.
- Rerunning the backup task.
### Clean up remote uploaded files
A [known issue](https://gitlab.com/gitlab-org/gitlab-foss/-/issues/45425) caused object store uploads to remain after a parent resource was deleted. This issue was [resolved](https://gitlab.com/gitlab-org/gitlab-foss/-/merge_requests/18698).
To fix these files, you must clean up all remote uploaded files that are in the storage but not tracked in the `uploads` database table.
1. List all the object store upload files that can be moved to a lost and found directory if they don't exist in the GitLab database:
```shell
bundle exec rake gitlab:cleanup:remote_upload_files RAILS_ENV=production
```
1. If you are sure you want to delete these files and remove all non-referenced uploaded files, run:
{{< alert type="warning" >}}
The following action is irreversible.
{{< /alert >}}
```shell
bundle exec rake gitlab:cleanup:remote_upload_files RAILS_ENV=production DRY_RUN=false
```
### Truncate the filenames referenced by the database
You must truncate the filenames referenced by the database that are causing the problem. The filenames referenced by the database are stored:
- In the `uploads` table.
- In references from other database tables and columns.
- On the file system.
Truncate the filenames in the `uploads` table:
1. Enter the database console:
For the Linux package (Omnibus):
```shell
sudo gitlab-rails dbconsole --database main
```
For self-compiled installations:
```shell
sudo -u git -H bundle exec rails dbconsole -e production --database main
```
1. Search the `uploads` table for filenames longer than 246 characters:
The following query selects the `uploads` records with filenames longer than 246 characters, in batches (rows 0 to 10000 here). Batching improves performance on large GitLab instances with tables that have thousands of records.
```sql
CREATE TEMP TABLE uploads_with_long_filenames AS
SELECT ROW_NUMBER() OVER(ORDER BY id) row_id, id, path
FROM uploads AS u
WHERE LENGTH((regexp_match(u.path, '[^\\/:*?"<>|\r\n]+$'))[1]) > 246;
CREATE INDEX ON uploads_with_long_filenames(row_id);
SELECT
u.id,
u.path,
-- Current filename
(regexp_match(u.path, '[^\\/:*?"<>|\r\n]+$'))[1] AS current_filename,
-- New filename
CONCAT(
LEFT(SPLIT_PART((regexp_match(u.path, '[^\\/:*?"<>|\r\n]+$'))[1], '.', 1), 242),
COALESCE(SUBSTRING((regexp_match(u.path, '[^\\/:*?"<>|\r\n]+$'))[1] FROM '\.(?:.(?!\.))+$'))
) AS new_filename,
-- New path
CONCAT(
COALESCE((regexp_match(u.path, '(.*\/).*'))[1], ''),
CONCAT(
LEFT(SPLIT_PART((regexp_match(u.path, '[^\\/:*?"<>|\r\n]+$'))[1], '.', 1), 242),
COALESCE(SUBSTRING((regexp_match(u.path, '[^\\/:*?"<>|\r\n]+$'))[1] FROM '\.(?:.(?!\.))+$'))
)
) AS new_path
FROM uploads_with_long_filenames AS u
WHERE u.row_id > 0 AND u.row_id <= 10000;
```
Output example:
```postgresql
-[ RECORD 1 ]----+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
id | 34
path | public/@hashed/loremipsumdolorsitametconsecteturadipiscingelitseddoeiusmodtemporincididuntutlaboreetdoloremagnaaliquaauctorelitsedvulputatemisitloremipsumdolorsitametconsecteturadipiscingelitseddoeiusmodtemporincididuntutlaboreetdoloremagnaaliquaauctorelitsedvulputatemisit.txt
current_filename | loremipsumdolorsitametconsecteturadipiscingelitseddoeiusmodtemporincididuntutlaboreetdoloremagnaaliquaauctorelitsedvulputatemisitloremipsumdolorsitametconsecteturadipiscingelitseddoeiusmodtemporincididuntutlaboreetdoloremagnaaliquaauctorelitsedvulputatemisit.txt
new_filename | loremipsumdolorsitametconsecteturadipiscingelitseddoeiusmodtemporincididuntutlaboreetdoloremagnaaliquaauctorelitsedvulputatemisitloremipsumdolorsitametconsecteturadipiscingelitseddoeiusmodtemporincididuntutlaboreetdoloremagnaaliquaauctorelits.txt
new_path | public/@hashed/loremipsumdolorsitametconsecteturadipiscingelitseddoeiusmodtemporincididuntutlaboreetdoloremagnaaliquaauctorelitsedvulputatemisitloremipsumdolorsitametconsecteturadipiscingelitseddoeiusmodtemporincididuntutlaboreetdoloremagnaaliquaauctorelits.txt
```
Where:
- `current_filename`: a filename that is more than 246 characters long.
- `new_filename`: a filename that has been truncated to 246 characters maximum.
- `new_path`: new path considering the `new_filename` (truncated).
After you validate the batch results, advance the batch window (`row_id`) to the next range of 10000 rows (for example, 10000 to 20000). Repeat this process until you reach the last record in the `uploads` table.
1. Rename the files found in the `uploads` table from long filenames to new truncated filenames. The following query rolls back the update so you can check the results safely in a transaction wrapper:
```sql
CREATE TEMP TABLE uploads_with_long_filenames AS
SELECT ROW_NUMBER() OVER(ORDER BY id) row_id, path, id
FROM uploads AS u
WHERE LENGTH((regexp_match(u.path, '[^\\/:*?"<>|\r\n]+$'))[1]) > 246;
CREATE INDEX ON uploads_with_long_filenames(row_id);
BEGIN;
WITH updated_uploads AS (
UPDATE uploads
SET
path =
CONCAT(
COALESCE((regexp_match(updatable_uploads.path, '(.*\/).*'))[1], ''),
CONCAT(
LEFT(SPLIT_PART((regexp_match(updatable_uploads.path, '[^\\/:*?"<>|\r\n]+$'))[1], '.', 1), 242),
COALESCE(SUBSTRING((regexp_match(updatable_uploads.path, '[^\\/:*?"<>|\r\n]+$'))[1] FROM '\.(?:.(?!\.))+$'))
)
)
FROM
uploads_with_long_filenames AS updatable_uploads
WHERE
uploads.id = updatable_uploads.id
AND updatable_uploads.row_id > 0 AND updatable_uploads.row_id <= 10000
RETURNING uploads.*
)
SELECT id, path FROM updated_uploads;
ROLLBACK;
```
After you validate the batch update results, advance the batch window (`row_id`) to the next range of 10000 rows (for example, 10000 to 20000). Repeat this process until you reach the last record in the `uploads` table.
1. Validate that the new filenames from the previous query are the expected ones. If you are sure you want to truncate the records found in the previous step to 246 characters, run the following:
{{< alert type="warning" >}}
The following action is irreversible.
{{< /alert >}}
```sql
CREATE TEMP TABLE uploads_with_long_filenames AS
SELECT ROW_NUMBER() OVER(ORDER BY id) row_id, path, id
FROM uploads AS u
WHERE LENGTH((regexp_match(u.path, '[^\\/:*?"<>|\r\n]+$'))[1]) > 246;
CREATE INDEX ON uploads_with_long_filenames(row_id);
UPDATE uploads
SET
path =
CONCAT(
COALESCE((regexp_match(updatable_uploads.path, '(.*\/).*'))[1], ''),
CONCAT(
LEFT(SPLIT_PART((regexp_match(updatable_uploads.path, '[^\\/:*?"<>|\r\n]+$'))[1], '.', 1), 242),
COALESCE(SUBSTRING((regexp_match(updatable_uploads.path, '[^\\/:*?"<>|\r\n]+$'))[1] FROM '\.(?:.(?!\.))+$'))
)
)
FROM
uploads_with_long_filenames AS updatable_uploads
WHERE
uploads.id = updatable_uploads.id
AND updatable_uploads.row_id > 0 AND updatable_uploads.row_id <= 10000;
```
After you finish the batch update, advance the batch window (`updatable_uploads.row_id`) to the next range of 10000 rows (for example, 10000 to 20000). Repeat this process until you reach the last record in the `uploads` table.
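The queries above all apply the same rule: keep at most 242 characters of the base name, then re-append the extension, so the result stays within the 246-character limit. As an illustration only (not part of the procedure), the same rule in shell:

```shell
# Illustration of the truncation rule used by the SQL queries: keep at most
# 242 characters of the base name, then re-append the file extension.
# Assumes the filename has an extension, as the queries above do.
truncate_name() {
  name="$1"
  base="${name%.*}"
  ext="${name##*.}"
  printf '%s.%s\n' "$(printf '%s' "$base" | cut -c1-242)" "$ext"
}

truncate_name "$(printf 'a%.0s' $(seq 1 300)).txt"   # 242 'a' characters plus '.txt'
```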
Truncate the filenames in the references found:
1. Check if those records are referenced somewhere. One way to do this is to dump the database and search for the parent directory name and filename:
1. To dump your database, you can use the following command as an example:
```shell
pg_dump -h /var/opt/gitlab/postgresql/ -d gitlabhq_production > gitlab-dump.tmp
```
1. Then search for the references by using the `grep` command. Combining the parent directory with the filename narrows the results. For example:
```shell
grep public/alongfilenamehere.txt gitlab-dump.tmp
```
1. Replace those long filenames using the new filenames obtained from querying the `uploads` table.
Truncate the filenames on the file system. You must manually rename the files in your file system to the new filenames obtained from querying the `uploads` table.
### Re-run the backup task
After following all the previous steps, re-run the backup task.
## Restoring database backup fails when `pg_stat_statements` was previously enabled
The GitLab backup of the PostgreSQL database includes all SQL statements required to enable extensions that were
previously enabled in the database.
The `pg_stat_statements` extension can only be enabled or disabled by a PostgreSQL user with `superuser` role.
As the restore process uses a database user with limited permissions, it can't execute the following SQL statements:
```sql
DROP EXTENSION IF EXISTS pg_stat_statements;
CREATE EXTENSION IF NOT EXISTS pg_stat_statements WITH SCHEMA public;
```
When trying to restore the backup in a PostgreSQL instance that doesn't have the `pg_stat_statements` extension,
the following error message is displayed:
```plaintext
ERROR: permission denied to create extension "pg_stat_statements"
HINT: Must be superuser to create this extension.
ERROR: extension "pg_stat_statements" does not exist
```
When trying to restore in an instance that has the `pg_stat_statements` extension enabled, the cleanup step
fails with an error message similar to the following:
```plaintext
rake aborted!
ActiveRecord::StatementInvalid: PG::InsufficientPrivilege: ERROR: must be owner of view pg_stat_statements
/opt/gitlab/embedded/service/gitlab-rails/lib/tasks/gitlab/db.rake:42:in `block (4 levels) in <top (required)>'
/opt/gitlab/embedded/service/gitlab-rails/lib/tasks/gitlab/db.rake:41:in `each'
/opt/gitlab/embedded/service/gitlab-rails/lib/tasks/gitlab/db.rake:41:in `block (3 levels) in <top (required)>'
/opt/gitlab/embedded/service/gitlab-rails/lib/tasks/gitlab/backup.rake:71:in `block (3 levels) in <top (required)>'
/opt/gitlab/embedded/bin/bundle:23:in `load'
/opt/gitlab/embedded/bin/bundle:23:in `<main>'
Caused by:
PG::InsufficientPrivilege: ERROR: must be owner of view pg_stat_statements
/opt/gitlab/embedded/service/gitlab-rails/lib/tasks/gitlab/db.rake:42:in `block (4 levels) in <top (required)>'
/opt/gitlab/embedded/service/gitlab-rails/lib/tasks/gitlab/db.rake:41:in `each'
/opt/gitlab/embedded/service/gitlab-rails/lib/tasks/gitlab/db.rake:41:in `block (3 levels) in <top (required)>'
/opt/gitlab/embedded/service/gitlab-rails/lib/tasks/gitlab/backup.rake:71:in `block (3 levels) in <top (required)>'
/opt/gitlab/embedded/bin/bundle:23:in `load'
/opt/gitlab/embedded/bin/bundle:23:in `<main>'
Tasks: TOP => gitlab:db:drop_tables
(See full trace by running task with --trace)
```
### Prevent the dump file from including `pg_stat_statements`
To prevent the inclusion of the extension in the PostgreSQL dump file that is part of the backup bundle,
enable the extension in any schema except the `public` schema:
```sql
CREATE SCHEMA adm;
CREATE EXTENSION pg_stat_statements SCHEMA adm;
```
If the extension was previously enabled in the `public` schema, move it to a new one:
```sql
CREATE SCHEMA adm;
ALTER EXTENSION pg_stat_statements SET SCHEMA adm;
```
To query the `pg_stat_statements` data after changing the schema, prefix the view name with the new schema:
```sql
SELECT * FROM adm.pg_stat_statements LIMIT 0;
```
To keep compatibility with third-party monitoring solutions that expect the extension to be enabled in the `public` schema,
include the new schema in the `search_path`:
```sql
SET search_path TO public, adm;
```
### Fix an existing dump file to remove references to `pg_stat_statements`
To fix an existing backup file, make the following changes:
1. Extract the `db/database.sql.gz` file from the backup archive.
1. Decompress the file, or use an editor that can handle it while compressed.
1. Remove the following lines, or similar ones:
```sql
CREATE EXTENSION IF NOT EXISTS pg_stat_statements WITH SCHEMA public;
```
```sql
COMMENT ON EXTENSION pg_stat_statements IS 'track planning and execution statistics of all SQL statements executed';
```
1. Save the changes and recompress the file.
1. Update the backup file with the modified `db/database.sql.gz`.
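The decompress-edit-recompress steps can be scripted. The following is a minimal sketch that assumes GNU `sed` and `gzip`; the fixture file here stands in for the real `db/database.sql.gz` you extracted from your backup archive:

```shell
set -eu

# Fixture standing in for the db/database.sql.gz file extracted from the backup archive
mkdir -p db
printf '%s\n' \
  'CREATE EXTENSION IF NOT EXISTS pg_stat_statements WITH SCHEMA public;' \
  "COMMENT ON EXTENSION pg_stat_statements IS 'track planning and execution statistics of all SQL statements executed';" \
  'CREATE TABLE projects (id integer);' | gzip -c > db/database.sql.gz

# Decompress, drop every line that references the extension, and recompress
gunzip db/database.sql.gz
sed -i '/pg_stat_statements/d' db/database.sql
gzip db/database.sql
```

After recompressing, update the backup archive with the modified `db/database.sql.gz` before attempting the restore.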
---
stage: Data Access
group: Durability
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Backup archive process
---
When you run the [backup command](backup_gitlab.md#backup-command), a backup script creates a backup archive file to store your GitLab data.
To create the archive file, the backup script:
1. Extracts the previous backup archive file, when you're doing an incremental backup.
1. Updates or generates the backup archive file.
1. Runs all backup sub-tasks to:
- [Back up the database](#back-up-the-database).
- [Back up Git repositories](#back-up-git-repositories).
- [Back up files](#back-up-files).
1. Archives the backup staging area into a `tar` file.
1. Uploads the new backup archive to the object storage, if [configured](backup_gitlab.md#upload-backups-to-a-remote-cloud-storage).
1. Cleans up the archived [backup staging directory](#backup-staging-directory) files.
## Back up the database
To back up the database, the `db` sub-task:
1. Uses `pg_dump` to create an [SQL dump](https://www.postgresql.org/docs/16/backup-dump.html).
1. Pipes the output of `pg_dump` through `gzip` and creates a compressed SQL file.
1. Saves the file to the [backup staging directory](#backup-staging-directory).
## Back up Git repositories
To back up Git repositories, the `repositories` sub-task:
1. Informs `gitaly-backup` which repositories to back up.
1. Runs `gitaly-backup` to:
- Call a series of Remote Procedure Calls (RPCs) on Gitaly.
- Collect the backup data for each repository.
1. Streams the collected data into a directory structure in the [backup staging directory](#backup-staging-directory).
The following diagram illustrates the process:
```mermaid
%%{init: { "fontFamily": "GitLab Sans" }}%%
sequenceDiagram
box Backup host
participant Repositories sub-task
participant gitaly-backup
end
Repositories sub-task->>+gitaly-backup: List of repositories
loop Each repository
gitaly-backup->>+Gitaly: ListRefs request
Gitaly->>-gitaly-backup: List of Git references
gitaly-backup->>+Gitaly: CreateBundleFromRefList request
Gitaly->>-gitaly-backup: Git bundle file
gitaly-backup->>+Gitaly: GetCustomHooks request
Gitaly->>-gitaly-backup: Custom hooks archive
end
gitaly-backup->>-Repositories sub-task: Success/failure
```
Storages configured for Gitaly Cluster (Praefect) are backed up in the same way as standalone Gitaly instances:
- When Gitaly Cluster (Praefect) receives the RPC calls from `gitaly-backup`, it rebuilds its own database.
- There is no need to back up the Gitaly Cluster (Praefect) database separately.
- Each repository is backed up only once, regardless of the replication factor, because backups operate through RPCs.
### Server-side backups
Server-side repository backups are an efficient way to back up Git repositories.
The advantages of this method are:
- Data is not transmitted through RPCs from Gitaly.
- Server-side backups require less network transfer.
- Disk storage on the machine running the backup Rake task is not required.
To back up Git repositories on the server side, the `repositories` sub-task:
1. Runs `gitaly-backup` to make a single RPC call for each repository.
1. Triggers the Gitaly node storing the physical repository to upload backup data to object storage.
1. Links the backups stored on object storage to the created backup archive using a [backup ID](#backup-id).
The following diagram illustrates the process:
```mermaid
%%{init: { "fontFamily": "GitLab Sans" }}%%
sequenceDiagram
box Backup host
participant Repositories sub-task
participant gitaly-backup
end
Repositories sub-task->>+gitaly-backup: List of repositories
loop Each repository
gitaly-backup->>+Gitaly: BackupRepository request
Gitaly->>+Object-storage: Git references file
Object-storage->>-Gitaly: Success/failure
Gitaly->>+Object-storage: Git bundle file
Object-storage->>-Gitaly: Success/failure
Gitaly->>+Object-storage: Custom hooks archive
Object-storage->>-Gitaly: Success/failure
Gitaly->>+Object-storage: Backup manifest file
Object-storage->>-Gitaly: Success/failure
Gitaly->>-gitaly-backup: Success/failure
end
gitaly-backup->>-Repositories sub-task: Success/failure
```
## Back up files
The following sub-tasks back up files:
- `uploads`: Attachments
- `builds`: CI/CD job output logs
- `artifacts`: CI/CD job artifacts
- `pages`: Page content
- `lfs`: LFS objects
- `terraform_state`: Terraform states
- `registry`: Container registry images
- `packages`: Packages
- `ci_secure_files`: Project-level secure files
- `external_diffs`: Merge request diffs (when stored externally)
Each sub-task identifies a set of files in a task-specific directory and:
1. Creates an archive of the identified files using the `tar` utility.
1. Compresses the archive through `gzip` without saving to disk.
1. Saves the `tar` file to the [backup staging directory](#backup-staging-directory).
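The archive-and-compress step can be sketched as a single pipeline. Directory names below are illustrative stand-ins for a task-specific directory and the staging directory:

```shell
set -eu

# Stand-ins for a task-specific directory (uploads) and the staging directory
mkdir -p demo-staging uploads
echo 'demo' > uploads/avatar.png

# tar streams straight into gzip, so no uncompressed archive is written to disk
tar -cf - uploads | gzip -c > demo-staging/uploads.tar.gz
tar -tzf demo-staging/uploads.tar.gz
```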
Because backups are created from live instances, files might be modified during the backup process.
In this case, an [alternate strategy](backup_gitlab.md#backup-strategy-option) can be used to back up files. The `rsync` utility creates a copy of the
files to back up and passes them to `tar` for archiving.
{{< alert type="note" >}}
If you are using this strategy, the machine running the backup Rake task must have
sufficient storage for both the copied files and the compressed archive.
{{< /alert >}}
## Backup ID
Backup IDs are unique identifiers for backup archives. These IDs are crucial when you need to restore
GitLab and multiple backup archives are available.
Backup archives are saved in a directory specified by the `backup_path` setting in the `config/gitlab.yml` file.
The default location is `/var/opt/gitlab/backups`.
The backup ID is composed of:
- Timestamp of backup creation
- Date (`YYYY_MM_DD`)
- GitLab version
- GitLab edition
The following is an example backup ID: `1493107454_2018_04_25_10.6.4-ce`
## Backup filename
By default, the filename follows the `<backup-id>_gitlab_backup.tar` structure. For example, `1493107454_2018_04_25_10.6.4-ce_gitlab_backup.tar`.
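Putting the two conventions together, a backup ID and archive filename can be composed like this (assumes GNU `date`; the version and edition values are illustrative):

```shell
TIMESTAMP=$(date +%s)
DATE=$(date -u -d "@${TIMESTAMP}" +%Y_%m_%d)

# <timestamp>_<YYYY_MM_DD>_<version>-<edition>
BACKUP_ID="${TIMESTAMP}_${DATE}_18.0.0-ce"
BACKUP_FILE="${BACKUP_ID}_gitlab_backup.tar"
echo "${BACKUP_FILE}"
```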
## Backup information file
The backup information file, `backup_information.yml`, saves all the backup inputs that are not included
in the backup. The file is saved in the [backup staging directory](#backup-staging-directory).
Sub-tasks use this file to determine how to restore and link data in the backup with external
services like [server-side repository backups](#server-side-backups).
The backup information file includes the following:
- The time the backup was created.
- The GitLab version that generated the backup.
- Other specified options. For example, skipped sub-tasks.
## Backup staging directory
The backup staging directory is a temporary storage location used during the backup and restore processes.
This directory:
- Stores backup artifacts before the GitLab backup archive is created.
- Holds extracted backup archives before a backup is restored or an incremental backup is created.
The backup staging directory is the same directory where completed backup archives are created.
When creating an untarred backup, the backup artifacts remain in this directory, and no archive is created.
The following is an example of a backup staging directory that contains an untarred backup:
```plaintext
backups/
├── 1701728344_2023_12_04_16.7.0-pre_gitlab_backup.tar
├── 1701728447_2023_12_04_16.7.0-pre_gitlab_backup.tar
├── artifacts.tar.gz
├── backup_information.yml
├── builds.tar.gz
├── ci_secure_files.tar.gz
├── db
│ ├── ci_database.sql.gz
│ └── database.sql.gz
├── lfs.tar.gz
├── packages.tar.gz
├── pages.tar.gz
├── repositories
│ ├── manifests/
│ ├── @hashed/
│ └── @snippets/
├── terraform_state.tar.gz
└── uploads.tar.gz
```
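The final archive step gathers the staging artifacts into one tar file. A minimal sketch, using a fixture directory in place of the real staging directory:

```shell
set -eu

# Fixture staging directory with a few of the artifacts listed above
mkdir -p backups/db
: > backups/backup_information.yml
: > backups/db/database.sql.gz
: > backups/uploads.tar.gz

# Archive the staging area into a single backup archive
ARCHIVE="$(date +%s)_$(date -u +%Y_%m_%d)_18.0.0-ce_gitlab_backup.tar"
tar -cf "${ARCHIVE}" -C backups .
tar -tf "${ARCHIVE}"
```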
---
stage: Data Access
group: Durability
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Back up GitLab
---
{{< details >}}
- Tier: Free, Premium, Ultimate
- Offering: GitLab Self-Managed
{{< /details >}}
GitLab backups protect your data and help with disaster recovery.
The optimal backup strategy depends on your GitLab deployment configuration,
data volume, and storage locations. These factors determine which backup
methods to use,
where to store backups, and how to structure your backup schedule.
For larger GitLab instances, alternative backup strategies include:
- Incremental backups.
- Backups of specific repositories.
- Backups across multiple storage locations.
## Data included in a backup
GitLab provides a command-line interface to back up your entire instance.
By default, the backup creates an archive in a single compressed tar file.
This file includes:
- Database data and configuration
- Account and group settings
- CI/CD artifacts and job logs
- Git repositories and LFS objects
- External merge request diffs ([introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/154914) in GitLab 17.1)
- Package registry data and container registry images
- Project and [group](../../user/project/wiki/group.md) wikis
- Project-level attachments and uploads
- Secure Files ([introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/121142) in GitLab 16.1)
- GitLab Pages content
- Terraform states
- Snippets
## Data not included in a backup
- [Mattermost data](../../integration/mattermost/_index.md#back-up-gitlab-mattermost)
- Redis (and thus Sidekiq jobs)
- [Object storage](#object-storage) on Linux package (Omnibus) / Docker / Self-compiled installations
- [Global server hooks](../server_hooks.md#create-global-server-hooks-for-all-repositories)
- [File hooks](../file_hooks.md)
- GitLab configuration files (`/etc/gitlab`)
- TLS- and SSH-related keys and certificates
- Other system files
{{< alert type="warning" >}}
You are strongly advised to read about [storing configuration files](#storing-configuration-files) and back them up separately.
{{< /alert >}}
## Simple backup procedure
As a rough guideline, if you are using a [1k reference architecture](../reference_architectures/1k_users.md) with less than 100 GB of data, then follow these steps:
1. Run the [backup command](#backup-command).
1. Back up [object storage](#object-storage), if applicable.
1. Manually back up [configuration files](#storing-configuration-files).
## Scaling backups
As the volume of GitLab data grows, the [backup command](#backup-command) takes longer to execute. [Backup options](#backup-options) such as [back up Git repositories concurrently](#back-up-git-repositories-concurrently) and [incremental repository backups](#incremental-repository-backups) can help to reduce execution time. At some point, the backup command becomes impractical by itself. For example, it can take 24 hours or more.
Starting with GitLab 18.0, repository backup performance has been significantly improved for repositories with large numbers of references (branches, tags). This improvement can reduce backup times from hours to minutes for affected repositories. No configuration changes are required to benefit from this enhancement. For technical details, see our [blog post about decreasing GitLab repository backup times](https://about.gitlab.com/blog/2025/06/05/how-we-decreased-gitlab-repo-backup-times-from-48-hours-to-41-minutes/).
In some cases, architecture changes may be warranted to allow backups to scale. If you are using a GitLab reference architecture, see [Back up and restore large reference architectures](backup_large_reference_architectures.md).
For more information, see [alternative backup strategies](#alternative-backup-strategies).
## What data needs to be backed up?
- [PostgreSQL databases](#postgresql-databases)
- [Git repositories](#git-repositories)
- [Blobs](#blobs)
- [Container registry](#container-registry)
- [Configuration files](#storing-configuration-files)
- [Other data](#other-data)
### PostgreSQL databases
In the simplest case, GitLab has one PostgreSQL database in one PostgreSQL server on the same VM as all other GitLab services. But depending on configuration, GitLab may use multiple PostgreSQL databases in multiple PostgreSQL servers.
In general, this data is the single source of truth for most user-generated content in the Web interface, such as issue and merge request content, comments, permissions, and credentials.
PostgreSQL also holds some cached data like HTML-rendered Markdown, and by default, merge request diffs.
However, merge request diffs can also be configured to be offloaded to the file system or object storage, see [Blobs](#blobs).
Gitaly Cluster (Praefect) uses a PostgreSQL database as a single source of truth to manage its Gitaly nodes.
A common PostgreSQL utility, [`pg_dump`](https://www.postgresql.org/docs/16/app-pgdump.html), produces a backup file which can be used to restore a PostgreSQL database. The [backup command](#backup-command) uses this utility under the hood.
Unfortunately, the larger the database, the longer it takes `pg_dump` to execute. Depending on your situation, the duration becomes impractical at some point (days, for example). If your database is over 100 GB, `pg_dump`, and by extension the [backup command](#backup-command), is likely not usable. For more information, see [alternative backup strategies](#alternative-backup-strategies).
### Git repositories
A GitLab instance can have one or more repository shards. Each shard is a Gitaly instance or Gitaly Cluster (Praefect)
that is responsible for allowing access and operations on the locally stored Git repositories. Gitaly can run
on a machine:
- With a single disk.
- With multiple disks mounted as a single mount-point (like with a RAID array).
- Using LVM.
Each project can have up to 3 different repositories:
- A project repository, where the source code is stored.
- A wiki repository, where the wiki content is stored.
- A design repository, where design artifacts are indexed (assets are actually in LFS).
All three live in the same shard and share the same base name, with a `-wiki` or `-design` suffix
for the wiki and design repositories.
Personal and project snippets, and group wiki content, are stored in Git repositories.
Project forks are deduplicated on a live GitLab site using pool repositories.
The [backup command](#backup-command) produces a Git bundle for each repository and tars them all up. This duplicates pool repository data into every fork. In [our testing](https://gitlab.com/gitlab-org/gitlab/-/issues/396343), 100 GB of Git repositories took a little over 2 hours to back up and upload to S3. At around 400 GB of Git data, the backup command is likely not viable for regular backups. For more information, see [alternative backup strategies](#alternative-backup-strategies).
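A Git bundle of the kind the backup produces can be created with plain Git. The repository name below is hypothetical:

```shell
set -eu

# Hypothetical repository standing in for a project repository on a shard
git init -q demo-project
git -C demo-project -c user.name=demo -c user.email=demo@example.com \
  commit -q --allow-empty -m 'initial commit'

# A bundle is a single file containing the refs and objects of the repository
git -C demo-project bundle create ../demo-project.bundle --all
git -C demo-project bundle verify ../demo-project.bundle
```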
### Blobs
GitLab stores blobs (or files) such as issue attachments or LFS objects into either:
- The file system in a specific location.
- An [Object Storage](../object_storage.md) solution. Object Storage solutions can be:
- Cloud based like Amazon S3 and Google Cloud Storage.
- Hosted by you (like MinIO).
- A Storage Appliance that exposes an Object Storage-compatible API.
#### Object storage
The [backup command](#backup-command) doesn't back up blobs that aren't stored on the file system. If you're using [object storage](../object_storage.md), be sure to enable backups with your object storage provider. For example, see:
- [Amazon S3 backups](https://docs.aws.amazon.com/aws-backup/latest/devguide/s3-backups.html)
- [Google Cloud Storage Transfer Service](https://cloud.google.com/storage-transfer-service) and [Google Cloud Storage Object Versioning](https://cloud.google.com/storage/docs/object-versioning)
### Container registry
[GitLab container registry](../packages/container_registry.md) storage can be configured in either:
- The file system in a specific location.
- An [Object Storage](../object_storage.md) solution. Object Storage solutions can be:
- Cloud based like Amazon S3 and Google Cloud Storage.
- Hosted by you (like MinIO).
- A Storage Appliance that exposes an Object Storage-compatible API.
The backup command does not back up registry data when it is stored in object storage.
### Storing configuration files
{{< alert type="warning" >}}
The backup Rake task GitLab provides does not store your configuration files. The primary reason for this is that your database contains items including encrypted information for two-factor authentication and the CI/CD secure variables. Storing encrypted information in the same location as its key defeats the purpose of using encryption in the first place. For example, the secrets file contains your database encryption key. If you lose it, then the GitLab application will not be able to decrypt any encrypted values in the database.
{{< /alert >}}
{{< alert type="warning" >}}
The secrets file may change after upgrades.
{{< /alert >}}
You should back up the configuration directory. At the very minimum, you must back up:
{{< tabs >}}
{{< tab title="Linux package" >}}
- `/etc/gitlab/gitlab-secrets.json`
- `/etc/gitlab/gitlab.rb`
For more information, see [Backup and restore Linux package (Omnibus) configuration](https://docs.gitlab.com/omnibus/settings/backups.html#backup-and-restore-omnibus-gitlab-configuration).
{{< /tab >}}
{{< tab title="Self-compiled" >}}
- `/home/git/gitlab/config/secrets.yml`
- `/home/git/gitlab/config/gitlab.yml`
{{< /tab >}}
{{< tab title="Docker" >}}
- Back up the volume where the configuration files are stored. If you created
the GitLab container according to the documentation, it should be in the
`/srv/gitlab/config` directory.
{{< /tab >}}
{{< tab title="GitLab Helm chart" >}}
- Follow the [Back up the secrets](https://docs.gitlab.com/charts/backup-restore/backup.html#back-up-the-secrets)
instructions.
{{< /tab >}}
{{< /tabs >}}
You may also want to back up any TLS keys and certificates (`/etc/gitlab/ssl`, `/etc/gitlab/trusted-certs`), and your
[SSH host keys](https://superuser.com/questions/532040/copy-ssh-keys-from-one-server-to-another-server/532079#532079)
to avoid man-in-the-middle attack warnings if you have to perform a full machine restore.
In the unlikely event that the secrets file is lost, see
[When the secrets file is lost](troubleshooting_backup_gitlab.md#when-the-secrets-file-is-lost).
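The manual configuration backup can be sketched as follows. The directory here is a stand-in for `/etc/gitlab`; on a real node you would run `tar` with `sudo` against the real path and store the archive separately from your data backups:

```shell
set -eu

# Stand-in for /etc/gitlab on a Linux package installation
mkdir -p etc-gitlab/trusted-certs
: > etc-gitlab/gitlab.rb
: > etc-gitlab/gitlab-secrets.json

# Archive the configuration directory
CONFIG_ARCHIVE="gitlab-config-$(date -u +%Y_%m_%d).tar.gz"
tar -czf "${CONFIG_ARCHIVE}" etc-gitlab
```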
### Other data
GitLab uses Redis both as a cache store and to hold persistent data for our background jobs system, Sidekiq. The provided [backup command](#backup-command) does not back up Redis data. This means that in order to take a consistent backup with the [backup command](#backup-command), there must be no pending or running background jobs. It is possible to [manually back up Redis](https://redis.io/docs/latest/operate/oss_and_stack/management/persistence/#backing-up-redis-data).
Elasticsearch is an optional database for advanced search. It improves search
of both source code and user-generated content in issues, merge requests, and discussions. The [backup command](#backup-command) does not back up Elasticsearch data. Elasticsearch data can be regenerated from PostgreSQL data after a restore. It is possible to [manually back up Elasticsearch](https://www.elastic.co/guide/en/elasticsearch/reference/current/snapshot-restore.html).
### Requirements
To be able to back up and restore, ensure that Rsync is installed on your
system. If you installed GitLab:
- Using the Linux package, Rsync is already installed.
- Using self-compiled, check if `rsync` is installed. If Rsync is not installed, install it. For example:
```shell
# Debian/Ubuntu
sudo apt-get install rsync
# RHEL/CentOS
sudo yum install rsync
```
### Backup command
{{< alert type="warning" >}}
The backup command does not back up items in [object storage](#object-storage) on Linux package (Omnibus) / Docker / Self-compiled installations.
{{< /alert >}}
{{< alert type="warning" >}}
The backup command requires [additional parameters](#back-up-and-restore-for-installations-using-pgbouncer) when
your installation is using PgBouncer, for either performance reasons or when using it with a Patroni cluster.
{{< /alert >}}
{{< alert type="warning" >}}
Before GitLab 15.5.0, the backup command did not verify whether another backup was already running, as described in
[issue 362593](https://gitlab.com/gitlab-org/gitlab/-/issues/362593). We strongly recommend
you make sure that all backups are complete before starting a new one.
{{< /alert >}}
{{< alert type="note" >}}
You can only restore a backup to exactly the same version and type (CE/EE)
of GitLab on which it was created.
{{< /alert >}}
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
```shell
sudo gitlab-backup create
```
{{< /tab >}}
{{< tab title="Helm chart (Kubernetes)" >}}
Run the backup task by using `kubectl` to run the `backup-utility` script on the GitLab toolbox pod. For more details, see the [charts backup documentation](https://docs.gitlab.com/charts/backup-restore/backup.html).
{{< /tab >}}
{{< tab title="Docker" >}}
Run the backup from the host.
```shell
docker exec -t <container name> gitlab-backup create
```
{{< /tab >}}
{{< tab title="Self-compiled" >}}
```shell
sudo -u git -H bundle exec rake gitlab:backup:create RAILS_ENV=production
```
{{< /tab >}}
{{< /tabs >}}
If your GitLab deployment has multiple nodes, you need to pick a node for running the backup command. You must ensure that the designated node:
- Is persistent, and not subject to auto-scaling.
- Has the GitLab Rails application already installed. If Puma or Sidekiq is running, then Rails is installed.
- Has sufficient storage and memory to produce the backup file.
Example output:
```plaintext
Dumping database tables:
- Dumping table events... [DONE]
- Dumping table issues... [DONE]
- Dumping table keys... [DONE]
- Dumping table merge_requests... [DONE]
- Dumping table milestones... [DONE]
- Dumping table namespaces... [DONE]
- Dumping table notes... [DONE]
- Dumping table projects... [DONE]
- Dumping table protected_branches... [DONE]
- Dumping table schema_migrations... [DONE]
- Dumping table services... [DONE]
- Dumping table snippets... [DONE]
- Dumping table taggings... [DONE]
- Dumping table tags... [DONE]
- Dumping table users... [DONE]
- Dumping table users_projects... [DONE]
- Dumping table web_hooks... [DONE]
- Dumping table wikis... [DONE]
Dumping repositories:
- Dumping repository abcd... [DONE]
Creating backup archive: <backup-id>_gitlab_backup.tar [DONE]
Deleting tmp directories...[DONE]
Deleting old backups... [SKIPPING]
```
For detailed information about the backup process, see [Backup archive process](backup_archive_process.md).
### Backup options
The command-line tool that GitLab provides to back up your instance accepts additional
options.
#### Backup strategy option
The default backup strategy is to stream data from the respective
data locations to the backup using the Linux commands `tar` and `gzip`. This works
fine in most cases, but can cause problems when data is rapidly changing.

When data changes while `tar` is reading it, the error `file changed as we read it`
may occur and cause the backup process to fail. In that case, you can use
the backup strategy called `copy`. This strategy copies data files
to a temporary location before calling `tar` and `gzip`, avoiding the error.

A side effect is that the backup process requires up to double the disk
space. The process does its best to clean up the temporary files at each stage
so the problem doesn't compound, but it could be a considerable change for large
installations.
To use the `copy` strategy instead of the default streaming strategy, specify
`STRATEGY=copy` in the Rake task command. For example:
```shell
sudo gitlab-backup create STRATEGY=copy
```
#### Backup filename
{{< alert type="warning" >}}
If you use a custom backup filename, you can't
[limit the lifetime of the backups](#limit-backup-lifetime-for-local-files-prune-old-backups).
{{< /alert >}}
Backup files are created with filenames according to [specific defaults](backup_archive_process.md#backup-id). However, you can
override the `<backup-id>` portion of the filename by setting the `BACKUP`
environment variable. For example:
```shell
sudo gitlab-backup create BACKUP=dump
```
The resulting file is named `dump_gitlab_backup.tar`. This is useful for
systems that make use of rsync and incremental backups, and results in
considerably faster transfer speeds.
#### Backup compression
By default, Gzip fast compression is applied during backup of:
- [PostgreSQL database](#postgresql-databases) dumps.
- [blobs](#blobs), for example uploads, job artifacts, external merge request diffs.
The default command is `gzip -c -1`. You can override this command with `COMPRESS_CMD`. Similarly, you can override the decompression command with `DECOMPRESS_CMD`.
Caveats:
- The compression command is used in a pipeline, so your custom command must output to `stdout`.
- If you specify a command that is not packaged with GitLab, then you must install it yourself.
- The resulting filenames still end in `.gz`.
- The default decompression command, used during restore, is `gzip -cd`. Therefore, if you override the compression command to use a format that cannot be decompressed by `gzip -cd`, you must override the decompression command during restore.
- [Do not place environment variables after the backup command](https://gitlab.com/gitlab-org/gitlab/-/issues/433227). For example, `gitlab-backup create COMPRESS_CMD="pigz -c --best"` doesn't work as intended.
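Because the command runs in a pipeline, any compressor that reads `stdin` and writes the compressed stream to `stdout` satisfies the contract. A hypothetical round-trip illustrates it:

```shell
# Data flows through the compressor and back out, as during backup and restore:
printf 'sample data' | gzip -c -1 | gzip -cd
# prints: sample data
```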
##### Default compression: Gzip with fastest method
```shell
gitlab-backup create
```
##### Gzip with slowest method
```shell
COMPRESS_CMD="gzip -c --best" gitlab-backup create
```
If `gzip` was used for backup, then restore does not require any options:
```shell
gitlab-backup restore
```
##### No compression
If your backup destination has built-in automatic compression, then you may wish to skip compression.
The `tee` command pipes `stdin` to `stdout`.
```shell
COMPRESS_CMD=tee gitlab-backup create
```
And on restore:
```shell
DECOMPRESS_CMD=tee gitlab-backup restore
```
##### Parallel compression with `pigz`
{{< alert type="warning" >}}
While we support using `COMPRESS_CMD` and `DECOMPRESS_CMD` to override the default Gzip compression library, we only test the default Gzip library with default options on a routine basis. You are responsible for testing and validating the viability of your backups. We strongly recommend this as best practice in general for backups, whether overriding the compression command or not. If you encounter issues with another compression library, you should revert back to the default. Troubleshooting and fixing errors with alternative libraries are a lower priority for GitLab.
{{< /alert >}}
{{< alert type="note" >}}
`pigz` is not included in the GitLab Linux package. You must install it yourself.
{{< /alert >}}
An example of compressing backups with `pigz` using 4 processes:
```shell
COMPRESS_CMD="pigz --compress --stdout --fast --processes=4" sudo gitlab-backup create
```
Because `pigz` compresses to the `gzip` format, `pigz` is not required to decompress backups that were compressed by `pigz`. However, it can still have a performance benefit over `gzip`. An example of decompressing backups with `pigz`:
```shell
DECOMPRESS_CMD="pigz --decompress --stdout" sudo gitlab-backup restore
```
##### Parallel compression with `zstd`
{{< alert type="warning" >}}
While we support using `COMPRESS_CMD` and `DECOMPRESS_CMD` to override the default Gzip compression library, we only test the default Gzip library with default options on a routine basis. You are responsible for testing and validating the viability of your backups. We strongly recommend this as best practice in general for backups, whether overriding the compression command or not. If you encounter issues with another compression library, you should revert back to the default. Troubleshooting and fixing errors with alternative libraries are a lower priority for GitLab.
{{< /alert >}}
{{< alert type="note" >}}
`zstd` is not included in the GitLab Linux package. You must install it yourself.
{{< /alert >}}
An example of compressing backups with `zstd` using 4 threads:
```shell
COMPRESS_CMD="zstd --compress --stdout --fast --threads=4" sudo gitlab-backup create
```
An example of decompressing backups with `zstd`:
```shell
DECOMPRESS_CMD="zstd --decompress --stdout" sudo gitlab-backup restore
```
#### Confirm archive can be transferred
To ensure the generated archive is transferable by rsync, you can set the `GZIP_RSYNCABLE=yes`
option. This passes the `--rsyncable` option to `gzip`, which is useful only in
combination with setting [the backup filename option](#backup-filename).
The `--rsyncable` option in `gzip` isn't guaranteed to be available
on all distributions. To verify that it's available in your distribution, run
`gzip --help` or consult the man pages.
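For example, a quick hypothetical check of whether your `gzip` build lists the option:

```shell
# Check the help output for the --rsyncable flag:
if gzip --help 2>&1 | grep -q -- '--rsyncable'; then
  echo "gzip supports --rsyncable"
else
  echo "gzip does not support --rsyncable"
fi
```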
```shell
sudo gitlab-backup create BACKUP=dump GZIP_RSYNCABLE=yes
```
#### Excluding specific data from the backup
Depending on your installation type, slightly different components can be skipped on backup creation.
{{< tabs >}}
{{< tab title="Linux package (Omnibus) / Docker / Self-compiled" >}}
<!-- source: https://gitlab.com/gitlab-org/gitlab/-/blob/d693aa7f894c7306a0d20ab6d138a7b95785f2ff/lib/backup/manager.rb#L117-133 -->
- `db` (database)
- `repositories` (Git repositories data, including wikis)
- `uploads` (attachments)
- `builds` (CI job output logs)
- `artifacts` (CI job artifacts)
- `pages` (Pages content)
- `lfs` (LFS objects)
- `terraform_state` (Terraform states)
- `registry` (Container registry images)
- `packages` (Packages)
- `ci_secure_files` (Project-level secure files)
- `external_diffs` (External merge request diffs)
{{< /tab >}}
{{< tab title="Helm chart (Kubernetes)" >}}
<!-- source: https://gitlab.com/gitlab-org/build/CNG/-/blob/068e146db915efcd875414e04403410b71a2e70c/gitlab-toolbox/scripts/bin/backup-utility#L19 -->
- `db` (database)
- `repositories` (Git repositories data, including wikis)
- `uploads` (attachments)
- `artifacts` (CI job artifacts and output logs)
- `pages` (Pages content)
- `lfs` (LFS objects)
- `terraform_state` (Terraform states)
- `registry` (Container registry images)
- `packages` (Package registry)
- `ci_secure_files` (Project-level Secure Files)
- `external_diffs` (Merge request diffs)
{{< /tab >}}
{{< /tabs >}}
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
```shell
sudo gitlab-backup create SKIP=db,uploads
```
{{< /tab >}}
{{< tab title="Helm chart (Kubernetes)" >}}
See [Skipping components](https://docs.gitlab.com/charts/backup-restore/backup.html#skipping-components) in charts backup documentation.
{{< /tab >}}
{{< tab title="Self-compiled" >}}
```shell
sudo -u git -H bundle exec rake gitlab:backup:create SKIP=db,uploads RAILS_ENV=production
```
{{< /tab >}}
{{< /tabs >}}
`SKIP=` is also used to:
- [Skip creation of the tar file](#skipping-tar-creation) (`SKIP=tar`).
- [Skip uploading the backup to remote storage](#skip-uploading-backups-to-remote-storage) (`SKIP=remote`).
#### Skipping tar creation
{{< alert type="note" >}}
It is not possible to skip the tar creation when using [object storage](#upload-backups-to-a-remote-cloud-storage) for backups.
{{< /alert >}}
The last step of creating a backup is the generation of a `.tar` file containing all the parts. In some cases, creating a `.tar` file might be wasted effort or even directly harmful, so you can skip this step by adding `tar` to the `SKIP` environment variable. Example use-cases:
- When the backup is picked up by other backup software.
- To speed up incremental backups by avoiding having to extract the backup every time. (In this case, `PREVIOUS_BACKUP` and `BACKUP` must not be specified, otherwise the specified backup is extracted, but no `.tar` file is generated at the end.)
Adding `tar` to the `SKIP` variable leaves the files and directories containing the
backup in the directory used for the intermediate files. These files are
overwritten when a new backup is created, so you should make sure they are copied
elsewhere, because you can only have one backup on the system.
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
```shell
sudo gitlab-backup create SKIP=tar
```
{{< /tab >}}
{{< tab title="Self-compiled" >}}
```shell
sudo -u git -H bundle exec rake gitlab:backup:create SKIP=tar RAILS_ENV=production
```
{{< /tab >}}
{{< /tabs >}}
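For example, a hypothetical way to preserve the untarred backup before the next run overwrites it (`/var/opt/gitlab/backups` is the Linux package default backup path; the destination is an assumption):

```shell
SRC=/var/opt/gitlab/backups                  # default backup_path (adjust as needed)
DEST=/mnt/archive/gitlab-backup-$(date +%F)  # hypothetical archive destination
mkdir -p "$DEST"
cp -a "$SRC/." "$DEST/"
```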
#### Create server-side repository backups
{{< history >}}
- [Introduced](https://gitlab.com/gitlab-org/gitaly/-/issues/4941) in `gitlab-backup` in GitLab 16.3.
- Server-side support in `gitlab-backup` for restoring a specified backup instead of the latest backup [introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/132188) in GitLab 16.6.
- Server-side support in `gitlab-backup` for creating incremental backups [introduced](https://gitlab.com/gitlab-org/gitaly/-/merge_requests/6475) in GitLab 16.6.
- Server-side support in `backup-utility` [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/438393) in GitLab 17.0.
{{< /history >}}
Instead of storing large repository backups in the backup archive, repository
backups can be configured so that the Gitaly node that hosts each repository is
responsible for creating the backup and streaming it to object storage. This
helps reduce the network resources required to create and restore a backup.
1. [Configure a server-side backup destination in Gitaly](../gitaly/configure_gitaly.md#configure-server-side-backups).
1. Create a backup using the repositories server-side option. See the following examples.
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
```shell
sudo gitlab-backup create REPOSITORIES_SERVER_SIDE=true
```
{{< /tab >}}
{{< tab title="Self-compiled" >}}
```shell
sudo -u git -H bundle exec rake gitlab:backup:create REPOSITORIES_SERVER_SIDE=true
```
{{< /tab >}}
{{< tab title="Helm chart (Kubernetes)" >}}
```shell
kubectl exec <Toolbox pod name> -it -- backup-utility --repositories-server-side
```
When you are using [cron-based backups](https://docs.gitlab.com/charts/backup-restore/backup.html#cron-based-backup),
add the `--repositories-server-side` flag to the extra arguments.
{{< /tab >}}
{{< /tabs >}}
#### Back up Git repositories concurrently
When using [multiple repository storages](../repository_storage_paths.md),
repositories can be backed up or restored concurrently to help fully use CPU time. The
following variables are available to modify the default behavior of the Rake
task:
- `GITLAB_BACKUP_MAX_CONCURRENCY`: The maximum number of projects to back up at
the same time. Defaults to the number of logical CPUs.
- `GITLAB_BACKUP_MAX_STORAGE_CONCURRENCY`: The maximum number of projects to
back up at the same time on each storage. This allows the repository backups
to be spread across storages. Defaults to `2`.
For example, with 4 repository storages:
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
```shell
sudo gitlab-backup create GITLAB_BACKUP_MAX_CONCURRENCY=4 GITLAB_BACKUP_MAX_STORAGE_CONCURRENCY=1
```
{{< /tab >}}
{{< tab title="Self-compiled" >}}
```shell
sudo -u git -H bundle exec rake gitlab:backup:create GITLAB_BACKUP_MAX_CONCURRENCY=4 GITLAB_BACKUP_MAX_STORAGE_CONCURRENCY=1
```
{{< /tab >}}
{{< tab title="Helm chart (Kubernetes)" >}}
```yaml
toolbox:
#...
extra: {}
extraEnv:
GITLAB_BACKUP_MAX_CONCURRENCY: 4
GITLAB_BACKUP_MAX_STORAGE_CONCURRENCY: 1
```
{{< /tab >}}
{{< /tabs >}}
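As a hypothetical rule of thumb, you could derive both values from the CPU count and your number of repository storages, so that the per-storage limits sum to roughly the overall limit:

```shell
CPUS=$(nproc)      # the default for GITLAB_BACKUP_MAX_CONCURRENCY
STORAGES=4         # hypothetical number of repository storages
PER_STORAGE=$(( (CPUS + STORAGES - 1) / STORAGES ))  # ceiling division
echo "GITLAB_BACKUP_MAX_CONCURRENCY=$CPUS GITLAB_BACKUP_MAX_STORAGE_CONCURRENCY=$PER_STORAGE"
```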
#### Incremental repository backups
{{< history >}}
- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/351383) in GitLab 14.10 [with a flag](../feature_flags/_index.md) named `incremental_repository_backup`. Disabled by default.
- [Generally available](https://gitlab.com/gitlab-org/gitlab/-/issues/355945) in GitLab 15.3. Feature flag `incremental_repository_backup` removed.
- Server-side support for creating incremental backups [introduced](https://gitlab.com/gitlab-org/gitaly/-/issues/5461) in GitLab 16.6.
{{< /history >}}
{{< alert type="note" >}}
Only repositories support incremental backups. Therefore, if you use `INCREMENTAL=yes`, the task
creates a self-contained backup tar archive. This is because all subtasks except repositories are
still creating full backups (they overwrite the existing full backup).
See [issue 19256](https://gitlab.com/gitlab-org/gitlab/-/issues/19256) for a feature request to
support incremental backups for all subtasks.
{{< /alert >}}
Incremental repository backups can be faster than full repository backups because they only pack changes since the last backup into the backup bundle for each repository.
The incremental backup archives are not linked to each other: each archive is a self-contained backup of the instance. There must be an existing backup
to create an incremental backup from.
Use the `PREVIOUS_BACKUP=<backup-id>` option to choose the backup to use. By default, a backup file is created
as documented in the [Backup ID](backup_archive_process.md#backup-id) section. You can override the `<backup-id>` portion of the filename by setting the
[`BACKUP` environment variable](#backup-filename).
To create an incremental backup, run:
```shell
sudo gitlab-backup create INCREMENTAL=yes PREVIOUS_BACKUP=<backup-id>
```
To create an [untarred](#skipping-tar-creation) incremental backup from a tarred backup, use `SKIP=tar`:
```shell
sudo gitlab-backup create INCREMENTAL=yes SKIP=tar
```
#### Back up specific repository storages
{{< history >}}
- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/86896) in GitLab 15.0.
{{< /history >}}
When using [multiple repository storages](../repository_storage_paths.md),
repositories from specific repository storages can be backed up separately
using the `REPOSITORIES_STORAGES` option. The option accepts a comma-separated list of
storage names.
For example:
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
```shell
sudo gitlab-backup create REPOSITORIES_STORAGES=storage1,storage2
```
{{< /tab >}}
{{< tab title="Self-compiled" >}}
```shell
sudo -u git -H bundle exec rake gitlab:backup:create REPOSITORIES_STORAGES=storage1,storage2
```
{{< /tab >}}
{{< /tabs >}}
#### Back up specific repositories
{{< history >}}
- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/88094) in GitLab 15.1.
- [Skipping specific repositories added](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/121865) in GitLab 16.1.
{{< /history >}}
You can back up specific repositories using the `REPOSITORIES_PATHS` option.
Similarly, you can use `SKIP_REPOSITORIES_PATHS` to skip certain repositories.
Both options accept a comma-separated list of project or group paths. If you
specify a group path, all repositories in all projects in the group and
descendant groups are included or skipped, depending on which option you used.
For example, to back up all repositories for all projects in Group A (`group-a`), the repository for
Project C in Group B (`group-b/project-c`),
and skip Project D in Group A (`group-a/project-d`):
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
```shell
sudo gitlab-backup create REPOSITORIES_PATHS=group-a,group-b/project-c SKIP_REPOSITORIES_PATHS=group-a/project-d
```
{{< /tab >}}
{{< tab title="Self-compiled" >}}
```shell
sudo -u git -H bundle exec rake gitlab:backup:create REPOSITORIES_PATHS=group-a,group-b/project-c SKIP_REPOSITORIES_PATHS=group-a/project-d
```
{{< /tab >}}
{{< tab title="Helm chart (Kubernetes)" >}}
```shell
REPOSITORIES_PATHS=group-a SKIP_REPOSITORIES_PATHS=group-a/project_a2 backup-utility --skip db,registry,uploads,artifacts,lfs,packages,external_diffs,terraform_state,ci_secure_files,pages
```
{{< /tab >}}
{{< /tabs >}}
#### Upload backups to a remote (cloud) storage
{{< alert type="note" >}}
It is not possible to [skip the tar creation](#skipping-tar-creation) when using object storage for backups.
{{< /alert >}}
You can have the backup script upload the `.tar` file it creates, using the
[Fog library](https://fog.github.io/). In the following example, we use Amazon S3 for
storage, but Fog also lets you use [other storage providers](https://fog.github.io/storage/).
GitLab also [imports cloud drivers](https://gitlab.com/gitlab-org/gitlab/-/blob/da46c9655962df7d49caef0e2b9f6bbe88462a02/Gemfile#L113)
for AWS, Google, and Aliyun. A local driver is
[also available](#upload-to-locally-mounted-shares).
[Read more about using object storage with GitLab](../object_storage.md).
##### Using Amazon S3
For Linux package (Omnibus):
1. Add the following to `/etc/gitlab/gitlab.rb`:
```ruby
gitlab_rails['backup_upload_connection'] = {
'provider' => 'AWS',
'region' => 'eu-west-1',
  # Choose one authentication method:
  # either an IAM profile,
  # 'use_iam_profile' => true,
  # or AWS access and secret keys:
  'aws_access_key_id' => 'AKIAKIAKI',
  'aws_secret_access_key' => 'secret123'
}
gitlab_rails['backup_upload_remote_directory'] = 'my.s3.bucket'
# Consider using multipart uploads when file size reaches 100 MB. Enter a number in bytes.
# gitlab_rails['backup_multipart_chunk_size'] = 104857600
```
1. If you're using the IAM Profile authentication method, ensure the instance that runs the backup has the following policy set (replace `<backups-bucket>` with the correct bucket name):
```json
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:GetObject",
"s3:DeleteObject"
],
"Resource": "arn:aws:s3:::<backups-bucket>/*"
}
]
}
```
1. [Reconfigure GitLab](../restart_gitlab.md#reconfigure-a-linux-package-installation)
   for the changes to take effect.
##### S3 Encrypted Buckets
AWS supports these [modes for server side encryption](https://docs.aws.amazon.com/AmazonS3/latest/userguide/serv-side-encryption.html):
- Amazon S3-Managed Keys (SSE-S3)
- Customer Master Keys (CMKs) stored in AWS Key Management Service (SSE-KMS)
- Customer-Provided Keys (SSE-C)
Use your mode of choice with GitLab. Each mode has similar, but slightly
different, configuration methods.
###### SSE-S3
To enable SSE-S3, in the backup storage options set the `server_side_encryption`
field to `AES256`. For example, in the Linux package (Omnibus):
```ruby
gitlab_rails['backup_upload_storage_options'] = {
'server_side_encryption' => 'AES256'
}
```
###### SSE-KMS
To enable SSE-KMS, you need the
[KMS key via its Amazon Resource Name (ARN) in the `arn:aws:kms:region:acct-id:key/key-id` format](https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingKMSEncryption.html).
Under the `backup_upload_storage_options` configuration setting, set:
- `server_side_encryption` to `aws:kms`.
- `server_side_encryption_kms_key_id` to the ARN of the key.
For example, in the Linux package (Omnibus):
```ruby
gitlab_rails['backup_upload_storage_options'] = {
'server_side_encryption' => 'aws:kms',
  'server_side_encryption_kms_key_id' => 'arn:aws:kms:<region>:<account-id>:key/<key-id>'
}
```
###### SSE-C
SSE-C requires you to set these encryption options:
- `backup_encryption`: AES256.
- `backup_encryption_key`: Unencoded, 32-byte (256 bits) key. The upload fails if this isn't exactly 32 bytes.
For example, in the Linux package (Omnibus):
```ruby
gitlab_rails['backup_encryption'] = 'AES256'
gitlab_rails['backup_encryption_key'] = '<YOUR 32-BYTE KEY HERE>'
```
If the key contains binary characters and cannot be encoded in UTF-8,
specify the key with the `GITLAB_BACKUP_ENCRYPTION_KEY` environment variable instead.
For example:
```ruby
gitlab_rails['env'] = { 'GITLAB_BACKUP_ENCRYPTION_KEY' => "\xDE\xAD\xBE\xEF" * 8 }
```
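For example, a hypothetical way to generate a key of the right length: hex-encoding 16 random bytes yields exactly 32 ASCII characters (so the key stays UTF-8 safe, at 128 bits of entropy), whereas a raw 32-byte binary key carries the full 256 bits and must go through the environment variable:

```shell
# UTF-8-safe key: 32 hex characters = 32 bytes on disk.
key=$(openssl rand -hex 16)
echo "${#key}"   # prints 32
```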
##### DigitalOcean Spaces
This example can be used for a bucket in Amsterdam (AMS3):
1. Add the following to `/etc/gitlab/gitlab.rb`:
```ruby
gitlab_rails['backup_upload_connection'] = {
'provider' => 'AWS',
'region' => 'ams3',
'aws_access_key_id' => 'AKIAKIAKI',
'aws_secret_access_key' => 'secret123',
'endpoint' => 'https://ams3.digitaloceanspaces.com'
}
gitlab_rails['backup_upload_remote_directory'] = 'my.s3.bucket'
```
1. [Reconfigure GitLab](../restart_gitlab.md#reconfigure-a-linux-package-installation)
   for the changes to take effect.
If you see a `400 Bad Request` error message when using DigitalOcean Spaces,
the cause may be the use of backup encryption. Because DigitalOcean Spaces
doesn't support encryption, remove or comment out the line that contains
`gitlab_rails['backup_encryption']`.
##### Other S3 Providers
Not all S3 providers are fully compatible with the Fog library. For example,
if you see a `411 Length Required` error message after attempting to upload,
you may need to downgrade the `aws_signature_version` value from the default
value to `2`, [due to this issue](https://github.com/fog/fog-aws/issues/428).
For self-compiled installations:
1. Edit `/home/git/gitlab/config/gitlab.yml`:
```yaml
backup:
# snip
upload:
# Fog storage connection settings, see https://fog.github.io/storage/ .
connection:
provider: AWS
region: eu-west-1
aws_access_key_id: AKIAKIAKI
aws_secret_access_key: 'secret123'
# If using an IAM Profile, leave aws_access_key_id & aws_secret_access_key empty
# ie. aws_access_key_id: ''
# use_iam_profile: 'true'
# The remote 'directory' to store your backups. For S3, this would be the bucket name.
remote_directory: 'my.s3.bucket'
# Specifies Amazon S3 storage class to use for backups, this is optional
# storage_class: 'STANDARD'
#
# Turns on AWS Server-Side Encryption with Amazon Customer-Provided Encryption Keys for backups, this is optional
# 'encryption' must be set in order for this to have any effect.
# 'encryption_key' should be set to the 256-bit encryption key for Amazon S3 to use to encrypt or decrypt.
    # To avoid storing the key on disk, the key can also be specified via the `GITLAB_BACKUP_ENCRYPTION_KEY` environment variable.
# encryption: 'AES256'
# encryption_key: '<key>'
#
#
# Turns on AWS Server-Side Encryption with Amazon S3-Managed keys (optional)
# https://docs.aws.amazon.com/AmazonS3/latest/userguide/serv-side-encryption.html
# For SSE-S3, set 'server_side_encryption' to 'AES256'.
    # For SSE-KMS, set 'server_side_encryption' to 'aws:kms'. Set
# 'server_side_encryption_kms_key_id' to the ARN of customer master key.
# storage_options:
# server_side_encryption: 'aws:kms'
# server_side_encryption_kms_key_id: 'arn:aws:kms:YOUR-KEY-ID-HERE'
```
1. [Restart GitLab](../restart_gitlab.md#self-compiled-installations)
   for the changes to take effect.
##### Using Google Cloud Storage
To use Google Cloud Storage to save backups, you must first create an
access key from the Google console:
1. Go to the [Google storage settings page](https://console.cloud.google.com/storage/settings).
1. Select **Interoperability**, and then create an access key.
1. Make note of the **Access Key** and **Secret** and replace them in the
following configurations.
1. In the bucket's advanced settings, ensure the Access Control option
   **Set object-level and bucket-level permissions** is selected.
1. Ensure you have already created a bucket.
For the Linux package (Omnibus):
1. Edit `/etc/gitlab/gitlab.rb`:
```ruby
gitlab_rails['backup_upload_connection'] = {
'provider' => 'Google',
'google_storage_access_key_id' => 'Access Key',
'google_storage_secret_access_key' => 'Secret',
## If you have CNAME buckets (foo.example.com), you might run into SSL issues
## when uploading backups ("hostname foo.example.com.storage.googleapis.com
## does not match the server certificate"). In that case, uncomment the following
## setting. See: https://github.com/fog/fog/issues/2834
#'path_style' => true
}
gitlab_rails['backup_upload_remote_directory'] = 'my.google.bucket'
```
1. [Reconfigure GitLab](../restart_gitlab.md#reconfigure-a-linux-package-installation)
   for the changes to take effect.
For self-compiled installations:
1. Edit `/home/git/gitlab/config/gitlab.yml`:
```yaml
backup:
upload:
connection:
provider: 'Google'
google_storage_access_key_id: 'Access Key'
google_storage_secret_access_key: 'Secret'
remote_directory: 'my.google.bucket'
```
1. [Restart GitLab](../restart_gitlab.md#self-compiled-installations)
   for the changes to take effect.
##### Using Azure Blob storage
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
1. Edit `/etc/gitlab/gitlab.rb`:
```ruby
gitlab_rails['backup_upload_connection'] = {
'provider' => 'AzureRM',
'azure_storage_account_name' => '<AZURE STORAGE ACCOUNT NAME>',
'azure_storage_access_key' => '<AZURE STORAGE ACCESS KEY>',
'azure_storage_domain' => 'blob.core.windows.net', # Optional
}
gitlab_rails['backup_upload_remote_directory'] = '<AZURE BLOB CONTAINER>'
```
If you are using [a managed identity](../object_storage.md#azure-workload-and-managed-identities), omit `azure_storage_access_key`:
```ruby
gitlab_rails['backup_upload_connection'] = {
'provider' => 'AzureRM',
'azure_storage_account_name' => '<AZURE STORAGE ACCOUNT NAME>',
'azure_storage_domain' => '<AZURE STORAGE DOMAIN>' # Optional
}
gitlab_rails['backup_upload_remote_directory'] = '<AZURE BLOB CONTAINER>'
```
1. [Reconfigure GitLab](../restart_gitlab.md#reconfigure-a-linux-package-installation)
   for the changes to take effect.
{{< /tab >}}
{{< tab title="Self-compiled" >}}
1. Edit `/home/git/gitlab/config/gitlab.yml`:
```yaml
backup:
upload:
connection:
provider: 'AzureRM'
azure_storage_account_name: '<AZURE STORAGE ACCOUNT NAME>'
azure_storage_access_key: '<AZURE STORAGE ACCESS KEY>'
remote_directory: '<AZURE BLOB CONTAINER>'
```
1. [Restart GitLab](../restart_gitlab.md#self-compiled-installations)
   for the changes to take effect.
{{< /tab >}}
{{< /tabs >}}
For more details, see the [table of Azure parameters](../object_storage.md#azure-blob-storage).
##### Specifying a custom directory for backups
This option works only for remote storage. If you want to group your backups,
you can pass a `DIRECTORY` environment variable:
```shell
sudo gitlab-backup create DIRECTORY=daily
sudo gitlab-backup create DIRECTORY=weekly
```
#### Skip uploading backups to remote storage
If you have configured GitLab to [upload backups in a remote storage](#upload-backups-to-a-remote-cloud-storage),
you can use the `SKIP=remote` option to skip uploading your backups to the remote storage.
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
```shell
sudo gitlab-backup create SKIP=remote
```
{{< /tab >}}
{{< tab title="Self-compiled" >}}
```shell
sudo -u git -H bundle exec rake gitlab:backup:create SKIP=remote RAILS_ENV=production
```
{{< /tab >}}
{{< /tabs >}}
#### Upload to locally-mounted shares
You can send backups to a locally-mounted share (for example, `NFS`, `CIFS`, or `SMB`) using the Fog
[`Local`](https://github.com/fog/fog-local#usage) storage provider.
To do this, you must set the following configuration keys:
- `backup_upload_connection.local_root`: mounted directory that backups are copied to.
- `backup_upload_remote_directory`: subdirectory of the `backup_upload_connection.local_root` directory. It is created if it doesn't exist.
If you want to copy the tarballs to the root of your mounted directory, use `.`.
When mounted, the directory set in the `local_root` key must be owned by either:
- The `git` user. For `CIFS` and `SMB`, mount with the `uid=` of the `git` user.
- The user that you are executing the backup tasks as. For the Linux package (Omnibus), this is the `git` user.
Because file system performance may affect overall GitLab performance,
[we don't recommend using cloud-based file systems for storage](../nfs.md#avoid-using-cloud-based-file-systems).
##### Avoid conflicting configuration
Don't set the following configuration keys to the same path:
- `gitlab_rails['backup_path']` (`backup.path` for self-compiled installations).
- `gitlab_rails['backup_upload_connection'].local_root` (`backup.upload.connection.local_root` for self-compiled installations).
The `backup_path` configuration key sets the local location of the backup file. The `upload` configuration key is
intended for use when the backup file is uploaded to a separate server, perhaps for archival purposes.
If these configuration keys are set to the same location, the upload feature fails because a backup already exists at
the upload location. This failure causes the upload feature to delete the backup because it assumes it's a residual file
remaining after the failed upload attempt.
##### Configure uploads to locally-mounted shares
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
1. Edit `/etc/gitlab/gitlab.rb`:
```ruby
gitlab_rails['backup_upload_connection'] = {
:provider => 'Local',
:local_root => '/mnt/backups'
}
# The directory inside the mounted folder to copy backups to
# Use '.' to store them in the root directory
gitlab_rails['backup_upload_remote_directory'] = 'gitlab_backups'
```
1. [Reconfigure GitLab](../restart_gitlab.md#reconfigure-a-linux-package-installation)
for the changes to take effect.
{{< /tab >}}
{{< tab title="Self-compiled" >}}
1. Edit `home/git/gitlab/config/gitlab.yml`:
```yaml
backup:
upload:
# Fog storage connection settings, see https://fog.github.io/storage/ .
connection:
provider: Local
local_root: '/mnt/backups'
# The directory inside the mounted folder to copy backups to
# Use '.' to store them in the root directory
remote_directory: 'gitlab_backups'
```
1. [Restart GitLab](../restart_gitlab.md#self-compiled-installations)
for the changes to take effect.
{{< /tab >}}
{{< /tabs >}}
#### Backup archive permissions
The backup archives created by GitLab (for example, `1393513186_2014_02_27_gitlab_backup.tar`)
have the owner/group `git`/`git` and `0600` permissions by default. This
prevents other system users from reading GitLab data. If you need the backup
archives to have different permissions, you can use the `archive_permissions`
setting.
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
1. Edit `/etc/gitlab/gitlab.rb`:
```ruby
gitlab_rails['backup_archive_permissions'] = 0644 # Makes the backup archives world-readable
```
1. [Reconfigure GitLab](../restart_gitlab.md#reconfigure-a-linux-package-installation)
for the changes to take effect.
{{< /tab >}}
{{< tab title="Self-compiled" >}}
1. Edit `/home/git/gitlab/config/gitlab.yml`:
```yaml
backup:
archive_permissions: 0644 # Makes the backup archives world-readable
```
1. [Restart GitLab](../restart_gitlab.md#self-compiled-installations)
for the changes to take effect.
{{< /tab >}}
{{< /tabs >}}
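To see what these octal values mean in practice, you can reproduce the permission change on any throwaway file. The archive name below is only illustrative:

```shell
# Demonstrate the default 0600 permissions versus a world-readable 0644.
tmpdir=$(mktemp -d)
archive="$tmpdir/1393513186_2014_02_27_gitlab_backup.tar"
touch "$archive"

chmod 0600 "$archive"      # default: only the owner can read and write
stat -c '%a' "$archive"    # prints: 600

chmod 0644 "$archive"      # archive_permissions 0644: world-readable
stat -c '%a' "$archive"    # prints: 644

rm -r "$tmpdir"
```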
#### Configuring cron to make daily backups
{{< alert type="warning" >}}
The following cron jobs do not [back up your GitLab configuration files](#storing-configuration-files)
or [SSH host keys](https://superuser.com/questions/532040/copy-ssh-keys-from-one-server-to-another-server/532079#532079).
{{< /alert >}}
You can schedule a cron job that backs up your repositories and GitLab metadata.
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
1. Edit the crontab for the `root` user:
```shell
sudo su -
crontab -e
```
1. There, add the following line to schedule the backup every day at 2 AM:
```plaintext
0 2 * * * /opt/gitlab/bin/gitlab-backup create CRON=1
```
{{< /tab >}}
{{< tab title="Self-compiled" >}}
1. Edit the crontab for the `git` user:
```shell
sudo -u git crontab -e
```
1. Add the following lines at the bottom:
```plaintext
# Create a full backup of the GitLab repositories and SQL database every day at 2am
0 2 * * * cd /home/git/gitlab && PATH=/usr/local/bin:/usr/bin:/bin bundle exec rake gitlab:backup:create RAILS_ENV=production CRON=1
```
{{< /tab >}}
{{< /tabs >}}
The `CRON=1` environment setting directs the backup script to hide all progress
output if there aren't any errors. This is recommended to reduce cron spam.
When troubleshooting backup problems, however, replace `CRON=1` with `--trace` to log verbosely.
#### Limit backup lifetime for local files (prune old backups)
{{< alert type="warning" >}}
The process described in this section doesn't work if you used a [custom filename](#backup-filename)
for your backups.
{{< /alert >}}
To prevent regular backups from using all your disk space, you may want to set a limited lifetime
for backups. The next time the backup task runs, backups older than the `backup_keep_time` are
pruned.
This configuration option manages only local files. GitLab doesn't prune old
files stored in a third-party [object storage](#upload-backups-to-a-remote-cloud-storage)
because the user may not have permission to list and delete files. It's
recommended that you configure the appropriate retention policy for your object
storage (for example, [AWS S3](https://docs.aws.amazon.com/AmazonS3/latest/user-guide/create-lifecycle.html)).
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
1. Edit `/etc/gitlab/gitlab.rb`:
```ruby
## Limit backup lifetime to 7 days - 604800 seconds
gitlab_rails['backup_keep_time'] = 604800
```
1. [Reconfigure GitLab](../restart_gitlab.md#reconfigure-a-linux-package-installation)
for the changes to take effect.
{{< /tab >}}
{{< tab title="Self-compiled" >}}
1. Edit `/home/git/gitlab/config/gitlab.yml`:
```yaml
backup:
## Limit backup lifetime to 7 days - 604800 seconds
keep_time: 604800
```
1. [Restart GitLab](../restart_gitlab.md#self-compiled-installations)
for the changes to take effect.
{{< /tab >}}
{{< /tabs >}}
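Conceptually, pruning deletes local archives whose age exceeds `backup_keep_time`. The following is only an illustrative sketch using `find`, not the actual GitLab implementation:

```shell
# Sketch: remove backup archives older than backup_keep_time seconds.
backup_path=$(mktemp -d)    # stands in for the configured backup_path
keep_time=604800            # 7 days, in seconds

touch -d '8 days ago' "$backup_path/old_gitlab_backup.tar"
touch "$backup_path/new_gitlab_backup.tar"

# find takes minutes with -mmin, so convert seconds to minutes
find "$backup_path" -name '*_gitlab_backup.tar' -mmin +$((keep_time / 60)) -delete

ls "$backup_path"           # only new_gitlab_backup.tar remains
rm -r "$backup_path"
```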
#### Back up and restore for installations using PgBouncer
Do not back up or restore GitLab through a PgBouncer connection. These
tasks must [bypass PgBouncer and connect directly to the PostgreSQL primary database node](#bypassing-pgbouncer),
or they cause a GitLab outage.
When the GitLab backup or restore task is used with PgBouncer, the
following error message is shown:
```ruby
ActiveRecord::StatementInvalid: PG::UndefinedTable
```
Each time the GitLab backup runs, GitLab starts generating 500 errors, and errors about missing
tables are [logged by PostgreSQL](../logs/_index.md#postgresql-logs):
```plaintext
ERROR: relation "tablename" does not exist at character 123
```
This happens because the task uses `pg_dump`, which
[sets a null search path and explicitly includes the schema in every SQL query](https://gitlab.com/gitlab-org/gitlab/-/issues/23211)
to address [CVE-2018-1058](https://www.postgresql.org/about/news/postgresql-103-968-9512-9417-and-9322-released-1834/).
Because connections are reused with PgBouncer in transaction pooling mode,
PostgreSQL fails to search the default `public` schema. As a result,
this clearing of the search path causes tables and columns to appear
missing.
##### Bypassing PgBouncer
There are two ways to fix this:
1. [Use environment variables to override the database settings](#environment-variable-overrides) for the backup task.
1. Reconfigure a node to [connect directly to the PostgreSQL primary database node](../postgresql/pgbouncer.md#procedure-for-bypassing-pgbouncer).
###### Environment variable overrides
{{< history >}}
- Multiple databases support was [introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/133177) in GitLab 16.5.
{{< /history >}}
By default, GitLab uses the database configuration stored in a
configuration file (`database.yml`). However, you can override the database settings
for the backup and restore task by setting environment
variables that are prefixed with `GITLAB_BACKUP_`:
- `GITLAB_BACKUP_PGHOST`
- `GITLAB_BACKUP_PGUSER`
- `GITLAB_BACKUP_PGPORT`
- `GITLAB_BACKUP_PGPASSWORD`
- `GITLAB_BACKUP_PGSSLMODE`
- `GITLAB_BACKUP_PGSSLKEY`
- `GITLAB_BACKUP_PGSSLCERT`
- `GITLAB_BACKUP_PGSSLROOTCERT`
- `GITLAB_BACKUP_PGSSLCRL`
- `GITLAB_BACKUP_PGSSLCOMPRESSION`
For example, to override the database host and port to use 192.168.1.10
and port 5432 with the Linux package (Omnibus):
```shell
sudo GITLAB_BACKUP_PGHOST=192.168.1.10 GITLAB_BACKUP_PGPORT=5432 /opt/gitlab/bin/gitlab-backup create
```
If you run GitLab on [multiple databases](../postgresql/_index.md), you can override database settings by including
the database name in the environment variable. For example, if your `main` and `ci` databases are
hosted on different database servers, you would append their name after the `GITLAB_BACKUP_` prefix,
leaving the `PG*` names as is:
```shell
sudo GITLAB_BACKUP_MAIN_PGHOST=192.168.1.10 GITLAB_BACKUP_CI_PGHOST=192.168.1.12 /opt/gitlab/bin/gitlab-backup create
```
See the [PostgreSQL documentation](https://www.postgresql.org/docs/16/libpq-envars.html)
for more details on what these parameters do.
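The naming convention is mechanical: stripping the `GITLAB_BACKUP_` prefix yields the standard libpq environment variable. A quick shell illustration:

```shell
# Strip the GITLAB_BACKUP_ prefix to recover the libpq variable name.
var="GITLAB_BACKUP_PGHOST"
echo "${var#GITLAB_BACKUP_}"    # prints: PGHOST
```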
#### `gitaly-backup` for repository backup and restore
The `gitaly-backup` binary is used by the backup Rake task to create and restore repository backups from Gitaly.
`gitaly-backup` replaces the previous backup method that directly calls RPCs on Gitaly from GitLab.
The backup Rake task must be able to find this executable. In most cases, you don't need to change
the path to the binary as it should work fine with the default path `/opt/gitlab/embedded/bin/gitaly-backup`.
If you have a specific reason to change the path, it can be configured in the Linux package (Omnibus):
1. Add the following to `/etc/gitlab/gitlab.rb`:
```ruby
gitlab_rails['backup_gitaly_backup_path'] = '/path/to/gitaly-backup'
```
1. [Reconfigure GitLab](../restart_gitlab.md#reconfigure-a-linux-package-installation)
for the changes to take effect.
## Alternative backup strategies
Because every deployment may have different capabilities, you should first review [what data needs to be backed up](#what-data-needs-to-be-backed-up) to better understand if, and how, you can leverage those capabilities.
For example, if you use Amazon RDS, you might choose to use its built-in backup and restore features to handle your GitLab [PostgreSQL data](#postgresql-databases), and [exclude PostgreSQL data](#excluding-specific-data-from-the-backup) when using the [backup command](#backup-command).
In the following cases, consider using file system data transfer or snapshots as part of your backup strategy:
- Your GitLab instance contains a lot of Git repository data and the GitLab backup script is too slow.
- Your GitLab instance has a lot of forked projects and the regular backup task duplicates the Git data for all of them.
- Your GitLab instance has a problem and using the regular backup and import Rake tasks isn't possible.
{{< alert type="warning" >}}
Gitaly Cluster (Praefect) [does not support snapshot backups](../gitaly/praefect/_index.md#snapshot-backup-and-recovery).
{{< /alert >}}
When considering using file system data transfer or snapshots:
- Don't use these methods to migrate from one operating system to another. The operating systems of the source and destination should be as similar as possible. For example,
don't use these methods to migrate from Ubuntu to RHEL.
- Data consistency is very important. You should stop GitLab (`sudo gitlab-ctl stop`) before
doing a file system transfer (with `rsync`, for example) or taking a snapshot to ensure all data in memory is flushed to disk. GitLab consists of multiple subsystems (Gitaly, database, file storage) that have their own buffers, queues, and storage layers. GitLab transactions can span these subsystems, which results in parts of a transaction taking different paths to disk. On live systems, file system transfers and snapshot runs fail to capture parts of the transaction still in memory.
Example: Amazon Elastic Block Store (EBS)
- A GitLab server using the Linux package (Omnibus) hosted on Amazon AWS.
- An EBS drive containing an ext4 file system is mounted at `/var/opt/gitlab`.
- In this case you could make an application backup by taking an EBS snapshot.
- The backup includes all repositories, uploads and PostgreSQL data.
Example: Logical Volume Manager (LVM) snapshots + rsync
- A GitLab server using the Linux package (Omnibus), with an LVM logical volume mounted at `/var/opt/gitlab`.
- Replicating the `/var/opt/gitlab` directory using rsync would not be reliable because too many files would change while rsync is running.
- Instead of rsync-ing `/var/opt/gitlab`, we create a temporary LVM snapshot, which we mount as a read-only file system at `/mnt/gitlab_backup`.
- Now we can have a longer running rsync job which creates a consistent replica on the remote server.
- The replica includes all repositories, uploads and PostgreSQL data.
If you're running GitLab on a virtualized server, you can possibly also create
VM snapshots of the entire GitLab server. However, a VM snapshot often
requires you to power down the server, which limits this solution's
practical use.
### Back up repository data separately
First, ensure you back up existing GitLab data while [skipping repositories](#excluding-specific-data-from-the-backup):
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
```shell
sudo gitlab-backup create SKIP=repositories
```
{{< /tab >}}
{{< tab title="Self-compiled" >}}
```shell
sudo -u git -H bundle exec rake gitlab:backup:create SKIP=repositories RAILS_ENV=production
```
{{< /tab >}}
{{< /tabs >}}
For manually backing up the Git repository data on disk, there are multiple possible strategies:
- Use snapshots, such as the previous examples of Amazon EBS drive snapshots, or LVM snapshots + rsync.
- Use [GitLab Geo](../geo/_index.md) and rely on the repository data on a Geo secondary site.
- [Prevent writes and copy the Git repository data](#prevent-writes-and-copy-the-git-repository-data).
- [Create an online backup by marking repositories as read-only (experimental)](#online-backup-through-marking-repositories-as-read-only-experimental).
#### Prevent writes and copy the Git repository data
Git repositories must be copied in a consistent way. If repositories
are copied during concurrent write operations,
inconsistencies or corruption issues can occur. For more details,
[issue 270422](https://gitlab.com/gitlab-org/gitlab/-/issues/270422)
has a longer discussion that explains the potential problems.
To prevent writes to the Git repository data, there are two possible approaches:
- Use [maintenance mode](../maintenance_mode/_index.md) to place GitLab in a read-only state.
- Create explicit downtime by stopping all Gitaly services before backing up the repositories:
```shell
sudo gitlab-ctl stop gitaly
# execute git data copy step
sudo gitlab-ctl start gitaly
```
You can copy Git repository data using any method, as long as writes are prevented on the data being copied
(to prevent inconsistencies and corruption issues). In order of preference and safety, the recommended methods are:
1. Use `rsync` with archive-mode, delete, and checksum options, for example:
```shell
rsync -aR --delete --checksum source destination # be extra safe with the order as it will delete existing data if inverted
```
1. Use a [`tar` pipe to copy the entire repository's directory to another server or location](../operations/moving_repositories.md#tar-pipe-to-another-server).
1. Use `sftp`, `scp`, `cp`, or any other copying method.
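As an illustration of the tar-pipe method, the following sketch streams a directory through `tar` locally. The paths are throwaway examples; a real backup would pipe to another server, for example over SSH:

```shell
# Local sketch of a tar pipe using temporary directories.
src=$(mktemp -d) && dst=$(mktemp -d)
mkdir -p "$src/repo.git"
echo 'ref: refs/heads/main' > "$src/repo.git/HEAD"

# tar preserves paths and permissions through the pipe
tar -C "$src" -cf - . | tar -C "$dst" -xf -

cat "$dst/repo.git/HEAD"    # prints: ref: refs/heads/main
rm -r "$src" "$dst"
```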
#### Online backup through marking repositories as read-only (experimental)
One way of backing up repositories without requiring instance-wide downtime
is to programmatically mark projects as read-only while copying the underlying data.
There are a few possible downsides to this:
- Repositories are read-only for a period of time that scales with the size of the repository.
- Backups take a longer time to complete due to marking each project as read-only, potentially leading to inconsistencies. For example,
a possible date discrepancy between the last data available for the first project that gets backed up compared to
the last project that gets backed up.
- Fork networks should be entirely read-only while the projects inside get backed up to prevent potential changes to the pool repository.
There is an experimental script that attempts to automate this process in
[the Geo team Runbooks project](https://gitlab.com/gitlab-org/geo-team/runbooks/-/tree/main/experimental-online-backup-through-rsync).
---
stage: Data Access
group: Durability
info: To determine the technical writer assigned to the Stage/Group associated with
  this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Back up GitLab
---
{{< details >}}
- Tier: Free, Premium, Ultimate
- Offering: GitLab Self-Managed
{{< /details >}}
GitLab backups protect your data and help with disaster recovery.
The optimal backup strategy depends on your GitLab deployment configuration,
data volume, and storage locations. These factors determine which backup
methods to use,
where to store backups, and how to structure your backup schedule.
For larger GitLab instances, alternative backup strategies include:
- Incremental backups.
- Backups of specific repositories.
- Backups across multiple storage locations.
## Data included in a backup
GitLab provides a command-line interface to back up your entire instance.
By default, the backup creates an archive in a single compressed tar file.
This file includes:
- Database data and configuration
- Account and group settings
- CI/CD artifacts and job logs
- Git repositories and LFS objects
- External merge request diffs ([introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/154914) in GitLab 17.1)
- Package registry data and container registry images
- Project and [group](../../user/project/wiki/group.md) wikis
- Project-level attachments and uploads
- Secure Files ([introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/121142) in GitLab 16.1)
- GitLab Pages content
- Terraform states
- Snippets
## Data not included in a backup
- [Mattermost data](../../integration/mattermost/_index.md#back-up-gitlab-mattermost)
- Redis (and thus Sidekiq jobs)
- [Object storage](#object-storage) on Linux package (Omnibus) / Docker / Self-compiled installations
- [Global server hooks](../server_hooks.md#create-global-server-hooks-for-all-repositories)
- [File hooks](../file_hooks.md)
- GitLab configuration files (`/etc/gitlab`)
- TLS- and SSH-related keys and certificates
- Other system files
{{< alert type="warning" >}}
You are highly advised to read about [storing configuration files](#storing-configuration-files) to back up those separately.
{{< /alert >}}
## Simple backup procedure
As a rough guideline, if you are using a [1k reference architecture](../reference_architectures/1k_users.md) with less than 100 GB of data, then follow these steps:
1. Run the [backup command](#backup-command).
1. Back up [object storage](#object-storage), if applicable.
1. Manually back up [configuration files](#storing-configuration-files).
## Scaling backups
As the volume of GitLab data grows, the [backup command](#backup-command) takes longer to execute. [Backup options](#backup-options) such as [back up Git repositories concurrently](#back-up-git-repositories-concurrently) and [incremental repository backups](#incremental-repository-backups) can help to reduce execution time. At some point, the backup command becomes impractical by itself. For example, it can take 24 hours or more.
Starting with GitLab 18.0, repository backup performance has been significantly improved for repositories with large numbers of references (branches, tags). This improvement can reduce backup times from hours to minutes for affected repositories. No configuration changes are required to benefit from this enhancement. For technical details, see our [blog post about decreasing GitLab repository backup times](https://about.gitlab.com/blog/2025/06/05/how-we-decreased-gitlab-repo-backup-times-from-48-hours-to-41-minutes/).
In some cases, architecture changes may be warranted to allow backups to scale. If you are using a GitLab reference architecture, see [Back up and restore large reference architectures](backup_large_reference_architectures.md).
For more information, see [alternative backup strategies](#alternative-backup-strategies).
## What data needs to be backed up?
- [PostgreSQL databases](#postgresql-databases)
- [Git repositories](#git-repositories)
- [Blobs](#blobs)
- [Container registry](#container-registry)
- [Configuration files](#storing-configuration-files)
- [Other data](#other-data)
### PostgreSQL databases
In the simplest case, GitLab has one PostgreSQL database in one PostgreSQL server on the same VM as all other GitLab services. But depending on configuration, GitLab may use multiple PostgreSQL databases in multiple PostgreSQL servers.
In general, this data is the single source of truth for most user-generated content in the Web interface, such as issue and merge request content, comments, permissions, and credentials.
PostgreSQL also holds some cached data like HTML-rendered Markdown, and by default, merge request diffs.
However, merge request diffs can also be configured to be offloaded to the file system or object storage, see [Blobs](#blobs).
Gitaly Cluster (Praefect) uses a PostgreSQL database as a single source of truth to manage its Gitaly nodes.
A common PostgreSQL utility, [`pg_dump`](https://www.postgresql.org/docs/16/app-pgdump.html), produces a backup file which can be used to restore a PostgreSQL database. The [backup command](#backup-command) uses this utility under the hood.
Unfortunately, the larger the database, the longer it takes `pg_dump` to execute. Depending on your situation, the duration becomes impractical at some point (days, for example). If your database is over 100 GB, `pg_dump`, and by extension the [backup command](#backup-command), is likely not usable. For more information, see [alternative backup strategies](#alternative-backup-strategies).
### Git repositories
A GitLab instance can have one or more repository shards. Each shard is a Gitaly instance or Gitaly Cluster (Praefect)
that is responsible for allowing access and operations on the locally stored Git repositories. Gitaly can run
on a machine:
- With a single disk.
- With multiple disks mounted as a single mount-point (like with a RAID array).
- Using LVM.
Each project can have up to 3 different repositories:
- A project repository, where the source code is stored.
- A wiki repository, where the wiki content is stored.
- A design repository, where design artifacts are indexed (assets are actually in LFS).
They all live in the same shard and share the same base name, with `-wiki` and `-design` suffixes
for the wiki and design repositories.
Personal and project snippets, and group wiki content, are stored in Git repositories.
Project forks are deduplicated on a live GitLab site using pool repositories.
The [backup command](#backup-command) produces a Git bundle for each repository and tars them all up. This duplicates pool repository data into every fork. In [our testing](https://gitlab.com/gitlab-org/gitlab/-/issues/396343), 100 GB of Git repositories took a little over 2 hours to back up and upload to S3. At around 400 GB of Git data, the backup command is likely not viable for regular backups. For more information, see [alternative backup strategies](#alternative-backup-strategies).
### Blobs
GitLab stores blobs (or files) such as issue attachments or LFS objects into either:
- The file system in a specific location.
- An [Object Storage](../object_storage.md) solution. Object Storage solutions can be:
- Cloud based like Amazon S3 and Google Cloud Storage.
- Hosted by you (like MinIO).
- A Storage Appliance that exposes an Object Storage-compatible API.
#### Object storage
The [backup command](#backup-command) doesn't back up blobs that aren't stored on the file system. If you're using [object storage](../object_storage.md), be sure to enable backups with your object storage provider. For example, see:
- [Amazon S3 backups](https://docs.aws.amazon.com/aws-backup/latest/devguide/s3-backups.html)
- [Google Cloud Storage Transfer Service](https://cloud.google.com/storage-transfer-service) and [Google Cloud Storage Object Versioning](https://cloud.google.com/storage/docs/object-versioning)
### Container registry
[GitLab container registry](../packages/container_registry.md) storage can be configured in either:
- The file system in a specific location.
- An [Object Storage](../object_storage.md) solution. Object Storage solutions can be:
- Cloud based like Amazon S3 and Google Cloud Storage.
- Hosted by you (like MinIO).
- A Storage Appliance that exposes an Object Storage-compatible API.
The backup command does not back up registry data when they are stored in Object Storage.
### Storing configuration files
{{< alert type="warning" >}}
The backup Rake task GitLab provides does not store your configuration files. The primary reason for this is that your database contains items including encrypted information for two-factor authentication and the CI/CD secure variables. Storing encrypted information in the same location as its key defeats the purpose of using encryption in the first place. For example, the secrets file contains your database encryption key. If you lose it, then the GitLab application will not be able to decrypt any encrypted values in the database.
{{< /alert >}}
{{< alert type="warning" >}}
The secrets file may change after upgrades.
{{< /alert >}}
You should back up the configuration directory. At the very minimum, you must back up:
{{< tabs >}}
{{< tab title="Linux package" >}}
- `/etc/gitlab/gitlab-secrets.json`
- `/etc/gitlab/gitlab.rb`
For more information, see [Backup and restore Linux package (Omnibus) configuration](https://docs.gitlab.com/omnibus/settings/backups.html#backup-and-restore-omnibus-gitlab-configuration).
{{< /tab >}}
{{< tab title="Self-compiled" >}}
- `/home/git/gitlab/config/secrets.yml`
- `/home/git/gitlab/config/gitlab.yml`
{{< /tab >}}
{{< tab title="Docker" >}}
- Back up the volume where the configuration files are stored. If you created
the GitLab container according to the documentation, it should be in the
`/srv/gitlab/config` directory.
{{< /tab >}}
{{< tab title="GitLab Helm chart" >}}
- Follow the [Back up the secrets](https://docs.gitlab.com/charts/backup-restore/backup.html#back-up-the-secrets)
instructions.
{{< /tab >}}
{{< /tabs >}}
You may also want to back up any TLS keys and certificates (`/etc/gitlab/ssl`, `/etc/gitlab/trusted-certs`), and your
[SSH host keys](https://superuser.com/questions/532040/copy-ssh-keys-from-one-server-to-another-server/532079#532079)
to avoid man-in-the-middle attack warnings if you have to perform a full machine restore.
In the unlikely event that the secrets file is lost, see
[When the secrets file is lost](troubleshooting_backup_gitlab.md#when-the-secrets-file-is-lost).
### Other data
GitLab uses Redis both as a cache store and to hold persistent data for our background jobs system, Sidekiq. The provided [backup command](#backup-command) does not back up Redis data. This means that in order to take a consistent backup with the [backup command](#backup-command), there must be no pending or running background jobs. It is possible to [manually back up Redis](https://redis.io/docs/latest/operate/oss_and_stack/management/persistence/#backing-up-redis-data).
Elasticsearch is an optional database for advanced search. It can improve search
of both source code and user-generated content in issues, merge requests, and discussions. The [backup command](#backup-command) does not back up Elasticsearch data. Elasticsearch data can be regenerated from PostgreSQL data after a restore. It is possible to [manually back up Elasticsearch](https://www.elastic.co/guide/en/elasticsearch/reference/current/snapshot-restore.html).
### Requirements
To be able to back up and restore, ensure that Rsync is installed on your
system. If you installed GitLab:
- Using the Linux package, Rsync is already installed.
- Using self-compiled, check if `rsync` is installed. If Rsync is not installed, install it. For example:
```shell
# Debian/Ubuntu
sudo apt-get install rsync
# RHEL/CentOS
sudo yum install rsync
```
### Backup command
{{< alert type="warning" >}}
The backup command does not back up items in [object storage](#object-storage) on Linux package (Omnibus) / Docker / Self-compiled installations.
{{< /alert >}}
{{< alert type="warning" >}}
The backup command requires [additional parameters](#back-up-and-restore-for-installations-using-pgbouncer) when
your installation is using PgBouncer, for either performance reasons or when using it with a Patroni cluster.
{{< /alert >}}
{{< alert type="warning" >}}
Before GitLab 15.5.0, the backup command doesn't verify if another backup is already running, as described in
[issue 362593](https://gitlab.com/gitlab-org/gitlab/-/issues/362593). We strongly recommend
you make sure that all backups are complete before starting a new one.
{{< /alert >}}
{{< alert type="note" >}}
You can only restore a backup to exactly the same version and type (CE/EE)
of GitLab on which it was created.
{{< /alert >}}
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
```shell
sudo gitlab-backup create
```
{{< /tab >}}
{{< tab title="Helm chart (Kubernetes)" >}}
Run the backup task by using `kubectl` to run the `backup-utility` script on the GitLab toolbox pod. For more details, see the [charts backup documentation](https://docs.gitlab.com/charts/backup-restore/backup.html).
{{< /tab >}}
{{< tab title="Docker" >}}
Run the backup from the host.
```shell
docker exec -t <container name> gitlab-backup create
```
{{< /tab >}}
{{< tab title="Self-compiled" >}}
```shell
sudo -u git -H bundle exec rake gitlab:backup:create RAILS_ENV=production
```
{{< /tab >}}
{{< /tabs >}}
If your GitLab deployment has multiple nodes, you need to pick a node for running the backup command. You must ensure that the designated node:
- is persistent, and not subject to auto-scaling.
- has the GitLab Rails application already installed. If Puma or Sidekiq is running, then Rails is installed.
- has sufficient storage and memory to produce the backup file.
Example output:
```plaintext
Dumping database tables:
- Dumping table events... [DONE]
- Dumping table issues... [DONE]
- Dumping table keys... [DONE]
- Dumping table merge_requests... [DONE]
- Dumping table milestones... [DONE]
- Dumping table namespaces... [DONE]
- Dumping table notes... [DONE]
- Dumping table projects... [DONE]
- Dumping table protected_branches... [DONE]
- Dumping table schema_migrations... [DONE]
- Dumping table services... [DONE]
- Dumping table snippets... [DONE]
- Dumping table taggings... [DONE]
- Dumping table tags... [DONE]
- Dumping table users... [DONE]
- Dumping table users_projects... [DONE]
- Dumping table web_hooks... [DONE]
- Dumping table wikis... [DONE]
Dumping repositories:
- Dumping repository abcd... [DONE]
Creating backup archive: <backup-id>_gitlab_backup.tar [DONE]
Deleting tmp directories...[DONE]
Deleting old backups... [SKIPPING]
```
For detailed information about the backup process, see [Backup archive process](backup_archive_process.md).
### Backup options
The command-line tool GitLab provides to back up your instance can accept more
options.
#### Backup strategy option
The default backup strategy is to essentially stream data from the respective
data locations to the backup using the Linux commands `tar` and `gzip`. This works
fine in most cases, but can cause problems when data is rapidly changing.
When data changes while `tar` is reading it, the error `file changed as we read it`
may occur, and causes the backup process to fail. In that case, you can use
the backup strategy called `copy`. The strategy copies data files
to a temporary location before calling `tar` and `gzip`, avoiding the error.
A side effect is that the backup process can require up to double the disk
space. The process does its best to clean up the temporary files at each stage
so the problem doesn't compound, but it can be a considerable change for large
installations.
To use the `copy` strategy instead of the default streaming strategy, specify
`STRATEGY=copy` in the Rake task command. For example:
```shell
sudo gitlab-backup create STRATEGY=copy
```
#### Backup filename
{{< alert type="warning" >}}
If you use a custom backup filename, you can't
[limit the lifetime of the backups](#limit-backup-lifetime-for-local-files-prune-old-backups).
{{< /alert >}}
Backup files are created with filenames according to [specific defaults](backup_archive_process.md#backup-id). However, you can
override the `<backup-id>` portion of the filename by setting the `BACKUP`
environment variable. For example:
```shell
sudo gitlab-backup create BACKUP=dump
```
The resulting file is named `dump_gitlab_backup.tar`. This is useful for
systems that make use of rsync and incremental backups, and results in
considerably faster transfer speeds.
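Because the `BACKUP` variable only replaces the `<backup-id>` portion, the archive name is simple string concatenation, which you can verify locally:

```shell
# The BACKUP value is prepended to the fixed _gitlab_backup.tar suffix.
BACKUP=dump
echo "${BACKUP}_gitlab_backup.tar"    # prints: dump_gitlab_backup.tar
```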
#### Backup compression
By default, Gzip fast compression is applied during backup of:
- [PostgreSQL database](#postgresql-databases) dumps.
- [blobs](#blobs), for example uploads, job artifacts, external merge request diffs.
The default command is `gzip -c -1`. You can override this command with `COMPRESS_CMD`. Similarly, you can override the decompression command with `DECOMPRESS_CMD`.
Caveats:
- The compression command is used in a pipeline, so your custom command must output to `stdout`.
- If you specify a command that is not packaged with GitLab, then you must install it yourself.
- The resultant filenames will still end in `.gz`.
- The default decompression command, used during restore, is `gzip -cd`. Therefore if you override the compression command to use a format that cannot be decompressed by `gzip -cd`, you must override the decompression command during restore.
- [Do not place environment variables after the backup command](https://gitlab.com/gitlab-org/gitlab/-/issues/433227). For example, `gitlab-backup create COMPRESS_CMD="pigz -c --best"` doesn't work as intended.
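To illustrate the first caveat: the compression and decompression commands both read `stdin` and write `stdout`, so a pipeline-safe custom command can be checked with a local round trip using the default commands:

```shell
# Round trip mirroring the default pipeline: compress with gzip -c -1,
# decompress with gzip -cd. Both stages read stdin and write stdout.
printf 'backup data' | gzip -c -1 | gzip -cd    # prints: backup data
```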
##### Default compression: Gzip with fastest method
```shell
gitlab-backup create
```
##### Gzip with slowest method
```shell
COMPRESS_CMD="gzip -c --best" gitlab-backup create
```
If `gzip` was used for backup, then restore does not require any options:
```shell
gitlab-backup restore
```
##### No compression
If your backup destination has built-in automatic compression, then you may wish to skip compression.
The `tee` command pipes `stdin` to `stdout`.
```shell
COMPRESS_CMD=tee gitlab-backup create
```
And on restore:
```shell
DECOMPRESS_CMD=tee gitlab-backup restore
```
##### Parallel compression with `pigz`
{{< alert type="warning" >}}
While we support using `COMPRESS_CMD` and `DECOMPRESS_CMD` to override the default Gzip compression library, we only test the default Gzip library with default options on a routine basis. You are responsible for testing and validating the viability of your backups. We strongly recommend this as best practice in general for backups, whether overriding the compression command or not. If you encounter issues with another compression library, you should revert back to the default. Troubleshooting and fixing errors with alternative libraries are a lower priority for GitLab.
{{< /alert >}}
{{< alert type="note" >}}
`pigz` is not included in the GitLab Linux package. You must install it yourself.
{{< /alert >}}
An example of compressing backups with `pigz` using 4 processes:
```shell
COMPRESS_CMD="pigz --compress --stdout --fast --processes=4" sudo gitlab-backup create
```
Because `pigz` compresses to the `gzip` format, `pigz` is not required to decompress backups that were compressed by `pigz`. However, it can still offer a performance benefit over `gzip`. An example of decompressing backups with `pigz`:
```shell
DECOMPRESS_CMD="pigz --decompress --stdout" sudo gitlab-backup restore
```
##### Parallel compression with `zstd`
{{< alert type="warning" >}}
While we support using `COMPRESS_CMD` and `DECOMPRESS_CMD` to override the default Gzip compression library, we only test the default Gzip library with default options on a routine basis. You are responsible for testing and validating the viability of your backups. We strongly recommend this as best practice in general for backups, whether overriding the compression command or not. If you encounter issues with another compression library, you should revert back to the default. Troubleshooting and fixing errors with alternative libraries are a lower priority for GitLab.
{{< /alert >}}
{{< alert type="note" >}}
`zstd` is not included in the GitLab Linux package. You must install it yourself.
{{< /alert >}}
An example of compressing backups with `zstd` using 4 threads:
```shell
COMPRESS_CMD="zstd --compress --stdout --fast --threads=4" sudo gitlab-backup create
```
An example of decompressing backups with `zstd`:
```shell
DECOMPRESS_CMD="zstd --decompress --stdout" sudo gitlab-backup restore
```
#### Confirm archive can be transferred
To ensure the generated archive is transferable by rsync, you can set the `GZIP_RSYNCABLE=yes`
option. This passes the `--rsyncable` option to `gzip`, which is useful only in
combination with [the backup filename option](#backup-filename).
The `--rsyncable` option in `gzip` isn't guaranteed to be available
on all distributions. To verify that it's available in your distribution, run
`gzip --help` or consult the man pages.
```shell
sudo gitlab-backup create BACKUP=dump GZIP_RSYNCABLE=yes
```
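Because `--rsyncable` support varies by distribution, you can check for it before relying on it. A quick sketch (the output wording is illustrative):

```shell
# Report whether this gzip build advertises the --rsyncable option
if gzip --help 2>&1 | grep -q 'rsyncable'; then
  echo "rsyncable: supported"
else
  echo "rsyncable: not supported"
fi
```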
#### Excluding specific data from the backup
Depending on your installation type, slightly different components can be skipped on backup creation.
{{< tabs >}}
{{< tab title="Linux package (Omnibus) / Docker / Self-compiled" >}}
<!-- source: https://gitlab.com/gitlab-org/gitlab/-/blob/d693aa7f894c7306a0d20ab6d138a7b95785f2ff/lib/backup/manager.rb#L117-133 -->
- `db` (database)
- `repositories` (Git repositories data, including wikis)
- `uploads` (attachments)
- `builds` (CI job output logs)
- `artifacts` (CI job artifacts)
- `pages` (Pages content)
- `lfs` (LFS objects)
- `terraform_state` (Terraform states)
- `registry` (Container registry images)
- `packages` (Packages)
- `ci_secure_files` (Project-level secure files)
- `external_diffs` (External merge request diffs)
{{< /tab >}}
{{< tab title="Helm chart (Kubernetes)" >}}
<!-- source: https://gitlab.com/gitlab-org/build/CNG/-/blob/068e146db915efcd875414e04403410b71a2e70c/gitlab-toolbox/scripts/bin/backup-utility#L19 -->
- `db` (database)
- `repositories` (Git repositories data, including wikis)
- `uploads` (attachments)
- `artifacts` (CI job artifacts and output logs)
- `pages` (Pages content)
- `lfs` (LFS objects)
- `terraform_state` (Terraform states)
- `registry` (Container registry images)
- `packages` (Package registry)
- `ci_secure_files` (Project-level Secure Files)
- `external_diffs` (Merge request diffs)
{{< /tab >}}
{{< /tabs >}}
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
```shell
sudo gitlab-backup create SKIP=db,uploads
```
{{< /tab >}}
{{< tab title="Helm chart (Kubernetes)" >}}
See [Skipping components](https://docs.gitlab.com/charts/backup-restore/backup.html#skipping-components) in charts backup documentation.
{{< /tab >}}
{{< tab title="Self-compiled" >}}
```shell
sudo -u git -H bundle exec rake gitlab:backup:create SKIP=db,uploads RAILS_ENV=production
```
{{< /tab >}}
{{< /tabs >}}
`SKIP=` is also used to:
- [Skip creation of the tar file](#skipping-tar-creation) (`SKIP=tar`).
- [Skip uploading the backup to remote storage](#skip-uploading-backups-to-remote-storage) (`SKIP=remote`).
#### Skipping tar creation
{{< alert type="note" >}}
It is not possible to skip the tar creation when using [object storage](#upload-backups-to-a-remote-cloud-storage) for backups.
{{< /alert >}}
The last part of creating a backup is generation of a `.tar` file containing all the parts. In some cases, creating a `.tar` file might be wasted effort or even directly harmful, so you can skip this step by adding `tar` to the `SKIP` environment variable. Example use-cases:
- When the backup is picked up by other backup software.
- To speed up incremental backups by avoiding having to extract the backup every time. (In this case, `PREVIOUS_BACKUP` and `BACKUP` must not be specified, otherwise the specified backup is extracted, but no `.tar` file is generated at the end.)
Adding `tar` to the `SKIP` variable leaves the files and directories containing the
backup in the directory used for the intermediate files. These files are
overwritten when a new backup is created, so you should copy them elsewhere,
because only one backup can exist on the system at a time.
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
```shell
sudo gitlab-backup create SKIP=tar
```
{{< /tab >}}
{{< tab title="Self-compiled" >}}
```shell
sudo -u git -H bundle exec rake gitlab:backup:create SKIP=tar RAILS_ENV=production
```
{{< /tab >}}
{{< /tabs >}}
#### Create server-side repository backups
{{< history >}}
- [Introduced](https://gitlab.com/gitlab-org/gitaly/-/issues/4941) in `gitlab-backup` in GitLab 16.3.
- Server-side support in `gitlab-backup` for restoring a specified backup instead of the latest backup [introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/132188) in GitLab 16.6.
- Server-side support in `gitlab-backup` for creating incremental backups [introduced](https://gitlab.com/gitlab-org/gitaly/-/merge_requests/6475) in GitLab 16.6.
- Server-side support in `backup-utility` [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/438393) in GitLab 17.0.
{{< /history >}}
Instead of storing large repository backups in the backup archive, repository
backups can be configured so that the Gitaly node that hosts each repository is
responsible for creating the backup and streaming it to object storage. This
helps reduce the network resources required to create and restore a backup.
1. [Configure a server-side backup destination in Gitaly](../gitaly/configure_gitaly.md#configure-server-side-backups).
1. Create a backup using the repositories server-side option. See the following examples.
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
```shell
sudo gitlab-backup create REPOSITORIES_SERVER_SIDE=true
```
{{< /tab >}}
{{< tab title="Self-compiled" >}}
```shell
sudo -u git -H bundle exec rake gitlab:backup:create REPOSITORIES_SERVER_SIDE=true
```
{{< /tab >}}
{{< tab title="Helm chart (Kubernetes)" >}}
```shell
kubectl exec <Toolbox pod name> -it -- backup-utility --repositories-server-side
```
When you are using [cron-based backups](https://docs.gitlab.com/charts/backup-restore/backup.html#cron-based-backup),
add the `--repositories-server-side` flag to the extra arguments.
{{< /tab >}}
{{< /tabs >}}
#### Back up Git repositories concurrently
When using [multiple repository storages](../repository_storage_paths.md),
repositories can be backed up or restored concurrently to help fully use CPU time. The
following variables are available to modify the default behavior of the Rake
task:
- `GITLAB_BACKUP_MAX_CONCURRENCY`: The maximum number of projects to back up at
the same time. Defaults to the number of logical CPUs.
- `GITLAB_BACKUP_MAX_STORAGE_CONCURRENCY`: The maximum number of projects to
back up at the same time on each storage. This allows the repository backups
to be spread across storages. Defaults to `2`.
For example, with 4 repository storages:
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
```shell
sudo gitlab-backup create GITLAB_BACKUP_MAX_CONCURRENCY=4 GITLAB_BACKUP_MAX_STORAGE_CONCURRENCY=1
```
{{< /tab >}}
{{< tab title="Self-compiled" >}}
```shell
sudo -u git -H bundle exec rake gitlab:backup:create GITLAB_BACKUP_MAX_CONCURRENCY=4 GITLAB_BACKUP_MAX_STORAGE_CONCURRENCY=1
```
{{< /tab >}}
{{< tab title="Helm chart (Kubernetes)" >}}
```yaml
toolbox:
#...
extra: {}
extraEnv:
GITLAB_BACKUP_MAX_CONCURRENCY: 4
GITLAB_BACKUP_MAX_STORAGE_CONCURRENCY: 1
```
{{< /tab >}}
{{< /tabs >}}
#### Incremental repository backups
{{< history >}}
- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/351383) in GitLab 14.10 [with a flag](../feature_flags/_index.md) named `incremental_repository_backup`. Disabled by default.
- [Generally available](https://gitlab.com/gitlab-org/gitlab/-/issues/355945) in GitLab 15.3. Feature flag `incremental_repository_backup` removed.
- Server-side support for creating incremental backups [introduced](https://gitlab.com/gitlab-org/gitaly/-/issues/5461) in GitLab 16.6.
{{< /history >}}
{{< alert type="note" >}}
Only repositories support incremental backups. Therefore, if you use `INCREMENTAL=yes`, the task
creates a self-contained backup tar archive, because all subtasks except repositories
still create full backups (they overwrite the existing full backup).
See [issue 19256](https://gitlab.com/gitlab-org/gitlab/-/issues/19256) for a feature request to
support incremental backups for all subtasks.
{{< /alert >}}
Incremental repository backups can be faster than full repository backups because they only pack changes since the last backup into the backup bundle for each repository.
The incremental backup archives are not linked to each other: each archive is a self-contained backup of the instance. There must be an existing backup
to create an incremental backup from.
Use the `PREVIOUS_BACKUP=<backup-id>` option to choose the backup to use. By default, a backup file is created
as documented in the [Backup ID](backup_archive_process.md#backup-id) section. You can override the `<backup-id>` portion of the filename by setting the
[`BACKUP` environment variable](#backup-filename).
To create an incremental backup, run:
```shell
sudo gitlab-backup create INCREMENTAL=yes PREVIOUS_BACKUP=<backup-id>
```
To create an [untarred](#skipping-tar-creation) incremental backup from a tarred backup, use `SKIP=tar`:
```shell
sudo gitlab-backup create INCREMENTAL=yes SKIP=tar
```
#### Back up specific repository storages
{{< history >}}
- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/86896) in GitLab 15.0.
{{< /history >}}
When using [multiple repository storages](../repository_storage_paths.md),
repositories from specific repository storages can be backed up separately
using the `REPOSITORIES_STORAGES` option. The option accepts a comma-separated list of
storage names.
For example:
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
```shell
sudo gitlab-backup create REPOSITORIES_STORAGES=storage1,storage2
```
{{< /tab >}}
{{< tab title="Self-compiled" >}}
```shell
sudo -u git -H bundle exec rake gitlab:backup:create REPOSITORIES_STORAGES=storage1,storage2
```
{{< /tab >}}
{{< /tabs >}}
#### Back up specific repositories
{{< history >}}
- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/88094) in GitLab 15.1.
- [Skipping specific repositories added](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/121865) in GitLab 16.1.
{{< /history >}}
You can back up specific repositories using the `REPOSITORIES_PATHS` option.
Similarly, you can use `SKIP_REPOSITORIES_PATHS` to skip certain repositories.
Both options accept a comma-separated list of project or group paths. If you
specify a group path, all repositories in all projects in the group and
descendant groups are included or skipped, depending on which option you used.
For example, to back up all repositories for all projects in Group A (`group-a`), the repository for
Project C in Group B (`group-b/project-c`),
and skip the Project D in Group A (`group-a/project-d`):
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
```shell
sudo gitlab-backup create REPOSITORIES_PATHS=group-a,group-b/project-c SKIP_REPOSITORIES_PATHS=group-a/project-d
```
{{< /tab >}}
{{< tab title="Self-compiled" >}}
```shell
sudo -u git -H bundle exec rake gitlab:backup:create REPOSITORIES_PATHS=group-a,group-b/project-c SKIP_REPOSITORIES_PATHS=group-a/project-d
```
{{< /tab >}}
{{< tab title="Helm chart (Kubernetes)" >}}
```shell
REPOSITORIES_PATHS=group-a SKIP_REPOSITORIES_PATHS=group-a/project_a2 backup-utility --skip db,registry,uploads,artifacts,lfs,packages,external_diffs,terraform_state,ci_secure_files,pages
```
{{< /tab >}}
{{< /tabs >}}
#### Upload backups to a remote (cloud) storage
{{< alert type="note" >}}
It is not possible to [skip the tar creation](#skipping-tar-creation) when using object storage for backups.
{{< /alert >}}
You can let the backup script upload (using the [Fog library](https://fog.github.io/))
the `.tar` file it creates. In the following example, we use Amazon S3 for
storage, but Fog also lets you use [other storage providers](https://fog.github.io/storage/).
GitLab also [imports cloud drivers](https://gitlab.com/gitlab-org/gitlab/-/blob/da46c9655962df7d49caef0e2b9f6bbe88462a02/Gemfile#L113)
for AWS, Google, and Aliyun. A local driver is
[also available](#upload-to-locally-mounted-shares).
[Read more about using object storage with GitLab](../object_storage.md).
##### Using Amazon S3
For Linux package (Omnibus):
1. Add the following to `/etc/gitlab/gitlab.rb`:
```ruby
gitlab_rails['backup_upload_connection'] = {
'provider' => 'AWS',
'region' => 'eu-west-1',
# Choose one authentication method
# IAM Profile
'use_iam_profile' => true
# OR AWS Access and Secret key
'aws_access_key_id' => 'AKIAKIAKI',
'aws_secret_access_key' => 'secret123'
}
gitlab_rails['backup_upload_remote_directory'] = 'my.s3.bucket'
# Consider using multipart uploads when file size reaches 100 MB. Enter a number in bytes.
# gitlab_rails['backup_multipart_chunk_size'] = 104857600
```
1. If you're using the IAM Profile authentication method, ensure the instance where `backup-utility` is to be run has the following policy set (replace `<backups-bucket>` with the correct bucket name):
```json
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:GetObject",
"s3:DeleteObject"
],
"Resource": "arn:aws:s3:::<backups-bucket>/*"
}
]
}
```
1. [Reconfigure GitLab](../restart_gitlab.md#reconfigure-a-linux-package-installation)
for the changes to take effect.
##### S3 Encrypted Buckets
AWS supports these [modes for server side encryption](https://docs.aws.amazon.com/AmazonS3/latest/userguide/serv-side-encryption.html):
- Amazon S3-Managed Keys (SSE-S3)
- Customer Master Keys (CMKs) Stored in AWS Key Management Service (SSE-KMS)
- Customer-Provided Keys (SSE-C)
Use your mode of choice with GitLab. Each mode has similar, but slightly
different, configuration methods.
###### SSE-S3
To enable SSE-S3, in the backup storage options set the `server_side_encryption`
field to `AES256`. For example, in the Linux package (Omnibus):
```ruby
gitlab_rails['backup_upload_storage_options'] = {
'server_side_encryption' => 'AES256'
}
```
###### SSE-KMS
To enable SSE-KMS, you need the
[KMS key via its Amazon Resource Name (ARN) in the `arn:aws:kms:region:acct-id:key/key-id` format](https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingKMSEncryption.html).
Under the `backup_upload_storage_options` configuration setting, set:
- `server_side_encryption` to `aws:kms`.
- `server_side_encryption_kms_key_id` to the ARN of the key.
For example, in the Linux package (Omnibus):
```ruby
gitlab_rails['backup_upload_storage_options'] = {
'server_side_encryption' => 'aws:kms',
'server_side_encryption_kms_key_id' => 'arn:aws:<YOUR KMS KEY ID>:'
}
```
###### SSE-C
SSE-C requires you to set these encryption options:
- `backup_encryption`: AES256.
- `backup_encryption_key`: Unencoded, 32-byte (256 bits) key. The upload fails if this isn't exactly 32 bytes.
For example, in the Linux package (Omnibus):
```ruby
gitlab_rails['backup_encryption'] = 'AES256'
gitlab_rails['backup_encryption_key'] = '<YOUR 32-BYTE KEY HERE>'
```
If the key contains binary characters and cannot be encoded in UTF-8,
specify the key with the `GITLAB_BACKUP_ENCRYPTION_KEY` environment variable instead.
For example:
```ruby
gitlab_rails['env'] = { 'GITLAB_BACKUP_ENCRYPTION_KEY' => "\xDE\xAD\xBE\xEF" * 8 }
```
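Because the upload fails unless the key is exactly 32 bytes, you can verify a candidate key's length before configuring it. A sketch (the key shown is a throwaway example, not a secure key):

```shell
# Count the bytes in a candidate key; the result must be exactly 32
key='0123456789abcdef0123456789abcdef'
printf '%s' "$key" | wc -c
# → 32
```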
##### Digital Ocean Spaces
This example can be used for a bucket in Amsterdam (AMS3):
1. Add the following to `/etc/gitlab/gitlab.rb`:
```ruby
gitlab_rails['backup_upload_connection'] = {
'provider' => 'AWS',
'region' => 'ams3',
'aws_access_key_id' => 'AKIAKIAKI',
'aws_secret_access_key' => 'secret123',
'endpoint' => 'https://ams3.digitaloceanspaces.com'
}
gitlab_rails['backup_upload_remote_directory'] = 'my.s3.bucket'
```
1. [Reconfigure GitLab](../restart_gitlab.md#reconfigure-a-linux-package-installation)
for the changes to take effect.
If you see a `400 Bad Request` error message when using Digital Ocean Spaces,
the cause may be the use of backup encryption. Because Digital Ocean Spaces
doesn't support encryption, remove or comment the line that contains
`gitlab_rails['backup_encryption']`.
##### Other S3 Providers
Not all S3 providers are fully compatible with the Fog library. For example,
if you see a `411 Length Required` error message after attempting to upload,
you may need to downgrade the `aws_signature_version` value from the default
value to `2`, [due to this issue](https://github.com/fog/fog-aws/issues/428).
For self-compiled installations:
1. Edit `/home/git/gitlab/config/gitlab.yml`:
```yaml
backup:
# snip
upload:
# Fog storage connection settings, see https://fog.github.io/storage/ .
connection:
provider: AWS
region: eu-west-1
aws_access_key_id: AKIAKIAKI
aws_secret_access_key: 'secret123'
# If using an IAM Profile, leave aws_access_key_id & aws_secret_access_key empty
# ie. aws_access_key_id: ''
# use_iam_profile: 'true'
# The remote 'directory' to store your backups. For S3, this would be the bucket name.
remote_directory: 'my.s3.bucket'
# Specifies Amazon S3 storage class to use for backups, this is optional
# storage_class: 'STANDARD'
#
# Turns on AWS Server-Side Encryption with Amazon Customer-Provided Encryption Keys for backups, this is optional
# 'encryption' must be set in order for this to have any effect.
# 'encryption_key' should be set to the 256-bit encryption key for Amazon S3 to use to encrypt or decrypt.
# To avoid storing the key on disk, the key can also be specified in the `GITLAB_BACKUP_ENCRYPTION_KEY` environment variable.
# encryption: 'AES256'
# encryption_key: '<key>'
#
#
# Turns on AWS Server-Side Encryption with Amazon S3-Managed keys (optional)
# https://docs.aws.amazon.com/AmazonS3/latest/userguide/serv-side-encryption.html
# For SSE-S3, set 'server_side_encryption' to 'AES256'.
# For SSE-KMS, set 'server_side_encryption' to 'aws:kms'. Set
# 'server_side_encryption_kms_key_id' to the ARN of customer master key.
# storage_options:
# server_side_encryption: 'aws:kms'
# server_side_encryption_kms_key_id: 'arn:aws:kms:YOUR-KEY-ID-HERE'
```
1. [Restart GitLab](../restart_gitlab.md#self-compiled-installations)
for the changes to take effect.
##### Using Google Cloud Storage
To use Google Cloud Storage to save backups, you must first create an
access key from the Google console:
1. Go to the [Google storage settings page](https://console.cloud.google.com/storage/settings).
1. Select **Interoperability**, and then create an access key.
1. Make note of the **Access Key** and **Secret** and replace them in the
following configurations.
1. In the buckets advanced settings ensure the Access Control option
**Set object-level and bucket-level permissions** is selected.
1. Ensure you have already created a bucket.
For the Linux package (Omnibus):
1. Edit `/etc/gitlab/gitlab.rb`:
```ruby
gitlab_rails['backup_upload_connection'] = {
'provider' => 'Google',
'google_storage_access_key_id' => 'Access Key',
'google_storage_secret_access_key' => 'Secret',
## If you have CNAME buckets (foo.example.com), you might run into SSL issues
## when uploading backups ("hostname foo.example.com.storage.googleapis.com
## does not match the server certificate"). In that case, uncomment the following
## setting. See: https://github.com/fog/fog/issues/2834
#'path_style' => true
}
gitlab_rails['backup_upload_remote_directory'] = 'my.google.bucket'
```
1. [Reconfigure GitLab](../restart_gitlab.md#reconfigure-a-linux-package-installation)
for the changes to take effect.
For self-compiled installations:
1. Edit `/home/git/gitlab/config/gitlab.yml`:
```yaml
backup:
upload:
connection:
provider: 'Google'
google_storage_access_key_id: 'Access Key'
google_storage_secret_access_key: 'Secret'
remote_directory: 'my.google.bucket'
```
1. [Restart GitLab](../restart_gitlab.md#self-compiled-installations)
for the changes to take effect.
##### Using Azure Blob storage
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
1. Edit `/etc/gitlab/gitlab.rb`:
```ruby
gitlab_rails['backup_upload_connection'] = {
'provider' => 'AzureRM',
'azure_storage_account_name' => '<AZURE STORAGE ACCOUNT NAME>',
'azure_storage_access_key' => '<AZURE STORAGE ACCESS KEY>',
'azure_storage_domain' => 'blob.core.windows.net', # Optional
}
gitlab_rails['backup_upload_remote_directory'] = '<AZURE BLOB CONTAINER>'
```
If you are using [a managed identity](../object_storage.md#azure-workload-and-managed-identities), omit `azure_storage_access_key`:
```ruby
gitlab_rails['backup_upload_connection'] = {
'provider' => 'AzureRM',
'azure_storage_account_name' => '<AZURE STORAGE ACCOUNT NAME>',
'azure_storage_domain' => '<AZURE STORAGE DOMAIN>' # Optional
}
gitlab_rails['backup_upload_remote_directory'] = '<AZURE BLOB CONTAINER>'
```
1. [Reconfigure GitLab](../restart_gitlab.md#reconfigure-a-linux-package-installation)
for the changes to take effect.
{{< /tab >}}
{{< tab title="Self-compiled" >}}
1. Edit `/home/git/gitlab/config/gitlab.yml`:
```yaml
backup:
upload:
connection:
provider: 'AzureRM'
azure_storage_account_name: '<AZURE STORAGE ACCOUNT NAME>'
azure_storage_access_key: '<AZURE STORAGE ACCESS KEY>'
remote_directory: '<AZURE BLOB CONTAINER>'
```
1. [Restart GitLab](../restart_gitlab.md#self-compiled-installations)
for the changes to take effect.
{{< /tab >}}
{{< /tabs >}}
For more details, see the [table of Azure parameters](../object_storage.md#azure-blob-storage).
##### Specifying a custom directory for backups
This option works only for remote storage. If you want to group your backups,
you can pass a `DIRECTORY` environment variable:
```shell
sudo gitlab-backup create DIRECTORY=daily
sudo gitlab-backup create DIRECTORY=weekly
```
#### Skip uploading backups to remote storage
If you have configured GitLab to [upload backups in a remote storage](#upload-backups-to-a-remote-cloud-storage),
you can use the `SKIP=remote` option to skip uploading your backups to the remote storage.
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
```shell
sudo gitlab-backup create SKIP=remote
```
{{< /tab >}}
{{< tab title="Self-compiled" >}}
```shell
sudo -u git -H bundle exec rake gitlab:backup:create SKIP=remote RAILS_ENV=production
```
{{< /tab >}}
{{< /tabs >}}
#### Upload to locally-mounted shares
You can send backups to a locally-mounted share (for example, `NFS`,`CIFS`, or `SMB`) using the Fog
[`Local`](https://github.com/fog/fog-local#usage) storage provider.
To do this, you must set the following configuration keys:
- `backup_upload_connection.local_root`: mounted directory that backups are copied to.
- `backup_upload_remote_directory`: subdirectory of the `backup_upload_connection.local_root` directory. It is created if it doesn't exist.
If you want to copy the tarballs to the root of your mounted directory, use `.`.
When mounted, the directory set in the `local_root` key must be owned by either:
- The `git` user. For `CIFS` and `SMB`, mount with the `uid=` of the `git` user.
- The user that you are executing the backup tasks as. For the Linux package (Omnibus), this is the `git` user.
Because file system performance may affect overall GitLab performance,
[we don't recommend using cloud-based file systems for storage](../nfs.md#avoid-using-cloud-based-file-systems).
##### Avoid conflicting configuration
Don't set the following configuration keys to the same path:
- `gitlab_rails['backup_path']` (`backup.path` for self-compiled installations).
- `gitlab_rails['backup_upload_connection'].local_root` (`backup.upload.connection.local_root` for self-compiled installations).
The `backup_path` configuration key sets the local location of the backup file. The `upload` configuration key is
intended for use when the backup file is uploaded to a separate server, perhaps for archival purposes.
If these configuration keys are set to the same location, the upload feature fails because a backup already exists at
the upload location. This failure causes the upload feature to delete the backup because it assumes it's a residual file
remaining after the failed upload attempt.
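For example, a non-conflicting Linux package configuration keeps the two locations distinct (paths are illustrative; the first is the default):

```ruby
# Local staging directory for backup archives (the default)
gitlab_rails['backup_path'] = '/var/opt/gitlab/backups'

# Separate mounted destination for the Local upload provider
gitlab_rails['backup_upload_connection'] = {
  :provider => 'Local',
  :local_root => '/mnt/backups' # must not be the same path as backup_path
}
gitlab_rails['backup_upload_remote_directory'] = 'gitlab_backups'
```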
##### Configure uploads to locally-mounted shares
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
1. Edit `/etc/gitlab/gitlab.rb`:
```ruby
gitlab_rails['backup_upload_connection'] = {
:provider => 'Local',
:local_root => '/mnt/backups'
}
# The directory inside the mounted folder to copy backups to
# Use '.' to store them in the root directory
gitlab_rails['backup_upload_remote_directory'] = 'gitlab_backups'
```
1. [Reconfigure GitLab](../restart_gitlab.md#reconfigure-a-linux-package-installation)
for the changes to take effect.
{{< /tab >}}
{{< tab title="Self-compiled" >}}
1. Edit `/home/git/gitlab/config/gitlab.yml`:
```yaml
backup:
upload:
# Fog storage connection settings, see https://fog.github.io/storage/ .
connection:
provider: Local
local_root: '/mnt/backups'
# The directory inside the mounted folder to copy backups to
# Use '.' to store them in the root directory
remote_directory: 'gitlab_backups'
```
1. [Restart GitLab](../restart_gitlab.md#self-compiled-installations)
for the changes to take effect.
{{< /tab >}}
{{< /tabs >}}
#### Backup archive permissions
The backup archives created by GitLab (`1393513186_2014_02_27_gitlab_backup.tar`)
have the owner/group `git`/`git` and 0600 permissions by default. This is
meant to avoid other system users reading GitLab data. If you need the backup
archives to have different permissions, you can use the `archive_permissions`
setting.
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
1. Edit `/etc/gitlab/gitlab.rb`:
```ruby
gitlab_rails['backup_archive_permissions'] = 0644 # Makes the backup archives world-readable
```
1. [Reconfigure GitLab](../restart_gitlab.md#reconfigure-a-linux-package-installation)
for the changes to take effect.
{{< /tab >}}
{{< tab title="Self-compiled" >}}
1. Edit `/home/git/gitlab/config/gitlab.yml`:
```yaml
backup:
archive_permissions: 0644 # Makes the backup archives world-readable
```
1. [Restart GitLab](../restart_gitlab.md#self-compiled-installations)
for the changes to take effect.
{{< /tab >}}
{{< /tabs >}}
#### Configuring cron to make daily backups
{{< alert type="warning" >}}
The following cron jobs do not [back up your GitLab configuration files](#storing-configuration-files)
or [SSH host keys](https://superuser.com/questions/532040/copy-ssh-keys-from-one-server-to-another-server/532079#532079).
{{< /alert >}}
You can schedule a cron job that backs up your repositories and GitLab metadata.
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
1. Edit the crontab for the `root` user:
```shell
sudo su -
crontab -e
```
1. There, add the following line to schedule the backup every day at 2 AM:
```plaintext
0 2 * * * /opt/gitlab/bin/gitlab-backup create CRON=1
```
{{< /tab >}}
{{< tab title="Self-compiled" >}}
1. Edit the crontab for the `git` user:
```shell
sudo -u git crontab -e
```
1. Add the following lines at the bottom:
```plaintext
# Create a full backup of the GitLab repositories and SQL database every day at 2am
0 2 * * * cd /home/git/gitlab && PATH=/usr/local/bin:/usr/bin:/bin bundle exec rake gitlab:backup:create RAILS_ENV=production CRON=1
```
{{< /tab >}}
{{< /tabs >}}
The `CRON=1` environment setting directs the backup script to hide all progress
output if there aren't any errors. This is recommended to reduce cron spam.
When troubleshooting backup problems, however, replace `CRON=1` with `--trace` to log verbosely.
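For example, a temporary troubleshooting cron entry might log verbose output to a file (the log path is an example):

```plaintext
0 2 * * * /opt/gitlab/bin/gitlab-backup create --trace >> /var/log/gitlab-backup.log 2>&1
```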
#### Limit backup lifetime for local files (prune old backups)
{{< alert type="warning" >}}
The process described in this section doesn't work if you used a [custom filename](#backup-filename)
for your backups.
{{< /alert >}}
To prevent regular backups from using all your disk space, you may want to set a limited lifetime
for backups. The next time the backup task runs, backups older than the `backup_keep_time` are
pruned.
This configuration option manages only local files. GitLab doesn't prune old
files stored in a third-party [object storage](#upload-backups-to-a-remote-cloud-storage)
because the user may not have permission to list and delete files. It's
recommended that you configure the appropriate retention policy for your object
storage (for example, [AWS S3](https://docs.aws.amazon.com/AmazonS3/latest/user-guide/create-lifecycle.html)).
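As a sketch of such a retention policy, an AWS S3 lifecycle configuration that expires backup objects after 7 days could look like the following (the rule ID and empty prefix are examples; consult the AWS documentation for the full schema):

```json
{
  "Rules": [
    {
      "ID": "expire-gitlab-backups",
      "Status": "Enabled",
      "Filter": { "Prefix": "" },
      "Expiration": { "Days": 7 }
    }
  ]
}
```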
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
1. Edit `/etc/gitlab/gitlab.rb`:
```ruby
## Limit backup lifetime to 7 days - 604800 seconds
gitlab_rails['backup_keep_time'] = 604800
```
1. [Reconfigure GitLab](../restart_gitlab.md#reconfigure-a-linux-package-installation)
for the changes to take effect.
{{< /tab >}}
{{< tab title="Self-compiled" >}}
1. Edit `/home/git/gitlab/config/gitlab.yml`:
```yaml
backup:
## Limit backup lifetime to 7 days - 604800 seconds
keep_time: 604800
```
1. [Restart GitLab](../restart_gitlab.md#self-compiled-installations)
for the changes to take effect.
{{< /tab >}}
{{< /tabs >}}
#### Back up and restore for installations using PgBouncer
Do not back up or restore GitLab through a PgBouncer connection. These
tasks must [bypass PgBouncer and connect directly to the PostgreSQL primary database node](#bypassing-pgbouncer),
or they cause a GitLab outage.
When the GitLab backup or restore task is used with PgBouncer, the
following error message is shown:
```ruby
ActiveRecord::StatementInvalid: PG::UndefinedTable
```
Each time the GitLab backup runs, GitLab starts generating 500 errors, and errors about missing
tables are [logged by PostgreSQL](../logs/_index.md#postgresql-logs):
```plaintext
ERROR: relation "tablename" does not exist at character 123
```
This happens because the task uses `pg_dump`, which
[sets a null search path and explicitly includes the schema in every SQL query](https://gitlab.com/gitlab-org/gitlab/-/issues/23211)
to address [CVE-2018-1058](https://www.postgresql.org/about/news/postgresql-103-968-9512-9417-and-9322-released-1834/).
Because connections are reused with PgBouncer in transaction pooling mode,
PostgreSQL fails to search the default `public` schema. As a result,
this clearing of the search path causes tables and columns to appear
missing.
##### Bypassing PgBouncer
There are two ways to fix this:
1. [Use environment variables to override the database settings](#environment-variable-overrides) for the backup task.
1. Reconfigure a node to [connect directly to the PostgreSQL primary database node](../postgresql/pgbouncer.md#procedure-for-bypassing-pgbouncer).
###### Environment variable overrides
{{< history >}}
- Multiple databases support was [introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/133177) in GitLab 16.5.
{{< /history >}}
By default, GitLab uses the database configuration stored in a
configuration file (`database.yml`). However, you can override the database settings
for the backup and restore task by setting environment
variables that are prefixed with `GITLAB_BACKUP_`:
- `GITLAB_BACKUP_PGHOST`
- `GITLAB_BACKUP_PGUSER`
- `GITLAB_BACKUP_PGPORT`
- `GITLAB_BACKUP_PGPASSWORD`
- `GITLAB_BACKUP_PGSSLMODE`
- `GITLAB_BACKUP_PGSSLKEY`
- `GITLAB_BACKUP_PGSSLCERT`
- `GITLAB_BACKUP_PGSSLROOTCERT`
- `GITLAB_BACKUP_PGSSLCRL`
- `GITLAB_BACKUP_PGSSLCOMPRESSION`
For example, to override the database host and port to use `192.168.1.10`
and port `5432` with the Linux package (Omnibus):
```shell
sudo GITLAB_BACKUP_PGHOST=192.168.1.10 GITLAB_BACKUP_PGPORT=5432 /opt/gitlab/bin/gitlab-backup create
```
If you run GitLab on [multiple databases](../postgresql/_index.md), you can override database settings by including
the database name in the environment variable. For example, if your `main` and `ci` databases are
hosted on different database servers, append the database name after the `GITLAB_BACKUP_` prefix,
leaving the `PG*` names as-is:
```shell
sudo GITLAB_BACKUP_MAIN_PGHOST=192.168.1.10 GITLAB_BACKUP_CI_PGHOST=192.168.1.12 /opt/gitlab/bin/gitlab-backup create
```
See the [PostgreSQL documentation](https://www.postgresql.org/docs/16/libpq-envars.html)
for more details on what these parameters do.
#### `gitaly-backup` for repository backup and restore
The `gitaly-backup` binary is used by the backup Rake task to create and restore repository backups from Gitaly.
`gitaly-backup` replaces the previous backup method that directly calls RPCs on Gitaly from GitLab.
The backup Rake task must be able to find this executable. In most cases, you don't need to change
the path to the binary as it should work fine with the default path `/opt/gitlab/embedded/bin/gitaly-backup`.
If you have a specific reason to change the path, it can be configured in the Linux package (Omnibus):
1. Add the following to `/etc/gitlab/gitlab.rb`:
```ruby
gitlab_rails['backup_gitaly_backup_path'] = '/path/to/gitaly-backup'
```
1. [Reconfigure GitLab](../restart_gitlab.md#reconfigure-a-linux-package-installation)
for the changes to take effect.
## Alternative backup strategies
Because every deployment may have different capabilities, you should first review [what data needs to be backed up](#what-data-needs-to-be-backed-up) to better understand whether, and how, you can use these strategies.
For example, if you use Amazon RDS, you might choose to use its built-in backup and restore features to handle your GitLab [PostgreSQL data](#postgresql-databases), and [exclude PostgreSQL data](#excluding-specific-data-from-the-backup) when using the [backup command](#backup-command).
In the following cases, consider using file system data transfer or snapshots as part of your backup strategy:
- Your GitLab instance contains a lot of Git repository data and the GitLab backup script is too slow.
- Your GitLab instance has a lot of forked projects and the regular backup task duplicates the Git data for all of them.
- Your GitLab instance has a problem and using the regular backup and import Rake tasks isn't possible.
{{< alert type="warning" >}}
Gitaly Cluster (Praefect) [does not support snapshot backups](../gitaly/praefect/_index.md#snapshot-backup-and-recovery).
{{< /alert >}}
When considering using file system data transfer or snapshots:
- Don't use these methods to migrate from one operating system to another. The operating systems of the source and destination should be as similar as possible. For example,
don't use these methods to migrate from Ubuntu to RHEL.
- Data consistency is very important. You should stop GitLab (`sudo gitlab-ctl stop`) before
doing a file system transfer (with `rsync`, for example) or taking a snapshot to ensure all data in memory is flushed to disk. GitLab consists of multiple subsystems (Gitaly, database, file storage) that have their own buffers, queues, and storage layers. GitLab transactions can span these subsystems, which results in parts of a transaction taking different paths to disk. On live systems, file system transfers and snapshot runs fail to capture parts of the transaction still in memory.
**Example: Amazon Elastic Block Store (EBS)**
- A GitLab server using the Linux package (Omnibus) hosted on Amazon AWS.
- An EBS drive containing an ext4 file system is mounted at `/var/opt/gitlab`.
- In this case, you could make an application backup by taking an EBS snapshot.
- The backup includes all repositories, uploads, and PostgreSQL data.
**Example: Logical Volume Manager (LVM) snapshots + rsync**
- A GitLab server using the Linux package (Omnibus), with an LVM logical volume mounted at `/var/opt/gitlab`.
- Replicating the `/var/opt/gitlab` directory using rsync would not be reliable because too many files would change while rsync is running.
- Instead of rsync-ing `/var/opt/gitlab`, we create a temporary LVM snapshot, which we mount as a read-only file system at `/mnt/gitlab_backup`.
- Now we can have a longer running rsync job which creates a consistent replica on the remote server.
- The replica includes all repositories, uploads and PostgreSQL data.
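The LVM example above could be sketched as follows. The volume group name (`vg0`), logical volume name, snapshot size, mount point, and destination host are all assumptions; adjust them for your environment:

```shell
# Create a temporary snapshot of the logical volume holding /var/opt/gitlab
# (assumes volume group "vg0" and logical volume "gitlab-data")
sudo lvcreate --snapshot --size 10G --name gitlab-backup-snap /dev/vg0/gitlab-data

# Mount the snapshot as a read-only file system
sudo mkdir -p /mnt/gitlab_backup
sudo mount -o ro /dev/vg0/gitlab-backup-snap /mnt/gitlab_backup

# rsync the consistent snapshot to a remote server (hypothetical hostname)
sudo rsync -a /mnt/gitlab_backup/ backup-host:/srv/gitlab-backup/

# Clean up the snapshot when the transfer is done
sudo umount /mnt/gitlab_backup
sudo lvremove -f /dev/vg0/gitlab-backup-snap
```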
If you're running GitLab on a virtualized server, you might also be able to create
VM snapshots of the entire GitLab server. However, it's not uncommon for a VM
snapshot to require you to power down the server, which limits this solution's
practical use.
### Back up repository data separately
First, ensure you back up existing GitLab data while [skipping repositories](#excluding-specific-data-from-the-backup):
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
```shell
sudo gitlab-backup create SKIP=repositories
```
{{< /tab >}}
{{< tab title="Self-compiled" >}}
```shell
sudo -u git -H bundle exec rake gitlab:backup:create SKIP=repositories RAILS_ENV=production
```
{{< /tab >}}
{{< /tabs >}}
For manually backing up the Git repository data on disk, there are multiple possible strategies:
- Use snapshots, such as the previous examples of Amazon EBS drive snapshots, or LVM snapshots + rsync.
- Use [GitLab Geo](../geo/_index.md) and rely on the repository data on a Geo secondary site.
- [Prevent writes and copy the Git repository data](#prevent-writes-and-copy-the-git-repository-data).
- [Create an online backup by marking repositories as read-only (experimental)](#online-backup-through-marking-repositories-as-read-only-experimental).
#### Prevent writes and copy the Git repository data
Git repositories must be copied in a consistent way. If repositories
are copied during concurrent write operations,
inconsistencies or corruption issues can occur. For more details,
[issue 270422](https://gitlab.com/gitlab-org/gitlab/-/issues/270422)
has a longer discussion that explains the potential problems.
To prevent writes to the Git repository data, there are two possible approaches:
- Use [maintenance mode](../maintenance_mode/_index.md) to place GitLab in a read-only state.
- Create explicit downtime by stopping all Gitaly services before backing up the repositories:
```shell
sudo gitlab-ctl stop gitaly
# execute git data copy step
sudo gitlab-ctl start gitaly
```
You can copy Git repository data using any method, as long as writes are prevented on the data being copied
(to prevent inconsistencies and corruption issues). In order of preference and safety, the recommended methods are:
1. Use `rsync` with archive-mode, delete, and checksum options, for example:
```shell
rsync -aR --delete --checksum source destination # be extra safe with the order as it will delete existing data if inverted
```
1. Use a [`tar` pipe to copy the entire repository's directory to another server or location](../operations/moving_repositories.md#tar-pipe-to-another-server).
1. Use `sftp`, `scp`, `cp`, or any other copying method.
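As a minimal local illustration of the tar pipe approach (in practice, the receiving `tar` runs on the destination server over SSH, and the paths here are illustrative):

```shell
# Copy the contents of one directory into another through a tar pipe.
mkdir -p /tmp/repo-src /tmp/repo-dest
echo "sample" > /tmp/repo-src/config
tar -C /tmp/repo-src -cf - . | tar -C /tmp/repo-dest -xf -
cat /tmp/repo-dest/config
# → sample
```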
#### Online backup through marking repositories as read-only (experimental)
One way of backing up repositories without requiring instance-wide downtime
is to programmatically mark projects as read-only while copying the underlying data.
There are a few possible downsides to this:
- Repositories are read-only for a period of time that scales with the size of the repository.
- Backups take longer to complete because each project must be marked as read-only, potentially leading to inconsistencies. For example,
  there can be a date discrepancy between the last data available for the first project that gets backed up compared to
  the last project that gets backed up.
- Fork networks should be entirely read-only while the projects inside get backed up to prevent potential changes to the pool repository.
There is an experimental script that attempts to automate this process in
[the Geo team Runbooks project](https://gitlab.com/gitlab-org/geo-team/runbooks/-/tree/main/experimental-online-backup-through-rsync).
---
url: https://docs.gitlab.com/administration/restore_gitlab
source: https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/administration/restore_gitlab.md
date_extracted: 2025-08-13
stage: Data Access
group: Durability
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Restore GitLab
---
{{< details >}}
- Tier: Free, Premium, Ultimate
- Offering: GitLab Self-Managed
{{< /details >}}
GitLab restore operations recover data from backups to maintain system
continuity and recover from data loss. Restore operations:
- Recover database records and configuration
- Restore Git repositories, container registry images, and uploaded content
- Reinstate package registry data and CI/CD artifacts
- Restore account and group settings
- Recover project and group wikis
- Restore project-level secure files
- Recover external merge request diffs
The restore process requires an existing GitLab installation of the same
version as the backup. Follow the [prerequisites](#restore-prerequisites) and
test the complete restore process before using it in production.
## Restore prerequisites
### The destination GitLab instance must already be working
You need to have a working GitLab installation before you can perform a
restore. This is because the system user performing the restore actions (`git`)
is usually not allowed to create or delete the SQL database needed to import
data into (`gitlabhq_production`). All existing data is either erased
(SQL) or moved to a separate directory (such as repositories and uploads).
Restoring SQL data skips views owned by PostgreSQL extensions.
### The destination GitLab instance must have the exact same version
You can only restore a backup to exactly the same version and type (CE or EE)
of GitLab on which it was created. For example, CE 15.1.4.
If your backup is a different version than the current installation, you must
[downgrade](../../update/package/downgrade.md) or [upgrade](../../update/package/_index.md#upgrade-to-a-specific-version) your GitLab installation
before restoring the backup.
### GitLab secrets must be restored
To restore a backup, you must also restore the GitLab secrets.
If you are migrating to a new GitLab instance, you must copy the GitLab secrets file from the old server.
These include the database encryption key, [CI/CD variables](../../ci/variables/_index.md), and
variables used for [two-factor authentication](../../user/profile/account/two_factor_authentication.md).
Without the keys, [multiple issues occur](troubleshooting_backup_gitlab.md#when-the-secrets-file-is-lost), including loss of access by users with [two-factor authentication enabled](../../user/profile/account/two_factor_authentication.md),
and GitLab Runners cannot sign in.
Restore:
- `/etc/gitlab/gitlab-secrets.json` (Linux package installations)
- `/home/git/gitlab/.secret` (self-compiled installations)
- [Restoring the secrets](https://docs.gitlab.com/charts/backup-restore/restore.html#restoring-the-secrets) (cloud-native GitLab)
- [GitLab Helm chart secrets can be converted to the Linux package format](https://docs.gitlab.com/charts/installation/migration/helm_to_package.html), if required.
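For example, on Linux package installations you might copy the secrets file from the old server before reconfiguring. This is a sketch; the hostname is a placeholder:

```shell
# Copy the secrets file from the old server (hypothetical hostname)
sudo scp root@old-gitlab.example.com:/etc/gitlab/gitlab-secrets.json /etc/gitlab/gitlab-secrets.json

# Apply the restored secrets
sudo gitlab-ctl reconfigure
```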
### Certain GitLab configuration must match the original backed up environment
You likely want to separately restore your previous `/etc/gitlab/gitlab.rb` (for Linux package installations)
or `/home/git/gitlab/config/gitlab.yml` (for self-compiled installations) and
[any TLS or SSH keys and certificates](backup_gitlab.md#data-not-included-in-a-backup).
Certain configuration is coupled to data in PostgreSQL. For example:
- If the original environment has three repository storages (for example, `default`, `my-storage-1`, and `my-storage-2`), then the target environment must also have at least those storage names defined in configuration.
- Restoring a backup from an environment using local storage restores to local storage even if the target environment uses object storage. Migrations to object storage must be done before or after restoration.
### Restoring directories that are mount points
If you're restoring into directories that are mount points, you must ensure these directories are
empty before attempting a restore. Otherwise, GitLab attempts to move these directories before
restoring the new data, which causes an error.
Read more about [configuring NFS mounts](../nfs.md).
## Restore for Linux package installations
This procedure assumes that:
- You have installed the exact same version and type (CE/EE) of GitLab
with which the backup was created.
- You have run `sudo gitlab-ctl reconfigure` at least once.
- GitLab is running. If not, start it using `sudo gitlab-ctl start`.
First ensure your backup tar file is in the backup directory described in the
`gitlab.rb` configuration `gitlab_rails['backup_path']`. The default is
`/var/opt/gitlab/backups`. The backup file needs to be owned by the `git` user.
```shell
sudo cp 11493107454_2018_04_25_10.6.4-ce_gitlab_backup.tar /var/opt/gitlab/backups/
sudo chown git:git /var/opt/gitlab/backups/11493107454_2018_04_25_10.6.4-ce_gitlab_backup.tar
```
Stop the processes that are connected to the database. Leave the rest of GitLab
running:
```shell
sudo gitlab-ctl stop puma
sudo gitlab-ctl stop sidekiq
# Verify
sudo gitlab-ctl status
```
Next, ensure you have completed the [restore prerequisites](#restore-prerequisites) steps and have run `gitlab-ctl reconfigure`
after copying over the GitLab secrets file from the original installation.
Next, restore the backup, specifying the ID of the backup you wish to
restore:
{{< alert type="warning" >}}
The following command overwrites the contents of your GitLab database!
{{< /alert >}}
```shell
# NOTE: "_gitlab_backup.tar" is omitted from the name
sudo gitlab-backup restore BACKUP=11493107454_2018_04_25_10.6.4-ce
```
If there's a GitLab version mismatch between your backup tar file and the
installed version of GitLab, the restore command aborts with an error
message:
```plaintext
GitLab version mismatch:
Your current GitLab version (16.5.0-ee) differs from the GitLab version in the backup!
Please switch to the following version and try again:
version: 16.4.3-ee
```
Install the [correct GitLab version](https://packages.gitlab.com/gitlab/),
and then try again.
{{< alert type="warning" >}}
The restore command requires [additional parameters](backup_gitlab.md#back-up-and-restore-for-installations-using-pgbouncer) when
your installation is using PgBouncer, for either performance reasons or when using it with a Patroni cluster.
{{< /alert >}}
Run reconfigure on the PostgreSQL node:
```shell
sudo gitlab-ctl reconfigure
```
Next, start and [check](../raketasks/maintenance.md#check-gitlab-configuration) GitLab:
```shell
sudo gitlab-ctl start
sudo gitlab-rake gitlab:check SANITIZE=true
```
Verify that the [database values can be decrypted](../raketasks/check.md#verify-database-values-can-be-decrypted-using-the-current-secrets)
especially if `/etc/gitlab/gitlab-secrets.json` was restored, or if a different server is
the target for the restore.
```shell
sudo gitlab-rake gitlab:doctor:secrets
```
For added assurance, you can perform [an integrity check on the uploaded files](../raketasks/check.md#uploaded-files-integrity):
```shell
sudo gitlab-rake gitlab:artifacts:check
sudo gitlab-rake gitlab:lfs:check
sudo gitlab-rake gitlab:uploads:check
```
After the restore is completed, it's recommended to generate database statistics to improve the database performance and avoid inconsistencies in the UI:
1. Enter the [database console](https://docs.gitlab.com/omnibus/settings/database.html#connecting-to-the-postgresql-database).
1. Run the following:
```sql
SET STATEMENT_TIMEOUT=0 ; ANALYZE VERBOSE;
```
There are ongoing discussions about integrating the command into the restore command, see [issue 276184](https://gitlab.com/gitlab-org/gitlab/-/issues/276184) for more details.
## Restore for Docker image and GitLab Helm chart installations
For GitLab installations using the Docker image or the GitLab Helm chart on a
Kubernetes cluster, the restore task expects the restore directories to be
empty. However, with Docker and Kubernetes volume mounts, some system level
directories may be created at the volume roots, such as the `lost+found`
directory found in Linux operating systems. These directories are usually owned
by `root`, which can cause access permission errors because the restore Rake task
runs as the `git` user. To restore a GitLab installation, you must confirm that
the restore target directories are empty.
For both these installation types, the backup tarball has to be available in
the backup location (default location is `/var/opt/gitlab/backups`).
### Restore for Helm chart installations
The GitLab Helm chart uses the process documented in
[restoring a GitLab Helm chart installation](https://docs.gitlab.com/charts/backup-restore/restore.html#restoring-a-gitlab-installation).
### Restore for Docker image installations
If you're using [Docker Swarm](../../install/docker/installation.md#install-gitlab-by-using-docker-swarm-mode),
the container might restart during the restore process because Puma is shut down,
and so the container health check fails. To work around this problem,
temporarily disable the health check mechanism.
1. Edit `docker-compose.yml`:
```yaml
healthcheck:
disable: true
```
1. Deploy the stack:
```shell
docker stack deploy --compose-file docker-compose.yml mystack
```
For more information, see [issue 6846](https://gitlab.com/gitlab-org/omnibus-gitlab/-/issues/6846 "GitLab restore can fail owing to `gitlab-healthcheck`").
The restore task can be run from the host:
```shell
# Stop the processes that are connected to the database
docker exec -it <name of container> gitlab-ctl stop puma
docker exec -it <name of container> gitlab-ctl stop sidekiq
# Verify that the processes are all down before continuing
docker exec -it <name of container> gitlab-ctl status
# Run the restore. NOTE: "_gitlab_backup.tar" is omitted from the name
docker exec -it <name of container> gitlab-backup restore BACKUP=11493107454_2018_04_25_10.6.4-ce
# Restart the GitLab container
docker restart <name of container>
# Check GitLab
docker exec -it <name of container> gitlab-rake gitlab:check SANITIZE=true
```
## Restore for self-compiled installations
First, ensure your backup tar file is in the backup directory described in the
`gitlab.yml` configuration:
```yaml
## Backup settings
backup:
path: "tmp/backups" # Relative paths are relative to Rails.root (default: tmp/backups/)
```
The default is `/home/git/gitlab/tmp/backups`, and it needs to be owned by the `git` user. Now, you can begin the restore procedure:
```shell
# Stop processes that are connected to the database
sudo service gitlab stop
sudo -u git -H bundle exec rake gitlab:backup:restore RAILS_ENV=production
```
Example output:
```plaintext
Unpacking backup... [DONE]
Restoring database tables:
-- create_table("events", {:force=>true})
-> 0.2231s
[...]
- Loading fixture events...[DONE]
- Loading fixture issues...[DONE]
- Loading fixture keys...[SKIPPING]
- Loading fixture merge_requests...[DONE]
- Loading fixture milestones...[DONE]
- Loading fixture namespaces...[DONE]
- Loading fixture notes...[DONE]
- Loading fixture projects...[DONE]
- Loading fixture protected_branches...[SKIPPING]
- Loading fixture schema_migrations...[DONE]
- Loading fixture services...[SKIPPING]
- Loading fixture snippets...[SKIPPING]
- Loading fixture taggings...[SKIPPING]
- Loading fixture tags...[SKIPPING]
- Loading fixture users...[DONE]
- Loading fixture users_projects...[DONE]
- Loading fixture web_hooks...[SKIPPING]
- Loading fixture wikis...[SKIPPING]
Restoring repositories:
- Restoring repository abcd... [DONE]
- Object pool 1 ...
Deleting tmp directories...[DONE]
```
Next, restore `/home/git/gitlab/.secret` if necessary, [as previously mentioned](#restore-prerequisites).
Restart GitLab:
```shell
sudo service gitlab restart
```
## Restoring only one or a few projects or groups from a backup
Although the Rake task used to restore a GitLab instance doesn't support
restoring a single project or group, you can use a workaround by restoring
your backup to a separate, temporary GitLab instance, and then export your
project or group from there:
1. [Install a new GitLab](../../install/_index.md) instance at the same version as
the backed-up instance from which you want to restore.
1. Restore the backup into this new instance, then
export your [project](../../user/project/settings/import_export.md)
or [group](../../user/project/settings/import_export.md#migrate-groups-by-uploading-an-export-file-deprecated). For
more information about what is and isn't exported, see the export feature's documentation.
1. After the export is complete, go to the old instance and import the exported file.
1. After importing the projects or groups that you wanted is complete, you may
delete the new, temporary GitLab instance.
A feature request to provide direct restore of individual projects or groups
is being discussed in [issue #17517](https://gitlab.com/gitlab-org/gitlab/-/issues/17517).
## Restoring an incremental repository backup
Each backup archive contains a full self-contained backup, including those created through the [incremental repository backup procedure](backup_gitlab.md#incremental-repository-backups). To restore an incremental repository backup, use the same instructions as restoring any other regular backup archive.
## Restore options
The command-line tool GitLab provides to restore from backup can accept more
options.
### Specify backup to restore when there are more than one
Backup files use a naming scheme [starting with a backup ID](backup_archive_process.md#backup-id). When more than one backup exists, you must specify which
`<backup-id>_gitlab_backup.tar` file to restore by setting the environment variable `BACKUP=<backup-id>`.
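The backup ID is the archive filename without the `_gitlab_backup.tar` suffix. For example, you can derive it with shell parameter expansion:

```shell
# Strip the "_gitlab_backup.tar" suffix to get the value for BACKUP=
f="11493107454_2018_04_25_10.6.4-ce_gitlab_backup.tar"
echo "${f%_gitlab_backup.tar}"
# → 11493107454_2018_04_25_10.6.4-ce
```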
### Disable prompts during restore
During a restore from backup, the restore script prompts for confirmation:
- If the **Write to authorized_keys** setting is enabled, before the restore script deletes and rebuilds the `authorized_keys` file.
- When restoring the database, before the restore script removes all existing tables.
- After restoring the database, if there were errors in restoring the schema, before continuing because further problems are likely.
To disable these prompts, set the `GITLAB_ASSUME_YES` environment variable to `1`.
- Linux package installations:
```shell
sudo GITLAB_ASSUME_YES=1 gitlab-backup restore
```
- Self-compiled installations:
```shell
sudo -u git -H GITLAB_ASSUME_YES=1 bundle exec rake gitlab:backup:restore RAILS_ENV=production
```
The `force=yes` environment variable also disables these prompts.
### Excluding tasks on restore
You can exclude specific tasks on restore by adding the environment variable `SKIP`, whose values are a comma-separated list of the following options:
- `db` (database)
- `uploads` (attachments)
- `builds` (CI job output logs)
- `artifacts` (CI job artifacts)
- `lfs` (LFS objects)
- `terraform_state` (Terraform states)
- `registry` (Container registry images)
- `pages` (Pages content)
- `repositories` (Git repositories data)
- `packages` (Packages)
To exclude specific tasks:
- Linux package installations:
```shell
sudo gitlab-backup restore BACKUP=<backup-id> SKIP=db,uploads
```
- Self-compiled installations:
```shell
sudo -u git -H bundle exec rake gitlab:backup:restore BACKUP=<backup-id> SKIP=db,uploads RAILS_ENV=production
```
### Restore specific repository storages
{{< history >}}
- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/86896) in GitLab 15.0.
{{< /history >}}
{{< alert type="warning" >}}
GitLab 17.1 and earlier are [affected by a race condition](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/158412)
that can cause data loss. The problem affects repositories that have been forked and use GitLab
[object pools](../repository_storage_paths.md#hashed-object-pools). To avoid data loss,
only restore backups by using GitLab 17.2 or later.
{{< /alert >}}
When using [multiple repository storages](../repository_storage_paths.md),
repositories from specific repository storages can be restored separately
using the `REPOSITORIES_STORAGES` option. The option accepts a comma-separated list of
storage names.
For example:
- Linux package installations:
```shell
sudo gitlab-backup restore BACKUP=<backup-id> REPOSITORIES_STORAGES=storage1,storage2
```
- Self-compiled installations:
```shell
sudo -u git -H bundle exec rake gitlab:backup:restore BACKUP=<backup-id> REPOSITORIES_STORAGES=storage1,storage2
```
### Restore specific repositories
{{< history >}}
- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/88094) in GitLab 15.1.
{{< /history >}}
{{< alert type="warning" >}}
GitLab 17.1 and earlier are [affected by a race condition](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/158412)
that can cause data loss. The problem affects repositories that have been forked and use GitLab
[object pools](../repository_storage_paths.md#hashed-object-pools). To avoid data loss, only restore backups by using GitLab
17.2 or later.
{{< /alert >}}
You can restore specific repositories using the `REPOSITORIES_PATHS` and the `SKIP_REPOSITORIES_PATHS` options.
Both options accept a comma-separated list of project and group paths. If you
specify a group path, all repositories in all projects in the group and
descendant groups are included or skipped, depending on which option you used.
Both the groups and projects must exist in the specified backup or on the target instance.
{{< alert type="note" >}}
The `REPOSITORIES_PATHS` and `SKIP_REPOSITORIES_PATHS` options apply only to Git repositories.
They do not apply to project or group database entries. A repository backup created
with `SKIP=db` cannot, by itself, be used to restore specific repositories to a new instance.
{{< /alert >}}
For example, to restore all repositories for all projects in Group A (`group-a`), the repository for
Project C in Group B (`group-b/project-c`), and skip the Project D in Group A (`group-a/project-d`):
- Linux package installations:
```shell
sudo gitlab-backup restore BACKUP=<backup-id> REPOSITORIES_PATHS=group-a,group-b/project-c SKIP_REPOSITORIES_PATHS=group-a/project-d
```
- Self-compiled installations:
```shell
sudo -u git -H bundle exec rake gitlab:backup:restore BACKUP=<backup-id> REPOSITORIES_PATHS=group-a,group-b/project-c SKIP_REPOSITORIES_PATHS=group-a/project-d
```
### Restore untarred backups
If an [untarred backup](backup_gitlab.md#skipping-tar-creation) (made with `SKIP=tar`) is found,
and no backup is chosen with `BACKUP=<backup-id>`, the untarred backup is used.
For example:
- Linux package installations:
```shell
sudo gitlab-backup restore
```
- Self-compiled installations:
```shell
sudo -u git -H bundle exec rake gitlab:backup:restore
```
### Restoring using server-side repository backups
{{< history >}}
- [Introduced](https://gitlab.com/gitlab-org/gitaly/-/issues/4941) in `gitlab-backup` in GitLab 16.3.
- Server-side support in `gitlab-backup` for restoring a specified backup instead of the latest backup [introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/132188) in GitLab 16.6.
- Server-side support in `gitlab-backup` for creating incremental backups [introduced](https://gitlab.com/gitlab-org/gitaly/-/merge_requests/6475) in GitLab 16.6.
- Server-side support in `backup-utility` [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/438393) in GitLab 17.0.
{{< /history >}}
When a server-side backup is collected, the restore process defaults to using the server-side restore mechanism shown in
[Create server-side repository backups](backup_gitlab.md#create-server-side-repository-backups). You can configure backup restoration so that the Gitaly
node that hosts each repository is responsible for pulling the necessary backup data directly from object storage.
1. [Configure a server-side backup destination in Gitaly](../gitaly/configure_gitaly.md#configure-server-side-backups).
1. Start a server-side backup restore process, specifying the [ID of the backup](backup_archive_process.md#backup-id) you wish to restore:
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
```shell
sudo gitlab-backup restore BACKUP=11493107454_2018_04_25_10.6.4-ce
```
{{< /tab >}}
{{< tab title="Self-compiled" >}}
```shell
sudo -u git -H bundle exec rake gitlab:backup:restore BACKUP=11493107454_2018_04_25_10.6.4-ce
```
{{< /tab >}}
{{< tab title="Helm chart (Kubernetes)" >}}
```shell
kubectl exec <Toolbox pod name> -it -- backup-utility --restore -t <backup_ID> --repositories-server-side
```
When using [cron-based backups](https://docs.gitlab.com/charts/backup-restore/backup.html#cron-based-backup),
add the `--repositories-server-side` flag to the extra arguments.
{{< /tab >}}
{{< /tabs >}}
## Troubleshooting
The following are possible problems you might encounter, along with potential
solutions.
### Restoring database backup using output warnings from a Linux package installation
If you're using backup restore procedures, you may encounter the following
warning messages:
```plaintext
ERROR: must be owner of extension pg_trgm
ERROR: must be owner of extension btree_gist
ERROR: must be owner of extension plpgsql
WARNING: no privileges could be revoked for "public" (two occurrences)
WARNING: no privileges were granted for "public" (two occurrences)
```
Be advised that the backup is successfully restored in spite of these warning
messages.
The backup Rake task runs as the `gitlab` user, which doesn't have superuser
access to the database. When a restore is initiated, it also runs as the `gitlab`
user, and it tries to alter objects it doesn't have access to.
Those objects have no influence on the database backup or restore, but they do
produce warning messages.
For more information, see:
- PostgreSQL issue tracker:
- [Not being a superuser](https://www.postgresql.org/message-id/201110220712.30886.adrian.klaver@gmail.com).
- [Having different owners](https://www.postgresql.org/message-id/2039.1177339749@sss.pgh.pa.us).
- Stack Overflow: [Resulting errors](https://stackoverflow.com/questions/4368789/error-must-be-owner-of-language-plpgsql).
### Restoring fails due to Git server hook
While restoring from backup, you can encounter an error when the following are true:
- A Git Server Hook (`custom_hook`) is configured using the method for [GitLab version 15.10 and earlier](../server_hooks.md)
- Your GitLab version is on version 15.11 and later
- You created symlinks to a directory outside of the GitLab-managed locations
The error looks like:
```plaintext
{"level":"fatal","msg":"restore: pipeline: 1 failures encountered:\n - @hashed/path/to/hashed_repository.git (path/to_project): manager: restore custom hooks, \"@hashed/path/to/hashed_repository/<BackupID>_<GitLabVersion>-ee/001.custom_hooks.tar\": rpc error: code = Internal desc = setting custom hooks: generating prepared vote: walking directory: copying file to hash: read /mnt/gitlab-app/git-data/repositories/+gitaly/tmp/default-repositories.old.<timestamp>.<temporaryfolder>/custom_hooks/compliance-triggers.d: is a directory\n","pid":3256017,"time":"2023-08-10T20:09:44.395Z"}
```
To resolve this, you can update the Git [server hooks](../server_hooks.md) for GitLab version 15.11 and later, and create a new backup.
### Successful restore with repositories showing as empty when using `fapolicyd`
When using `fapolicyd` for increased security, GitLab can report that a restore was successful but repositories show as empty. For more troubleshooting help, see
[Gitaly Troubleshooting documentation](../gitaly/troubleshooting.md#repositories-are-shown-as-empty-after-a-gitlab-restore).
|
---
stage: Data Access
group: Durability
info: To determine the technical writer assigned to the Stage/Group associated with
this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Restore GitLab
---
{{< details >}}
- Tier: Free, Premium, Ultimate
- Offering: GitLab Self-Managed
{{< /details >}}
GitLab restore operations recover data from backups to maintain system
continuity and recover from data loss. Restore operations:
- Recover database records and configuration
- Restore Git repositories, container registry images, and uploaded content
- Reinstate package registry data and CI/CD artifacts
- Restore account and group settings
- Recover project and group wikis
- Restore project-level secure files
- Recover external merge request diffs
The restore process requires an existing GitLab installation of the same
version as the backup. Follow the [prerequisites](#restore-prerequisites) and
test the complete restore process before using it in production.
## Restore prerequisites
### The destination GitLab instance must already be working
You need to have a working GitLab installation before you can perform a
restore. This is because the system user performing the restore actions (`git`)
is usually not allowed to create or delete the SQL database needed to import
data into (`gitlabhq_production`). All existing data is either erased
(SQL) or moved to a separate directory (such as repositories and uploads).
Restoring SQL data skips views owned by PostgreSQL extensions.
### The destination GitLab instance must have the exact same version
You can only restore a backup to exactly the same version and type (CE or EE)
of GitLab on which it was created. For example, CE 15.1.4.
If your backup is a different version than the current installation, you must
[downgrade](../../update/package/downgrade.md) or [upgrade](../../update/package/_index.md#upgrade-to-a-specific-version) your GitLab installation
before restoring the backup.
### GitLab secrets must be restored
To restore a backup, you must also restore the GitLab secrets.
If you are migrating to a new GitLab instance, you must copy the GitLab secrets file from the old server.
These include the database encryption key, [CI/CD variables](../../ci/variables/_index.md), and
variables used for [two-factor authentication](../../user/profile/account/two_factor_authentication.md).
Without the keys, [multiple issues occur](troubleshooting_backup_gitlab.md#when-the-secrets-file-is-lost), including loss of access by users with [two-factor authentication enabled](../../user/profile/account/two_factor_authentication.md),
and GitLab Runners cannot sign in.
Restore:
- `/etc/gitlab/gitlab-secrets.json` (Linux package installations)
- `/home/git/gitlab/.secret` (self-compiled installations)
- [Restoring the secrets](https://docs.gitlab.com/charts/backup-restore/restore.html#restoring-the-secrets) (cloud-native GitLab)
- [GitLab Helm chart secrets can be converted to the Linux package format](https://docs.gitlab.com/charts/installation/migration/helm_to_package.html), if required.
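For example, on a Linux package installation the secrets file might be copied over from the old server like this. This is only a sketch: the hostname is a placeholder, and SSH access between the two servers is assumed.

```shell
# Copy the secrets file from the old server (hostname is hypothetical)
scp old-gitlab.example.com:/etc/gitlab/gitlab-secrets.json /tmp/gitlab-secrets.json

# Install it with restrictive permissions, then reconfigure so services pick it up
sudo install -o root -g root -m 0600 /tmp/gitlab-secrets.json /etc/gitlab/gitlab-secrets.json
sudo gitlab-ctl reconfigure
```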
### Certain GitLab configuration must match the original backed up environment
You likely want to separately restore your previous `/etc/gitlab/gitlab.rb` (for Linux package installations)
or `/home/git/gitlab/config/gitlab.yml` (for self-compiled installations) and
[any TLS or SSH keys and certificates](backup_gitlab.md#data-not-included-in-a-backup).
Certain configuration is coupled to data in PostgreSQL. For example:
- If the original environment has three repository storages (for example, `default`, `my-storage-1`, and `my-storage-2`), then the target environment must also have at least those storage names defined in configuration.
- Restoring a backup from an environment using local storage restores to local storage even if the target environment uses object storage. Migrations to object storage must be done before or after restoration.
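As an illustration of the first point, on a Linux package installation the storage names from the original environment would be declared in `/etc/gitlab/gitlab.rb`. The paths below are illustrative; only the storage names must match the backed-up environment.

```ruby
# Storage names must match the original backed-up environment;
# the paths can differ on the target instance.
gitaly['configuration'] = {
  storage: [
    { name: 'default', path: '/var/opt/gitlab/git-data/repositories' },
    { name: 'my-storage-1', path: '/mnt/storage1/repositories' },
    { name: 'my-storage-2', path: '/mnt/storage2/repositories' },
  ],
}
```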
### Restoring directories that are mount points
If you're restoring into directories that are mount points, you must ensure these directories are
empty before attempting a restore. Otherwise, GitLab attempts to move these directories before
restoring the new data, which causes an error.
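A quick way to verify this is to list the mount point's contents before you start. This sketch assumes `/var/opt/gitlab/git-data` is one of the mounted restore targets:

```shell
# An empty result means the directory is safe to restore into
sudo find /var/opt/gitlab/git-data -mindepth 1 -maxdepth 1

# Or, as a yes/no check:
if [ -z "$(sudo ls -A /var/opt/gitlab/git-data)" ]; then
  echo "empty: safe to restore"
else
  echo "not empty: move or remove the contents first"
fi
```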
Read more about [configuring NFS mounts](../nfs.md).
## Restore for Linux package installations
This procedure assumes that:
- You have installed the exact same version and type (CE/EE) of GitLab
with which the backup was created.
- You have run `sudo gitlab-ctl reconfigure` at least once.
- GitLab is running. If not, start it using `sudo gitlab-ctl start`.
First ensure your backup tar file is in the backup directory described in the
`gitlab.rb` configuration `gitlab_rails['backup_path']`. The default is
`/var/opt/gitlab/backups`. The backup file needs to be owned by the `git` user.
```shell
sudo cp 11493107454_2018_04_25_10.6.4-ce_gitlab_backup.tar /var/opt/gitlab/backups/
sudo chown git:git /var/opt/gitlab/backups/11493107454_2018_04_25_10.6.4-ce_gitlab_backup.tar
```
Stop the processes that are connected to the database. Leave the rest of GitLab
running:
```shell
sudo gitlab-ctl stop puma
sudo gitlab-ctl stop sidekiq
# Verify
sudo gitlab-ctl status
```
Next, ensure you have completed the [restore prerequisites](#restore-prerequisites) steps and have run `gitlab-ctl reconfigure`
after copying over the GitLab secrets file from the original installation.
Next, restore the backup, specifying the ID of the backup you wish to
restore:
{{< alert type="warning" >}}
The following command overwrites the contents of your GitLab database!
{{< /alert >}}
```shell
# NOTE: "_gitlab_backup.tar" is omitted from the name
sudo gitlab-backup restore BACKUP=11493107454_2018_04_25_10.6.4-ce
```
If there's a GitLab version mismatch between your backup tar file and the
installed version of GitLab, the restore command aborts with an error
message:
```plaintext
GitLab version mismatch:
Your current GitLab version (16.5.0-ee) differs from the GitLab version in the backup!
Please switch to the following version and try again:
version: 16.4.3-ee
```
Install the [correct GitLab version](https://packages.gitlab.com/gitlab/),
and then try again.
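To check for a mismatch before running the restore, you can compare the installed version with the version recorded inside the archive. Backups created with the Rake task include a `backup_information.yml` file; the archive name below is an example.

```shell
# Version of the installed Linux package
head -1 /opt/gitlab/version-manifest.txt

# Version recorded in the backup archive, read without unpacking the whole tar
sudo tar -xOf /var/opt/gitlab/backups/11493107454_2018_04_25_10.6.4-ce_gitlab_backup.tar \
  backup_information.yml | grep gitlab_version
```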
{{< alert type="warning" >}}
The restore command requires [additional parameters](backup_gitlab.md#back-up-and-restore-for-installations-using-pgbouncer) when
your installation is using PgBouncer, for either performance reasons or when using it with a Patroni cluster.
{{< /alert >}}
Run reconfigure on the PostgreSQL node:
```shell
sudo gitlab-ctl reconfigure
```
Next, start and [check](../raketasks/maintenance.md#check-gitlab-configuration) GitLab:
```shell
sudo gitlab-ctl start
sudo gitlab-rake gitlab:check SANITIZE=true
```
Verify that the [database values can be decrypted](../raketasks/check.md#verify-database-values-can-be-decrypted-using-the-current-secrets)
especially if `/etc/gitlab/gitlab-secrets.json` was restored, or if a different server is
the target for the restore.
```shell
sudo gitlab-rake gitlab:doctor:secrets
```
For added assurance, you can perform [an integrity check on the uploaded files](../raketasks/check.md#uploaded-files-integrity):
```shell
sudo gitlab-rake gitlab:artifacts:check
sudo gitlab-rake gitlab:lfs:check
sudo gitlab-rake gitlab:uploads:check
```
After the restore is complete, generate database statistics to improve database performance and avoid inconsistencies in the UI:
1. Enter the [database console](https://docs.gitlab.com/omnibus/settings/database.html#connecting-to-the-postgresql-database).
1. Run the following:
```sql
SET STATEMENT_TIMEOUT=0 ; ANALYZE VERBOSE;
```
There are ongoing discussions about integrating this step into the restore command. For more details, see [issue 276184](https://gitlab.com/gitlab-org/gitlab/-/issues/276184).
## Restore for Docker image and GitLab Helm chart installations
For GitLab installations using the Docker image or the GitLab Helm chart on a
Kubernetes cluster, the restore task expects the restore directories to be
empty. However, with Docker and Kubernetes volume mounts, some system level
directories may be created at the volume roots, such as the `lost+found`
directory found in Linux operating systems. These directories are usually owned
by `root`, which can cause access permission errors because the restore Rake task
runs as the `git` user. Before you restore a GitLab installation, confirm that
the restore target directories are empty.
For both these installation types, the backup tarball has to be available in
the backup location (default location is `/var/opt/gitlab/backups`).
### Restore for Helm chart installations
The GitLab Helm chart uses the process documented in
[restoring a GitLab Helm chart installation](https://docs.gitlab.com/charts/backup-restore/restore.html#restoring-a-gitlab-installation).
### Restore for Docker image installations
If you're using [Docker Swarm](../../install/docker/installation.md#install-gitlab-by-using-docker-swarm-mode),
the container might restart during the restore process because Puma is shut down,
and so the container health check fails. To work around this problem,
temporarily disable the health check mechanism.
1. Edit `docker-compose.yml`:
```yaml
healthcheck:
disable: true
```
1. Deploy the stack:
```shell
docker stack deploy --compose-file docker-compose.yml mystack
```
For more information, see [issue 6846](https://gitlab.com/gitlab-org/omnibus-gitlab/-/issues/6846 "GitLab restore can fail owing to `gitlab-healthcheck`").
The restore task can be run from the host:
```shell
# Stop the processes that are connected to the database
docker exec -it <name of container> gitlab-ctl stop puma
docker exec -it <name of container> gitlab-ctl stop sidekiq
# Verify that the processes are all down before continuing
docker exec -it <name of container> gitlab-ctl status
# Run the restore. NOTE: "_gitlab_backup.tar" is omitted from the name
docker exec -it <name of container> gitlab-backup restore BACKUP=11493107454_2018_04_25_10.6.4-ce
# Restart the GitLab container
docker restart <name of container>
# Check GitLab
docker exec -it <name of container> gitlab-rake gitlab:check SANITIZE=true
```
## Restore for self-compiled installations
First, ensure your backup tar file is in the backup directory described in the
`gitlab.yml` configuration:
```yaml
## Backup settings
backup:
path: "tmp/backups" # Relative paths are relative to Rails.root (default: tmp/backups/)
```
The default is `/home/git/gitlab/tmp/backups`, and it needs to be owned by the `git` user. Now, you can begin the restore procedure:
```shell
# Stop processes that are connected to the database
sudo service gitlab stop
sudo -u git -H bundle exec rake gitlab:backup:restore RAILS_ENV=production
```
Example output:
```plaintext
Unpacking backup... [DONE]
Restoring database tables:
-- create_table("events", {:force=>true})
-> 0.2231s
[...]
- Loading fixture events...[DONE]
- Loading fixture issues...[DONE]
- Loading fixture keys...[SKIPPING]
- Loading fixture merge_requests...[DONE]
- Loading fixture milestones...[DONE]
- Loading fixture namespaces...[DONE]
- Loading fixture notes...[DONE]
- Loading fixture projects...[DONE]
- Loading fixture protected_branches...[SKIPPING]
- Loading fixture schema_migrations...[DONE]
- Loading fixture services...[SKIPPING]
- Loading fixture snippets...[SKIPPING]
- Loading fixture taggings...[SKIPPING]
- Loading fixture tags...[SKIPPING]
- Loading fixture users...[DONE]
- Loading fixture users_projects...[DONE]
- Loading fixture web_hooks...[SKIPPING]
- Loading fixture wikis...[SKIPPING]
Restoring repositories:
- Restoring repository abcd... [DONE]
- Object pool 1 ...
Deleting tmp directories...[DONE]
```
Next, restore `/home/git/gitlab/.secret` if necessary, [as previously mentioned](#restore-prerequisites).
Restart GitLab:
```shell
sudo service gitlab restart
```
## Restoring only one or a few projects or groups from a backup
Although the Rake task used to restore a GitLab instance doesn't support
restoring a single project or group, you can use a workaround by restoring
your backup to a separate, temporary GitLab instance, and then export your
project or group from there:
1. [Install a new GitLab](../../install/_index.md) instance at the same version as
the backed-up instance from which you want to restore.
1. Restore the backup into this new instance, then
export your [project](../../user/project/settings/import_export.md)
or [group](../../user/project/settings/import_export.md#migrate-groups-by-uploading-an-export-file-deprecated). For
more information about what is and isn't exported, see the export feature's documentation.
1. After the export is complete, go to the old instance and then import it.
1. After you import the projects or groups you want, you can delete the new,
   temporary GitLab instance.
A feature request to provide direct restore of individual projects or groups
is being discussed in [issue #17517](https://gitlab.com/gitlab-org/gitlab/-/issues/17517).
## Restoring an incremental repository backup
Each backup archive contains a full self-contained backup, including those created through the [incremental repository backup procedure](backup_gitlab.md#incremental-repository-backups). To restore an incremental repository backup, use the same instructions as restoring any other regular backup archive.
## Restore options
The command-line tool GitLab provides to restore from backup can accept more
options.
### Specify the backup to restore when more than one exists
Backup files use a naming scheme [starting with a backup ID](backup_archive_process.md#backup-id). When more than one backup exists, you must specify which
`<backup-id>_gitlab_backup.tar` file to restore by setting the environment variable `BACKUP=<backup-id>`.
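For example, a sketch that lists the backup IDs available in the default backup directory (the path assumes a Linux package installation):

```shell
# Each archive is named <backup-id>_gitlab_backup.tar; strip the
# directory and suffix to obtain the IDs usable with BACKUP=<backup-id>
ls /var/opt/gitlab/backups/*_gitlab_backup.tar |
  sed 's|.*/||; s|_gitlab_backup\.tar$||'
```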
### Disable prompts during restore
During a restore from backup, the restore script prompts for confirmation:
- If the **Write to authorized_keys** setting is enabled, before the restore script deletes and rebuilds the `authorized_keys` file.
- When restoring the database, before the restore script removes all existing tables.
- After restoring the database, if there were errors in restoring the schema, before continuing because further problems are likely.
To disable these prompts, set the `GITLAB_ASSUME_YES` environment variable to `1`.
- Linux package installations:
```shell
sudo GITLAB_ASSUME_YES=1 gitlab-backup restore
```
- Self-compiled installations:
```shell
sudo -u git -H GITLAB_ASSUME_YES=1 bundle exec rake gitlab:backup:restore RAILS_ENV=production
```
The `force=yes` environment variable also disables these prompts.
### Excluding tasks on restore
You can exclude specific tasks on restore by adding the environment variable `SKIP`, whose values are a comma-separated list of the following options:
- `db` (database)
- `uploads` (attachments)
- `builds` (CI job output logs)
- `artifacts` (CI job artifacts)
- `lfs` (LFS objects)
- `terraform_state` (Terraform states)
- `registry` (Container registry images)
- `pages` (Pages content)
- `repositories` (Git repositories data)
- `packages` (Packages)
To exclude specific tasks:
- Linux package installations:
```shell
sudo gitlab-backup restore BACKUP=<backup-id> SKIP=db,uploads
```
- Self-compiled installations:
```shell
sudo -u git -H bundle exec rake gitlab:backup:restore BACKUP=<backup-id> SKIP=db,uploads RAILS_ENV=production
```
### Restore specific repository storages
{{< history >}}
- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/86896) in GitLab 15.0.
{{< /history >}}
{{< alert type="warning" >}}
GitLab 17.1 and earlier are [affected by a race condition](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/158412)
that can cause data loss. The problem affects repositories that have been forked and use GitLab
[object pools](../repository_storage_paths.md#hashed-object-pools). To avoid data loss,
only restore backups by using GitLab 17.2 or later.
{{< /alert >}}
When using [multiple repository storages](../repository_storage_paths.md),
repositories from specific repository storages can be restored separately
using the `REPOSITORIES_STORAGES` option. The option accepts a comma-separated list of
storage names.
For example:
- Linux package installations:
```shell
sudo gitlab-backup restore BACKUP=<backup-id> REPOSITORIES_STORAGES=storage1,storage2
```
- Self-compiled installations:
```shell
sudo -u git -H bundle exec rake gitlab:backup:restore BACKUP=<backup-id> REPOSITORIES_STORAGES=storage1,storage2
```
### Restore specific repositories
{{< history >}}
- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/88094) in GitLab 15.1.
{{< /history >}}
{{< alert type="warning" >}}
GitLab 17.1 and earlier are [affected by a race condition](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/158412)
that can cause data loss. The problem affects repositories that have been forked and use GitLab
[object pools](../repository_storage_paths.md#hashed-object-pools). To avoid data loss, only restore backups by using GitLab
17.2 or later.
{{< /alert >}}
You can restore specific repositories using the `REPOSITORIES_PATHS` and the `SKIP_REPOSITORIES_PATHS` options.
Both options accept a comma-separated list of project and group paths. If you
specify a group path, all repositories in all projects in the group and
descendant groups are included or skipped, depending on which option you used.
Both the groups and projects must exist in the specified backup or on the target instance.
{{< alert type="note" >}}
The `REPOSITORIES_PATHS` and `SKIP_REPOSITORIES_PATHS` options apply only to Git repositories.
They do not apply to project or group database entries. If you created a repositories backup
with `SKIP=db`, that backup cannot, by itself, be used to restore specific repositories to a new instance.
{{< /alert >}}
For example, to restore all repositories for all projects in Group A (`group-a`), the repository for
Project C in Group B (`group-b/project-c`), and skip the Project D in Group A (`group-a/project-d`):
- Linux package installations:
```shell
sudo gitlab-backup restore BACKUP=<backup-id> REPOSITORIES_PATHS=group-a,group-b/project-c SKIP_REPOSITORIES_PATHS=group-a/project-d
```
- Self-compiled installations:
```shell
sudo -u git -H bundle exec rake gitlab:backup:restore BACKUP=<backup-id> REPOSITORIES_PATHS=group-a,group-b/project-c SKIP_REPOSITORIES_PATHS=group-a/project-d
```
### Restore untarred backups
If an [untarred backup](backup_gitlab.md#skipping-tar-creation) (made with `SKIP=tar`) is found,
and no backup is chosen with `BACKUP=<backup-id>`, the untarred backup is used.
For example:
- Linux package installations:
```shell
sudo gitlab-backup restore
```
- Self-compiled installations:
```shell
sudo -u git -H bundle exec rake gitlab:backup:restore
```
### Restoring using server-side repository backups
{{< history >}}
- [Introduced](https://gitlab.com/gitlab-org/gitaly/-/issues/4941) in `gitlab-backup` in GitLab 16.3.
- Server-side support in `gitlab-backup` for restoring a specified backup instead of the latest backup [introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/132188) in GitLab 16.6.
- Server-side support in `gitlab-backup` for creating incremental backups [introduced](https://gitlab.com/gitlab-org/gitaly/-/merge_requests/6475) in GitLab 16.6.
- Server-side support in `backup-utility` [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/438393) in GitLab 17.0.
{{< /history >}}
When a server-side backup was created, the restore process defaults to using the server-side restore mechanism shown in
[Create server-side repository backups](backup_gitlab.md#create-server-side-repository-backups). With server-side restores, the Gitaly
node that hosts each repository is responsible for pulling the necessary backup data directly from object storage.
1. [Configure a server-side backup destination in Gitaly](../gitaly/configure_gitaly.md#configure-server-side-backups).
1. Start a server-side restore, specifying the [ID of the backup](backup_archive_process.md#backup-id) you wish to restore:
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
```shell
sudo gitlab-backup restore BACKUP=11493107454_2018_04_25_10.6.4-ce
```
{{< /tab >}}
{{< tab title="Self-compiled" >}}
```shell
sudo -u git -H bundle exec rake gitlab:backup:restore BACKUP=11493107454_2018_04_25_10.6.4-ce
```
{{< /tab >}}
{{< tab title="Helm chart (Kubernetes)" >}}
```shell
kubectl exec <Toolbox pod name> -it -- backup-utility --restore -t <backup_ID> --repositories-server-side
```
When using [cron-based backups](https://docs.gitlab.com/charts/backup-restore/backup.html#cron-based-backup),
add the `--repositories-server-side` flag to the extra arguments.
{{< /tab >}}
{{< /tabs >}}
## Troubleshooting
The following are possible problems you might encounter, along with potential
solutions.
### Restoring a database backup outputs warnings on a Linux package installation
If you're using backup restore procedures, you may encounter the following
warning messages:
```plaintext
ERROR: must be owner of extension pg_trgm
ERROR: must be owner of extension btree_gist
ERROR: must be owner of extension plpgsql
WARNING: no privileges could be revoked for "public" (two occurrences)
WARNING: no privileges were granted for "public" (two occurrences)
```
The backup is successfully restored despite these warning messages.
The backup is created by the `gitlab` user, which doesn't have superuser
access to the database. When the restore is initiated, it also runs as the `gitlab`
user, and it tries to alter objects that it doesn't own.
Those objects have no influence on the database backup or restore, but they
trigger a warning message.
For more information, see:
- PostgreSQL issue tracker:
- [Not being a superuser](https://www.postgresql.org/message-id/201110220712.30886.adrian.klaver@gmail.com).
- [Having different owners](https://www.postgresql.org/message-id/2039.1177339749@sss.pgh.pa.us).
- Stack Overflow: [Resulting errors](https://stackoverflow.com/questions/4368789/error-must-be-owner-of-language-plpgsql).
### Restoring fails due to Git server hook
While restoring from backup, you can encounter an error when the following are true:
- A Git server hook (`custom_hook`) is configured using the method for [GitLab 15.10 and earlier](../server_hooks.md)
- Your GitLab version is 15.11 or later
- You created symlinks to a directory outside of the GitLab-managed locations
The error looks like:
```plaintext
{"level":"fatal","msg":"restore: pipeline: 1 failures encountered:\n - @hashed/path/to/hashed_repository.git (path/to_project): manager: restore custom hooks, \"@hashed/path/to/hashed_repository/<BackupID>_<GitLabVersion>-ee/001.custom_hooks.tar\": rpc error: code = Internal desc = setting custom hooks: generating prepared vote: walking directory: copying file to hash: read /mnt/gitlab-app/git-data/repositories/+gitaly/tmp/default-repositories.old.<timestamp>.<temporaryfolder>/custom_hooks/compliance-triggers.d: is a directory\n","pid":3256017,"time":"2023-08-10T20:09:44.395Z"}
```
To resolve this, update the Git [server hooks](../server_hooks.md) using the method for GitLab 15.11 and later, and create a new backup.
### Successful restore with repositories showing as empty when using `fapolicyd`
When using `fapolicyd` for increased security, GitLab can report that a restore was successful but repositories show as empty. For more troubleshooting help, see
[Gitaly Troubleshooting documentation](../gitaly/troubleshooting.md#repositories-are-shown-as-empty-after-a-gitlab-restore).
---
stage: Data Access
group: Durability
info: To determine the technical writer assigned to the Stage/Group associated with
  this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Back up and Restore GitLab with `gitlab-backup-cli`
---
{{< details >}}
- Tier: Free, Premium, Ultimate
- Offering: GitLab Self-Managed
- Status: Experiment
{{< /details >}}
{{< history >}}
- [Introduced](https://gitlab.com/groups/gitlab-org/-/epics/11908) in GitLab 17.0. This feature is an [experiment](../../policy/development_stages_support.md) and subject to the [GitLab Testing Agreement](https://handbook.gitlab.com/handbook/legal/testing-agreement/).
{{< /history >}}
This tool is under development and is ultimately meant to replace [the Rake tasks used for backing up and restoring GitLab](backup_gitlab.md). You can follow the development of this tool in the epic: [Next Gen Scalable Backup and Restore](https://gitlab.com/groups/gitlab-org/-/epics/11577).
Feedback on the tool is welcome in [the feedback issue](https://gitlab.com/gitlab-org/gitlab/-/issues/457155).
## Taking a backup
To take a backup of the current GitLab installation:
```shell
sudo gitlab-backup-cli backup all
```
### Backing up object storage
Only Google Cloud is supported. See [epic 11577](https://gitlab.com/groups/gitlab-org/-/epics/11577) for the plan to add more vendors.
#### GCP
`gitlab-backup-cli` creates and runs jobs with the Google Cloud [Storage Transfer Service](https://cloud.google.com/storage-transfer-service/) to copy GitLab data to a separate backup bucket.
Prerequisites:
- Review the [service accounts overview](https://cloud.google.com/iam/docs/service-account-overview) to authenticate with a service account.
- This document assumes you are setting up and using a dedicated Google Cloud service account for managing backups.
- If no other credentials are provided, and you are running inside Google Cloud, then the tool attempts to use the access of the infrastructure it is running on. For [security reasons](#security-considerations), you should run the tool with separate credentials, and restrict access to the created backups from the application.
To create a backup:
1. [Create a role](https://cloud.google.com/iam/docs/creating-custom-roles):
1. Create a file `role.yaml` with the following definition:
```yaml
---
description: Role for backing up GitLab object storage
includedPermissions:
- storagetransfer.jobs.create
- storagetransfer.jobs.get
- storagetransfer.jobs.run
- storagetransfer.jobs.update
- storagetransfer.operations.get
- storagetransfer.projects.getServiceAccount
stage: GA
title: GitLab Backup Role
```
1. Apply the role:
```shell
gcloud iam roles create --project=<YOUR_PROJECT_ID> <ROLE_NAME> --file=role.yaml
```
1. Create a service account for backups, and add it to the role:
```shell
gcloud iam service-accounts create "gitlab-backup-cli" --display-name="GitLab Backup Service Account"
# Get the service account email from the output of the following
gcloud iam service-accounts list
# Add the account to the role created previously
gcloud projects add-iam-policy-binding <YOUR_PROJECT_ID> --member="serviceAccount:<SERVICE_ACCOUNT_EMAIL>" --role="roles/<ROLE_NAME>"
```
1. To authenticate with a service account, see [service account credentials](https://cloud.google.com/iam/docs/service-account-overview#credentials). The credentials can be saved to a file, or stored in a predefined environment variable.
1. Create a destination bucket to back up to in [Google Cloud Storage](https://cloud.google.com/storage/). The bucket options depend on your requirements.
1. Run the backup:
```shell
sudo gitlab-backup-cli backup all --backup-bucket=<BUCKET_NAME>
```
   If you want to back up the container registry bucket, add the option `--registry-bucket=<REGISTRY_BUCKET_NAME>`.
1. The tool creates a backup under `backups/<BACKUP_ID>/<BUCKET>` in the destination bucket for each of the object storage types.
## Backup directory structure
Example backup directory structure:
```plaintext
backups
└── 1714053314_2024_04_25_17.0.0-pre
├── artifacts.tar.gz
├── backup_information.json
├── builds.tar.gz
├── ci_secure_files.tar.gz
├── db
│ ├── ci_database.sql.gz
│ └── database.sql.gz
├── lfs.tar.gz
├── packages.tar.gz
├── pages.tar.gz
├── registry.tar.gz
├── repositories
│ ├── default
│ │ ├── @hashed
│ │ └── @snippets
│ └── manifests
│ └── default
├── terraform_state.tar.gz
└── uploads.tar.gz
```
The `db` directory contains the backup of the GitLab PostgreSQL database, created with `pg_dump` as [an SQL dump](https://www.postgresql.org/docs/16/backup-dump.html). The output of `pg_dump` is piped through `gzip` to create a compressed SQL file.
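Conceptually, each file in `db` is produced by a pipeline of this shape. The database name is illustrative, and this is not necessarily the tool's exact invocation:

```shell
# pg_dump writes a plain SQL dump to stdout; gzip compresses it on the fly
pg_dump gitlabhq_production | gzip -c > db/database.sql.gz

# Inspect the start of the dump without fully extracting it
gunzip -c db/database.sql.gz | head
```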
The `repositories` directory is used to back up Git repositories, as found in the GitLab database.
## Backup ID
Backup IDs identify individual backups. You need the backup ID of a backup archive if you need to restore GitLab and multiple backups are available.
Backups are saved in a directory set in `backup_path`, which is specified in the `config/gitlab.yml` file.
- By default, backups are stored in `/var/opt/gitlab/backups`.
- By default, backup directories are named after a backup ID, where `<backup-id>` identifies the time when the backup was created and the GitLab version.
For example, if the backup directory name is `1714053314_2024_04_25_17.0.0-pre`, the time of creation is represented by `1714053314_2024_04_25` and the GitLab version is `17.0.0-pre`.
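As a sketch, the two components can be split with plain shell parameter expansion:

```shell
backup_id="1714053314_2024_04_25_17.0.0-pre"

epoch="${backup_id%%_*}"      # Unix timestamp of creation: 1714053314
version="${backup_id##*_}"    # GitLab version: 17.0.0-pre

# The timestamp converts back to the date embedded in the name
date -u -d "@${epoch}" +%Y_%m_%d   # 2024_04_25
```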
## Backup metadata file (`backup_information.json`)
{{< history >}}
- Metadata version 2 was introduced in [GitLab 16.11](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/149441).
{{< /history >}}
`backup_information.json` is found in the backup directory, and it stores metadata about the backup. For example:
```json
{
"metadata_version": 2,
"backup_id": "1714053314_2024_04_25_17.0.0-pre",
"created_at": "2024-04-25T13:55:14Z",
"gitlab_version": "17.0.0-pre"
}
```
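For example, a portable way to pull a single field out of the metadata file without extra tooling (field names as shown above):

```shell
# Extract the backup_id value with grep and sed (no jq required)
grep -o '"backup_id": *"[^"]*"' backup_information.json |
  sed 's/.*: *"\(.*\)"/\1/'
```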
## Restore a backup
{{< history >}}
- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/469247) in GitLab 17.6.
{{< /history >}}
Prerequisites:
- You have the backup ID of a backup created using `gitlab-backup-cli`.
To restore a backup of the current GitLab installation:
- Run the following command:
```shell
sudo gitlab-backup-cli restore all <backup_id>
```
### Restore object storage data
You can restore data from Google Cloud Storage. [Epic 11577](https://gitlab.com/groups/gitlab-org/-/epics/11577) proposes to add support for other vendors.
Prerequisites:
- You have the backup ID of a backup created using `gitlab-backup-cli`.
- You configured the required permissions for the restore location.
- You set up the object storage configuration `gitlab.rb` or `gitlab.yml` file, and matches the backup environment.
- You tested the restore process in a staging environment.
To restore object storage data:
- Run the following command:
```shell
sudo gitlab-backup restore <backup_id>
```
The restore process:
- Does not clear the destination bucket first.
- Overwrites existing files with the same filenames in the destination bucket.
- Might take a significant amount of time, depending on how much data is restored.
Always monitor your system resources during a restore. Keep your original files
until you verify the restoration was successful.
## Known issues
When working with `gitlab-backup-cli`, you might encounter the following issues.
### Architecture compatibility
If you use the `gitlab-backup-cli` tool on architectures other than the [1K architecture](../reference_architectures/1k_users.md), you might experience issues. This tool is supported only on 1K architecture and is recommended only for relevant environments.
### Backup strategy
Changes to existing files during backup might cause issues on the GitLab instance. This issue occurs because the tool's initial version does not use the [copy strategy](backup_gitlab.md#backup-strategy-option).
A workaround of this issue, is either to:
- Transition the GitLab instance into [Maintenance Mode](../maintenance_mode/_index.md).
- Restrict traffic to the servers during backup to preserve instance resources.
We're investigating an alternative to the copy strategy, see [issue 428520](https://gitlab.com/gitlab-org/gitlab/-/issues/428520).
## What data is backed up?
1. Git Repository Data
1. Databases
1. Blobs
## What data is NOT backed up?
1. Secrets and Configurations
- Follow the documentation on how to [backup secrets and configuration](backup_gitlab.md#storing-configuration-files).
1. Transient and Cache Data
- Redis: Cache
- Redis: Sidekiq Data
- Logs
- Elasticsearch
- Observability Data / Prometheus Metrics
## Security considerations
Instead of using the same credentials, you should create a separate user account specifically with only the necessary permissions to perform backups. Running backups with the same credentials as the application is a poor security practice for several reasons:
- Principle of least privilege - The backup process requires more extensive permissions (like read access to all data) than you need for normal application operations. A user or process should have the minimum access necessary to perform its function.
- Risk of compromise - If the application credentials are compromised, an attacker can gain access to the application and all its backup data, exposing historical data as well.
- Separation of duties - Using separate credentials for backups and applications helps maintain a separation of duties. This separation makes it harder for a single compromised account to cause widespread damage.
- Audit trail - Separate credentials for backups make it easier to track and audit backup activities independently from regular application operations.
- Granular access control - Different credentials allow for more granular access control. Backup credentials can be given read-only access to the data, while application credentials might need read-write access to specific tables or schemas.
- Compliance requirements - Many regulatory standards and compliance frameworks (like GDPR, HIPAA, or PCI-DSS) require or strongly recommend separation of duties and access controls, which are easier to achieve with separate credentials.
- Easier management of lifecycle - Application and backup processes may have different lifecycles. Using separate credentials makes it easier to manage these lifecycles independently. For example, you can rotate or revoke credentials without affecting the other process.
- Protection against application vulnerabilities - If the application has a vulnerability that allows SQL injection or other forms of unauthorized data access, using separate backup credentials adds an extra layer of protection for the backup process.
|
---
stage: Data Access
group: Durability
info: To determine the technical writer assigned to the Stage/Group associated with
this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
ignore_in_report: true
title: Back up and Restore GitLab with `gitlab-backup-cli`
breadcrumbs:
- doc
- administration
- backup_restore
---
{{< details >}}
- Tier: Free, Premium, Ultimate
- Offering: GitLab Self-Managed
- Status: Experiment
{{< /details >}}
{{< history >}}
- [Introduced](https://gitlab.com/groups/gitlab-org/-/epics/11908) in GitLab 17.0. This feature is an [experiment](../../policy/development_stages_support.md) and subject to the [GitLab Testing Agreement](https://handbook.gitlab.com/handbook/legal/testing-agreement/).
{{< /history >}}
This tool is under development and is ultimately meant to replace [the Rake tasks used for backing up and restoring GitLab](backup_gitlab.md). You can follow the development of this tool in the epic: [Next Gen Scalable Backup and Restore](https://gitlab.com/groups/gitlab-org/-/epics/11577).
Feedback on the tool is welcome in [the feedback issue](https://gitlab.com/gitlab-org/gitlab/-/issues/457155).
## Taking a backup
To take a backup of the current GitLab installation:
```shell
sudo gitlab-backup-cli backup all
```
### Backing up object storage
Only Google Cloud is supported. See [epic 11577](https://gitlab.com/groups/gitlab-org/-/epics/11577) for the plan to add more vendors.
#### GCP
`gitlab-backup-cli` creates and runs jobs with the Google Cloud [Storage Transfer Service](https://cloud.google.com/storage-transfer-service/) to copy GitLab data to a separate backup bucket.
Prerequisites:
- Review the [service accounts overview](https://cloud.google.com/iam/docs/service-account-overview) to authenticate with a service account.
- This document assumes you are setting up and using a dedicated Google Cloud service account for managing backups.
- If no other credentials are provided, and you are running inside Google Cloud, then the tool attempts to use the access of the infrastructure it is running on. For [security reasons](#security-considerations), you should run the tool with separate credentials, and restrict access to the created backups from the application.
To create a backup:
1. [Create a role](https://cloud.google.com/iam/docs/creating-custom-roles):
1. Create a file `role.yaml` with the following definition:
```yaml
---
description: Role for backing up GitLab object storage
includedPermissions:
- storagetransfer.jobs.create
- storagetransfer.jobs.get
- storagetransfer.jobs.run
- storagetransfer.jobs.update
- storagetransfer.operations.get
- storagetransfer.projects.getServiceAccount
stage: GA
title: GitLab Backup Role
```
1. Apply the role:
```shell
gcloud iam roles create --project=<YOUR_PROJECT_ID> <ROLE_NAME> --file=role.yaml
```
1. Create a service account for backups, and add it to the role:
```shell
gcloud iam service-accounts create "gitlab-backup-cli" --display-name="GitLab Backup Service Account"
# Get the service account email from the output of the following
gcloud iam service-accounts list
# Add the account to the role created previously
gcloud projects add-iam-policy-binding <YOUR_PROJECT_ID> --member="serviceAccount:<SERVICE_ACCOUNT_EMAIL>" --role="roles/<ROLE_NAME>"
```
1. To authenticate with a service account, see [service account credentials](https://cloud.google.com/iam/docs/service-account-overview#credentials). The credentials can be saved to a file, or stored in a predefined environment variable.
1. Create a destination bucket to back up to in [Google Cloud Storage](https://cloud.google.com/storage/). The options here depend heavily on your requirements.
1. Run the backup:
```shell
sudo gitlab-backup-cli backup all --backup-bucket=<BUCKET_NAME>
```
If you want to back up the container registry bucket, add the option `--registry-bucket=<REGISTRY_BUCKET_NAME>`.
1. The tool creates a backup under `backups/<BACKUP_ID>/<BUCKET>` in the destination bucket for each of the object storage types.
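For the credentials step above: when the key is saved to a file, Google Cloud's client libraries conventionally read its path from the predefined `GOOGLE_APPLICATION_CREDENTIALS` environment variable. A minimal sketch — the file path is illustrative, and whether the variable survives a `sudo` invocation depends on the environment being preserved (for example with `sudo --preserve-env=GOOGLE_APPLICATION_CREDENTIALS`):

```shell
# Application Default Credentials: Google Cloud client libraries read the
# service account key path from this predefined variable (path is illustrative).
export GOOGLE_APPLICATION_CREDENTIALS="/etc/gitlab/backup-service-account.json"
echo "$GOOGLE_APPLICATION_CREDENTIALS"
```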
## Backup directory structure
Example backup directory structure:
```plaintext
backups
└── 1714053314_2024_04_25_17.0.0-pre
├── artifacts.tar.gz
├── backup_information.json
├── builds.tar.gz
├── ci_secure_files.tar.gz
├── db
│ ├── ci_database.sql.gz
│ └── database.sql.gz
├── lfs.tar.gz
├── packages.tar.gz
├── pages.tar.gz
├── registry.tar.gz
├── repositories
│ ├── default
│ │ ├── @hashed
│ │ └── @snippets
│ └── manifests
│ └── default
├── terraform_state.tar.gz
└── uploads.tar.gz
```
The `db` directory is used to back up the GitLab PostgreSQL database using `pg_dump` to create [an SQL dump](https://www.postgresql.org/docs/16/backup-dump.html). The output of `pg_dump` is piped through `gzip` to create a compressed SQL file.
The `repositories` directory is used to back up Git repositories, as found in the GitLab database.
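Because the dumps are ordinary gzip-compressed SQL, they can be inspected without running a restore. A sketch with a stand-in file (in a real backup the file would be `db/database.sql.gz`):

```shell
# Create a stand-in dump, then peek at it the way you would a real one.
printf '%s\n%s\n' '-- PostgreSQL database dump' 'CREATE TABLE demo (id int);' | gzip > database.sql.gz
gunzip -c database.sql.gz | head -n 1   # -> -- PostgreSQL database dump
```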
## Backup ID
Backup IDs identify individual backups. You need the backup ID of a backup archive if you need to restore GitLab and multiple backups are available.
Backups are saved in a directory set in `backup_path`, which is specified in the `config/gitlab.yml` file.
- By default, backups are stored in `/var/opt/gitlab/backups`.
- By default, backup directories are named after backup IDs, where `<backup_id>` identifies the time the backup was created and the GitLab version.
For example, if the backup directory name is `1714053314_2024_04_25_17.0.0-pre`, the time of creation is represented by `1714053314_2024_04_25` and the GitLab version is 17.0.0-pre.
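The two components can be separated with ordinary shell parameter expansion — a sketch using the example ID above (the `date -d @<epoch>` form shown here is GNU `date`):

```shell
# Pull apart the example backup ID into its timestamp and version parts.
backup_id="1714053314_2024_04_25_17.0.0-pre"
epoch="${backup_id%%_*}"     # -> 1714053314
version="${backup_id##*_}"   # -> 17.0.0-pre
date -u -d "@${epoch}" +%Y-%m-%dT%H:%M:%SZ   # -> 2024-04-25T13:55:14Z
```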
## Backup metadata file (`backup_information.json`)
{{< history >}}
- Metadata version 2 was introduced in [GitLab 16.11](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/149441).
{{< /history >}}
`backup_information.json` is found in the backup directory, and it stores metadata about the backup. For example:
```json
{
"metadata_version": 2,
"backup_id": "1714053314_2024_04_25_17.0.0-pre",
"created_at": "2024-04-25T13:55:14Z",
"gitlab_version": "17.0.0-pre"
}
```
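Because the file is plain JSON, the backup ID can be read back with standard tools. A sketch that recreates the example file and extracts the field with `sed` (the pattern assumes the one-key-per-line layout shown above; `jq -r .backup_id` would be more robust where `jq` is available):

```shell
# Recreate the example metadata file, then extract the backup_id field.
cat > backup_information.json <<'EOF'
{
  "metadata_version": 2,
  "backup_id": "1714053314_2024_04_25_17.0.0-pre",
  "created_at": "2024-04-25T13:55:14Z",
  "gitlab_version": "17.0.0-pre"
}
EOF
sed -n 's/.*"backup_id": "\([^"]*\)".*/\1/p' backup_information.json
# -> 1714053314_2024_04_25_17.0.0-pre
```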
## Restore a backup
{{< history >}}
- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/469247) in GitLab 17.6.
{{< /history >}}
Prerequisites:
- You have the backup ID of a backup created using `gitlab-backup-cli`.
To restore a backup of the current GitLab installation:
- Run the following command:
```shell
sudo gitlab-backup-cli restore all <backup_id>
```
### Restore object storage data
You can restore data from Google Cloud Storage. [Epic 11577](https://gitlab.com/groups/gitlab-org/-/epics/11577) proposes to add support for other vendors.
Prerequisites:
- You have the backup ID of a backup created using `gitlab-backup-cli`.
- You configured the required permissions for the restore location.
- The object storage configuration in your `gitlab.rb` or `gitlab.yml` file is set up and matches the backup environment.
- You tested the restore process in a staging environment.
To restore object storage data:
- Run the following command:
```shell
sudo gitlab-backup-cli restore <backup_id>
```
The restore process:
- Does not clear the destination bucket first.
- Overwrites existing files with the same filenames in the destination bucket.
- Might take a significant amount of time, depending on how much data is restored.
Always monitor your system resources during a restore. Keep your original files
until you verify the restoration was successful.
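A quick way to act on that advice is to check free space on the data volume before starting. A sketch — the mount point here is an assumption; use wherever your GitLab data lives, for example `/var/opt/gitlab`:

```shell
# Quick capacity check before a long restore (replace / with your data
# volume's mount point).
df -P / | tail -n 1 | awk '{print "used:", $5, "available blocks:", $4}'
```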
## Known issues
When working with `gitlab-backup-cli`, you might encounter the following issues.
### Architecture compatibility
If you use the `gitlab-backup-cli` tool on architectures other than the [1K architecture](../reference_architectures/1k_users.md), you might experience issues. The tool is supported only on the 1K architecture and is recommended only for environments of that size.
### Backup strategy
Changes to existing files during backup might cause issues on the GitLab instance. This issue occurs because the tool's initial version does not use the [copy strategy](backup_gitlab.md#backup-strategy-option).
To work around this issue, either:
- Transition the GitLab instance into [Maintenance Mode](../maintenance_mode/_index.md).
- Restrict traffic to the servers during backup to preserve instance resources.
We're investigating an alternative to the copy strategy; see [issue 428520](https://gitlab.com/gitlab-org/gitlab/-/issues/428520).
## What data is backed up?
1. Git Repository Data
1. Databases
1. Blobs
## What data is NOT backed up?
1. Secrets and Configurations
- Follow the documentation on how to [back up secrets and configuration](backup_gitlab.md#storing-configuration-files).
1. Transient and Cache Data
- Redis: Cache
- Redis: Sidekiq Data
- Logs
- Elasticsearch
- Observability Data / Prometheus Metrics
## Security considerations
Instead of running backups with the application's credentials, create a separate user account with only the permissions necessary to perform backups. Reusing the application's credentials for backups is poor security practice for several reasons:
- Principle of least privilege - The backup process requires more extensive permissions (like read access to all data) than you need for normal application operations. A user or process should have the minimum access necessary to perform its function.
- Risk of compromise - If the application credentials are compromised, an attacker can gain access to the application and all its backup data, exposing historical data as well.
- Separation of duties - Using separate credentials for backups and applications helps maintain a separation of duties. This separation makes it harder for a single compromised account to cause widespread damage.
- Audit trail - Separate credentials for backups make it easier to track and audit backup activities independently from regular application operations.
- Granular access control - Different credentials allow for more granular access control. Backup credentials can be given read-only access to the data, while application credentials might need read-write access to specific tables or schemas.
- Compliance requirements - Many regulatory standards and compliance frameworks (like GDPR, HIPAA, or PCI-DSS) require or strongly recommend separation of duties and access controls, which are easier to achieve with separate credentials.
- Easier management of lifecycle - Application and backup processes may have different lifecycles. Using separate credentials makes it easier to manage these lifecycles independently. For example, you can rotate or revoke credentials without affecting the other process.
- Protection against application vulnerabilities - If the application has a vulnerability that allows SQL injection or other forms of unauthorized data access, using separate backup credentials adds an extra layer of protection for the backup process.
---
stage: Data Access
group: Durability
info: To determine the technical writer assigned to the Stage/Group associated with
  this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Migrate to a new server
breadcrumbs:
- doc
- administration
- backup_restore
---
<!-- some details borrowed from GitLab.com move from Azure to GCP detailed at https://gitlab.com/gitlab-com/migration/-/blob/master/.gitlab/issue_templates/failover.md -->
You can use GitLab backup and restore to migrate your instance to a new server. This section outlines a typical procedure for a GitLab deployment running on a single server.
If you're running GitLab Geo, an alternative option is [Geo disaster recovery for planned failover](../geo/disaster_recovery/planned_failover.md). You must make sure all sites meet the [Geo requirements](../geo/_index.md#requirements-for-running-geo) before selecting Geo for the migration.
{{< alert type="warning" >}}
Avoid uncoordinated data processing by both the new and old servers, where multiple
servers could connect concurrently and process the same data. For example, when using
[incoming email](../incoming_email.md), if both GitLab instances are
processing email at the same time, then both instances miss some data.
This type of problem can occur with other services as well, such as a
[non-packaged database](https://docs.gitlab.com/omnibus/settings/database.html#using-a-non-packaged-postgresql-database-management-server),
a non-packaged Redis instance, or non-packaged Sidekiq.
{{< /alert >}}
Prerequisites:
- Some time before your migration, consider notifying your users of upcoming
scheduled maintenance with a [broadcast message banner](../broadcast_messages.md).
- Ensure your backups are complete and current. Create a complete system-level backup, or
take a snapshot of all servers involved in the migration, in case destructive commands
(like `rm`) are run incorrectly.
## Prepare the new server
To prepare the new server:
1. Copy the
[SSH host keys](https://superuser.com/questions/532040/copy-ssh-keys-from-one-server-to-another-server/532079#532079)
from the old server to avoid man-in-the-middle attack warnings.
See [Manually replicate the primary site's SSH host keys](../geo/replication/configuration.md#step-2-manually-replicate-the-primary-sites-ssh-host-keys) for example steps.
1. [Install and configure GitLab](https://about.gitlab.com/install/) except
[incoming email](../incoming_email.md):
1. Install GitLab.
1. Configure by copying `/etc/gitlab` files from the old server to the new server, and update as necessary.
Read the
[Linux package installation backup and restore instructions](https://docs.gitlab.com/omnibus/settings/backups.html) for more detail.
1. If applicable, disable [incoming email](../incoming_email.md).
1. Block new CI/CD jobs from starting upon initial startup after the backup and restore.
Edit `/etc/gitlab/gitlab.rb` and set the following:
```ruby
nginx['custom_gitlab_server_config'] = "location = /api/v4/jobs/request {\n deny all;\n return 503;\n }\n"
```
1. Reconfigure GitLab:
```shell
sudo gitlab-ctl reconfigure
```
1. Stop GitLab to avoid any potential unnecessary and unintentional data processing:
```shell
sudo gitlab-ctl stop
```
1. Configure the new server to allow receiving the Redis database and GitLab backup files:
```shell
sudo rm -f /var/opt/gitlab/redis/dump.rdb
sudo chown <your-linux-username> /var/opt/gitlab/redis /var/opt/gitlab/backups
```
## Prepare and transfer content from the old server
1. Ensure you have an up-to-date system-level backup or snapshot of the old server.
1. Enable [maintenance mode](../maintenance_mode/_index.md),
if supported by your GitLab edition.
1. Block new CI/CD jobs from starting:
1. Edit `/etc/gitlab/gitlab.rb`, and set the following:
```ruby
nginx['custom_gitlab_server_config'] = "location = /api/v4/jobs/request {\n deny all;\n return 503;\n }\n"
```
1. Reconfigure GitLab:
```shell
sudo gitlab-ctl reconfigure
```
1. Disable periodic background jobs:
1. On the left sidebar, at the bottom, select **Admin**.
1. On the left sidebar, select **Monitoring** > **Background jobs**.
1. Under the Sidekiq dashboard, select the **Cron** tab, and then select
**Disable All**.
1. Wait for the running CI/CD jobs to finish, or accept that jobs that have not completed may be lost.
To view running jobs, on the left sidebar, select **Overview** > **Jobs**,
and then select **Running**.
1. Wait for Sidekiq jobs to finish:
1. On the left sidebar, select **Monitoring** > **Background jobs**.
1. Under the Sidekiq dashboard, select **Queues** and then **Live Poll**.
Wait for **Busy** and **Enqueued** to drop to 0.
These queues contain work that has been submitted by your users;
shutting down before these jobs complete may cause the work to be lost.
Make note of the numbers shown in the Sidekiq dashboard for post-migration verification.
1. Flush the Redis database to disk, and stop GitLab other than the services needed for migration:
```shell
sudo /opt/gitlab/embedded/bin/redis-cli -s /var/opt/gitlab/redis/redis.socket save && sudo gitlab-ctl stop && sudo gitlab-ctl start postgresql && sudo gitlab-ctl start gitaly
```
1. Create a GitLab backup:
```shell
sudo gitlab-backup create
```
1. After the backup is complete, disable the following GitLab services and prevent unintentional restarts by adding the following to the bottom of `/etc/gitlab/gitlab.rb`:
```ruby
alertmanager['enable'] = false
gitaly['enable'] = false
gitlab_exporter['enable'] = false
gitlab_pages['enable'] = false
gitlab_workhorse['enable'] = false
grafana['enable'] = false
logrotate['enable'] = false
gitlab_rails['incoming_email_enabled'] = false
nginx['enable'] = false
node_exporter['enable'] = false
postgres_exporter['enable'] = false
postgresql['enable'] = false
prometheus['enable'] = false
puma['enable'] = false
redis['enable'] = false
redis_exporter['enable'] = false
registry['enable'] = false
sidekiq['enable'] = false
```
1. Reconfigure GitLab:
```shell
sudo gitlab-ctl reconfigure
```
1. Verify everything is stopped, and confirm no services are running:
```shell
sudo gitlab-ctl status
```
1. Stop Redis on the new server before transferring the Redis database backup:
```shell
sudo gitlab-ctl stop redis
```
1. Transfer the Redis database and GitLab backups to the new server:
```shell
sudo scp /var/opt/gitlab/redis/dump.rdb <your-linux-username>@new-server:/var/opt/gitlab/redis
sudo scp /var/opt/gitlab/backups/your-backup.tar <your-linux-username>@new-server:/var/opt/gitlab/backups
```
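After the transfer, it's prudent to confirm the files arrived intact by comparing checksums on both servers; a sketch with a stand-in file:

```shell
# Stand-in for /var/opt/gitlab/backups/your-backup.tar; run the same
# command on both servers and compare the printed hashes.
echo "backup payload" > your-backup.tar
sha256sum your-backup.tar
```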
### For instances with a large volume of Git and object data
If your GitLab instance has a large amount of data on local volumes, for example greater than 1 TB,
backups may take a long time. In that case, you may find it easier to transfer the data to the appropriate volumes on the new instance.
The main volumes that you might need to migrate manually are:
- The `/var/opt/gitlab/git-data` directory which contains all the Git data.
Be sure to read [the moving repositories documentation section](../operations/moving_repositories.md#migrating-to-another-gitlab-instance) to eliminate the chance of Git data corruption.
- The `/var/opt/gitlab/gitlab-rails/shared` directory which contains object data, like artifacts.
- If you are using the bundled PostgreSQL included with the Linux package,
you also need to migrate the [PostgreSQL data directory](https://docs.gitlab.com/omnibus/settings/database.html#store-postgresql-data-in-a-different-directory)
under `/var/opt/gitlab/postgresql/data`.
After all GitLab services have been stopped, you can use tools like `rsync` or mounting volume snapshots to move the data
to the new environment.
## Restore data on the new server
1. Restore appropriate file system permissions:
```shell
sudo chown gitlab-redis /var/opt/gitlab/redis
sudo chown gitlab-redis:gitlab-redis /var/opt/gitlab/redis/dump.rdb
sudo chown git:root /var/opt/gitlab/backups
sudo chown git:git /var/opt/gitlab/backups/your-backup.tar
```
1. Start Redis:
```shell
sudo gitlab-ctl start redis
```
Redis picks up and restores `dump.rdb` automatically.
1. [Restore the GitLab backup](restore_gitlab.md).
1. Verify that the Redis database restored correctly:
1. On the left sidebar, at the bottom, select **Admin**.
1. On the left sidebar, select **Monitoring** > **Background jobs**.
1. Under the Sidekiq dashboard, verify that the numbers
match with what was shown on the old server.
1. While still under the Sidekiq dashboard, select **Cron** and then **Enable All**
to re-enable periodic background jobs.
1. Test that read-only operations on the GitLab instance work as expected. For example, browse through project repository files, merge requests, and issues.
1. Disable [Maintenance Mode](../maintenance_mode/_index.md), if previously enabled.
1. Test that the GitLab instance is working as expected.
1. If applicable, re-enable [incoming email](../incoming_email.md) and test it is working as expected.
1. Update your DNS or load balancer to point at the new server.
1. Unblock new CI/CD jobs from starting by removing the custom NGINX configuration
you added previously:
```ruby
# The following line must be removed
nginx['custom_gitlab_server_config'] = "location = /api/v4/jobs/request {\n deny all;\n return 503;\n }\n"
```
1. Reconfigure GitLab:
```shell
sudo gitlab-ctl reconfigure
```
1. Remove the scheduled maintenance [broadcast message banner](../broadcast_messages.md).
|
---
stage: Data Access
group: Durability
info: To determine the technical writer assigned to the Stage/Group associated with
this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Migrate to a new server
breadcrumbs:
- doc
- administration
- backup_restore
---
<!-- some details borrowed from GitLab.com move from Azure to GCP detailed at https://gitlab.com/gitlab-com/migration/-/blob/master/.gitlab/issue_templates/failover.md -->
You can use GitLab backup and restore to migrate your instance to a new server. This section outlines a typical procedure for a GitLab deployment running on a single server.
If you're running GitLab Geo, an alternative option is [Geo disaster recovery for planned failover](../geo/disaster_recovery/planned_failover.md). You must make sure all sites meet the [Geo requirements](../geo/_index.md#requirements-for-running-geo) before selecting Geo for the migration.
{{< alert type="warning" >}}
Avoid uncoordinated data processing by both the new and old servers, where multiple
servers could connect concurrently and process the same data. For example, when using
[incoming email](../incoming_email.md), if both GitLab instances are
processing email at the same time, then both instances miss some data.
This type of problem can occur with other services as well, such as a
[non-packaged database](https://docs.gitlab.com/omnibus/settings/database.html#using-a-non-packaged-postgresql-database-management-server),
a non-packaged Redis instance, or non-packaged Sidekiq.
{{< /alert >}}
Prerequisites:
- Some time before your migration, consider notifying your users of upcoming
scheduled maintenance with a [broadcast message banner](../broadcast_messages.md).
- Ensure your backups are complete and current. Create a complete system-level backup, or
take a snapshot of all servers involved in the migration, in case destructive commands
(like `rm`) are run incorrectly.
## Prepare the new server
To prepare the new server:
1. Copy the
[SSH host keys](https://superuser.com/questions/532040/copy-ssh-keys-from-one-server-to-another-server/532079#532079)
from the old server to avoid man-in-the-middle attack warnings.
See [Manually replicate the primary site's SSH host keys](../geo/replication/configuration.md#step-2-manually-replicate-the-primary-sites-ssh-host-keys) for example steps.
1. [Install and configure GitLab](https://about.gitlab.com/install/) except
[incoming email](../incoming_email.md):
1. Install GitLab.
1. Configure by copying `/etc/gitlab` files from the old server to the new server, and update as necessary.
Read the
[Linux package installation backup and restore instructions](https://docs.gitlab.com/omnibus/settings/backups.html) for more detail.
1. If applicable, disable [incoming email](../incoming_email.md).
1. Block new CI/CD jobs from starting upon initial startup after the backup and restore.
Edit `/etc/gitlab/gitlab.rb` and set the following:
```ruby
nginx['custom_gitlab_server_config'] = "location = /api/v4/jobs/request {\n deny all;\n return 503;\n }\n"
```
1. Reconfigure GitLab:
```shell
sudo gitlab-ctl reconfigure
```
1. Stop GitLab to avoid any potential unnecessary and unintentional data processing:
```shell
sudo gitlab-ctl stop
```
1. Configure the new server to allow receiving the Redis database and GitLab backup files:
```shell
sudo rm -f /var/opt/gitlab/redis/dump.rdb
sudo chown <your-linux-username> /var/opt/gitlab/redis /var/opt/gitlab/backups
```
## Prepare and transfer content from the old server
1. Ensure you have an up-to-date system-level backup or snapshot of the old server.
1. Enable [maintenance mode](../maintenance_mode/_index.md),
if supported by your GitLab edition.
1. Block new CI/CD jobs from starting:
1. Edit `/etc/gitlab/gitlab.rb`, and set the following:
```ruby
nginx['custom_gitlab_server_config'] = "location = /api/v4/jobs/request {\n deny all;\n return 503;\n }\n"
```
1. Reconfigure GitLab:
```shell
sudo gitlab-ctl reconfigure
```
1. Disable periodic background jobs:
1. On the left sidebar, at the bottom, select **Admin**.
1. On the left sidebar, select **Monitoring** > **Background jobs**.
1. Under the Sidekiq dashboard, select **Cron** tab and then
**Disable All**.
1. Wait for the running CI/CD jobs to finish, or accept that jobs that have not completed may be lost.
To view jobs running, on the left sidebar, select **Overviews** > **Jobs**,
and then select **Running**.
1. Wait for Sidekiq jobs to finish:
1. On the left sidebar, select **Monitoring** > **Background jobs**.
1. Under the Sidekiq dashboard, select **Queues** and then **Live Poll**.
Wait for **Busy** and **Enqueued** to drop to 0.
These queues contain work that has been submitted by your users;
shutting down before these jobs complete may cause the work to be lost.
Make note of the numbers shown in the Sidekiq dashboard for post-migration verification.
1. Flush the Redis database to disk, and stop GitLab other than the services needed for migration:
```shell
sudo /opt/gitlab/embedded/bin/redis-cli -s /var/opt/gitlab/redis/redis.socket save && sudo gitlab-ctl stop && sudo gitlab-ctl start postgresql && sudo gitlab-ctl start gitaly
```
1. Create a GitLab backup:
```shell
sudo gitlab-backup create
```
1. After the backup is complete, disable the following GitLab services and prevent unintentional restarts by adding the following to the bottom of `/etc/gitlab/gitlab.rb`:
```ruby
alertmanager['enable'] = false
gitaly['enable'] = false
gitlab_exporter['enable'] = false
gitlab_pages['enable'] = false
gitlab_workhorse['enable'] = false
grafana['enable'] = false
logrotate['enable'] = false
gitlab_rails['incoming_email_enabled'] = false
nginx['enable'] = false
node_exporter['enable'] = false
postgres_exporter['enable'] = false
postgresql['enable'] = false
prometheus['enable'] = false
puma['enable'] = false
redis['enable'] = false
redis_exporter['enable'] = false
registry['enable'] = false
sidekiq['enable'] = false
```
1. Reconfigure GitLab:
```shell
sudo gitlab-ctl reconfigure
```
1. Verify everything is stopped, and confirm no services are running:
```shell
sudo gitlab-ctl status
```
1. Stop Redis on the new server before transferring the Redis database backup:
```shell
sudo gitlab-ctl stop redis
```
1. Transfer the Redis database and GitLab backups to the new server:
```shell
sudo scp /var/opt/gitlab/redis/dump.rdb <your-linux-username>@new-server:/var/opt/gitlab/redis
sudo scp /var/opt/gitlab/backups/your-backup.tar <your-linux-username>@new-server:/var/opt/gitlab/backups
```
### For instances with a large volume of Git and object data
If your GitLab instance has a large amount of data on local volumes, for example greater than 1 TB,
backups may take a long time. In that case, you may find it easier to transfer the data to the appropriate volumes on the new instance.
The main volumes that you might need to migrate manually are:
- The `/var/opt/gitlab/git-data` directory which contains all the Git data.
Be sure to read [the moving repositories documentation section](../operations/moving_repositories.md#migrating-to-another-gitlab-instance) to eliminate the chance of Git data corruption.
- The `/var/opt/gitlab/gitlab-rails/shared` directory which contains object data, like artifacts.
- If you are using the bundled PostgreSQL included with the Linux package,
you also need to migrate the [PostgreSQL data directory](https://docs.gitlab.com/omnibus/settings/database.html#store-postgresql-data-in-a-different-directory)
under `/var/opt/gitlab/postgresql/data`.
After all GitLab services have been stopped, you can use tools like `rsync` or mounting volume snapshots to move the data
to the new environment.
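Whichever tool you use, verify the copied tree before starting services on the new instance. A minimal local sketch with scratch directories standing in for the real volumes; an actual migration would typically run `rsync -a` between servers rather than a local `cp`:

```shell
# Scratch directories stand in for /var/opt/gitlab/git-data (source)
# and the new server's volume (destination).
src=$(mktemp -d); dst=$(mktemp -d)
mkdir -p "$src/repositories/@hashed"
printf 'ref: refs/heads/main\n' > "$src/repositories/@hashed/HEAD"

# Copy preserving permissions, ownership, and timestamps
cp -a "$src/." "$dst/"

# Verify the trees match before bringing up services on the new server
diff -r "$src" "$dst" && echo "trees match"
```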
## Restore data on the new server
1. Restore appropriate file system permissions:
```shell
sudo chown gitlab-redis /var/opt/gitlab/redis
sudo chown gitlab-redis:gitlab-redis /var/opt/gitlab/redis/dump.rdb
sudo chown git:root /var/opt/gitlab/backups
sudo chown git:git /var/opt/gitlab/backups/your-backup.tar
```
1. Start Redis:
```shell
sudo gitlab-ctl start redis
```
Redis picks up and restores `dump.rdb` automatically.
1. [Restore the GitLab backup](restore_gitlab.md).
1. Verify that the Redis database restored correctly:
1. On the left sidebar, at the bottom, select **Admin**.
1. On the left sidebar, select **Monitoring** > **Background jobs**.
1. Under the Sidekiq dashboard, verify that the numbers
match what was shown on the old server.
1. While still under the Sidekiq dashboard, select **Cron** and then **Enable All**
to re-enable periodic background jobs.
1. Test that read-only operations on the GitLab instance work as expected. For example, browse through project repository files, merge requests, and issues.
1. Disable [Maintenance Mode](../maintenance_mode/_index.md), if previously enabled.
1. Test that the GitLab instance is working as expected.
1. If applicable, re-enable [incoming email](../incoming_email.md) and test it is working as expected.
1. Update your DNS or load balancer to point at the new server.
1. Unblock new CI/CD jobs from starting by removing the custom NGINX configuration
you added previously:
```ruby
# The following line must be removed
nginx['custom_gitlab_server_config'] = "location = /api/v4/jobs/request {\n deny all;\n return 503;\n }\n"
```
1. Reconfigure GitLab:
```shell
sudo gitlab-ctl reconfigure
```
1. Remove the scheduled maintenance [broadcast message banner](../broadcast_messages.md).
---
stage: Data Access
group: Durability
info: To determine the technical writer assigned to the Stage/Group associated with
  this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Back up and restore large reference architectures
---
{{< details >}}
- Tier: Free, Premium, Ultimate
- Offering: GitLab Self-Managed
{{< /details >}}
GitLab backups preserve data consistency and enable disaster recovery for
large-scale GitLab deployments. This process:
- Coordinates data backups across distributed storage components
- Preserves PostgreSQL databases up to multiple terabytes in size
- Protects object storage data in external services
- Maintains backup integrity for large Git repository collections
- Creates recoverable copies of configuration and secret files
- Enables restoration of system data with minimal downtime
Follow these procedures for GitLab environments running reference architectures
that support 3,000+ users, with special considerations for cloud-based
databases and object storage.
{{< alert type="note" >}}
This document is intended for environments using:
- [Linux package (Omnibus) and cloud-native hybrid reference architectures 60 RPS / 3,000 users and up](../reference_architectures/_index.md)
- [Amazon RDS](https://aws.amazon.com/rds/) for PostgreSQL data
- [Amazon S3](https://aws.amazon.com/s3/) for object storage
- [Object storage](../object_storage.md) to store everything possible, including [blobs](backup_gitlab.md#blobs) and [container registry](backup_gitlab.md#container-registry)
{{< /alert >}}
## Configure daily backups
### Configure backup of PostgreSQL data
The [backup command](backup_gitlab.md) uses `pg_dump`, which is [not appropriate for databases over 100 GB](backup_gitlab.md#postgresql-databases). You must choose a PostgreSQL solution which has native, robust backup capabilities.
{{< tabs >}}
{{< tab title="AWS" >}}
1. [Configure AWS Backup](https://docs.aws.amazon.com/aws-backup/latest/devguide/creating-a-backup-plan.html) to back up RDS (and S3) data. For maximum protection, [configure continuous backups as well as snapshot backups](https://docs.aws.amazon.com/aws-backup/latest/devguide/point-in-time-recovery.html).
1. Configure AWS Backup to copy backups to a separate region. When AWS takes a backup, the backup can only be restored in the region where it is stored.
1. After AWS Backup has run at least one scheduled backup, you can [create an on-demand backup](https://docs.aws.amazon.com/aws-backup/latest/devguide/recov-point-create-on-demand-backup.html) as needed.
{{< /tab >}}
{{< tab title="Google" >}}
Schedule [automated daily backups of Google Cloud SQL data](https://cloud.google.com/sql/docs/postgres/backup-recovery/backing-up#schedulebackups).
Daily backups [can be retained](https://cloud.google.com/sql/docs/postgres/backup-recovery/backups#retention) for up to one year, and transaction logs can be retained for 7 days by default for point-in-time recovery.
{{< /tab >}}
{{< /tabs >}}
### Configure backup of object storage data
[Object storage](../object_storage.md) ([not NFS](../nfs.md)) is recommended for storing GitLab data, including [blobs](backup_gitlab.md#blobs) and [container registry](backup_gitlab.md#container-registry).
{{< tabs >}}
{{< tab title="AWS" >}}
Configure AWS Backup to back up S3 data. This can be done at the same time when [configuring the backup of PostgreSQL data](#configure-backup-of-postgresql-data).
{{< /tab >}}
{{< tab title="Google" >}}
1. [Create a backup bucket in GCS](https://cloud.google.com/storage/docs/creating-buckets).
1. [Create Storage Transfer Service jobs](https://cloud.google.com/storage-transfer/docs/create-transfers) which copy each GitLab object storage bucket to a backup bucket. You can create these jobs once, and [schedule them to run daily](https://cloud.google.com/storage-transfer/docs/schedule-transfer-jobs). However, this mixes new and old object storage data, so files that were deleted in GitLab still exist in the backup. This wastes storage after restore, but it is otherwise not a problem. These files would be inaccessible to GitLab users because they do not exist in the GitLab database. You can delete [some of these orphaned files](../raketasks/cleanup.md#clean-up-project-upload-files-from-object-storage) after restore, but this cleanup Rake task only operates on a subset of files.
1. For `When to overwrite`, choose `Never`. GitLab object stored files are intended to be immutable. This selection could be helpful if a malicious actor succeeded at mutating GitLab files.
1. For `When to delete`, choose `Never`. If you sync the backup bucket to source, then you cannot recover if files are accidentally or maliciously deleted from source.
1. Alternatively, it is possible to back up object storage into buckets or subdirectories segregated by day. This avoids the problem of orphaned files after restore, and supports backup of file versions if needed, but it greatly increases backup storage costs. This can be done with [a Cloud Function triggered by Cloud Scheduler](https://cloud.google.com/scheduler/docs/tut-gcf-pub-sub), or with a script run by a cronjob. A partial example:
```shell
# Set GCP project so you don't have to specify it in every command
gcloud config set project example-gcp-project-name
# Grant the Storage Transfer Service's hidden service account permission to write to the backup bucket. The integer 123456789012 is the GCP project's ID.
gsutil iam ch serviceAccount:project-123456789012@storage-transfer-service.iam.gserviceaccount.com:roles/storage.objectAdmin gs://backup-bucket
# Grant the Storage Transfer Service's hidden service account permission to list and read objects in the source buckets. The integer 123456789012 is the GCP project's ID.
gsutil iam ch serviceAccount:project-123456789012@storage-transfer-service.iam.gserviceaccount.com:roles/storage.legacyBucketReader,roles/storage.objectViewer gs://gitlab-bucket-artifacts
gsutil iam ch serviceAccount:project-123456789012@storage-transfer-service.iam.gserviceaccount.com:roles/storage.legacyBucketReader,roles/storage.objectViewer gs://gitlab-bucket-ci-secure-files
gsutil iam ch serviceAccount:project-123456789012@storage-transfer-service.iam.gserviceaccount.com:roles/storage.legacyBucketReader,roles/storage.objectViewer gs://gitlab-bucket-dependency-proxy
gsutil iam ch serviceAccount:project-123456789012@storage-transfer-service.iam.gserviceaccount.com:roles/storage.legacyBucketReader,roles/storage.objectViewer gs://gitlab-bucket-lfs
gsutil iam ch serviceAccount:project-123456789012@storage-transfer-service.iam.gserviceaccount.com:roles/storage.legacyBucketReader,roles/storage.objectViewer gs://gitlab-bucket-mr-diffs
gsutil iam ch serviceAccount:project-123456789012@storage-transfer-service.iam.gserviceaccount.com:roles/storage.legacyBucketReader,roles/storage.objectViewer gs://gitlab-bucket-packages
gsutil iam ch serviceAccount:project-123456789012@storage-transfer-service.iam.gserviceaccount.com:roles/storage.legacyBucketReader,roles/storage.objectViewer gs://gitlab-bucket-pages
gsutil iam ch serviceAccount:project-123456789012@storage-transfer-service.iam.gserviceaccount.com:roles/storage.legacyBucketReader,roles/storage.objectViewer gs://gitlab-bucket-registry
gsutil iam ch serviceAccount:project-123456789012@storage-transfer-service.iam.gserviceaccount.com:roles/storage.legacyBucketReader,roles/storage.objectViewer gs://gitlab-bucket-terraform-state
gsutil iam ch serviceAccount:project-123456789012@storage-transfer-service.iam.gserviceaccount.com:roles/storage.legacyBucketReader,roles/storage.objectViewer gs://gitlab-bucket-uploads
# Create transfer jobs for each bucket, targeting a subdirectory in the backup bucket.
today=$(date +%F)
gcloud transfer jobs create gs://gitlab-bucket-artifacts/ gs://backup-bucket/$today/artifacts/ --name "$today-backup-artifacts"
gcloud transfer jobs create gs://gitlab-bucket-ci-secure-files/ gs://backup-bucket/$today/ci-secure-files/ --name "$today-backup-ci-secure-files"
gcloud transfer jobs create gs://gitlab-bucket-dependency-proxy/ gs://backup-bucket/$today/dependency-proxy/ --name "$today-backup-dependency-proxy"
gcloud transfer jobs create gs://gitlab-bucket-lfs/ gs://backup-bucket/$today/lfs/ --name "$today-backup-lfs"
gcloud transfer jobs create gs://gitlab-bucket-mr-diffs/ gs://backup-bucket/$today/mr-diffs/ --name "$today-backup-mr-diffs"
gcloud transfer jobs create gs://gitlab-bucket-packages/ gs://backup-bucket/$today/packages/ --name "$today-backup-packages"
gcloud transfer jobs create gs://gitlab-bucket-pages/ gs://backup-bucket/$today/pages/ --name "$today-backup-pages"
gcloud transfer jobs create gs://gitlab-bucket-registry/ gs://backup-bucket/$today/registry/ --name "$today-backup-registry"
gcloud transfer jobs create gs://gitlab-bucket-terraform-state/ gs://backup-bucket/$today/terraform-state/ --name "$today-backup-terraform-state"
gcloud transfer jobs create gs://gitlab-bucket-uploads/ gs://backup-bucket/$today/uploads/ --name "$today-backup-uploads"
```
1. These Transfer Jobs are not automatically deleted after running. You could add cleanup of old jobs to the script.
1. The example script does not delete old backups. You could implement cleanup of old backups according to your desired retention policy.
1. Ensure that backups are performed at the same time or later than Cloud SQL backups, to reduce data inconsistencies.
{{< /tab >}}
{{< /tabs >}}
### Configure backup of Git repositories
Set up cronjobs to perform Gitaly server-side backups:
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
1. Configure Gitaly server-side backup destination on all Gitaly nodes by following [Configure server-side backups](../gitaly/configure_gitaly.md#configure-server-side-backups).
This bucket is used exclusively by Gitaly to store repository data.
1. While Gitaly backs up all Git repository data in its designated object storage bucket configured previously,
the backup utility tool (`gitlab-backup`) uploads additional backup data to a separate bucket. This data includes a `tar` file containing essential metadata for restores.
Ensure this backup data is properly uploaded to remote (cloud) storage by following
[Upload backups to a remote (cloud) storage](backup_gitlab.md#upload-backups-to-a-remote-cloud-storage) to set up the upload bucket.
1. (Optional) To solidify the durability of this backup data, back up both buckets configured previously with their respective object store provider by adding them to
[backups of object storage data](#configure-backup-of-object-storage-data).
1. SSH into a GitLab Rails node, which is a node that runs Puma or Sidekiq.
1. Take a full backup of your Git data. Use the `REPOSITORIES_SERVER_SIDE` variable, and skip PostgreSQL data:
```shell
sudo gitlab-backup create REPOSITORIES_SERVER_SIDE=true SKIP=db
```
This causes Gitaly nodes to upload the Git data and some metadata to remote storage. Blobs such as uploads, artifacts, and LFS do not need to be explicitly skipped, because the `gitlab-backup` command does not back up object storage by default.
1. Note the [backup ID](backup_archive_process.md#backup-id) of the backup, which is needed for the next step. For example, if the backup command outputs
`2024-02-22 02:17:47 UTC -- Backup 1708568263_2024_02_22_16.9.0-ce is done.`, then the backup ID is `1708568263_2024_02_22_16.9.0-ce`.
1. Check that the full backup created data in both the Gitaly backup bucket as well as the regular backup bucket.
1. Run the [backup command](backup_gitlab.md#backup-command) again, this time specifying [incremental backup of Git repositories](backup_gitlab.md#incremental-repository-backups), and a backup ID. Using the example ID from the previous step, the command is:
```shell
sudo gitlab-backup create REPOSITORIES_SERVER_SIDE=true SKIP=db INCREMENTAL=yes PREVIOUS_BACKUP=1708568263_2024_02_22_16.9.0-ce
```
The value of `PREVIOUS_BACKUP` is not used by this command, but it is required by the command. There is an issue for removing this unnecessary requirement, see [issue 429141](https://gitlab.com/gitlab-org/gitlab/-/issues/429141).
1. Check that the incremental backup succeeded, and added data to object storage.
1. [Configure cron to make daily backups](backup_gitlab.md#configuring-cron-to-make-daily-backups). Edit the crontab for the `root` user:
```shell
sudo su -
crontab -e
```
1. There, add the following lines to schedule the backup for every day of the month at 2 AM. To limit the number of increments needed to restore a backup, a full backup of Git repositories is taken on the first of each month, and an incremental backup is taken on the remaining days:
```plaintext
0 2 1 * * /opt/gitlab/bin/gitlab-backup create REPOSITORIES_SERVER_SIDE=true SKIP=db CRON=1
0 2 2-31 * * /opt/gitlab/bin/gitlab-backup create REPOSITORIES_SERVER_SIDE=true SKIP=db INCREMENTAL=yes PREVIOUS_BACKUP=1708568263_2024_02_22_16.9.0-ce CRON=1
```
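If you script this schedule yourself instead of hard-coding the `PREVIOUS_BACKUP` value, the backup ID can be parsed from the command output, and the full-versus-incremental choice keyed on the day of the month. A minimal sketch; the output line format follows the example earlier on this page, and the backup commands are only echoed, not run:

```shell
# Parse the backup ID out of a `gitlab-backup create` output line
# (format as in the example earlier on this page).
line='2024-02-22 02:17:47 UTC -- Backup 1708568263_2024_02_22_16.9.0-ce is done.'
backup_id=$(printf '%s\n' "$line" | sed -n 's/.*Backup \(.*\) is done\./\1/p')
echo "parsed backup ID: $backup_id"

# Choose full versus incremental by day of month, mirroring the cron
# schedule above. `day` is fixed for the demo; a real script would use
# day=$(date +%d).
day=02
if [ "$day" = "01" ]; then
  echo "full: gitlab-backup create REPOSITORIES_SERVER_SIDE=true SKIP=db CRON=1"
else
  echo "incremental: gitlab-backup create REPOSITORIES_SERVER_SIDE=true SKIP=db INCREMENTAL=yes PREVIOUS_BACKUP=$backup_id CRON=1"
fi
```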
{{< /tab >}}
{{< tab title="Helm chart (Kubernetes)" >}}
1. Configure Gitaly server-side backup destination on all Gitaly nodes by following
[Configure server-side backups](../gitaly/configure_gitaly.md#configure-server-side-backups). This bucket is used exclusively by Gitaly to store repository data.
1. While Gitaly backs up all Git repository data in its designated object storage bucket configured previously,
the backup utility tool (`gitlab-backup`) uploads additional backup data to a separate bucket. This data includes a `tar` file containing essential metadata for restores.
Ensure this backup data is properly uploaded to remote (cloud) storage by following
[Upload backups to a remote (cloud) storage](backup_gitlab.md#upload-backups-to-a-remote-cloud-storage) to set up the upload bucket.
1. (Optional) To solidify the durability of this backup data, back up both buckets configured previously with their respective object store provider by adding them to
   [backups of object storage data](#configure-backup-of-object-storage-data).
1. SSH into a GitLab Rails node, which is a node that runs Puma or Sidekiq.
1. Take a full backup of your Git data. Use the `REPOSITORIES_SERVER_SIDE` variable and skip all other data:
```shell
kubectl exec <Toolbox pod name> -it -- backup-utility --repositories-server-side --skip db,builds,pages,registry,uploads,artifacts,lfs,packages,external_diffs,terraform_state,pages,ci_secure_files
```
This causes Gitaly nodes to upload the Git data and some metadata to remote storage. See [Toolbox included tools](https://docs.gitlab.com/charts/charts/gitlab/toolbox/#toolbox-included-tools).
1. Check that the full backup created data in both the Gitaly backup bucket as well as the regular backup bucket. Incremental repository backup is not supported by `backup-utility` with server-side repository backup, see [charts issue 3421](https://gitlab.com/gitlab-org/charts/gitlab/-/issues/3421).
1. [Configure cron to make daily backups](https://docs.gitlab.com/charts/backup-restore/backup.html#cron-based-backup). Specifically, set `gitlab.toolbox.backups.cron.extraArgs` to include:
```shell
--repositories-server-side --skip db --skip repositories --skip uploads --skip builds --skip artifacts --skip pages --skip lfs --skip terraform_state --skip registry --skip packages --skip ci_secure_files
```
{{< /tab >}}
{{< /tabs >}}
### Configure backup of configuration files
If your configuration and secrets are defined outside of your deployment and then deployed into it, then the implementation of the backup strategy depends on your specific setup and requirements. As an example, you can store secrets in [AWS Secret Manager](https://aws.amazon.com/secrets-manager/) with [replication to multiple regions](https://docs.aws.amazon.com/secretsmanager/latest/userguide/create-manage-multi-region-secrets.html) and configure a script to back up secrets automatically.
If your configuration and secrets are only defined inside your deployment:
1. [Storing configuration files](backup_gitlab.md#storing-configuration-files) describes how to extract configuration and secrets files.
1. These files should be uploaded to a separate, more restrictive, object storage account.
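As a sketch of that extraction step, configuration and secrets on a Linux package node can be archived with a timestamped name and restrictive permissions before upload. Scratch paths stand in for `/etc/gitlab` here; the archive name and contents are illustrative:

```shell
# Demo with scratch paths; on a real node the source is /etc/gitlab
# (notably gitlab.rb and gitlab-secrets.json).
etc=$(mktemp -d); out=$(mktemp -d)
printf "external_url 'https://gitlab.example.com'\n" > "$etc/gitlab.rb"
printf '{"db_key_base":"REDACTED"}\n' > "$etc/gitlab-secrets.json"

# Archive with a timestamped name and restrictive permissions
archive="$out/gitlab-config-$(date +%F).tar.gz"
tar -czf "$archive" -C "$etc" .
chmod 600 "$archive"

# List contents to confirm both files are in the archive
tar -tzf "$archive"
```

The resulting archive is what you would then upload to the separate, more restrictive object storage account.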
## Restore a backup
Restore a backup of a GitLab instance.
### Prerequisites
Before restoring a backup:
1. Choose a [working destination GitLab instance](restore_gitlab.md#the-destination-gitlab-instance-must-already-be-working).
1. Ensure the destination GitLab instance is in a region where your AWS backups are stored.
1. Check that the [destination GitLab instance uses exactly the same version and type (CE or EE) of GitLab](restore_gitlab.md#the-destination-gitlab-instance-must-have-the-exact-same-version)
on which the backup data was created. For example, CE 15.1.4.
1. [Restore backed up secrets to the destination GitLab instance](restore_gitlab.md#gitlab-secrets-must-be-restored).
1. Ensure that the [destination GitLab instance has the same repository storages configured](restore_gitlab.md#certain-gitlab-configuration-must-match-the-original-backed-up-environment).
Additional storages are fine.
1. Ensure that [object storage is configured](restore_gitlab.md#certain-gitlab-configuration-must-match-the-original-backed-up-environment).
1. To use new secrets or configuration, and to avoid dealing with any unexpected configuration changes during restore:
- Linux package installations on all nodes:
1. [Reconfigure](../restart_gitlab.md#reconfigure-a-linux-package-installation) the destination GitLab instance.
1. [Restart](../restart_gitlab.md#restart-a-linux-package-installation) the destination GitLab instance.
- Helm chart (Kubernetes) installations:
1. On all GitLab Linux package nodes, run:
```shell
sudo gitlab-ctl reconfigure
sudo gitlab-ctl start
```
1. Make sure you have a running GitLab instance by deploying the charts.
Ensure the Toolbox pod is enabled and running by executing the following command:
```shell
kubectl get pods -lrelease=RELEASE_NAME,app=toolbox
```
1. The Webservice, Sidekiq and Toolbox pods must be restarted.
The safest way to restart those pods is to run:
```shell
kubectl delete pods -lapp=sidekiq,release=<helm release name>
kubectl delete pods -lapp=webservice,release=<helm release name>
kubectl delete pods -lapp=toolbox,release=<helm release name>
```
1. Confirm the destination GitLab instance still works. For example:
- Make requests to the [health check endpoints](../monitoring/health_check.md).
- [Run GitLab check Rake tasks](../raketasks/maintenance.md#check-gitlab-configuration).
1. Stop GitLab services which connect to the PostgreSQL database.
- Linux package installations on all nodes running Puma or Sidekiq, run:
```shell
sudo gitlab-ctl stop
```
- Helm chart (Kubernetes) installations:
1. Note the current number of replicas for database clients for subsequent restart:
```shell
kubectl get deploy -n <namespace> -lapp=sidekiq,release=<helm release name> -o jsonpath='{.items[].spec.replicas}{"\n"}'
kubectl get deploy -n <namespace> -lapp=webservice,release=<helm release name> -o jsonpath='{.items[].spec.replicas}{"\n"}'
kubectl get deploy -n <namespace> -lapp=prometheus,release=<helm release name> -o jsonpath='{.items[].spec.replicas}{"\n"}'
```
1. Stop the clients of the database to prevent locks interfering with the restore process:
```shell
kubectl scale deploy -lapp=sidekiq,release=<helm release name> -n <namespace> --replicas=0
kubectl scale deploy -lapp=webservice,release=<helm release name> -n <namespace> --replicas=0
kubectl scale deploy -lapp=prometheus,release=<helm release name> -n <namespace> --replicas=0
```
### Restore object storage data
{{< tabs >}}
{{< tab title="AWS" >}}
Each bucket exists as a separate backup within AWS and each backup can be restored to an existing or
new bucket.
1. To restore buckets, an IAM role with the correct permissions is required:
- `AWSBackupServiceRolePolicyForBackup`
- `AWSBackupServiceRolePolicyForRestores`
- `AWSBackupServiceRolePolicyForS3Restore`
- `AWSBackupServiceRolePolicyForS3Backup`
1. If existing buckets are being used, they must have
[Access Control Lists enabled](https://docs.aws.amazon.com/AmazonS3/latest/userguide/managing-acls.html).
1. [Restore the S3 buckets using built-in tooling](https://docs.aws.amazon.com/aws-backup/latest/devguide/restoring-s3.html).
1. You can move on to [Restore PostgreSQL data](#restore-postgresql-data) while the restore job is
running.
{{< /tab >}}
{{< tab title="Google" >}}
1. [Create Storage Transfer Service jobs](https://cloud.google.com/storage-transfer/docs/create-transfers) to transfer backed up data to the GitLab buckets.
1. You can move on to [Restore PostgreSQL data](#restore-postgresql-data) while the transfer jobs are
running.
{{< /tab >}}
{{< /tabs >}}
### Restore PostgreSQL data
{{< tabs >}}
{{< tab title="AWS" >}}
1. [Restore the AWS RDS database using built-in tooling](https://docs.aws.amazon.com/aws-backup/latest/devguide/restoring-rds.html),
which creates a new RDS instance.
1. Because the new RDS instance has a different endpoint, you must reconfigure the destination GitLab instance
to point to the new database:
- For Linux package installations, follow
[Using a non-packaged PostgreSQL database management server](https://docs.gitlab.com/omnibus/settings/database.html#using-a-non-packaged-postgresql-database-management-server).
- For Helm chart (Kubernetes) installations, follow
[Configure the GitLab chart with an external database](https://docs.gitlab.com/charts/advanced/external-db/).
1. Before moving on, wait until the new RDS instance is created and ready to use.
{{< /tab >}}
{{< tab title="Google" >}}
1. [Restore the Google Cloud SQL database using built-in tooling](https://cloud.google.com/sql/docs/postgres/backup-recovery/restoring).
1. If you restore to a new database instance, then reconfigure GitLab to point to the new database:
- For Linux package installations, follow
[Using a non-packaged PostgreSQL database management server](https://docs.gitlab.com/omnibus/settings/database.html#using-a-non-packaged-postgresql-database-management-server).
- For Helm chart (Kubernetes) installations, follow
[Configure the GitLab chart with an external database](https://docs.gitlab.com/charts/advanced/external-db/).
1. Before moving on, wait until the Cloud SQL instance is ready to use.
{{< /tab >}}
{{< /tabs >}}
### Restore Git repositories
First, as part of [Restore object storage data](#restore-object-storage-data), you should have already:
- Restored a bucket containing the Gitaly server-side backups of Git repositories.
- Restored a bucket containing the `*_gitlab_backup.tar` files.
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
1. SSH into a GitLab Rails node, which is a node that runs Puma or Sidekiq.
1. In your backup bucket, choose a `*_gitlab_backup.tar` file based on its timestamp, aligned with the PostgreSQL and object storage data that you restored.
1. Download the `tar` file to `/var/opt/gitlab/backups/`.
1. Restore the backup, specifying the ID of the backup you wish to restore, omitting `_gitlab_backup.tar` from the name:
```shell
# This command will overwrite the contents of your GitLab database!
sudo gitlab-backup restore BACKUP=11493107454_2018_04_25_10.6.4-ce SKIP=db
```
If there's a GitLab version mismatch between your backup tar file and the installed version of
GitLab, the restore command aborts with an error message.
Install the [correct GitLab version](https://packages.gitlab.com/gitlab/), and then try again.
1. Reconfigure, start, and [check](../raketasks/maintenance.md#check-gitlab-configuration) GitLab:
1. In all PostgreSQL nodes, run:
```shell
sudo gitlab-ctl reconfigure
```
1. In all Puma or Sidekiq nodes, run:
```shell
sudo gitlab-ctl start
```
1. In one Puma or Sidekiq node, run:
```shell
sudo gitlab-rake gitlab:check SANITIZE=true
```
1. Check that
[database values can be decrypted](../raketasks/check.md#verify-database-values-can-be-decrypted-using-the-current-secrets),
especially if `/etc/gitlab/gitlab-secrets.json` was restored, or if a different server is the
target for the restore.
In a Puma or Sidekiq node, run:
```shell
sudo gitlab-rake gitlab:doctor:secrets
```
1. For added assurance, you can perform
[an integrity check on the uploaded files](../raketasks/check.md#uploaded-files-integrity):
In a Puma or Sidekiq node, run:
```shell
sudo gitlab-rake gitlab:artifacts:check
sudo gitlab-rake gitlab:lfs:check
sudo gitlab-rake gitlab:uploads:check
```
If missing or corrupted files are found, it does not always mean the backup and restore process failed.
For example, the files might be missing or corrupted on the source GitLab instance. You might need to cross-reference prior backups.
If you are migrating GitLab to a new environment, you can run the same checks on the source GitLab instance to determine whether
the integrity check result is preexisting or related to the backup and restore process.
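When scripting a restore, the version match required in the restore step can be checked before downloading anything, because the backup ID embeds the GitLab version it was created with. A sketch using the example ID from the restore step; the installed version is hard-coded here and would really be read from the destination node:

```shell
# The backup ID ends with the GitLab version it was created on,
# for example 11493107454_2018_04_25_10.6.4-ce -> 10.6.4-ce.
backup_id='11493107454_2018_04_25_10.6.4-ce'
backup_version=${backup_id#*_*_*_*_}   # strip "<timestamp>_YYYY_MM_DD_"
echo "backup version: $backup_version"

# Compare against the installed version before running the restore.
installed='10.6.4-ce'   # hard-coded for the demo
if [ "$backup_version" = "$installed" ]; then
  echo "versions match; safe to restore"
else
  echo "version mismatch: install GitLab $backup_version first" >&2
fi
```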
{{< /tab >}}
{{< tab title="Helm chart (Kubernetes)" >}}
1. SSH into a toolbox pod.
1. In your backup bucket, choose a `*_gitlab_backup.tar` file based on its timestamp, aligned with the PostgreSQL and object storage data that you restored.
1. Download the `tar` file to `/var/opt/gitlab/backups/`.
1. Restore the backup, specifying the ID of the backup you wish to restore, omitting `_gitlab_backup.tar` from the name:
```shell
# This command will overwrite the contents of Gitaly!
kubectl exec <Toolbox pod name> -it -- backup-utility --restore -t 11493107454_2018_04_25_10.6.4-ce --skip db,builds,pages,registry,uploads,artifacts,lfs,packages,external_diffs,terraform_state,pages,ci_secure_files
```
If there's a GitLab version mismatch between your backup tar file and the installed version of
GitLab, the restore command aborts with an error message.
Install the [correct GitLab version](https://packages.gitlab.com/gitlab/), and then try again.
1. Restart and [check](../raketasks/maintenance.md#check-gitlab-configuration) GitLab:
1. Start the stopped deployments, using the number of replicas noted in [Prerequisites](#prerequisites):
```shell
kubectl scale deploy -lapp=sidekiq,release=<helm release name> -n <namespace> --replicas=<original value>
kubectl scale deploy -lapp=webservice,release=<helm release name> -n <namespace> --replicas=<original value>
kubectl scale deploy -lapp=prometheus,release=<helm release name> -n <namespace> --replicas=<original value>
```
1. In the Toolbox pod, run:
```shell
sudo gitlab-rake gitlab:check SANITIZE=true
```
1. Check that
[database values can be decrypted](../raketasks/check.md#verify-database-values-can-be-decrypted-using-the-current-secrets),
especially if `/etc/gitlab/gitlab-secrets.json` was restored, or if a different server is the
target for the restore.
In the Toolbox pod, run:
```shell
sudo gitlab-rake gitlab:doctor:secrets
```
1. For added assurance, you can perform
[an integrity check on the uploaded files](../raketasks/check.md#uploaded-files-integrity):
These commands can take a long time because they iterate over all rows. Run the following commands on a GitLab Rails node
rather than in a Toolbox pod:
```shell
sudo gitlab-rake gitlab:artifacts:check
sudo gitlab-rake gitlab:lfs:check
sudo gitlab-rake gitlab:uploads:check
```
If missing or corrupted files are found, it does not always mean the backup and restore process failed.
For example, the files might be missing or corrupted on the source GitLab instance. You might need to cross-reference prior backups.
If you are migrating GitLab to a new environment, you can run the same checks on the source GitLab instance to determine whether
the integrity check result is preexisting or related to the backup and restore process.
{{< /tab >}}
{{< /tabs >}}
The restoration should be complete.
|
---
stage: Data Access
group: Durability
info: To determine the technical writer assigned to the Stage/Group associated with
this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Back up and restore large reference architectures
breadcrumbs:
- doc
- administration
- backup_restore
---
{{< details >}}
- Tier: Free, Premium, Ultimate
- Offering: GitLab Self-Managed
{{< /details >}}
GitLab backups preserve data consistency and enable disaster recovery for
large-scale GitLab deployments. This process:
- Coordinates data backups across distributed storage components
- Preserves PostgreSQL databases up to multiple terabytes in size
- Protects object storage data in external services
- Maintains backup integrity for large Git repository collections
- Creates recoverable copies of configuration and secret files
- Enables restoration of system data with minimal downtime
Follow these procedures for GitLab environments running reference architectures
that support 3,000+ users, with special considerations for cloud-based
databases and object storage.
{{< alert type="note" >}}
This document is intended for environments using:
- [Linux package (Omnibus) and cloud-native hybrid reference architectures 60 RPS / 3,000 users and up](../reference_architectures/_index.md)
- [Amazon RDS](https://aws.amazon.com/rds/) for PostgreSQL data
- [Amazon S3](https://aws.amazon.com/s3/) for object storage
- [Object storage](../object_storage.md) to store everything possible, including [blobs](backup_gitlab.md#blobs) and [container registry](backup_gitlab.md#container-registry)
{{< /alert >}}
## Configure daily backups
### Configure backup of PostgreSQL data
The [backup command](backup_gitlab.md) uses `pg_dump`, which is [not appropriate for databases over 100 GB](backup_gitlab.md#postgresql-databases). You must choose a PostgreSQL solution which has native, robust backup capabilities.
{{< tabs >}}
{{< tab title="AWS" >}}
1. [Configure AWS Backup](https://docs.aws.amazon.com/aws-backup/latest/devguide/creating-a-backup-plan.html) to back up RDS (and S3) data. For maximum protection, [configure continuous backups as well as snapshot backups](https://docs.aws.amazon.com/aws-backup/latest/devguide/point-in-time-recovery.html).
1. Configure AWS Backup to copy backups to a separate region. A backup can only be restored in the region where it is stored.
1. After AWS Backup has run at least one scheduled backup, then you can [create an on-demand backup](https://docs.aws.amazon.com/aws-backup/latest/devguide/recov-point-create-on-demand-backup.html) as needed.
{{< /tab >}}
{{< tab title="Google" >}}
Schedule [automated daily backups of Google Cloud SQL data](https://cloud.google.com/sql/docs/postgres/backup-recovery/backing-up#schedulebackups).
Daily backups [can be retained](https://cloud.google.com/sql/docs/postgres/backup-recovery/backups#retention) for up to one year, and transaction logs can be retained for 7 days by default for point-in-time recovery.
{{< /tab >}}
{{< /tabs >}}
### Configure backup of object storage data
[Object storage](../object_storage.md) ([not NFS](../nfs.md)) is recommended for storing GitLab data, including [blobs](backup_gitlab.md#blobs) and [Container registry](backup_gitlab.md#container-registry).
{{< tabs >}}
{{< tab title="AWS" >}}
Configure AWS Backup to back up S3 data. This can be done at the same time as [configuring the backup of PostgreSQL data](#configure-backup-of-postgresql-data).
{{< /tab >}}
{{< tab title="Google" >}}
1. [Create a backup bucket in GCS](https://cloud.google.com/storage/docs/creating-buckets).
1. [Create Storage Transfer Service jobs](https://cloud.google.com/storage-transfer/docs/create-transfers) which copy each GitLab object storage bucket to a backup bucket. You can create these jobs once, and [schedule them to run daily](https://cloud.google.com/storage-transfer/docs/schedule-transfer-jobs). However this mixes new and old object storage data, so files that were deleted in GitLab will still exist in the backup. This wastes storage after restore, but it is otherwise not a problem. These files would be inaccessible to GitLab users because they do not exist in the GitLab database. You can delete [some of these orphaned files](../raketasks/cleanup.md#clean-up-project-upload-files-from-object-storage) after restore, but this clean up Rake task only operates on a subset of files.
1. For `When to overwrite`, choose `Never`. GitLab object stored files are intended to be immutable. This selection could be helpful if a malicious actor succeeded at mutating GitLab files.
1. For `When to delete`, choose `Never`. If you sync the backup bucket to source, then you cannot recover if files are accidentally or maliciously deleted from source.
1. Alternatively, it is possible to back up object storage into buckets or subdirectories segregated by day. This avoids the problem of orphaned files after restore, and supports backup of file versions if needed, but it greatly increases backup storage costs. This can be done with [a Cloud Function triggered by Cloud Scheduler](https://cloud.google.com/scheduler/docs/tut-gcf-pub-sub), or with a script run by a cronjob. A partial example:
```shell
# Set GCP project so you don't have to specify it in every command
gcloud config set project example-gcp-project-name
# Grant the Storage Transfer Service's hidden service account permission to write to the backup bucket. The integer 123456789012 is the GCP project's ID.
gsutil iam ch serviceAccount:project-123456789012@storage-transfer-service.iam.gserviceaccount.com:roles/storage.objectAdmin gs://backup-bucket
# Grant the Storage Transfer Service's hidden service account permission to list and read objects in the source buckets. The integer 123456789012 is the GCP project's ID.
gsutil iam ch serviceAccount:project-123456789012@storage-transfer-service.iam.gserviceaccount.com:roles/storage.legacyBucketReader,roles/storage.objectViewer gs://gitlab-bucket-artifacts
gsutil iam ch serviceAccount:project-123456789012@storage-transfer-service.iam.gserviceaccount.com:roles/storage.legacyBucketReader,roles/storage.objectViewer gs://gitlab-bucket-ci-secure-files
gsutil iam ch serviceAccount:project-123456789012@storage-transfer-service.iam.gserviceaccount.com:roles/storage.legacyBucketReader,roles/storage.objectViewer gs://gitlab-bucket-dependency-proxy
gsutil iam ch serviceAccount:project-123456789012@storage-transfer-service.iam.gserviceaccount.com:roles/storage.legacyBucketReader,roles/storage.objectViewer gs://gitlab-bucket-lfs
gsutil iam ch serviceAccount:project-123456789012@storage-transfer-service.iam.gserviceaccount.com:roles/storage.legacyBucketReader,roles/storage.objectViewer gs://gitlab-bucket-mr-diffs
gsutil iam ch serviceAccount:project-123456789012@storage-transfer-service.iam.gserviceaccount.com:roles/storage.legacyBucketReader,roles/storage.objectViewer gs://gitlab-bucket-packages
gsutil iam ch serviceAccount:project-123456789012@storage-transfer-service.iam.gserviceaccount.com:roles/storage.legacyBucketReader,roles/storage.objectViewer gs://gitlab-bucket-pages
gsutil iam ch serviceAccount:project-123456789012@storage-transfer-service.iam.gserviceaccount.com:roles/storage.legacyBucketReader,roles/storage.objectViewer gs://gitlab-bucket-registry
gsutil iam ch serviceAccount:project-123456789012@storage-transfer-service.iam.gserviceaccount.com:roles/storage.legacyBucketReader,roles/storage.objectViewer gs://gitlab-bucket-terraform-state
gsutil iam ch serviceAccount:project-123456789012@storage-transfer-service.iam.gserviceaccount.com:roles/storage.legacyBucketReader,roles/storage.objectViewer gs://gitlab-bucket-uploads
# Create transfer jobs for each bucket, targeting a subdirectory in the backup bucket.
today=$(date +%F)
gcloud transfer jobs create gs://gitlab-bucket-artifacts/ gs://backup-bucket/$today/artifacts/ --name "$today-backup-artifacts"
gcloud transfer jobs create gs://gitlab-bucket-ci-secure-files/ gs://backup-bucket/$today/ci-secure-files/ --name "$today-backup-ci-secure-files"
gcloud transfer jobs create gs://gitlab-bucket-dependency-proxy/ gs://backup-bucket/$today/dependency-proxy/ --name "$today-backup-dependency-proxy"
gcloud transfer jobs create gs://gitlab-bucket-lfs/ gs://backup-bucket/$today/lfs/ --name "$today-backup-lfs"
gcloud transfer jobs create gs://gitlab-bucket-mr-diffs/ gs://backup-bucket/$today/mr-diffs/ --name "$today-backup-mr-diffs"
gcloud transfer jobs create gs://gitlab-bucket-packages/ gs://backup-bucket/$today/packages/ --name "$today-backup-packages"
gcloud transfer jobs create gs://gitlab-bucket-pages/ gs://backup-bucket/$today/pages/ --name "$today-backup-pages"
gcloud transfer jobs create gs://gitlab-bucket-registry/ gs://backup-bucket/$today/registry/ --name "$today-backup-registry"
gcloud transfer jobs create gs://gitlab-bucket-terraform-state/ gs://backup-bucket/$today/terraform-state/ --name "$today-backup-terraform-state"
gcloud transfer jobs create gs://gitlab-bucket-uploads/ gs://backup-bucket/$today/uploads/ --name "$today-backup-uploads"
```
1. These Transfer Jobs are not automatically deleted after running. You could implement clean up of old jobs in the script.
1. The example script does not delete old backups. You could implement clean up of old backups according to your desired retention policy.
1. Ensure that backups are performed at the same time or later than Cloud SQL backups, to reduce data inconsistencies.
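The clean-up of old backups suggested above can be sketched as a small script. This is a sketch only: it assumes dated `YYYY-MM-DD` subdirectories as created by the example script, a hypothetical `backup-bucket` name, and GNU `date`; the `gsutil` wiring is left commented because it depends on your buckets.

```shell
# Sketch: enforce a 30-day retention policy on dated backup subdirectories.
RETENTION_DAYS=30
cutoff=$(date -d "-${RETENTION_DAYS} days" +%F)   # GNU date assumed

is_expired() {
  # ISO dates sort lexicographically, so a plain string comparison
  # against the cutoff detects an expired backup directory.
  [ "$1" \< "$cutoff" ]
}

# Wiring for GCS (not run here): list dated directories, remove expired ones.
# for dir in $(gsutil ls gs://backup-bucket/ | awk -F/ '{print $(NF-1)}'); do
#   is_expired "$dir" && gsutil -m rm -r "gs://backup-bucket/$dir/"
# done

is_expired "2001-01-01" && echo "2001-01-01 is expired"
```

Because lexicographic comparison is only valid for zero-padded ISO dates, keep the `YYYY-MM-DD` naming convention if you adopt this approach.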
{{< /tab >}}
{{< /tabs >}}
### Configure backup of Git repositories
Set up cronjobs to perform Gitaly server-side backups:
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
1. Configure Gitaly server-side backup destination on all Gitaly nodes by following [Configure server-side backups](../gitaly/configure_gitaly.md#configure-server-side-backups).
This bucket is used exclusively by Gitaly to store repository data.
1. While Gitaly backs up all Git repository data in its designated object storage bucket configured previously,
the backup utility tool (`gitlab-backup`) uploads additional backup data to a separate bucket. This data includes a `tar` file containing essential metadata for restores.
Ensure this backup data is properly uploaded to remote (cloud) storage by following
[Upload backups to a remote (cloud) storage](backup_gitlab.md#upload-backups-to-a-remote-cloud-storage) to set up the upload bucket.
1. (Optional) To solidify the durability of this backup data, back up both buckets configured previously with their respective object store provider by adding them to
[backups of object storage data](#configure-backup-of-object-storage-data).
1. SSH into a GitLab Rails node, which is a node that runs Puma or Sidekiq.
1. Take a full backup of your Git data. Use the `REPOSITORIES_SERVER_SIDE` variable, and skip PostgreSQL data:
```shell
sudo gitlab-backup create REPOSITORIES_SERVER_SIDE=true SKIP=db
```
This causes Gitaly nodes to upload the Git data and some metadata to remote storage. Blobs such as uploads, artifacts, and LFS do not need to be explicitly skipped, because the `gitlab-backup` command does not back up object storage by default.
1. Note the [backup ID](backup_archive_process.md#backup-id) of the backup, which is needed for the next step. For example, if the backup command outputs
`2024-02-22 02:17:47 UTC -- Backup 1708568263_2024_02_22_16.9.0-ce is done.`, then the backup ID is `1708568263_2024_02_22_16.9.0-ce`.
1. Check that the full backup created data in both the Gitaly backup bucket as well as the regular backup bucket.
1. Run the [backup command](backup_gitlab.md#backup-command) again, this time specifying [incremental backup of Git repositories](backup_gitlab.md#incremental-repository-backups), and a backup ID. Using the example ID from the previous step, the command is:
```shell
sudo gitlab-backup create REPOSITORIES_SERVER_SIDE=true SKIP=db INCREMENTAL=yes PREVIOUS_BACKUP=1708568263_2024_02_22_16.9.0-ce
```
The value of `PREVIOUS_BACKUP` is not used by this command, but the option itself is required. For the removal of this unnecessary requirement, see [issue 429141](https://gitlab.com/gitlab-org/gitlab/-/issues/429141).
1. Check that the incremental backup succeeded, and added data to object storage.
1. [Configure cron to make daily backups](backup_gitlab.md#configuring-cron-to-make-daily-backups). Edit the crontab for the `root` user:
```shell
sudo su -
crontab -e
```
1. There, add the following lines to schedule a backup every day at 2 AM. To limit the number of increments needed to restore a backup, a full backup of Git repositories is taken on the first day of each month, and an incremental backup on all other days:
```plaintext
0 2 1 * * /opt/gitlab/bin/gitlab-backup create REPOSITORIES_SERVER_SIDE=true SKIP=db CRON=1
0 2 2-31 * * /opt/gitlab/bin/gitlab-backup create REPOSITORIES_SERVER_SIDE=true SKIP=db INCREMENTAL=yes PREVIOUS_BACKUP=1708568263_2024_02_22_16.9.0-ce CRON=1
```
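Rather than hard-coding the backup ID in the crontab, a wrapper script could parse the ID out of the backup command's output. A minimal sketch, using the sample log line from the earlier step (the extraction pattern is an assumption based on that sample, not a stable interface):

```shell
# Sketch: extract the backup ID from a gitlab-backup log line.
line='2024-02-22 02:17:47 UTC -- Backup 1708568263_2024_02_22_16.9.0-ce is done.'
backup_id=$(printf '%s\n' "$line" | sed -n 's/.*Backup \(.*\) is done\..*/\1/p')
echo "$backup_id"
```

The resulting value could then be passed as `PREVIOUS_BACKUP="$backup_id"`, keeping in mind that the value is currently ignored by the command.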
{{< /tab >}}
{{< tab title="Helm chart (Kubernetes)" >}}
1. Configure Gitaly server-side backup destination on all Gitaly nodes by following
[Configure server-side backups](../gitaly/configure_gitaly.md#configure-server-side-backups). This bucket is used exclusively by Gitaly to store repository data.
1. While Gitaly backs up all Git repository data in its designated object storage bucket configured previously,
the backup utility tool (`gitlab-backup`) uploads additional backup data to a separate bucket. This data includes a `tar` file containing essential metadata for restores.
Ensure this backup data is properly uploaded to remote (cloud) storage by following
[Upload backups to a remote (cloud) storage](backup_gitlab.md#upload-backups-to-a-remote-cloud-storage) to set up the upload bucket.
1. (Optional) To solidify the durability of this backup data, back up both buckets configured previously with their respective object storage provider by adding them to
   [backups of object storage data](#configure-backup-of-object-storage-data).
1. SSH into a GitLab Rails node, which is a node that runs Puma or Sidekiq.
1. Take a full backup of your Git data. Use the `REPOSITORIES_SERVER_SIDE` variable and skip all other data:
```shell
kubectl exec <Toolbox pod name> -it -- backup-utility --repositories-server-side --skip db,builds,pages,registry,uploads,artifacts,lfs,packages,external_diffs,terraform_state,ci_secure_files
```
This causes Gitaly nodes to upload the Git data and some metadata to remote storage. See [Toolbox included tools](https://docs.gitlab.com/charts/charts/gitlab/toolbox/#toolbox-included-tools).
1. Check that the full backup created data in both the Gitaly backup bucket as well as the regular backup bucket. Incremental repository backup is not supported by `backup-utility` with server-side repository backup, see [charts issue 3421](https://gitlab.com/gitlab-org/charts/gitlab/-/issues/3421).
1. [Configure cron to make daily backups](https://docs.gitlab.com/charts/backup-restore/backup.html#cron-based-backup). Specifically, set `gitlab.toolbox.backups.cron.extraArgs` to include:
```shell
--repositories-server-side --skip db --skip repositories --skip uploads --skip builds --skip artifacts --skip pages --skip lfs --skip terraform_state --skip registry --skip packages --skip ci_secure_files
```
{{< /tab >}}
{{< /tabs >}}
### Configure backup of configuration files
If your configuration and secrets are defined outside of your deployment and then deployed into it, then the implementation of the backup strategy depends on your specific setup and requirements. As an example, you can store secrets in [AWS Secret Manager](https://aws.amazon.com/secrets-manager/) with [replication to multiple regions](https://docs.aws.amazon.com/secretsmanager/latest/userguide/create-manage-multi-region-secrets.html) and configure a script to back up secrets automatically.
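For example, a nightly export job might look like the following sketch. The secret name, destination file name, and JSON payload are illustrative; the real `aws secretsmanager` call is left commented because it depends on your account setup.

```shell
# Sketch: export GitLab secrets to a dated file in a restricted location.
ts=$(date +%F)
dest="gitlab-secrets-${ts}.json"

# Real export (secret name is hypothetical):
# aws secretsmanager get-secret-value --secret-id gitlab-rails-secrets \
#   --query SecretString --output text > "$dest"

printf '%s\n' '{"db_key_base":"REDACTED"}' > "$dest"   # stub payload for illustration
test -s "$dest" && echo "wrote $dest"
```

The destination should be an object storage location with tighter access controls than the regular backup bucket, because the secrets can decrypt database values.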
If your configuration and secrets are only defined inside your deployment:
1. [Storing configuration files](backup_gitlab.md#storing-configuration-files) describes how to extract configuration and secrets files.
1. These files should be uploaded to a separate, more restrictive object storage account.
## Restore a backup
Restore a backup of a GitLab instance.
### Prerequisites
Before restoring a backup:
1. Choose a [working destination GitLab instance](restore_gitlab.md#the-destination-gitlab-instance-must-already-be-working).
1. Ensure the destination GitLab instance is in a region where your AWS backups are stored.
1. Check that the [destination GitLab instance uses exactly the same version and type (CE or EE) of GitLab](restore_gitlab.md#the-destination-gitlab-instance-must-have-the-exact-same-version)
on which the backup data was created. For example, CE 15.1.4.
1. [Restore backed up secrets to the destination GitLab instance](restore_gitlab.md#gitlab-secrets-must-be-restored).
1. Ensure that the [destination GitLab instance has the same repository storages configured](restore_gitlab.md#certain-gitlab-configuration-must-match-the-original-backed-up-environment).
Additional storages are fine.
1. Ensure that [object storage is configured](restore_gitlab.md#certain-gitlab-configuration-must-match-the-original-backed-up-environment).
1. To use new secrets or configuration, and to avoid dealing with any unexpected configuration changes during restore:
- Linux package installations on all nodes:
1. [Reconfigure](../restart_gitlab.md#reconfigure-a-linux-package-installation) the destination GitLab instance.
1. [Restart](../restart_gitlab.md#restart-a-linux-package-installation) the destination GitLab instance.
- Helm chart (Kubernetes) installations:
1. On all GitLab Linux package nodes, run:
```shell
sudo gitlab-ctl reconfigure
sudo gitlab-ctl start
```
1. Make sure you have a running GitLab instance by deploying the charts.
Ensure the Toolbox pod is enabled and running by executing the following command:
```shell
kubectl get pods -lrelease=RELEASE_NAME,app=toolbox
```
1. The Webservice, Sidekiq, and Toolbox pods must be restarted.
The safest way to restart those pods is to run:
```shell
kubectl delete pods -lapp=sidekiq,release=<helm release name>
kubectl delete pods -lapp=webservice,release=<helm release name>
kubectl delete pods -lapp=toolbox,release=<helm release name>
```
1. Confirm the destination GitLab instance still works. For example:
- Make requests to the [health check endpoints](../monitoring/health_check.md).
- [Run GitLab check Rake tasks](../raketasks/maintenance.md#check-gitlab-configuration).
1. Stop GitLab services which connect to the PostgreSQL database.
- Linux package installations on all nodes running Puma or Sidekiq, run:
```shell
sudo gitlab-ctl stop
```
- Helm chart (Kubernetes) installations:
1. Note the current number of replicas for database clients for subsequent restart:
```shell
kubectl get deploy -n <namespace> -lapp=sidekiq,release=<helm release name> -o jsonpath='{.items[].spec.replicas}{"\n"}'
kubectl get deploy -n <namespace> -lapp=webservice,release=<helm release name> -o jsonpath='{.items[].spec.replicas}{"\n"}'
kubectl get deploy -n <namespace> -lapp=prometheus,release=<helm release name> -o jsonpath='{.items[].spec.replicas}{"\n"}'
```
1. Stop the clients of the database to prevent locks interfering with the restore process:
```shell
kubectl scale deploy -lapp=sidekiq,release=<helm release name> -n <namespace> --replicas=0
kubectl scale deploy -lapp=webservice,release=<helm release name> -n <namespace> --replicas=0
kubectl scale deploy -lapp=prometheus,release=<helm release name> -n <namespace> --replicas=0
```
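The replica counts and scale-down commands above can be wrapped in a small script that records the counts for the later restart. A sketch, with the `kubectl` queries commented out and a stub value standing in for a live query (namespace and release names are placeholders):

```shell
# Sketch: record replica counts before scaling down, for use when restarting.
# sidekiq_replicas=$(kubectl get deploy -n "$NS" -lapp=sidekiq,release="$REL" \
#   -o jsonpath='{.items[].spec.replicas}')
sidekiq_replicas=2   # stub for illustration
printf 'sidekiq=%s\n' "$sidekiq_replicas" > replicas.txt

# Later, restore the original count:
# kubectl scale deploy -lapp=sidekiq,release="$REL" -n "$NS" \
#   --replicas="$(sed -n 's/^sidekiq=//p' replicas.txt)"
cat replicas.txt
```

Repeat the same pattern for the `webservice` and `prometheus` deployments.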
### Restore object storage data
{{< tabs >}}
{{< tab title="AWS" >}}
Each bucket exists as a separate backup within AWS and each backup can be restored to an existing or
new bucket.
1. To restore buckets, an IAM role with the correct permissions is required:
- `AWSBackupServiceRolePolicyForBackup`
- `AWSBackupServiceRolePolicyForRestores`
- `AWSBackupServiceRolePolicyForS3Restore`
- `AWSBackupServiceRolePolicyForS3Backup`
1. If existing buckets are being used, they must have
[Access Control Lists enabled](https://docs.aws.amazon.com/AmazonS3/latest/userguide/managing-acls.html).
1. [Restore the S3 buckets using built-in tooling](https://docs.aws.amazon.com/aws-backup/latest/devguide/restoring-s3.html).
1. You can move on to [Restore PostgreSQL data](#restore-postgresql-data) while the restore job is
running.
{{< /tab >}}
{{< tab title="Google" >}}
1. [Create Storage Transfer Service jobs](https://cloud.google.com/storage-transfer/docs/create-transfers) to transfer backed up data to the GitLab buckets.
1. You can move on to [Restore PostgreSQL data](#restore-postgresql-data) while the transfer jobs are
running.
{{< /tab >}}
{{< /tabs >}}
### Restore PostgreSQL data
{{< tabs >}}
{{< tab title="AWS" >}}
1. [Restore the AWS RDS database using built-in tooling](https://docs.aws.amazon.com/aws-backup/latest/devguide/restoring-rds.html),
which creates a new RDS instance.
1. Because the new RDS instance has a different endpoint, you must reconfigure the destination GitLab instance
to point to the new database:
- For Linux package installations, follow
[Using a non-packaged PostgreSQL database management server](https://docs.gitlab.com/omnibus/settings/database.html#using-a-non-packaged-postgresql-database-management-server).
- For Helm chart (Kubernetes) installations, follow
[Configure the GitLab chart with an external database](https://docs.gitlab.com/charts/advanced/external-db/).
1. Before moving on, wait until the new RDS instance is created and ready to use.
{{< /tab >}}
{{< tab title="Google" >}}
1. [Restore the Google Cloud SQL database using built-in tooling](https://cloud.google.com/sql/docs/postgres/backup-recovery/restoring).
1. If you restore to a new database instance, then reconfigure GitLab to point to the new database:
- For Linux package installations, follow
[Using a non-packaged PostgreSQL database management server](https://docs.gitlab.com/omnibus/settings/database.html#using-a-non-packaged-postgresql-database-management-server).
- For Helm chart (Kubernetes) installations, follow
[Configure the GitLab chart with an external database](https://docs.gitlab.com/charts/advanced/external-db/).
1. Before moving on, wait until the Cloud SQL instance is ready to use.
{{< /tab >}}
{{< /tabs >}}
### Restore Git repositories
First, as part of [Restore object storage data](#restore-object-storage-data), you should have already:
- Restored a bucket containing the Gitaly server-side backups of Git repositories.
- Restored a bucket containing the `*_gitlab_backup.tar` files.
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
1. SSH into a GitLab Rails node, which is a node that runs Puma or Sidekiq.
1. In your backup bucket, choose a `*_gitlab_backup.tar` file based on its timestamp, aligned with the PostgreSQL and object storage data that you restored.
1. Download the `tar` file to `/var/opt/gitlab/backups/`.
1. Restore the backup, specifying the ID of the backup you wish to restore, omitting `_gitlab_backup.tar` from the name:
```shell
# This command will overwrite the contents of your GitLab database!
sudo gitlab-backup restore BACKUP=11493107454_2018_04_25_10.6.4-ce SKIP=db
```
If there's a GitLab version mismatch between your backup tar file and the installed version of
GitLab, the restore command aborts with an error message.
Install the [correct GitLab version](https://packages.gitlab.com/gitlab/), and then try again.
1. Reconfigure, start, and [check](../raketasks/maintenance.md#check-gitlab-configuration) GitLab:
1. In all PostgreSQL nodes, run:
```shell
sudo gitlab-ctl reconfigure
```
1. In all Puma or Sidekiq nodes, run:
```shell
sudo gitlab-ctl start
```
1. In one Puma or Sidekiq node, run:
```shell
sudo gitlab-rake gitlab:check SANITIZE=true
```
1. Check that
[database values can be decrypted](../raketasks/check.md#verify-database-values-can-be-decrypted-using-the-current-secrets), especially if `/etc/gitlab/gitlab-secrets.json` was restored, or if a different server is the
target for the restore:
In a Puma or Sidekiq node, run:
```shell
sudo gitlab-rake gitlab:doctor:secrets
```
1. For added assurance, you can perform
[an integrity check on the uploaded files](../raketasks/check.md#uploaded-files-integrity):
In a Puma or Sidekiq node, run:
```shell
sudo gitlab-rake gitlab:artifacts:check
sudo gitlab-rake gitlab:lfs:check
sudo gitlab-rake gitlab:uploads:check
```
If missing or corrupted files are found, it does not always mean the backup and restore process failed.
For example, the files might be missing or corrupted on the source GitLab instance. You might need to cross-reference prior backups.
If you are migrating GitLab to a new environment, you can run the same checks on the source GitLab instance to determine whether
the integrity check result is preexisting or related to the backup and restore process.
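When choosing the `*_gitlab_backup.tar` file in the earlier step, note that archive names begin with a Unix timestamp, so the newest backup in a listing can be found with a numeric sort. A sketch with illustrative file names:

```shell
# Sketch: pick the newest backup archive by its leading Unix-timestamp prefix.
newest=$(printf '%s\n' \
  1706148000_2024_01_25_16.8.0-ce_gitlab_backup.tar \
  1708568263_2024_02_22_16.9.0-ce_gitlab_backup.tar |
  sort -t_ -k1,1n | tail -n 1)
echo "$newest"
```

On a real node, the same sort applies to the output of `ls /var/opt/gitlab/backups/*_gitlab_backup.tar`, though you should still confirm the choice aligns with the PostgreSQL and object storage data you restored.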
{{< /tab >}}
{{< tab title="Helm chart (Kubernetes)" >}}
1. Open a shell in the Toolbox pod.
1. In your backup bucket, choose a `*_gitlab_backup.tar` file based on its timestamp, aligned with the PostgreSQL and object storage data that you restored.
1. Download the `tar` file to `/var/opt/gitlab/backups/`.
1. Restore the backup, specifying the ID of the backup you wish to restore, omitting `_gitlab_backup.tar` from the name:
```shell
# This command will overwrite the contents of Gitaly!
kubectl exec <Toolbox pod name> -it -- backup-utility --restore -t 11493107454_2018_04_25_10.6.4-ce --skip db,builds,pages,registry,uploads,artifacts,lfs,packages,external_diffs,terraform_state,ci_secure_files
```
If there's a GitLab version mismatch between your backup tar file and the installed version of
GitLab, the restore command aborts with an error message.
Install the [correct GitLab version](https://packages.gitlab.com/gitlab/), and then try again.
1. Restart and [check](../raketasks/maintenance.md#check-gitlab-configuration) GitLab:
1. Start the stopped deployments, using the number of replicas noted in [Prerequisites](#prerequisites):
```shell
kubectl scale deploy -lapp=sidekiq,release=<helm release name> -n <namespace> --replicas=<original value>
kubectl scale deploy -lapp=webservice,release=<helm release name> -n <namespace> --replicas=<original value>
kubectl scale deploy -lapp=prometheus,release=<helm release name> -n <namespace> --replicas=<original value>
```
1. In the Toolbox pod, run:
```shell
sudo gitlab-rake gitlab:check SANITIZE=true
```
1. Check that
[database values can be decrypted](../raketasks/check.md#verify-database-values-can-be-decrypted-using-the-current-secrets), especially if `/etc/gitlab/gitlab-secrets.json` was restored, or if a different server is the
target for the restore:
In the Toolbox pod, run:
```shell
sudo gitlab-rake gitlab:doctor:secrets
```
1. For added assurance, you can perform
[an integrity check on the uploaded files](../raketasks/check.md#uploaded-files-integrity):
These commands can take a long time because they iterate over all rows. So, run the following commands in the GitLab Rails node,
rather than a Toolbox pod:
```shell
sudo gitlab-rake gitlab:artifacts:check
sudo gitlab-rake gitlab:lfs:check
sudo gitlab-rake gitlab:uploads:check
```
If missing or corrupted files are found, it does not always mean the backup and restore process failed.
For example, the files might be missing or corrupted on the source GitLab instance. You might need to cross-reference prior backups.
If you are migrating GitLab to a new environment, you can run the same checks on the source GitLab instance to determine whether
the integrity check result is preexisting or related to the backup and restore process.
{{< /tab >}}
{{< /tabs >}}
The restoration should be complete.
---
stage: Data Access
group: Durability
info: To determine the technical writer assigned to the Stage/Group associated with
this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Back up and restore overview
---
{{< details >}}
- Tier: Free, Premium, Ultimate
- Offering: GitLab Self-Managed
{{< /details >}}
Your GitLab instance contains critical data for your software development or organization.
It is important to have a disaster recovery plan that includes regular backups for:
- Data protection: Safeguard against data loss due to hardware failures, software bugs, or accidental deletions.
- Disaster recovery: Restore GitLab instances and data in case of adverse events.
- Version control: Provide historical snapshots that enable rollbacks to previous states.
- Compliance: Meet the regulatory requirements of specific industries.
- Migration: Facilitate moving GitLab to new servers or environments.
- Testing and development: Create copies for testing upgrades or new features without risk to production data.
{{< alert type="note" >}}
This documentation applies to GitLab Community and Enterprise Edition.
While data security is ensured for GitLab.com, you can't use these methods to export or back up your data from GitLab.com.
{{< /alert >}}
## Back up GitLab
The procedures to back up your GitLab instance vary based on your
deployment's specific configuration and usage patterns.
Factors such as data types, storage locations, and volume influence the backup method,
storage options, and restoration process. For more information, see [Back up GitLab](backup_gitlab.md).
## Restore GitLab
The procedures to restore your GitLab instance vary based on your
deployment's specific configuration and usage patterns.
Factors such as data types, storage locations, and volume influence the restoration process.
For more information, see [Restore GitLab](restore_gitlab.md).
## Migrate to a new server
Use the GitLab backup and restore features to migrate your instance to a new server. For GitLab Geo deployments,
consider [Geo disaster recovery for planned failover](../geo/disaster_recovery/planned_failover.md).
For more information, see [Migrate to a new server](migrate_to_new_server.md).
## Back up and restore large reference architectures
It is important to back up and restore large reference architectures regularly.
For information on how to configure and restore backups for object storage data,
PostgreSQL data, and Git repositories, see [Back up and restore large reference architectures](backup_large_reference_architectures.md).
## Backup archive process
For data preservation and system integrity, GitLab creates a backup archive. For detailed information
on how GitLab creates this archive, see [Backup archive process](backup_archive_process.md).
## Related topics
- [Geo](../geo/_index.md)
- [Disaster Recovery (Geo)](../geo/disaster_recovery/_index.md)
- [Migrating GitLab groups](../../user/group/import/_index.md)
- [Import and migrate projects](../../user/project/import/_index.md)
- [GitLab Linux package (Omnibus) - Backup and Restore](https://docs.gitlab.com/omnibus/settings/backups.html)
- [GitLab Helm chart - Backup and Restore](https://docs.gitlab.com/charts/backup-restore/)
- [GitLab Operator - Backup and Restore](https://docs.gitlab.com/operator/backup_and_restore.html)
---
stage: Plan
group: Knowledge
info: To determine the technical writer assigned to the Stage/Group associated with
  this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Wiki settings
description: Configure Wiki settings.
---
{{< details >}}
- Tier: Free, Premium, Ultimate
- Offering: GitLab Self-Managed
{{< /details >}}
Adjust the wiki settings of your GitLab instance.
## Wiki page content size limit
You can set a maximum content size limit for wiki pages. This limit can prevent
abuse of the feature. The default value is **52428800 bytes** (50 MB).
### How does it work?
The content size limit is applied when a wiki page is created or updated
through the GitLab UI or API. Local changes pushed via Git are not validated.
To avoid breaking any existing wiki pages, the limit doesn't take effect until a wiki page
is edited again and its content changes.
### Wiki page content size limit configuration
This setting is not available through the [**Admin** area settings](../settings/_index.md).
To configure this setting, use either the Rails console
or the [Application settings API](../../api/settings.md).
{{< alert type="note" >}}
The value of the limit must be in bytes. The minimum value is 1024 bytes.
{{< /alert >}}
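For reference, the default of 52428800 bytes is 50 binary megabytes, the same value that `50.megabytes` produces in the Rails console example below:

```ruby
# 50 MB expressed in bytes, matching the default wiki page content size limit.
limit_bytes = 50 * 1024 * 1024
puts limit_bytes
```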
#### Through the Rails console
To configure this setting through the Rails console:
1. Start the Rails console:
```shell
# For Omnibus installations
sudo gitlab-rails console
# For installations from source
sudo -u git -H bundle exec rails console -e production
```
1. Update the wiki page maximum content size:
```ruby
ApplicationSetting.first.update!(wiki_page_max_content_bytes: 50.megabytes)
```
To retrieve the current value, start the Rails console and run:
```ruby
Gitlab::CurrentSettings.wiki_page_max_content_bytes
```
#### Through the API
To set the wiki page size limit through the Application Settings API, use a command,
as you would to [update any other setting](../../api/settings.md#update-application-settings):
```shell
curl --request PUT --header "PRIVATE-TOKEN: <your_access_token>" "https://gitlab.example.com/api/v4/application/settings?wiki_page_max_content_bytes=52428800"
```
You can also use the API to [retrieve the current value](../../api/settings.md#get-details-on-current-application-settings):
```shell
curl --header "PRIVATE-TOKEN: <your_access_token>" "https://gitlab.example.com/api/v4/application/settings"
```
### Reduce wiki repository size
The wiki counts as part of the [namespace storage size](../settings/account_and_limit_settings.md),
so you should keep your wiki repositories as compact as possible.
For more information about tools to compact repositories,
read the documentation on [reducing repository size](../../user/project/repository/repository_size.md#methods-to-reduce-repository-size).
## Allow URI includes for AsciiDoc
{{< history >}}
- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/348687) in GitLab 16.1.
{{< /history >}}
Include directives import content from separate pages or external URLs,
and display them as part of the content of the current document. To enable
AsciiDoc includes, enable the feature through the Rails console or the API.
### Through the Rails console
To configure this setting through the Rails console:
1. Start the Rails console:
```shell
# For Omnibus installations
sudo gitlab-rails console
# For installations from source
sudo -u git -H bundle exec rails console -e production
```
1. Update the wiki to allow URI includes for AsciiDoc:
```ruby
ApplicationSetting.first.update!(wiki_asciidoc_allow_uri_includes: true)
```
To check if includes are enabled, start the Rails console and run:
```ruby
Gitlab::CurrentSettings.wiki_asciidoc_allow_uri_includes
```
### Through the API
To set the wiki to allow URI includes for AsciiDoc through the
[Application Settings API](../../api/settings.md#update-application-settings),
use a `curl` command:
```shell
curl --request PUT --header "PRIVATE-TOKEN: <your_access_token>" \
"https://gitlab.example.com/api/v4/application/settings?wiki_asciidoc_allow_uri_includes=true"
```
## Related topics
- [User documentation for wikis](../../user/project/wiki/_index.md)
- [Project wikis API](../../api/wikis.md)
- [Group wikis API](../../api/group_wikis.md)
---
stage: Data Access
group: Database Operations
info: To determine the technical writer assigned to the Stage/Group associated with
  this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Standalone PostgreSQL for Linux package installations
---
{{< details >}}
- Tier: Free, Premium, Ultimate
- Offering: GitLab Self-Managed
{{< /details >}}
If you wish to have your database service hosted separately from your GitLab
application servers, you can do this using the PostgreSQL binaries packaged
together with the Linux package. This is recommended as part of our
[reference architecture for up to 40 RPS or 2,000 users](../reference_architectures/2k_users.md).
## Setting it up
1. SSH in to the PostgreSQL server.
1. [Download and install](https://about.gitlab.com/install/) the Linux
package you want using steps 1 and 2 from the GitLab downloads page. Do not complete any other steps on the
download page.
1. Generate a password hash for PostgreSQL. This assumes you are using the default
username of `gitlab` (recommended). The command requests a password
and confirmation. Use the value that is output by this command in the next
step as the value of `POSTGRESQL_PASSWORD_HASH`.
```shell
sudo gitlab-ctl pg-password-md5 gitlab
```
1. Edit `/etc/gitlab/gitlab.rb` and add the contents below, updating placeholder
values appropriately.
- `POSTGRESQL_PASSWORD_HASH` - The value output from the previous step
- `APPLICATION_SERVER_IP_BLOCKS` - A space delimited list of IP subnets or IP
addresses of the GitLab application servers that connect to the
database. Example: `%w(123.123.123.123/32 123.123.123.234/32)`
```ruby
# Disable all components except PostgreSQL
roles(['postgres_role'])
prometheus['enable'] = false
alertmanager['enable'] = false
pgbouncer_exporter['enable'] = false
redis_exporter['enable'] = false
gitlab_exporter['enable'] = false
postgresql['listen_address'] = '0.0.0.0'
postgresql['port'] = 5432
# Replace POSTGRESQL_PASSWORD_HASH with a generated md5 value
postgresql['sql_user_password'] = 'POSTGRESQL_PASSWORD_HASH'
   # Replace APPLICATION_SERVER_IP_BLOCKS with the network addresses of your
   # GitLab application servers, for example %w(123.123.123.123/32 123.123.123.234/32)
postgresql['trust_auth_cidr_addresses'] = %w(APPLICATION_SERVER_IP_BLOCKS)
# Disable automatic database migrations
gitlab_rails['auto_migrate'] = false
```
1. [Reconfigure GitLab](../restart_gitlab.md#reconfigure-a-linux-package-installation) for the changes to take effect.
1. Note the PostgreSQL node's IP address or hostname, port, and
plain text password. These are necessary when configuring the GitLab
application servers later.
1. [Enable monitoring](replication_and_failover.md#enable-monitoring).
Advanced configuration options are supported and can be added if
needed.
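As background, the value produced by `gitlab-ctl pg-password-md5` above is the standard PostgreSQL MD5 password format: the string `md5` followed by the MD5 digest of the password concatenated with the username. A Ruby sketch for illustration only; in practice, always use the `gitlab-ctl` command:

```ruby
require 'digest'

# PostgreSQL MD5 password hashes are "md5" + MD5(password + username).
def pg_md5_hash(password, username)
  'md5' + Digest::MD5.hexdigest(password + username)
end

puts pg_md5_hash('examplepassword', 'gitlab')
```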
---
stage: Data Access
group: Database Frameworks
info: To determine the technical writer assigned to the Stage/Group associated with
  this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Database Load Balancing
---
{{< details >}}
- Tier: Free, Premium, Ultimate
- Offering: GitLab Self-Managed
{{< /details >}}
With Database Load Balancing, read-only queries can be distributed across
multiple PostgreSQL nodes to increase performance.
This functionality is provided natively in GitLab Rails and Sidekiq where
they can be configured to balance their database read queries in a round-robin approach,
without any external dependencies:
```plantuml
@startuml
!theme plain
card "**Internal Load Balancer**" as ilb
skinparam linetype ortho
together {
collections "**GitLab Rails** x3" as gitlab
collections "**Sidekiq** x4" as sidekiq
}
collections "**Consul** x3" as consul
card "Database" as database {
collections "**PGBouncer x3**\n//Consul//" as pgbouncer
card "**PostgreSQL** //Primary//\n//Patroni//\n//PgBouncer//\n//Consul//" as postgres_primary
collections "**PostgreSQL** //Secondary// **x2**\n//Patroni//\n//PgBouncer//\n//Consul//" as postgres_secondary
pgbouncer --> postgres_primary
postgres_primary .r-> postgres_secondary
}
gitlab --> ilb
gitlab -[hidden]-> pgbouncer
gitlab .[norank]-> postgres_primary
gitlab .[norank]-> postgres_secondary
sidekiq --> ilb
sidekiq -[hidden]-> pgbouncer
sidekiq .[norank]-> postgres_primary
sidekiq .[norank]-> postgres_secondary
ilb --> pgbouncer
consul -r-> pgbouncer
consul .[norank]r-> postgres_primary
consul .[norank]r-> postgres_secondary
@enduml
```
## Requirements to enable Database Load Balancing
To enable Database Load Balancing, make sure that:
- The HA PostgreSQL setup has one or more secondary nodes replicating the primary.
- Each PostgreSQL node is connected with the same credentials and on the same port.
For Linux package installations, you also need PgBouncer configured on each PostgreSQL node to pool
all load-balanced connections when [configuring a multi-node setup](replication_and_failover.md).
## Configuring Database Load Balancing
Database Load Balancing can be configured in one of two ways:
- (Recommended) [Hosts](#hosts): a list of PostgreSQL hosts.
- [Service Discovery](#service-discovery): a DNS record that returns a list of PostgreSQL hosts.
### Hosts
<!-- Including the Primary host in Database Load Balancing is now recommended for improved performance - Approved by the Reference Architecture and Database groups. -->
To configure a list of hosts, perform these steps on all GitLab Rails and Sidekiq
nodes for each environment you want to balance:
1. Edit the `/etc/gitlab/gitlab.rb` file.
1. In `gitlab_rails['db_load_balancing']`, create the array of the database
hosts you want to balance. For example, on
an environment with PostgreSQL running on the hosts `primary.example.com`,
`secondary1.example.com`, `secondary2.example.com`:
```ruby
gitlab_rails['db_load_balancing'] = { 'hosts' => ['primary.example.com', 'secondary1.example.com', 'secondary2.example.com'] }
```
These hosts must be reachable on the same port configured with `gitlab_rails['db_port']`.
1. Save the file and [reconfigure GitLab](../restart_gitlab.md#reconfigure-a-linux-package-installation).
{{< alert type="note" >}}
Adding the primary to the hosts list is optional, but recommended.
This makes the primary eligible for load-balanced read queries, improving system performance
when the primary has capacity for these queries.
Very high-traffic instances may not have capacity on the primary for it to serve as a read replica.
The primary will be used for write queries whether or not it is present in this list.
{{< /alert >}}
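The round-robin distribution described above can be sketched as follows (a hypothetical illustration, not GitLab's actual implementation):

```ruby
# Minimal round-robin chooser: each read query goes to the next host
# in the configured list, wrapping around at the end.
class RoundRobin
  def initialize(hosts)
    @hosts = hosts
    @index = 0
  end

  def next_host
    host = @hosts[@index % @hosts.size]
    @index += 1
    host
  end
end

balancer = RoundRobin.new(%w[primary.example.com secondary1.example.com secondary2.example.com])
4.times { puts balancer.next_host }
```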
### Service Discovery
Service discovery allows GitLab to automatically retrieve a list of PostgreSQL
hosts to use. It periodically
checks a DNS `A` record, using the IPs returned by this record as the addresses
for the secondaries. For service discovery to work, all you need is a DNS server
and an `A` record containing the IP addresses of your secondaries.
When using a Linux package installation, the provided [Consul](../consul.md) service works as
a DNS server and returns PostgreSQL addresses via the `postgresql-ha.service.consul`
record. For example:
1. On each GitLab Rails / Sidekiq node, edit `/etc/gitlab/gitlab.rb` and add the following:
   ```ruby
   gitlab_rails['db_load_balancing'] = { 'discover' => {
       'nameserver' => 'localhost',
       'record' => 'postgresql-ha.service.consul',
       'record_type' => 'A',
       'port' => '8600',
       'interval' => '60',
       'disconnect_timeout' => '120'
     }
   }
   ```
1. Save the file and [reconfigure GitLab](../restart_gitlab.md#reconfigure-a-linux-package-installation) for the changes to take effect.
| Option | Description | Default |
|----------------------|---------------------------------------------------------------------------------------------------|-----------|
| `nameserver` | The nameserver to use for looking up the DNS record. | localhost |
| `record` | The record to look up. This option is required for service discovery to work. | |
| `record_type` | Optional record type to look up. Can be either `A` or `SRV`. | `A` |
| `port` | The port of the nameserver. | 8600 |
| `interval` | The minimum time in seconds between checking the DNS record. | 60 |
| `disconnect_timeout` | The time in seconds after which an old connection is closed, after the list of hosts was updated. | 120 |
| `use_tcp` | Look up DNS resources using TCP instead of UDP. | false |
| `max_replica_pools` | The maximum number of replicas each Rails process connects to. This is useful if you run a lot of Postgres replicas and a lot of Rails processes because without this limit every Rails process connects to every replica by default. The default behavior is unlimited if not set. | nil |
If `record_type` is set to `SRV`, GitLab continues to use a round-robin algorithm
and ignores the `weight` and `priority` in the record. Because `SRV` records usually
return hostnames instead of IPs, GitLab looks for the IPs of the returned hostnames
in the additional section of the `SRV` response. If no IP is found for a hostname, GitLab
queries the configured `nameserver` for an `ANY` record for each such hostname, looking for `A` or `AAAA`
records, and eventually drops the hostname from rotation if its IP can't be resolved.
The `interval` value specifies the minimum time between checks. If the `A`
record has a TTL greater than this value, then service discovery honors said
TTL. For example, if the TTL of the `A` record is 90 seconds, then service
discovery waits at least 90 seconds before checking the `A` record again.
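In other words, the effective delay before the next DNS check is the larger of `interval` and the record's TTL; a small sketch of that rule:

```ruby
# Service discovery waits for whichever is longer: the configured
# interval or the DNS record's TTL (illustrative sketch).
def next_check_delay(interval_seconds, ttl_seconds)
  [interval_seconds, ttl_seconds].max
end

puts next_check_delay(60, 90)  # a TTL of 90 s wins over the 60 s interval
```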
When the list of hosts is updated, it might take a while for the old connections
to be terminated. The `disconnect_timeout` setting can be used to enforce an
upper limit on the time it takes to terminate all old database connections.
### Handling stale reads
{{< history >}}
- [Moved](https://gitlab.com/gitlab-org/gitlab/-/issues/327902) from GitLab Premium to GitLab Free in 14.0.
{{< /history >}}
To prevent reading from an outdated secondary, the load balancer checks whether it
is in sync with the primary. If the data is recent enough, the
secondary is used; otherwise it is ignored. To reduce the overhead of
these checks, they are performed only at certain intervals.
There are three configuration options that influence this behavior:
| Option | Description | Default |
|------------------------------|----------------------------------------------------------------------------------------------------------------|------------|
| `max_replication_difference` | The amount of data (in bytes) a secondary is allowed to lag behind when it hasn't replicated data for a while. | 8 MB |
| `max_replication_lag_time` | The maximum number of seconds a secondary is allowed to lag behind before we stop using it. | 60 seconds |
| `replica_check_interval` | The minimum number of seconds we have to wait before checking the status of a secondary. | 60 seconds |
The defaults should be sufficient for most users.
To configure these options with a hosts list, use the following example:
```ruby
gitlab_rails['db_load_balancing'] = {
'hosts' => ['primary.example.com', 'secondary1.example.com', 'secondary2.example.com'],
'max_replication_difference' => 16777216, # 16 MB
'max_replication_lag_time' => 30,
'replica_check_interval' => 30
}
```
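A secondary passes the stale-read check only while its lag stays within both bounds; conceptually (a hypothetical sketch of the logic, not the actual implementation):

```ruby
# A replica is eligible for reads only if its replication lag is within
# both the byte and the time thresholds (hypothetical sketch).
def replica_usable?(lag_bytes, lag_seconds,
                    max_replication_difference: 8 * 1024 * 1024,
                    max_replication_lag_time: 60)
  lag_bytes <= max_replication_difference && lag_seconds <= max_replication_lag_time
end

puts replica_usable?(1024, 5)              # true: well within both limits
puts replica_usable?(32 * 1024 * 1024, 5)  # false: too many bytes behind
```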
## Logging
The load balancer logs various events in
[`database_load_balancing.log`](../logs/_index.md#database_load_balancinglog), such as:
- When a host is marked as offline
- When a host comes back online
- When all secondaries are offline
- When a read is retried on a different host due to a query conflict
The log is structured with each entry a JSON object containing at least:
- An `event` field useful for filtering.
- A human-readable `message` field.
- Some event-specific metadata, for example `db_host`.
- Contextual information that is always logged, for example `severity` and `time`.
For example:
```json
{"severity":"INFO","time":"2019-09-02T12:12:01.728Z","correlation_id":"abcdefg","event":"host_online","message":"Host came back online","db_host":"111.222.333.444","db_port":null,"tag":"rails.database_load_balancing","environment":"production","hostname":"web-example-1","fqdn":"gitlab.example.com","path":null,"params":null}
```
## Implementation Details
### Balancing queries
Read-only `SELECT` queries balance among all the given hosts.
Everything else (including transactions) executes on the primary.
Queries such as `SELECT ... FOR UPDATE` are also executed on the primary.
### Prepared statements
Prepared statements don't work well with load balancing and are disabled
automatically when load balancing is enabled. This shouldn't impact
response timings.
### Primary sticking
After a write has been performed, GitLab sticks to using the primary for a
certain period of time, scoped to the user that performed the write. GitLab
reverts back to using secondaries when they have either caught up, or after 30
seconds.
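Conceptually, the per-user sticking behaves like this (hypothetical sketch, not GitLab's actual implementation):

```ruby
# After a user writes, reads for that user go to the primary until the
# stick expires (at most 30 seconds) or the secondaries catch up.
STICK_DURATION = 30  # seconds

class PrimarySticking
  def initialize
    @stuck_until = Hash.new(0)
  end

  def record_write(user_id, now)
    @stuck_until[user_id] = now + STICK_DURATION
  end

  def use_primary?(user_id, now)
    now < @stuck_until[user_id]
  end
end

sticking = PrimarySticking.new
sticking.record_write(42, 100)
puts sticking.use_primary?(42, 110)  # true: within 30 s of the write
puts sticking.use_primary?(42, 140)  # false: the stick has expired
```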
### Failover handling
In the event of a failover or an unresponsive database, the load balancer
tries to use the next available host. If no secondaries are available the
operation is performed on the primary instead.
If a connection error occurs while writing data, the
operation retries up to 3 times using an exponential back-off.
When using load balancing, you should be able to safely restart a database server
without it immediately leading to errors being presented to the users.
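The retry-with-back-off behavior described above can be sketched as follows (hypothetical illustration; the delays shown are not the exact values GitLab uses):

```ruby
# Retry a block up to 3 times, doubling the delay between attempts.
def with_retries(max_attempts: 3, base_delay: 0.1)
  attempts = 0
  begin
    attempts += 1
    yield
  rescue StandardError
    raise if attempts >= max_attempts
    sleep(base_delay * (2**(attempts - 1)))  # 0.1 s, 0.2 s, ...
    retry
  end
end

calls = 0
with_retries { calls += 1; raise 'transient error' if calls < 3 }
puts calls  # 3: succeeded on the third attempt
```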
### Development guide
For a detailed development guide on database load balancing,
see the development documentation.
|
---
stage: Data Access
group: Database Frameworks
info: To determine the technical writer assigned to the Stage/Group associated with
this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Database Load Balancing
breadcrumbs:
- doc
- administration
- postgresql
---
{{< details >}}
- Tier: Free, Premium, Ultimate
- Offering: GitLab Self-Managed
{{< /details >}}
With Database Load Balancing, read-only queries can be distributed across
multiple PostgreSQL nodes to increase performance.
This functionality is provided natively in GitLab Rails and Sidekiq where
they can be configured to balance their database read queries in a round-robin approach,
without any external dependencies:
```plantuml
@startuml
!theme plain
card "**Internal Load Balancer**" as ilb
skinparam linetype ortho
together {
collections "**GitLab Rails** x3" as gitlab
collections "**Sidekiq** x4" as sidekiq
}
collections "**Consul** x3" as consul
card "Database" as database {
collections "**PGBouncer x3**\n//Consul//" as pgbouncer
card "**PostgreSQL** //Primary//\n//Patroni//\n//PgBouncer//\n//Consul//" as postgres_primary
collections "**PostgreSQL** //Secondary// **x2**\n//Patroni//\n//PgBouncer//\n//Consul//" as postgres_secondary
pgbouncer --> postgres_primary
postgres_primary .r-> postgres_secondary
}
gitlab --> ilb
gitlab -[hidden]-> pgbouncer
gitlab .[norank]-> postgres_primary
gitlab .[norank]-> postgres_secondary
sidekiq --> ilb
sidekiq -[hidden]-> pgbouncer
sidekiq .[norank]-> postgres_primary
sidekiq .[norank]-> postgres_secondary
ilb --> pgbouncer
consul -r-> pgbouncer
consul .[norank]r-> postgres_primary
consul .[norank]r-> postgres_secondary
@enduml
```
## Requirements to enable Database Load Balancing
To enable Database Load Balancing, make sure that:
- The HA PostgreSQL setup has one or more secondary nodes replicating the primary.
- Each PostgreSQL node is connected with the same credentials and on the same port.
For Linux package installations, you also need PgBouncer configured on each PostgreSQL node to pool
all load-balanced connections when [configuring a multi-node setup](replication_and_failover.md).
## Configuring Database Load Balancing
Database Load Balancing can be configured in one of two ways:
- (Recommended) [Hosts](#hosts): a list of PostgreSQL hosts.
- [Service Discovery](#service-discovery): a DNS record that returns a list of PostgreSQL hosts.
### Hosts
<!-- Including the Primary host in Database Load Balancing is now recommended for improved performance - Approved by the Reference Architecture and Database groups. -->
To configure a list of hosts, perform these steps on all GitLab Rails and Sidekiq
nodes for each environment you want to balance:
1. Edit the `/etc/gitlab/gitlab.rb` file.
1. In `gitlab_rails['db_load_balancing']`, create the array of the database
hosts you want to balance. For example, on
an environment with PostgreSQL running on the hosts `primary.example.com`,
`secondary1.example.com`, `secondary2.example.com`:
```ruby
gitlab_rails['db_load_balancing'] = { 'hosts' => ['primary.example.com', 'secondary1.example.com', 'secondary2.example.com'] }
```
These hosts must be reachable on the same port configured with `gitlab_rails['db_port']`.
1. Save the file and [reconfigure GitLab](../restart_gitlab.md#reconfigure-a-linux-package-installation).
{{< alert type="note" >}}
Adding the primary to the hosts list is optional, but recommended.
This makes the primary eligible for load-balanced read queries, improving system performance
when the primary has capacity for these queries.
Very high-traffic instances may not have capacity on the primary for it to serve as a read replica.
The primary will be used for write queries whether or not it is present in this list.
{{< /alert >}}
### Service Discovery
Service discovery allows GitLab to automatically retrieve a list of PostgreSQL
hosts to use. It periodically
checks a DNS `A` record, using the IPs returned by this record as the addresses
for the secondaries. For service discovery to work, all you need is a DNS server
and an `A` record containing the IP addresses of your secondaries.
When using a Linux package installation, the provided [Consul](../consul.md) service works as
a DNS server and returns PostgreSQL addresses via the `postgresql-ha.service.consul`
record. For example:
1. On each GitLab Rails / Sidekiq node, edit `/etc/gitlab/gitlab.rb` and add the following:
```ruby
gitlab_rails['db_load_balancing'] = { 'discover' => {
'nameserver' => 'localhost'
'record' => 'postgresql-ha.service.consul'
'record_type' => 'A'
'port' => '8600'
'interval' => '60'
'disconnect_timeout' => '120'
}
}
```
1. Save the file and [reconfigure GitLab](../restart_gitlab.md#reconfigure-a-linux-package-installation) for the changes to take effect.
| Option | Description | Default |
|----------------------|---------------------------------------------------------------------------------------------------|-----------|
| `nameserver` | The nameserver to use for looking up the DNS record. | localhost |
| `record` | The record to look up. This option is required for service discovery to work. | |
| `record_type` | Optional record type to look up. Can be either `A` or `SRV`. | `A` |
| `port` | The port of the nameserver. | 8600 |
| `interval` | The minimum time in seconds between checking the DNS record. | 60 |
| `disconnect_timeout` | The time in seconds after which an old connection is closed, after the list of hosts was updated. | 120 |
| `use_tcp` | Look up DNS resources using TCP instead of UDP. | false |
| `max_replica_pools` | The maximum number of replicas each Rails process connects to. This is useful if you run a lot of Postgres replicas and a lot of Rails processes because without this limit every Rails process connects to every replica by default. The default behavior is unlimited if not set. | nil |
If `record_type` is set to `SRV`, then GitLab continues to use the round-robin algorithm
and ignores the `weight` and `priority` in the record. Because `SRV` records usually
return hostnames instead of IPs, GitLab needs to look for the IPs of returned hostnames
in the additional section of the `SRV` response. If no IP is found for a hostname, GitLab
needs to query the configured `nameserver` for `ANY` record for each such hostname looking for `A` or `AAAA`
records, eventually dropping this hostname from rotation if it can't resolve its IP.
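The `SRV` fallback described above can be sketched as follows. This is an illustrative model, not GitLab's implementation; `lookup_a_record` stands in for a real DNS query and is hypothetical.

```python
def resolve_srv_targets(srv_hostnames, additional_ips, lookup_a_record):
    """Resolve SRV target hostnames to IPs: prefer IPs shipped in the
    additional section of the SRV response, fall back to querying the
    nameserver, and drop hostnames whose IP cannot be resolved."""
    ips = []
    for host in srv_hostnames:
        ip = additional_ips.get(host)   # IP from the SRV response, if present
        if ip is None:
            ip = lookup_a_record(host)  # fall back to an A/AAAA lookup
        if ip is not None:              # unresolvable hosts leave the rotation
            ips.append(ip)
    return ips
```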
The `interval` value specifies the minimum time between checks. If the `A`
record has a TTL greater than this value, then service discovery honors said
TTL. For example, if the TTL of the `A` record is 90 seconds, then service
discovery waits at least 90 seconds before checking the `A` record again.
When the list of hosts is updated, it might take a while for the old connections
to be terminated. The `disconnect_timeout` setting can be used to enforce an
upper limit on the time it takes to terminate all old database connections.
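Taken together, the timing rules above amount to waiting for whichever is longer of the configured `interval` and the record's TTL. A minimal sketch:

```python
def next_check_delay(interval_seconds, record_ttl_seconds):
    """Service discovery waits at least `interval` seconds between DNS
    checks, but honors a longer TTL on the record."""
    return max(interval_seconds, record_ttl_seconds)
```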
### Handling stale reads
{{< history >}}
- [Moved](https://gitlab.com/gitlab-org/gitlab/-/issues/327902) from GitLab Premium to GitLab Free in 14.0.
{{< /history >}}
To prevent reading from an outdated secondary, the load balancer checks whether it
is in sync with the primary. If the data is recent enough, the
secondary is used; otherwise it is ignored. To reduce the overhead of
these checks, they are performed only at certain intervals.
There are three configuration options that influence this behavior:
| Option | Description | Default |
|------------------------------|----------------------------------------------------------------------------------------------------------------|------------|
| `max_replication_difference` | The amount of data (in bytes) a secondary is allowed to lag behind when it hasn't replicated data for a while. | 8 MB |
| `max_replication_lag_time` | The maximum number of seconds a secondary is allowed to lag behind before we stop using it. | 60 seconds |
| `replica_check_interval` | The minimum number of seconds we have to wait before checking the status of a secondary. | 60 seconds |
The defaults should be sufficient for most users.
To configure these options with a hosts list, use the following example:
```ruby
gitlab_rails['db_load_balancing'] = {
'hosts' => ['primary.example.com', 'secondary1.example.com', 'secondary2.example.com'],
'max_replication_difference' => 16777216, # 16 MB
'max_replication_lag_time' => 30,
'replica_check_interval' => 30
}
```
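As an illustration only, not GitLab's actual implementation, the replica check governed by these options can be modeled as:

```python
def replica_usable(lag_bytes, lag_seconds,
                   max_replication_difference=8 * 1024 * 1024,
                   max_replication_lag_time=60):
    """A secondary is used only if it is close enough to the primary,
    both in bytes of un-replicated data and in seconds of lag."""
    return (lag_bytes <= max_replication_difference
            and lag_seconds <= max_replication_lag_time)
```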
## Logging
The load balancer logs various events in
[`database_load_balancing.log`](../logs/_index.md#database_load_balancinglog), such as:
- When a host is marked as offline
- When a host comes back online
- When all secondaries are offline
- When a read is retried on a different host due to a query conflict
The log is structured, with each entry being a JSON object containing at least:
- An `event` field useful for filtering.
- A human-readable `message` field.
- Some event-specific metadata. For example, `db_host`.
- Contextual information that is always logged. For example, `severity` and `time`.
For example:
```json
{"severity":"INFO","time":"2019-09-02T12:12:01.728Z","correlation_id":"abcdefg","event":"host_online","message":"Host came back online","db_host":"111.222.333.444","db_port":null,"tag":"rails.database_load_balancing","environment":"production","hostname":"web-example-1","fqdn":"gitlab.example.com","path":null,"params":null}
```
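Because each entry is a single JSON object, the log is easy to filter programmatically. A small sketch, using a trimmed version of the sample entry above:

```python
import json

def events_of_type(log_lines, event):
    """Yield parsed log entries whose `event` field matches,
    for example `host_online`."""
    for line in log_lines:
        entry = json.loads(line)
        if entry.get("event") == event:
            yield entry
```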
## Implementation details
### Balancing queries
Read-only `SELECT` queries are balanced among all the given hosts.
Everything else (including transactions) executes on the primary.
Queries such as `SELECT ... FOR UPDATE` are also executed on the primary.
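A simplified model of these routing rules, for illustration only:

```python
def route(sql, in_transaction=False):
    """Route read-only SELECTs to a replica; everything else, including
    transactions and SELECT ... FOR UPDATE, goes to the primary."""
    statement = sql.strip().upper()
    if in_transaction or not statement.startswith("SELECT"):
        return "primary"
    if "FOR UPDATE" in statement:
        return "primary"
    return "replica"
```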
### Prepared statements
Prepared statements don't work well with load balancing and are disabled
automatically when load balancing is enabled. This shouldn't impact
response timings.
### Primary sticking
After a write has been performed, GitLab sticks to using the primary for a
certain period of time, scoped to the user that performed the write. GitLab
reverts to using secondaries when they have either caught up, or after 30
seconds.
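Sticking can be pictured as a per-user write timestamp. This is a sketch under simplifying assumptions; the real logic tracks replica catch-up differently.

```python
STICK_SECONDS = 30

class PrimaryStickiness:
    """Track, per user, when the last write happened, and prefer the
    primary for that user until replicas catch up or the window expires."""

    def __init__(self):
        self.last_write = {}

    def record_write(self, user_id, now):
        self.last_write[user_id] = now

    def use_primary(self, user_id, now, replica_caught_up=False):
        if replica_caught_up:
            return False
        last = self.last_write.get(user_id)
        return last is not None and now - last < STICK_SECONDS
```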
### Failover handling
In the event of a failover or an unresponsive database, the load balancer
tries to use the next available host. If no secondaries are available, the
operation is performed on the primary instead.
If a connection error occurs while writing data, the
operation retries up to 3 times using an exponential back-off.
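The write-retry behavior can be sketched as follows. This is illustrative; the exact delays and error classes used by GitLab are not specified here.

```python
import time

def with_retries(operation, attempts=3, base_delay=0.1):
    """Retry `operation` up to `attempts` times on connection errors,
    doubling the delay after each failure, and re-raise if all fail."""
    for attempt in range(attempts):
        try:
            return operation()
        except ConnectionError:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))
```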
When using load balancing, you should be able to safely restart a database server
without it immediately leading to errors being presented to the users.
### Development guide
For detailed development guide on database load balancing,
see the development documentation.
---
stage: Data Access
group: Database Operations
info: To determine the technical writer assigned to the Stage/Group associated with
  this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Moving GitLab databases to a different PostgreSQL instance
---
{{< details >}}
- Tier: Free, Premium, Ultimate
- Offering: GitLab Self-Managed
{{< /details >}}
Sometimes it is necessary to move your databases from one PostgreSQL instance to
another. For example, if you are using AWS Aurora and are preparing to
enable Database Load Balancing, you need to move your databases to
RDS for PostgreSQL.
To move databases from one instance to another:
1. Gather the source and destination PostgreSQL endpoint information:
```shell
SRC_PGHOST=<source postgresql host>
SRC_PGUSER=<source postgresql user>
DST_PGHOST=<destination postgresql host>
DST_PGUSER=<destination postgresql user>
```
1. Stop GitLab:
```shell
sudo gitlab-ctl stop
```
1. Dump the databases from the source:
```shell
/opt/gitlab/embedded/bin/pg_dump -h $SRC_PGHOST -U $SRC_PGUSER -c -C -f gitlabhq_production.sql gitlabhq_production
/opt/gitlab/embedded/bin/pg_dump -h $SRC_PGHOST -U $SRC_PGUSER -c -C -f praefect_production.sql praefect_production
```
{{< alert type="note" >}}
On rare occasions, you might notice database performance issues after you perform
a `pg_dump` and restore. This can happen because `pg_dump` does not contain the statistics
[used by the optimizer to make query planning decisions](https://www.postgresql.org/docs/16/app-pgdump.html).
If performance degrades after a restore, fix the problem by finding the problematic query,
then running `ANALYZE` on the tables used by the query.
{{< /alert >}}
1. Restore the databases to the destination (this overwrites any existing databases with the same names):
```shell
/opt/gitlab/embedded/bin/psql -h $DST_PGHOST -U $DST_PGUSER -f praefect_production.sql postgres
/opt/gitlab/embedded/bin/psql -h $DST_PGHOST -U $DST_PGUSER -f gitlabhq_production.sql postgres
```
1. Optional. If you migrate from a database that doesn't use PgBouncer to a database that does, you must manually add a [`pg_shadow_lookup` function](../gitaly/praefect/configure.md#manual-database-setup) to the application database (usually `gitlabhq_production`).
1. Configure the GitLab application servers with the appropriate connection details
for your destination PostgreSQL instance in your `/etc/gitlab/gitlab.rb` file:
```ruby
gitlab_rails['db_host'] = '<destination postgresql host>'
```
For more information on GitLab multi-node setups, refer to the [reference architectures](../reference_architectures/_index.md).
1. Reconfigure for the changes to take effect:
```shell
sudo gitlab-ctl reconfigure
```
1. Restart GitLab:
```shell
sudo gitlab-ctl start
```
---
stage: Data Access
group: Database Operations
info: To determine the technical writer assigned to the Stage/Group associated with
  this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Configure GitLab using an external PostgreSQL service
---
{{< details >}}
- Tier: Free, Premium, Ultimate
- Offering: GitLab Self-Managed
{{< /details >}}
If you're hosting GitLab on a cloud provider, you can optionally use a
managed service for PostgreSQL. For example, AWS offers a managed Relational
Database Service (RDS) that runs PostgreSQL.
Alternatively, you may opt to manage your own PostgreSQL instance or cluster
separate from the Linux package.
If you use a cloud-managed service, or provide your own PostgreSQL instance,
set up PostgreSQL according to the
[database requirements document](../../install/requirements.md#postgresql).
## GitLab Rails database
After you set up the external PostgreSQL server:
1. Log in to your database server.
1. Set up a `gitlab` user with a password of your choice, create the `gitlabhq_production` database, and make the user an
owner of the database. You can see an example of this setup in the
[self-compiled installation documentation](../../install/self_compiled/_index.md#7-database).
1. If you are using a cloud-managed service, you may need to grant additional
roles to your `gitlab` user:
- Amazon RDS requires the [`rds_superuser`](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Appendix.PostgreSQL.CommonDBATasks.html#Appendix.PostgreSQL.CommonDBATasks.Roles) role.
- Azure Database for PostgreSQL requires the [`azure_pg_admin`](https://learn.microsoft.com/en-us/azure/postgresql/single-server/how-to-create-users#how-to-create-additional-admin-users-in-azure-database-for-postgresql) role. Azure Database for PostgreSQL - Flexible Server requires [allow-listing extensions before they can be installed](https://learn.microsoft.com/en-us/azure/postgresql/flexible-server/concepts-extensions#how-to-use-postgresql-extensions).
- Google Cloud SQL requires the [`cloudsqlsuperuser`](https://cloud.google.com/sql/docs/postgres/users#default-users) role.
This is for the installation of extensions during installation and upgrades. As an alternative,
[ensure the extensions are installed manually, and read about the problems that may arise during future GitLab upgrades](../../install/postgresql_extensions.md).
1. Configure the GitLab application servers with the appropriate connection details
for your external PostgreSQL service in your `/etc/gitlab/gitlab.rb` file:
```ruby
# Disable the bundled Omnibus provided PostgreSQL
postgresql['enable'] = false
# PostgreSQL connection details
gitlab_rails['db_adapter'] = 'postgresql'
gitlab_rails['db_encoding'] = 'unicode'
gitlab_rails['db_host'] = '10.1.0.5' # IP/hostname of database server
gitlab_rails['db_port'] = 5432
gitlab_rails['db_password'] = 'DB password'
```
For more information on GitLab multi-node setups, refer to the [reference architectures](../reference_architectures/_index.md).
1. Reconfigure for the changes to take effect:
```shell
sudo gitlab-ctl reconfigure
```
1. Restart PostgreSQL to enable the TCP port:
```shell
sudo gitlab-ctl restart
```
## Container registry metadata database
If you plan to use the [container registry metadata database](../packages/container_registry_metadata_database.md),
you should also create the registry database and user.
After you set up the external PostgreSQL server:
1. Log in to your database server.
1. Use the following SQL commands to create the user and the database:
```sql
-- Create the registry user
CREATE USER registry WITH PASSWORD '<your_registry_password>';
-- Create the registry database
CREATE DATABASE registry OWNER registry;
```
1. For cloud-managed services, grant additional roles as needed:
{{< tabs >}}
{{< tab title="Amazon RDS" >}}
```sql
GRANT rds_superuser TO registry;
```
{{< /tab >}}
{{< tab title="Azure database" >}}
```sql
GRANT azure_pg_admin TO registry;
```
{{< /tab >}}
{{< tab title="Google Cloud SQL" >}}
```sql
GRANT cloudsqlsuperuser TO registry;
```
{{< /tab >}}
{{< /tabs >}}
1. You can now enable and start using the container registry metadata database.
## Troubleshooting
### Resolve `SSL SYSCALL error: EOF detected` error
When using an external PostgreSQL instance, you may see an error like:
```shell
pg_dump: error: Error message from server: SSL SYSCALL error: EOF detected
```
To resolve this error, ensure that you are meeting the
[minimum PostgreSQL requirements](../../install/requirements.md#postgresql). After
upgrading your RDS instance to a [supported version](../../install/requirements.md#postgresql),
you should be able to perform a backup without this error.
See [issue 364763](https://gitlab.com/gitlab-org/gitlab/-/issues/364763) for more information.
---
stage: Data Access
group: Database Operations
info: To determine the technical writer assigned to the Stage/Group associated with
  this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Upgrading operating systems for PostgreSQL
---
{{< alert type="warning" >}}
[Geo](../geo/_index.md) cannot be used to migrate a PostgreSQL database from one operating system to another. If you attempt to do so, the secondary site may appear to be 100% replicated when in fact some data is not replicated, leading to data loss. This is because Geo depends on PostgreSQL streaming replication, which suffers from the limitations described in this document. Also see [Geo Troubleshooting - Check OS locale data compatibility](../geo/replication/troubleshooting/common.md#check-os-locale-data-compatibility).
{{< /alert >}}
If you upgrade the operating system on which PostgreSQL runs, any
[changes to locale data might corrupt your database indexes](https://wiki.postgresql.org/wiki/Locale_data_changes).
In particular, the upgrade to `glibc` 2.28 is likely to cause this problem. To avoid this issue,
migrate using one of the following options, roughly in order of complexity:
- Recommended. [Backup and restore](#backup-and-restore).
- Recommended. [Rebuild all indexes](#rebuild-all-indexes).
- [Rebuild only affected indexes](#rebuild-only-affected-indexes).
Be sure to back up before attempting any migration, and validate the migration process in a
production-like environment. If the length of downtime might be a problem, consider timing
different approaches with a copy of production data in a production-like environment.
If you are running a scaled-out GitLab environment, and there are no other services running on the
nodes where PostgreSQL is running, then we recommend upgrading the operating system of the
PostgreSQL nodes by themselves. To reduce complexity and risk, do not combine the procedure with
other changes, especially if those changes do not require downtime, such as upgrading the operating
system of nodes running only Puma or Sidekiq.
For more information about how GitLab plans to address this issue, see
[epic 8573](https://gitlab.com/groups/gitlab-org/-/epics/8573).
## Backup and restore
Backup and restore recreates the entire database, including the indexes.
1. Take a scheduled downtime window. In all nodes, stop unnecessary GitLab services:
```shell
gitlab-ctl stop
gitlab-ctl start postgresql
```
1. Back up the PostgreSQL database with `pg_dump` or the
[GitLab backup tool, with all data types except `db` excluded](../backup_restore/backup_gitlab.md#excluding-specific-data-from-the-backup)
(so only the database is backed up).
1. In all PostgreSQL nodes, upgrade the OS.
1. In all PostgreSQL nodes,
[update GitLab package sources after upgrading the OS](../../update/package/_index.md#upgrade-the-operating-system-optional).
1. In all PostgreSQL nodes, install the new GitLab package of the same GitLab version.
1. Restore the PostgreSQL database from backup.
1. In all nodes, start GitLab.
Advantages:
- Straightforward.
- Removes any database bloat in indexes and tables, reducing disk use.
Disadvantages:
- Downtime increases with database size, at some point becoming problematic. It depends on many
factors, but if your database is over 100 GB then it might take on the order of 24 hours.
### Backup and restore, with Geo secondary sites
1. Take a scheduled downtime window. In all nodes of all sites, stop unnecessary GitLab services:
```shell
gitlab-ctl stop
gitlab-ctl start postgresql
```
1. In the primary site, back up the PostgreSQL database with `pg_dump` or the
[GitLab backup tool, with all data types except `db` excluded](../backup_restore/backup_gitlab.md#excluding-specific-data-from-the-backup)
(so only the database is backed up).
1. In all PostgreSQL nodes of all sites, upgrade the OS.
1. In all PostgreSQL nodes of all sites,
[update GitLab package sources after upgrading the OS](../../update/package/_index.md#upgrade-the-operating-system-optional).
1. In all PostgreSQL nodes of all sites, install the new GitLab package of the same GitLab version.
1. In the primary site, restore the PostgreSQL database from backup.
1. Optionally, start using the primary site, at the risk of not having a secondary site as warm
standby.
1. Set up PostgreSQL streaming replication to the secondary sites again.
1. If the secondary sites receive traffic from users, then let the read-replica databases catch up
before starting GitLab.
1. In all nodes of all sites, start GitLab.
## Rebuild all indexes
[Rebuild all indexes](https://www.postgresql.org/docs/16/sql-reindex.html).
1. Take a scheduled downtime window. In all nodes, stop unnecessary GitLab services:
```shell
gitlab-ctl stop
gitlab-ctl start postgresql
```
1. In all PostgreSQL nodes, upgrade the OS.
1. In all PostgreSQL nodes,
[update GitLab package sources after upgrading the OS](../../update/package/_index.md#upgrade-the-operating-system-optional).
1. In all PostgreSQL nodes, install the new GitLab package of the same GitLab version.
1. In a [database console](../troubleshooting/postgresql.md#start-a-database-console), rebuild all indexes:
```sql
SET statement_timeout = 0;
REINDEX DATABASE gitlabhq_production;
```
1. After reindexing the database, the version must be refreshed for all affected collations.
To update the system catalog to record the current collation version:
```sql
ALTER DATABASE gitlabhq_production REFRESH COLLATION VERSION;
```
1. In all nodes, start GitLab.
Advantages:
- Straightforward.
- May be faster than backup and restore, depending on many factors.
- Removes any database bloat in indexes, reducing disk use.
Disadvantages:
- Downtime increases with database size, at some point becoming problematic.
### Rebuild all indexes, with Geo secondary sites
1. Take a scheduled downtime window. In all nodes of all sites, stop unnecessary GitLab services:
```shell
gitlab-ctl stop
gitlab-ctl start postgresql
```
1. In all PostgreSQL nodes, upgrade the OS.
1. In all PostgreSQL nodes,
[update GitLab package sources after upgrading the OS](../../update/package/_index.md#upgrade-the-operating-system-optional).
1. In all PostgreSQL nodes, install the new GitLab package of the same GitLab version.
1. In the primary site, in a
[database console](../troubleshooting/postgresql.md#start-a-database-console), rebuild all indexes:
```sql
SET statement_timeout = 0;
REINDEX DATABASE gitlabhq_production;
```
1. After reindexing the database, the version must be refreshed for all affected collations.
To update the system catalog to record the current collation version:
```sql
ALTER DATABASE <database_name> REFRESH COLLATION VERSION;
```
1. If the secondary sites receive traffic from users, then let the read-replica databases catch up
before starting GitLab.
1. In all nodes of all sites, start GitLab.
## Rebuild only affected indexes
This is similar to the approach used for GitLab.com. To learn more about this process and how the
different types of indexes were handled, see the blog post about
[upgrading the operating system on our PostgreSQL database clusters](https://about.gitlab.com/blog/2022/08/12/upgrading-database-os/).
1. Take a scheduled downtime window. In all nodes, stop unnecessary GitLab services:
```shell
gitlab-ctl stop
gitlab-ctl start postgresql
```
1. In all PostgreSQL nodes, upgrade the OS.
1. In all PostgreSQL nodes,
[update GitLab package sources after upgrading the OS](../../update/package/_index.md#upgrade-the-operating-system-optional).
1. In all PostgreSQL nodes, install the new GitLab package of the same GitLab version.
1. [Determine which indexes are affected](https://wiki.postgresql.org/wiki/Locale_data_changes#What_indexes_are_affected).
1. In a [database console](../troubleshooting/postgresql.md#start-a-database-console), reindex each affected index:
```sql
SET statement_timeout = 0;
REINDEX INDEX <index name> CONCURRENTLY;
```
1. After reindexing bad indexes, the collation must be refreshed. To update the system catalog to
record the current collation version:
```sql
ALTER DATABASE <database_name> REFRESH COLLATION VERSION;
```
1. In all nodes, start GitLab.
Advantages:
- Downtime is not spent rebuilding unaffected indexes.
Disadvantages:
- More chances for mistakes.
- Requires expert knowledge of PostgreSQL to handle unexpected problems during migration.
- Preserves database bloat.
### Rebuild only affected indexes, with Geo secondary sites
1. Take a scheduled downtime window. In all nodes of all sites, stop unnecessary GitLab services:
```shell
gitlab-ctl stop
gitlab-ctl start postgresql
```
1. In all PostgreSQL nodes, upgrade the OS.
1. In all PostgreSQL nodes,
[update GitLab package sources after upgrading the OS](../../update/package/_index.md#upgrade-the-operating-system-optional).
1. In all PostgreSQL nodes, install the new GitLab package of the same GitLab version.
1. [Determine which indexes are affected](https://wiki.postgresql.org/wiki/Locale_data_changes#What_indexes_are_affected).
1. In the primary site, in a
[database console](../troubleshooting/postgresql.md#start-a-database-console), reindex each affected index:
```sql
SET statement_timeout = 0;
REINDEX INDEX <index name> CONCURRENTLY;
```
1. After reindexing bad indexes, the collation must be refreshed. To update the system catalog to
record the current collation version:
```sql
ALTER DATABASE <database_name> REFRESH COLLATION VERSION;
```
1. The existing PostgreSQL streaming replication should replicate the reindex changes to the
read-replica databases.
1. In all nodes of all sites, start GitLab.
## Checking `glibc` versions
To see what version of `glibc` is used, run `ldd --version`.
The following table shows the `glibc` versions shipped for different operating systems:
| Operating system | `glibc` version |
|---------------------|-----------------|
| CentOS 7 | 2.17 |
| RedHat Enterprise 8 | 2.28 |
| RedHat Enterprise 9 | 2.34 |
| Ubuntu 18.04 | 2.27 |
| Ubuntu 20.04 | 2.31 |
| Ubuntu 22.04 | 2.35 |
| Ubuntu 24.04 | 2.39 |
For example, suppose you are upgrading from CentOS 7 to RedHat
Enterprise 8. In this case, using PostgreSQL on this upgraded operating
system requires using one of the two mentioned approaches, because `glibc`
is upgraded from 2.17 to 2.28. Failing to handle the collation changes
properly causes significant failures in GitLab, such as runners not
picking up jobs with tags.
On the other hand, if PostgreSQL has already been running on `glibc` 2.28
or higher with no issues, your indexes should continue to work without
further action. For example, if you have been running PostgreSQL on
RedHat Enterprise 8 (`glibc` 2.28) for a while, and want to upgrade
to RedHat Enterprise 9 (`glibc` 2.34), there should be no collations-related issues.
### Verifying `glibc` collation versions
For PostgreSQL 13 and higher, you can verify that your database
collation version matches your system with this SQL query:
```sql
SELECT collname AS COLLATION_NAME,
collversion AS VERSION,
pg_collation_actual_version(oid) AS actual_version
FROM pg_collation
WHERE collprovider = 'c';
```
### Matching collation example
For example, on a Ubuntu 22.04 system, the output of a properly indexed
system looks like:
```sql
gitlabhq_production=# SELECT collname AS COLLATION_NAME,
collversion AS VERSION,
pg_collation_actual_version(oid) AS actual_version
FROM pg_collation
WHERE collprovider = 'c';
collation_name | version | actual_version
----------------+---------+----------------
C | |
POSIX | |
ucs_basic | |
C.utf8 | |
en_US.utf8 | 2.35 | 2.35
en_US | 2.35 | 2.35
(6 rows)
```
### Mismatched collation example
On the other hand, if you've upgraded from Ubuntu 18.04 to 22.04 without
reindexing, you might see:
```sql
gitlabhq_production=# SELECT collname AS COLLATION_NAME,
collversion AS VERSION,
pg_collation_actual_version(oid) AS actual_version
FROM pg_collation
WHERE collprovider = 'c';
collation_name | version | actual_version
----------------+---------+----------------
C | |
POSIX | |
ucs_basic | |
C.utf8 | |
en_US.utf8 | 2.27 | 2.35
en_US | 2.27 | 2.35
(6 rows)
```
## Streaming replication
The corrupted index issue affects PostgreSQL streaming replication. You must
[rebuild all indexes](#rebuild-all-indexes) or
[rebuild only affected indexes](#rebuild-only-affected-indexes) before allowing
reads against a replica with different locale data.
## Additional Geo variations
The upgrade procedures documented previously are not set in stone. With Geo there are potentially more options,
because there exists redundant infrastructure. You could consider modifications to suit your use-case,
but be sure to weigh it against the added complexity. Here are some examples:
To reserve a secondary site as a warm standby in case of disaster during the OS upgrade of the
primary site and the other secondary site:
1. Isolate the secondary site's data from changes on the primary site: Pause the secondary site.
1. Perform the OS upgrade on the primary site.
1. If the OS upgrade fails and the primary site is unrecoverable,
promote the secondary site, route users to it, and try again later.
This leaves you without an up-to-date secondary site.
To provide users with read-only access to GitLab during the OS upgrade (partial downtime):
1. Enable [Maintenance Mode](../maintenance_mode/_index.md) on the primary site instead of stopping
it.
1. Promote the secondary site but do not route users to it yet.
1. Perform the OS upgrade on the promoted site.
1. Route users to the promoted site instead of the old primary site.
1. Set up the old primary site as a new secondary site.
{{< alert type="warning" >}}
Even though the secondary site already has a read-replica of the database, you cannot upgrade
its operating system prior to promotion. If you were to attempt that, then the secondary site may
miss replication of some Git repositories or files, due to the corrupted indexes.
See [Streaming replication](#streaming-replication).
{{< /alert >}}
|
---
stage: Data Access
group: Database Operations
info: To determine the technical writer assigned to the Stage/Group associated with
this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Upgrading operating systems for PostgreSQL
breadcrumbs:
- doc
- administration
- postgresql
---
{{< alert type="warning" >}}
[Geo](../geo/_index.md) cannot be used to migrate a PostgreSQL database from one operating system to another. If you attempt to do so, the secondary site may appear to be 100% replicated when in fact some data is not replicated, leading to data loss. This is because Geo depends on PostgreSQL streaming replication, which suffers from the limitations described in this document. Also see [Geo Troubleshooting - Check OS locale data compatibility](../geo/replication/troubleshooting/common.md#check-os-locale-data-compatibility).
{{< /alert >}}
If you upgrade the operating system on which PostgreSQL runs, any
[changes to locale data might corrupt your database indexes](https://wiki.postgresql.org/wiki/Locale_data_changes).
In particular, the upgrade to `glibc` 2.28 is likely to cause this problem. To avoid this issue,
migrate using one of the following options, roughly in order of complexity:
- Recommended. [Backup and restore](#backup-and-restore).
- Recommended. [Rebuild all indexes](#rebuild-all-indexes).
- [Rebuild only affected indexes](#rebuild-only-affected-indexes).
Be sure to back up before attempting any migration, and validate the migration process in a
production-like environment. If the length of downtime might be a problem, then consider timing
different approaches with a copy of production data in a production-like environment.
If you are running a scaled-out GitLab environment, and there are no other services running on the
nodes where PostgreSQL is running, then we recommend upgrading the operating system of the
PostgreSQL nodes by themselves. To reduce complexity and risk, do not combine the procedure with
other changes, especially if those changes do not require downtime, such as upgrading the operating
system of nodes running only Puma or Sidekiq.
For more information about how GitLab plans to address this issue, see
[epic 8573](https://gitlab.com/groups/gitlab-org/-/epics/8573).
## Backup and restore
Backup and restore recreates the entire database, including the indexes.
1. Take a scheduled downtime window. In all nodes, stop unnecessary GitLab services:
```shell
gitlab-ctl stop
gitlab-ctl start postgresql
```
1. Back up the PostgreSQL database with `pg_dump` or the
[GitLab backup tool, with all data types except `db` excluded](../backup_restore/backup_gitlab.md#excluding-specific-data-from-the-backup)
(so only the database is backed up).
1. In all PostgreSQL nodes, upgrade the OS.
1. In all PostgreSQL nodes,
[update GitLab package sources after upgrading the OS](../../update/package/_index.md#upgrade-the-operating-system-optional).
1. In all PostgreSQL nodes, install the new GitLab package of the same GitLab version.
1. Restore the PostgreSQL database from backup.
1. In all nodes, start GitLab.
Advantages:
- Straightforward.
- Removes any database bloat in indexes and tables, reducing disk use.
Disadvantages:
- Downtime increases with database size, at some point becoming problematic. It depends on many
factors, but if your database is over 100 GB then it might take on the order of 24 hours.
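To turn the ballpark above into a planning number, a quick estimate helps. This is only a sketch: the throughput figure is an assumption derived from the 100 GB in roughly 24 hours example, and you should substitute a rate you measured by timing a dump and restore of a copy of production data.

```shell
# Rough downtime estimate for backup and restore. The rate below is an
# assumption (roughly 100 GB in 24 hours, about 4 GB per hour); measure your
# own dump+restore throughput and substitute it.
db_size_gb=250
rate_gb_per_hour=4
estimated_hours=$(( db_size_gb / rate_gb_per_hour ))
echo "Estimated downtime: ${estimated_hours} hours"
```

If the estimate exceeds your acceptable downtime window, consider rebuilding only affected indexes instead.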
### Backup and restore, with Geo secondary sites
1. Take a scheduled downtime window. In all nodes of all sites, stop unnecessary GitLab services:
```shell
gitlab-ctl stop
gitlab-ctl start postgresql
```
1. In the primary site, back up the PostgreSQL database with `pg_dump` or the
[GitLab backup tool, with all data types except `db` excluded](../backup_restore/backup_gitlab.md#excluding-specific-data-from-the-backup)
(so only the database is backed up).
1. In all PostgreSQL nodes of all sites, upgrade the OS.
1. In all PostgreSQL nodes of all sites,
[update GitLab package sources after upgrading the OS](../../update/package/_index.md#upgrade-the-operating-system-optional).
1. In all PostgreSQL nodes of all sites, install the new GitLab package of the same GitLab version.
1. In the primary site, restore the PostgreSQL database from backup.
1. Optionally, start using the primary site, at the risk of not having a secondary site as a warm standby.
1. Set up PostgreSQL streaming replication to the secondary sites again.
1. If the secondary sites receive traffic from users, then let the read-replica databases catch up
before starting GitLab.
1. In all nodes of all sites, start GitLab.
## Rebuild all indexes
[Rebuild all indexes](https://www.postgresql.org/docs/16/sql-reindex.html).
1. Take a scheduled downtime window. In all nodes, stop unnecessary GitLab services:
```shell
gitlab-ctl stop
gitlab-ctl start postgresql
```
1. In all PostgreSQL nodes, upgrade the OS.
1. In all PostgreSQL nodes,
[update GitLab package sources after upgrading the OS](../../update/package/_index.md#upgrade-the-operating-system-optional).
1. In all PostgreSQL nodes, install the new GitLab package of the same GitLab version.
1. In a [database console](../troubleshooting/postgresql.md#start-a-database-console), rebuild all indexes:
```sql
SET statement_timeout = 0;
REINDEX DATABASE gitlabhq_production;
```
1. After reindexing the database, the version must be refreshed for all affected collations.
To update the system catalog to record the current collation version:
```sql
ALTER DATABASE gitlabhq_production REFRESH COLLATION VERSION;
```
1. In all nodes, start GitLab.
Advantages:
- Straightforward.
- May be faster than backup and restore, depending on many factors.
- Removes any database bloat in indexes, reducing disk use.
Disadvantages:
- Downtime increases with database size, at some point becoming problematic.
### Rebuild all indexes, with Geo secondary sites
1. Take a scheduled downtime window. In all nodes of all sites, stop unnecessary GitLab services:
```shell
gitlab-ctl stop
gitlab-ctl start postgresql
```
1. In all PostgreSQL nodes, upgrade the OS.
1. In all PostgreSQL nodes,
[update GitLab package sources after upgrading the OS](../../update/package/_index.md#upgrade-the-operating-system-optional).
1. In all PostgreSQL nodes, install the new GitLab package of the same GitLab version.
1. In the primary site, in a
[database console](../troubleshooting/postgresql.md#start-a-database-console), rebuild all indexes:
```sql
SET statement_timeout = 0;
REINDEX DATABASE gitlabhq_production;
```
1. After reindexing the database, the version must be refreshed for all affected collations.
To update the system catalog to record the current collation version:
```sql
ALTER DATABASE <database_name> REFRESH COLLATION VERSION;
```
1. If the secondary sites receive traffic from users, then let the read-replica databases catch up
before starting GitLab.
1. In all nodes of all sites, start GitLab.
## Rebuild only affected indexes
This is similar to the approach used for GitLab.com. To learn more about this process and how the
different types of indexes were handled, see the blog post about
[upgrading the operating system on our PostgreSQL database clusters](https://about.gitlab.com/blog/2022/08/12/upgrading-database-os/).
1. Take a scheduled downtime window. In all nodes, stop unnecessary GitLab services:
```shell
gitlab-ctl stop
gitlab-ctl start postgresql
```
1. In all PostgreSQL nodes, upgrade the OS.
1. In all PostgreSQL nodes,
[update GitLab package sources after upgrading the OS](../../update/package/_index.md#upgrade-the-operating-system-optional).
1. In all PostgreSQL nodes, install the new GitLab package of the same GitLab version.
1. [Determine which indexes are affected](https://wiki.postgresql.org/wiki/Locale_data_changes#What_indexes_are_affected).
1. In a [database console](../troubleshooting/postgresql.md#start-a-database-console), reindex each affected index:
```sql
SET statement_timeout = 0;
REINDEX INDEX <index_name> CONCURRENTLY;
```
1. After reindexing bad indexes, the collation must be refreshed. To update the system catalog to
record the current collation version:
```sql
ALTER DATABASE <database_name> REFRESH COLLATION VERSION;
```
1. In all nodes, start GitLab.
Advantages:
- Downtime is not spent rebuilding unaffected indexes.
Disadvantages:
- More chances for mistakes.
- Requires expert knowledge of PostgreSQL to handle unexpected problems during migration.
- Preserves database bloat.
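When many indexes are affected, scripting the `REINDEX` statements helps avoid transcription mistakes. A minimal sketch, assuming you saved the affected index names one per line (for example, from the query on the linked PostgreSQL wiki page); the two index names shown are placeholders:

```shell
# Turn a list of affected index names into REINDEX statements. The index
# names in the here-document are placeholders; in practice, feed in the
# output of the "affected indexes" query.
stmts=$(while IFS= read -r idx; do
  printf 'REINDEX INDEX %s CONCURRENTLY;\n' "$idx"
done <<'EOF'
index_users_on_username
index_projects_on_path
EOF
)
echo "$stmts"
```

Review the generated statements before running them in a database console.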
### Rebuild only affected indexes, with Geo secondary sites
1. Take a scheduled downtime window. In all nodes of all sites, stop unnecessary GitLab services:
```shell
gitlab-ctl stop
gitlab-ctl start postgresql
```
1. In all PostgreSQL nodes, upgrade the OS.
1. In all PostgreSQL nodes,
[update GitLab package sources after upgrading the OS](../../update/package/_index.md#upgrade-the-operating-system-optional).
1. In all PostgreSQL nodes, install the new GitLab package of the same GitLab version.
1. [Determine which indexes are affected](https://wiki.postgresql.org/wiki/Locale_data_changes#What_indexes_are_affected).
1. In the primary site, in a
[database console](../troubleshooting/postgresql.md#start-a-database-console), reindex each affected index:
```sql
SET statement_timeout = 0;
REINDEX INDEX <index name> CONCURRENTLY;
```
1. After reindexing bad indexes, the collation must be refreshed. To update the system catalog to
record the current collation version:
```sql
ALTER DATABASE <database_name> REFRESH COLLATION VERSION;
```
1. The existing PostgreSQL streaming replication should replicate the reindex changes to the
read-replica databases.
1. In all nodes of all sites, start GitLab.
## Checking `glibc` versions
To see what version of `glibc` is used, run `ldd --version`.
The following table shows the `glibc` versions shipped for different operating systems:
| Operating system | `glibc` version |
|---------------------|-----------------|
| CentOS 7 | 2.17 |
| RedHat Enterprise 8 | 2.28 |
| RedHat Enterprise 9 | 2.34 |
| Ubuntu 18.04 | 2.27 |
| Ubuntu 20.04 | 2.31 |
| Ubuntu 22.04 | 2.35 |
| Ubuntu 24.04 | 2.39 |
For example, suppose you are upgrading from CentOS 7 to RedHat
Enterprise 8. In this case, using PostgreSQL on the upgraded operating
system requires one of the approaches described on this page, because `glibc`
is upgraded from 2.17 to 2.28. Failing to handle the collation changes
properly causes significant failures in GitLab, such as runners not
picking up jobs with tags.
On the other hand, if PostgreSQL has already been running on `glibc` 2.28
or higher with no issues, your indexes should continue to work without
further action. For example, if you have been running PostgreSQL on
RedHat Enterprise 8 (`glibc` 2.28) for a while, and want to upgrade
to RedHat Enterprise 9 (`glibc` 2.34), there should be no collation-related issues.
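The rule of thumb above can be expressed as a small script. This is a sketch, not an official tool: it takes the old and new `glibc` versions (as printed by `ldd --version`) and encodes only the 2.28 boundary; always verify the actual collation versions against the database as well.

```shell
# Sketch: decide whether crossing the glibc 2.28 locale data change requires
# reindexing. Only the 2.28 boundary is encoded here; other glibc changes can
# also alter collations, so verify against the database afterwards.
needs_reindex() {
  old="$1" new="$2"
  # If the old version is already 2.28 or later, the major locale data change
  # has already been absorbed by the existing indexes.
  if [ "$(printf '%s\n2.28\n' "$old" | sort -V | head -n 1)" = "2.28" ]; then
    echo "no"
    return
  fi
  # Old version is below 2.28: reindex if the new version reaches 2.28.
  if [ "$(printf '%s\n2.28\n' "$new" | sort -V | head -n 1)" = "2.28" ]; then
    echo "yes"
  else
    echo "no"
  fi
}

needs_reindex 2.17 2.28   # CentOS 7 -> RHEL 8: prints "yes"
needs_reindex 2.28 2.34   # RHEL 8 -> RHEL 9: prints "no"
```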
### Verifying `glibc` collation versions
For PostgreSQL 13 and higher, you can verify that your database
collation version matches your system with this SQL query:
```sql
SELECT collname AS COLLATION_NAME,
collversion AS VERSION,
pg_collation_actual_version(oid) AS actual_version
FROM pg_collation
WHERE collprovider = 'c';
```
### Matching collation example
For example, on an Ubuntu 22.04 system, the output of a properly indexed
system looks like:
```sql
gitlabhq_production=# SELECT collname AS COLLATION_NAME,
collversion AS VERSION,
pg_collation_actual_version(oid) AS actual_version
FROM pg_collation
WHERE collprovider = 'c';
collation_name | version | actual_version
----------------+---------+----------------
C | |
POSIX | |
ucs_basic | |
C.utf8 | |
en_US.utf8 | 2.35 | 2.35
en_US | 2.35 | 2.35
(6 rows)
```
### Mismatched collation example
On the other hand, if you've upgraded from Ubuntu 18.04 to 22.04 without
reindexing, you might see:
```sql
gitlabhq_production=# SELECT collname AS COLLATION_NAME,
collversion AS VERSION,
pg_collation_actual_version(oid) AS actual_version
FROM pg_collation
WHERE collprovider = 'c';
collation_name | version | actual_version
----------------+---------+----------------
C | |
POSIX | |
ucs_basic | |
C.utf8 | |
en_US.utf8 | 2.27 | 2.35
en_US | 2.27 | 2.35
(6 rows)
```
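Spotting mismatched rows by eye is error-prone on databases with many collations. The following sketch filters the query output down to the problem rows; the here-document holds sample `psql` output standing in for what you would pipe in from a real database console:

```shell
# Flag collations whose recorded version differs from the actual glibc
# version. The here-document holds sample psql output; in practice, pipe the
# query result in instead. Rows with empty versions (C, POSIX, ...) are
# skipped because they do not depend on glibc locale data.
mismatches=$(awk -F'|' 'NR > 2 && NF == 3 {
  gsub(/ /, "", $1); gsub(/ /, "", $2); gsub(/ /, "", $3)
  if ($2 != "" && $3 != "" && $2 != $3) print "MISMATCH:", $1
}' <<'EOF'
 collation_name | version | actual_version
----------------+---------+----------------
 C              |         |
 POSIX          |         |
 en_US.utf8     | 2.27    | 2.35
 en_US          | 2.27    | 2.35
EOF
)
echo "$mismatches"
```

Any `MISMATCH` line indicates an index that may be corrupted and needs rebuilding.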
## Streaming replication
The corrupted index issue affects PostgreSQL streaming replication. You must
[rebuild all indexes](#rebuild-all-indexes) or
[rebuild only affected indexes](#rebuild-only-affected-indexes) before allowing
reads against a replica with different locale data.
## Additional Geo variations
The upgrade procedures documented previously are not set in stone. With Geo there are potentially more options,
because redundant infrastructure exists. You can modify the procedures to suit your use case,
but be sure to weigh the benefits against the added complexity. Here are some examples:
To reserve a secondary site as a warm standby in case of disaster during the OS upgrade of the
primary site and the other secondary site:
1. Isolate the secondary site's data from changes on the primary site: Pause the secondary site.
1. Perform the OS upgrade on the primary site.
1. If the OS upgrade fails and the primary site is unrecoverable,
promote the secondary site, route users to it, and try again later.
This leaves you without an up-to-date secondary site.
To provide users with read-only access to GitLab during the OS upgrade (partial downtime):
1. Enable [Maintenance Mode](../maintenance_mode/_index.md) on the primary site instead of stopping
it.
1. Promote the secondary site but do not route users to it yet.
1. Perform the OS upgrade on the promoted site.
1. Route users to the promoted site instead of the old primary site.
1. Set up the old primary site as a new secondary site.
{{< alert type="warning" >}}
Even though the secondary site already has a read-replica of the database, you cannot upgrade
its operating system prior to promotion. If you were to attempt that, then the secondary site may
miss replication of some Git repositories or files, due to the corrupted indexes.
See [Streaming replication](#streaming-replication).
{{< /alert >}}
---
stage: Data Access
group: Database Operations
info: To determine the technical writer assigned to the Stage/Group associated with
this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Upgrading external PostgreSQL databases
breadcrumbs:
- doc
- administration
- postgresql
---
When upgrading your PostgreSQL database engine, it is important to follow all steps
recommended by the PostgreSQL community and your cloud provider. Two
kinds of upgrades exist for PostgreSQL databases:
- Minor version upgrades: These include only bug and security fixes. They are
always backward-compatible with your existing application database model.
The minor version upgrade process consists of replacing the PostgreSQL binaries
and restarting the database service. The data directory remains unchanged.
- Major version upgrades: These change the internal storage format and the database
catalog. As a result, object statistics used by the query optimizer
[are not transferred to the new version](https://www.postgresql.org/docs/16/pgupgrade.html)
and must be rebuilt with `ANALYZE`.
Not following the documented major version upgrade process often results in
poor database performance and high CPU use on the database server.
All major cloud providers support in-place major version upgrades of database
instances, using the `pg_upgrade` utility. However, you must follow the pre- and
post-upgrade steps to reduce the risk of performance degradation or database disruption.
Read carefully the major version upgrade steps of your external database platform:
- [Amazon RDS for PostgreSQL](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_UpgradeDBInstance.PostgreSQL.html#USER_UpgradeDBInstance.PostgreSQL.MajorVersion.Process)
- [Azure Database for PostgreSQL Flexible Server](https://learn.microsoft.com/en-us/azure/postgresql/flexible-server/concepts-major-version-upgrade)
- [Google Cloud SQL for PostgreSQL](https://cloud.google.com/sql/docs/postgres/upgrade-major-db-version-inplace)
- [PostgreSQL community `pg_upgrade`](https://www.postgresql.org/docs/16/pgupgrade.html)
## Always `ANALYZE` your database after a major version upgrade
It is mandatory to run the [`ANALYZE` operation](https://www.postgresql.org/docs/16/sql-analyze.html)
to refresh the `pg_statistic` table after a major version upgrade, because optimizer statistics
[are not transferred by `pg_upgrade`](https://www.postgresql.org/docs/16/pgupgrade.html).
This should be done for all databases on the upgraded PostgreSQL service/instance/cluster.
When you plan your maintenance window, you should include the `ANALYZE` duration
because this operation might significantly degrade GitLab performance.
To speed up the `ANALYZE` operation, use the
[`vacuumdb` utility](https://www.postgresql.org/docs/16/app-vacuumdb.html),
with `--analyze-only --jobs=njobs` to execute the `ANALYZE` command in parallel by
running `njobs` commands simultaneously.
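As a sketch of the parallel invocation, sizing the job count from the CPU count of the database host is a reasonable starting point. The host, user, and database names below are illustrative, and the `vacuumdb` call itself is commented out so the snippet runs without a live database:

```shell
# Choose a parallel job count from the available CPU cores, then run ANALYZE
# across the whole database. Connection details are placeholders; adjust for
# your environment.
njobs=$(nproc)
echo "Running ANALYZE with ${njobs} parallel jobs"
# vacuumdb --host=postgres.example.com --username=gitlab \
#   --analyze-only --jobs="${njobs}" --dbname=gitlabhq_production
```

Too many jobs can saturate I/O on the database host, so consider starting lower and watching load during the maintenance window.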
---
stage: Data Access
group: Database Operations
info: To determine the technical writer assigned to the Stage/Group associated with
this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Configuring PostgreSQL for scaling
description: Configure PostgreSQL for scaling.
breadcrumbs:
- doc
- administration
- postgresql
---
{{< details >}}
- Tier: Free, Premium, Ultimate
- Offering: GitLab Self-Managed
{{< /details >}}
In this section, you are guided through configuring a PostgreSQL database to
be used with GitLab in one of our [reference architectures](../reference_architectures/_index.md).
## Configuration options
Choose one of the following PostgreSQL configuration options:
### Standalone PostgreSQL for Linux package installations
This setup is for when you have installed GitLab by using the
[Linux package](https://about.gitlab.com/install/) (CE or EE),
and want to use the bundled PostgreSQL with only its service enabled.
Read how to [set up a standalone PostgreSQL instance](standalone.md) for Linux package installations.
### Provide your own PostgreSQL instance
This setup is for when you have installed GitLab using the
[Linux package](https://about.gitlab.com/install/) (CE or EE),
or [self-compiled](../../install/self_compiled/_index.md) your installation, but you want to use
your own external PostgreSQL server.
Read how to [set up an external PostgreSQL instance](external.md).
When you set up an external database, specific monitoring and logging settings are required for troubleshooting various database-related issues.
Read more about [monitoring and logging setup for external Databases](external_metrics.md).
### PostgreSQL replication and failover for Linux package installations
{{< details >}}
- Tier: Premium, Ultimate
- Offering: GitLab Self-Managed
{{< /details >}}
This setup is for when you have installed GitLab using the
[Linux **Enterprise Edition** (EE) package](https://about.gitlab.com/install/?version=ee).
All the needed tools, such as PostgreSQL, PgBouncer, and Patroni, are bundled in
the package, so you can use it to set up the whole PostgreSQL infrastructure (primary, replica).
Read how to [set up PostgreSQL replication and failover](replication_and_failover.md) for Linux package installations.
## Related topics
- [Working with the bundled PgBouncer service](pgbouncer.md)
- [Database load balancing](database_load_balancing.md)
- [Moving GitLab databases to a different PostgreSQL instance](moving.md)
- Database guides for GitLab development
- [Upgrade external database](external_upgrade.md)
- [Upgrading operating systems for PostgreSQL](upgrading_os.md)
---
stage: Data Access
group: Database Operations
info: To determine the technical writer assigned to the Stage/Group associated with
  this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Troubleshooting PostgreSQL replication and failover for Linux package installations
breadcrumbs:
- doc
- administration
- postgresql
---
{{< details >}}
- Tier: Premium, Ultimate
- Offering: GitLab Self-Managed
{{< /details >}}
When working with PostgreSQL replication and failover, you might encounter the following issues.
## Consul and PostgreSQL changes not taking effect
Due to the potential impacts, `gitlab-ctl reconfigure` only reloads Consul and PostgreSQL; it does not restart the services. However, not all changes can be activated by reloading.
To restart either service, run `gitlab-ctl restart SERVICE`.
For PostgreSQL, it is usually safe to restart the leader node. Automatic failover defaults to a one-minute timeout; provided the database returns before then, nothing else needs to be done.
On the Consul server nodes, it is important to [restart the Consul service](../consul.md#restart-consul) in a controlled manner.
## PgBouncer error `ERROR: pgbouncer cannot connect to server`
You may get this error when running `gitlab-rake gitlab:db:configure` or you
may see the error in the PgBouncer log file.
```plaintext
PG::ConnectionBad: ERROR: pgbouncer cannot connect to server
```
The problem may be that your PgBouncer node's IP address is not included in the
`trust_auth_cidr_addresses` setting in `/etc/gitlab/gitlab.rb` on the database nodes.
You can confirm that this is the issue by checking the PostgreSQL log on the leader
database node. If you see the following error, then `trust_auth_cidr_addresses`
is the problem.
```plaintext
2018-03-29_13:59:12.11776 FATAL: no pg_hba.conf entry for host "123.123.123.123", user "pgbouncer", database "gitlabhq_production", SSL off
```
To fix the problem, add the IP address to `/etc/gitlab/gitlab.rb`.
```ruby
postgresql['trust_auth_cidr_addresses'] = %w(123.123.123.123/32 <other_cidrs>)
```
[Reconfigure GitLab](../restart_gitlab.md#reconfigure-a-linux-package-installation) for the changes to take effect.
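Before reconfiguring, you can sanity-check that the PgBouncer node's address actually falls inside one of the configured CIDR blocks. A minimal IPv4-only shell sketch, using the example addresses from above; a direct `psql` connection test from the PgBouncer node is the more authoritative check:

```shell
# Check whether an IPv4 address is inside a CIDR block. IPv4 only; a sketch
# for sanity-checking trust_auth_cidr_addresses entries, not a general tool.
ip_to_int() {
  IFS=. read -r a b c d <<EOF
$1
EOF
  echo $(( (a << 24) + (b << 16) + (c << 8) + d ))
}

in_cidr() {
  ip="$1" net="${2%/*}" bits="${2#*/}"
  # Build the network mask, then compare the masked address and network.
  mask=$(( 0xFFFFFFFF << (32 - bits) & 0xFFFFFFFF ))
  [ $(( $(ip_to_int "$ip") & mask )) -eq $(( $(ip_to_int "$net") & mask )) ]
}

in_cidr 123.123.123.123 123.123.123.123/32 && echo "covered"
in_cidr 10.0.1.5 10.0.0.0/24 || echo "not covered"
```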
## PgBouncer nodes don't fail over after Patroni switchover
Due to a [known issue](https://gitlab.com/gitlab-org/omnibus-gitlab/-/issues/8166) that
affects versions of GitLab prior to 16.5.0, the automatic failover of PgBouncer nodes does not
happen after a [Patroni switchover](replication_and_failover.md#manual-failover-procedure-for-patroni). In this
example, GitLab failed to detect a paused database, then attempted to `RESUME` a
not-paused database:
```plaintext
INFO -- : Running: gitlab-ctl pgb-notify --pg-database gitlabhq_production --newhost database7.example.com --user pgbouncer --hostuser gitlab-consul
ERROR -- : STDERR: Error running command: GitlabCtl::Errors::ExecutionError
ERROR -- : STDERR: ERROR: ERROR: database gitlabhq_production is not paused
```
To ensure a [Patroni switchover](replication_and_failover.md#manual-failover-procedure-for-patroni) succeeds,
you must manually restart the PgBouncer service on all PgBouncer nodes with this command:
```shell
gitlab-ctl restart pgbouncer
```
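When you run several PgBouncer nodes, a small loop can help you apply the restart everywhere. The hostnames below are placeholders, and the sketch prints the commands instead of executing them so you can review them first (for example, before piping to `sh`):

```shell
# Hypothetical PgBouncer node list; replace with your own hostnames.
pgbouncer_nodes='pgbouncer-1.example.com pgbouncer-2.example.com pgbouncer-3.example.com'

for host in $pgbouncer_nodes; do
  # Print the command to run on each node; run via ssh to execute for real.
  echo "ssh $host sudo gitlab-ctl restart pgbouncer"
done
```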
## Reinitialize a replica
If a replica cannot start or rejoin the cluster, or when it lags behind and cannot catch up, it might be necessary to reinitialize the replica:
1. [Check the replication status](replication_and_failover.md#check-replication-status) to confirm which server
needs to be reinitialized. For example:
```plaintext
+ Cluster: postgresql-ha (6970678148837286213) ------+---------+--------------+----+-----------+
| Member | Host | Role | State | TL | Lag in MB |
+-------------------------------------+--------------+---------+--------------+----+-----------+
| gitlab-database-1.example.com | 172.18.0.111 | Replica | running | 55 | 0 |
| gitlab-database-2.example.com | 172.18.0.112 | Replica | start failed | | unknown |
| gitlab-database-3.example.com | 172.18.0.113 | Leader | running | 55 | |
+-------------------------------------+--------------+---------+--------------+----+-----------+
```
1. Sign in to the broken server and reinitialize the database and replication. Patroni shuts
down PostgreSQL on that server, removes the data directory, and reinitializes it from scratch:
```shell
sudo gitlab-ctl patroni reinitialize-replica --member gitlab-database-2.example.com
```
This can be run on any Patroni node, but be aware that `sudo gitlab-ctl patroni reinitialize-replica`
without `--member` restarts the server it is run on.
You should run it locally on the broken server to reduce the risk of
unintended data loss.
1. Monitor the logs:
```shell
sudo gitlab-ctl tail patroni
```
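In larger clusters it can help to filter saved `gitlab-ctl patroni members` output for members that need attention. The following sketch parses the sample table from step 1; it is an illustration, not an official tool:

```shell
# Saved `gitlab-ctl patroni members` output (sample data from step 1).
members='| Member                        | Host         | Role    | State        | TL | Lag in MB |
| gitlab-database-1.example.com | 172.18.0.111 | Replica | running      | 55 |         0 |
| gitlab-database-2.example.com | 172.18.0.112 | Replica | start failed |    |   unknown |
| gitlab-database-3.example.com | 172.18.0.113 | Leader  | running      | 55 |           |'

# Print members whose State column is not "running", skipping the header row.
printf '%s\n' "$members" |
  awk -F'|' 'NF > 2 && $2 !~ /Member/ && $5 !~ /running/ { gsub(/ /, "", $2); print $2 }'
```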
## Reset the Patroni state in Consul
{{< alert type="warning" >}}
Resetting the Patroni state in Consul is a potentially destructive process. Make sure that you have a healthy database backup first.
{{< /alert >}}
As a last resort you can reset the Patroni state in Consul completely.
This may be required if your Patroni cluster is in an unknown or bad state and no node can start:
```plaintext
+ Cluster: postgresql-ha (6970678148837286213) ------+---------+---------+----+-----------+
| Member | Host | Role | State | TL | Lag in MB |
+-------------------------------------+--------------+---------+---------+----+-----------+
| gitlab-database-1.example.com | 172.18.0.111 | Replica | stopped | | unknown |
| gitlab-database-2.example.com | 172.18.0.112 | Replica | stopped | | unknown |
| gitlab-database-3.example.com | 172.18.0.113 | Replica | stopped | | unknown |
+-------------------------------------+--------------+---------+---------+----+-----------+
```
Before deleting the Patroni state in Consul,
[try to resolve the `gitlab-ctl` errors](#errors-running-gitlab-ctl) on the Patroni nodes.
This process results in a reinitialized Patroni cluster when
the first Patroni node starts.
To reset the Patroni state in Consul:
1. Take note of the Patroni node that was the leader, or that the application thinks is the current leader,
if the current state shows more than one, or none:
- Look on the PgBouncer nodes in `/var/opt/gitlab/consul/databases.ini`,
which contains the hostname of the current leader.
- Look in the Patroni logs `/var/log/gitlab/patroni/current` (or the older rotated and
compressed logs `/var/log/gitlab/patroni/@40000*`) on all database nodes to see
which server was most recently identified as the leader by the cluster:
```plaintext
INFO: no action. I am a secondary (database1.local) and following a leader (database2.local)
```
1. Stop Patroni on all nodes:
```shell
sudo gitlab-ctl stop patroni
```
1. Reset the state in Consul:
```shell
/opt/gitlab/embedded/bin/consul kv delete -recurse /service/postgresql-ha/
```
1. Start one Patroni node, which initializes the Patroni cluster and elects itself as the leader.
It's highly recommended to start the previous leader (noted in the first step),
so as not to lose existing writes that may not have been replicated because
of the broken cluster state:
```shell
sudo gitlab-ctl start patroni
```
1. Start all other Patroni nodes that join the Patroni cluster as replicas:
```shell
sudo gitlab-ctl start patroni
```
If you are still seeing issues, the next step is restoring the last healthy backup.
## Errors in the Patroni log about a `pg_hba.conf` entry for `127.0.0.1`
The following log entry in the Patroni log indicates the replication is not working
and a configuration change is needed:
```plaintext
FATAL: no pg_hba.conf entry for replication connection from host "127.0.0.1", user "gitlab_replicator"
```
To fix the problem, ensure the loopback interface is included in the CIDR addresses list:
1. Edit `/etc/gitlab/gitlab.rb`:
```ruby
postgresql['trust_auth_cidr_addresses'] = %w(<other_cidrs> 127.0.0.1/32)
```
1. [Reconfigure GitLab](../restart_gitlab.md#reconfigure-a-linux-package-installation) for the changes to take effect.
1. Check that [all the replicas are synchronized](replication_and_failover.md#check-replication-status).
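A minimal sanity check for the setting is to confirm the loopback CIDR is present in the list. This sketch tests a sample value of the list rather than reading your `gitlab.rb`:

```shell
# Sample value of postgresql['trust_auth_cidr_addresses'] flattened to a string.
cidrs='192.168.0.0/24 10.0.0.0/8 127.0.0.1/32'

case " $cidrs " in
  *' 127.0.0.1/32 '*) echo 'loopback is trusted' ;;
  *)                  echo 'add 127.0.0.1/32 to trust_auth_cidr_addresses' ;;
esac
```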
## Error: requested start point is ahead of the Write Ahead Log (WAL) flush position
This error in Patroni logs indicates that the database is not replicating:
```plaintext
FATAL: could not receive data from WAL stream:
ERROR: requested starting point 0/5000000 is ahead of the WAL flush position of this server 0/4000388
```
This example error is from a replica that was initially misconfigured, and had never replicated.
Fix it [by reinitializing the replica](#reinitialize-a-replica).
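As an aside, the two positions in the error are write-ahead log sequence numbers (LSNs) in `high/low` hexadecimal form, and the comparison the server is making can be reproduced by hand. The values below are taken from the error above:

```shell
# Convert an LSN like "0/5000000" to a single integer: high * 2^32 + low.
lsn_to_int() {
  printf '%d\n' "$(( 0x${1%%/*} * 4294967296 + 0x${1##*/} ))"
}

requested=$(lsn_to_int 0/5000000)   # what the replica asked for
flushed=$(lsn_to_int 0/4000388)     # what the server has actually flushed

if [ "$requested" -gt "$flushed" ]; then
  echo 'requested start point is ahead of the flush position'
fi
```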
## Patroni fails to start with `MemoryError`
Patroni may fail to start, logging an error and stack trace:
```plaintext
MemoryError
Traceback (most recent call last):
File "/opt/gitlab/embedded/bin/patroni", line 8, in <module>
sys.exit(main())
[..]
File "/opt/gitlab/embedded/lib/python3.7/ctypes/__init__.py", line 273, in _reset_cache
CFUNCTYPE(c_int)(lambda: None)
```
If the stack trace ends with `CFUNCTYPE(c_int)(lambda: None)`, this code triggers `MemoryError`
when the Linux server has been hardened for security.
The code causes Python to write temporary executable files, and it fails with `MemoryError` if it cannot find a file system in which to do this, for example when `noexec` is set on the `/tmp` file system ([read more in the issue](https://gitlab.com/gitlab-org/omnibus-gitlab/-/issues/6184)).
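One way to confirm the hardening condition is to look for `noexec` on the mounts Python may use for temporary files. The check below runs against a sample `/proc/mounts` line so it is reproducible; on a real server you would read `/proc/mounts` itself:

```shell
# Sample /proc/mounts entry for a hardened /tmp.
mounts='tmpfs /tmp tmpfs rw,nosuid,nodev,noexec,relatime 0 0'

# Field 2 is the mount point, field 4 the mount options.
if printf '%s\n' "$mounts" | awk '$2 == "/tmp" && $4 ~ /noexec/ { found = 1 } END { exit !found }'; then
  echo '/tmp is mounted noexec: Patroni may fail with MemoryError'
fi
```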
## Errors running `gitlab-ctl`
Patroni nodes can get into a state where `gitlab-ctl` commands fail
and `gitlab-ctl reconfigure` cannot fix the node.
If this coincides with a version upgrade of PostgreSQL, [follow a different procedure](#postgresql-major-version-upgrade-fails-on-a-patroni-replica).
One common symptom is that `gitlab-ctl` cannot determine
information it needs about the installation if the database server is failing to start:
```plaintext
Malformed configuration JSON file found at /opt/gitlab/embedded/nodes/<HOSTNAME>.json.
This usually happens when your last run of `gitlab-ctl reconfigure` didn't complete successfully.
```
```plaintext
Error while reinitializing replica on the current node: Attributes not found in
/opt/gitlab/embedded/nodes/<HOSTNAME>.json, has reconfigure been run yet?
```
Similarly, the nodes file (`/opt/gitlab/embedded/nodes/<HOSTNAME>.json`) should contain a lot of information,
but might get created with only:
```json
{
"name": "<HOSTNAME>"
}
```
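A quick, illustrative way to spot this truncated state is to count the top-level keys: a healthy nodes file has many attributes, while the broken one has only `name`. The sketch below works on an inline copy of the broken file rather than reading it from disk:

```shell
# Inline sample of a truncated nodes file.
nodes_json='{ "name": "db1.example.com" }'

# Count JSON keys other than "name".
others=$(printf '%s' "$nodes_json" | grep -o '"[a-z_]*":' | grep -v '"name":' | wc -l)
if [ "$others" -eq 0 ]; then
  echo 'nodes file only contains "name": reconfigure did not complete'
fi
```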
The following process fixes this by reinitializing the replica;
the current state of PostgreSQL on this node is discarded:
1. Shut down the Patroni and (if present) PostgreSQL services:
```shell
sudo gitlab-ctl status
sudo gitlab-ctl stop patroni
sudo gitlab-ctl stop postgresql
```
1. Remove `/var/opt/gitlab/postgresql/data` in case its state prevents
PostgreSQL from starting:
```shell
cd /var/opt/gitlab/postgresql
sudo rm -rf data
```
{{< alert type="warning" >}}
Take care with this step to avoid data loss.
This step can be also achieved by renaming `data/`:
make sure there's enough free disk for a new copy of the primary database,
and remove the extra directory when the replica is fixed.
{{< /alert >}}
1. With PostgreSQL not running, the nodes file now gets created successfully:
```shell
sudo gitlab-ctl reconfigure
```
1. Start Patroni:
```shell
sudo gitlab-ctl start patroni
```
1. Monitor the logs and check the cluster state:
```shell
sudo gitlab-ctl tail patroni
sudo gitlab-ctl patroni members
```
1. Run `reconfigure` again:
```shell
sudo gitlab-ctl reconfigure
```
1. Reinitialize the replica if `gitlab-ctl patroni members` indicates this is needed:
```shell
sudo gitlab-ctl patroni reinitialize-replica
```
If this procedure doesn't work and the cluster is unable to elect a leader,
[there is another fix](#reset-the-patroni-state-in-consul), which should only be
used as a last resort.
## PostgreSQL major version upgrade fails on a Patroni replica
A Patroni replica can get stuck in a loop during `gitlab-ctl pg-upgrade`, and
the upgrade fails.
An example set of symptoms is as follows:
1. A `postgresql` service is defined,
which shouldn't usually be present on a Patroni node. It is present because
`gitlab-ctl pg-upgrade` adds it to create a new empty database:
```plaintext
run: patroni: (pid 1972) 1919s; run: log: (pid 1971) 1919s
down: postgresql: 1s, normally up, want up; run: log: (pid 1973) 1919s
```
1. PostgreSQL generates `PANIC` log entries in
`/var/log/gitlab/postgresql/current` as Patroni is removing
`/var/opt/gitlab/postgresql/data` as part of reinitializing the replica:
```plaintext
DETAIL: Could not open file "pg_xact/0000": No such file or directory.
WARNING: terminating connection because of crash of another server process
LOG: all server processes terminated; reinitializing
PANIC: could not open file "global/pg_control": No such file or directory
```
1. In `/var/log/gitlab/patroni/current`, Patroni logs the following.
The local PostgreSQL version is different from the cluster leader:
```plaintext
INFO: trying to bootstrap from leader 'HOSTNAME'
pg_basebackup: incompatible server version 12.6
pg_basebackup: removing data directory "/var/opt/gitlab/postgresql/data"
ERROR: Error when fetching backup: pg_basebackup exited with code=1
```
This workaround applies when the Patroni cluster is in the following state:
- The [leader has been successfully upgraded to the new major version](replication_and_failover.md#upgrading-postgresql-major-version-in-a-patroni-cluster).
- The step to upgrade PostgreSQL on replicas is failing.
This workaround completes the PostgreSQL upgrade on a Patroni replica
by setting the node to use the new PostgreSQL version, and then reinitializing
it as a replica in the new cluster that was created
when the leader was upgraded:
1. Check the cluster status on all nodes to confirm which is the leader
and what state the replicas are in:
```shell
sudo gitlab-ctl patroni members
```
1. Replica: check which version of PostgreSQL is active:
```shell
sudo ls -al /opt/gitlab/embedded/bin | grep postgres
```
1. Replica: ensure the nodes file is correct and `gitlab-ctl` can run. This resolves
the [errors running `gitlab-ctl`](#errors-running-gitlab-ctl) issue if the replica
has any of those errors as well:
```shell
sudo gitlab-ctl stop patroni
sudo gitlab-ctl reconfigure
```
1. Replica: relink the PostgreSQL binaries to the required version
to fix the `incompatible server version` error:
1. Edit `/etc/gitlab/gitlab.rb` and specify the required version:
```ruby
postgresql['version'] = 13
```
1. Reconfigure GitLab:
```shell
sudo gitlab-ctl reconfigure
```
1. Check the binaries are relinked. The binaries distributed for
PostgreSQL vary between major releases; it's typical to
have a small number of incorrect symbolic links:
```shell
sudo ls -al /opt/gitlab/embedded/bin | grep postgres
```
1. Replica: ensure PostgreSQL is fully reinitialized for the specified version:
```shell
cd /var/opt/gitlab/postgresql
sudo rm -rf data
sudo gitlab-ctl reconfigure
```
1. Replica: optionally monitor the database in two additional terminal sessions:
- Disk use increases as `pg_basebackup` runs. Track progress of the
replica initialization with:
```shell
cd /var/opt/gitlab/postgresql
watch du -sh data
```
- Monitor the process in the logs:
```shell
sudo gitlab-ctl tail patroni
```
1. Replica: Start Patroni to reinitialize the replica:
```shell
sudo gitlab-ctl start patroni
```
1. Replica: After it completes, remove the hardcoded version from `/etc/gitlab/gitlab.rb`:
1. Edit `/etc/gitlab/gitlab.rb` and remove `postgresql['version']`.
1. Reconfigure GitLab:
```shell
sudo gitlab-ctl reconfigure
```
1. Check the correct binaries are linked:
```shell
sudo ls -al /opt/gitlab/embedded/bin | grep postgres
```
1. Check the cluster status on all nodes:
```shell
sudo gitlab-ctl patroni members
```
Repeat this procedure on the other replica if required.
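Reading the major version out of the `ls -al` output can also be scripted. This sketch parses a sample symlink line; the path layout shown is typical for the Linux package, but treat the exact paths as an assumption:

```shell
# Sample symlink line from `ls -al /opt/gitlab/embedded/bin | grep postgres`.
link='lrwxrwxrwx 1 root root 49 Jan  1 00:00 postgres -> /opt/gitlab/embedded/postgresql/13.6/bin/postgres'

# Extract the major version from the symlink target.
major=$(printf '%s\n' "$link" | sed -n 's#.*/postgresql/\([0-9][0-9]*\)\..*#\1#p')
echo "linked PostgreSQL major version: $major"
```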
## PostgreSQL replicas stuck in loop while being created
If PostgreSQL replicas appear to migrate but then restart in a loop, check the
`/opt/gitlab-data/postgresql/` folder permissions on your replicas and primary server.
You can also see this error message in the logs:
`could not get COPY data stream: ERROR: could not open file "<file>" Permission denied`.
## Issues with other components
If you're running into an issue with a component not outlined here, check the troubleshooting section of that component's documentation:
- [Consul](../consul.md#troubleshooting-consul)
- [PostgreSQL](https://docs.gitlab.com/omnibus/settings/database.html#troubleshooting)
---
stage: Data Access
group: Database Operations
info: To determine the technical writer assigned to the Stage/Group associated with
this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Troubleshooting PostgreSQL replication and failover for Linux package installations
breadcrumbs:
- doc
- administration
- postgresql
---
---
stage: Data Access
group: Database Operations
info: To determine the technical writer assigned to the Stage/Group associated with
  this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: PostgreSQL replication and failover for Linux package installations
breadcrumbs:
- doc
- administration
- postgresql
---
{{< details >}}
- Tier: Premium, Ultimate
- Offering: GitLab Self-Managed
{{< /details >}}
If you're a Free user of GitLab Self-Managed, consider using a cloud-hosted solution.
This document doesn't cover self-compiled installations.
If a setup with replication and failover isn't what you were looking for, see
the [database configuration document](https://docs.gitlab.com/omnibus/settings/database.html)
for the Linux packages.
It's recommended to read this document fully before attempting to configure PostgreSQL with
replication and failover for GitLab.
## Operating system upgrades
If you are failing over to a system with a different operating system,
read the [documentation on upgrading operating systems for PostgreSQL](upgrading_os.md).
Failing to account for local changes with operating system upgrades may result in data corruption.
## Architecture
The Linux package-recommended configuration for a PostgreSQL cluster with
replication and failover requires:
- A minimum of three PostgreSQL nodes.
- A minimum of three Consul server nodes.
- A minimum of three PgBouncer nodes that track and handle primary database reads and writes.
- An internal load balancer (TCP) to balance requests between the PgBouncer nodes.
- [Database Load Balancing](database_load_balancing.md) enabled.
- A local PgBouncer service configured on each PostgreSQL node. This is separate from the main PgBouncer cluster that tracks the primary.
```plantuml
@startuml
card "**Internal Load Balancer**" as ilb #9370DB
skinparam linetype ortho
together {
collections "**GitLab Rails** x3" as gitlab #32CD32
collections "**Sidekiq** x4" as sidekiq #ff8dd1
}
collections "**Consul** x3" as consul #e76a9b
card "Database" as database {
collections "**PGBouncer x3**\n//Consul//" as pgbouncer #4EA7FF
card "**PostgreSQL** //Primary//\n//Patroni//\n//PgBouncer//\n//Consul//" as postgres_primary #4EA7FF
collections "**PostgreSQL** //Secondary// **x2**\n//Patroni//\n//PgBouncer//\n//Consul//" as postgres_secondary #4EA7FF
pgbouncer -[#4EA7FF]-> postgres_primary
postgres_primary .[#4EA7FF]r-> postgres_secondary
}
gitlab -[#32CD32]-> ilb
gitlab -[hidden]-> pgbouncer
gitlab .[#32CD32,norank]-> postgres_primary
gitlab .[#32CD32,norank]-> postgres_secondary
sidekiq -[#ff8dd1]-> ilb
sidekiq -[hidden]-> pgbouncer
sidekiq .[#ff8dd1,norank]-> postgres_primary
sidekiq .[#ff8dd1,norank]-> postgres_secondary
ilb -[#9370DB]-> pgbouncer
consul -[#e76a9b]r-> pgbouncer
consul .[#e76a9b,norank]r-> postgres_primary
consul .[#e76a9b,norank]r-> postgres_secondary
@enduml
```
You also need to take into consideration the underlying network topology, making
sure you have redundant connectivity between all Database and GitLab instances
to avoid the network becoming a single point of failure.
### Database node
Each database node runs four services:
- `PostgreSQL`: The database itself.
- `Patroni`: Communicates with other Patroni services in the cluster and handles failover when issues occur with the leader server. The failover procedure consists of:
- Selecting a new leader for the cluster.
- Promoting the new node to leader.
- Instructing remaining servers to follow the new leader node.
- `PgBouncer`: A local pooler for the node. Used for _read_ queries as part of [Database Load Balancing](database_load_balancing.md).
- `Consul` agent: Communicates with the Consul cluster, which stores the current Patroni state. The agent monitors the status of each node in the database cluster and tracks its health in a service definition on the Consul cluster.
### Consul server node
The Consul server node runs the Consul server service. These nodes must have reached quorum and elected a leader before the Patroni cluster bootstraps; otherwise, database nodes wait until a Consul leader is elected.
### PgBouncer node
Each PgBouncer node runs two services:
- `PgBouncer`: The database connection pooler itself.
- `Consul` agent: Watches the status of the PostgreSQL service definition on the Consul cluster. If that status changes, Consul runs a script which updates the PgBouncer configuration to point to the new PostgreSQL leader node and reloads the PgBouncer service.
### Connection flow
Each service in the package comes with a set of [default ports](../package_information/defaults.md#ports). You may need to make specific firewall rules for the connections listed below.
There are several connection flows in this setup:
- [Primary](#primary)
- [Database Load Balancing](#database-load-balancing)
- [Replication](#replication)
#### Primary
- Application servers connect to either PgBouncer directly via its [default port](../package_information/defaults.md) or via a configured Internal Load Balancer (TCP) that serves multiple PgBouncers.
- PgBouncer connects to the primary database server's [PostgreSQL default port](../package_information/defaults.md).
#### Database Load Balancing
For read queries against data that hasn't been recently changed and is up to date on all database nodes:
- Application servers connect to the local PgBouncer service via its [default port](../package_information/defaults.md) on each database node in a round-robin approach.
- Local PgBouncer connects to the local database server's [PostgreSQL default port](../package_information/defaults.md).
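As an illustration of the round-robin approach described above, the following sketch cycles read connections across the local PgBouncer endpoints on each database node (the host addresses are hypothetical):

```ruby
# Illustration only: distributing read connections across the local
# PgBouncer endpoints on each database node, round-robin style.
hosts = %w(10.6.0.31 10.6.0.32 10.6.0.33)
picks = (0...6).map { |i| hosts[i % hosts.size] }
# Each host is chosen in turn, then the cycle repeats.
```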
#### Replication
- Patroni actively manages the running PostgreSQL processes and configuration.
- PostgreSQL secondaries connect to the primary database server's [PostgreSQL default port](../package_information/defaults.md).
- Consul servers and agents connect to each other's [Consul default ports](../package_information/defaults.md).
## Setting it up
### Required information
Before proceeding with configuration, you need to collect all the necessary
information.
#### Network information
PostgreSQL doesn't listen on any network interface by default. It needs to know
which IP address to listen on to be accessible to other services. Similarly,
PostgreSQL access is controlled based on the network source.
This is why you need:
- The IP address of each node's network interface. This can be set to `0.0.0.0` to
listen on all interfaces. It cannot be set to the loopback address `127.0.0.1`.
- Network Address. This can be in subnet (that is, `192.168.0.0/255.255.255.0`)
or Classless Inter-Domain Routing (CIDR) (`192.168.0.0/24`) form.
#### Consul information
When using the default setup, the minimum configuration requires:
- `CONSUL_USERNAME`. The default user for Linux package installations is `gitlab-consul`
- `CONSUL_DATABASE_PASSWORD`. Password for the database user.
- `CONSUL_PASSWORD_HASH`. This is a hash generated from the Consul username/password pair. It can be generated with:
```shell
sudo gitlab-ctl pg-password-md5 CONSUL_USERNAME
```
- `CONSUL_SERVER_NODES`. The IP addresses or DNS records of the Consul server nodes.
A few notes on the service itself:
- The service runs under a system account, by default `gitlab-consul`.
- If you are using a different username, you have to specify it through the `CONSUL_USERNAME` variable.
- Passwords are stored in the following locations:
- `/etc/gitlab/gitlab.rb`: hashed
- `/var/opt/gitlab/pgbouncer/pg_auth`: hashed
- `/var/opt/gitlab/consul/.pgpass`: plaintext
#### PostgreSQL information
When configuring PostgreSQL, we do the following:
- Set `max_replication_slots` to double the number of database nodes. Patroni uses one extra slot per node when initiating the replication.
- Set `max_wal_senders` to one more than the allocated number of replication slots in the cluster. This prevents replication from using up all of the available database connections.
In this document we are assuming 3 database nodes, which makes this configuration:
```ruby
patroni['postgresql']['max_replication_slots'] = 6
patroni['postgresql']['max_wal_senders'] = 7
```
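The sizing rule above can be expressed as a quick calculation, using this document's assumption of three database nodes:

```ruby
# Sizing sketch for the replication settings described above.
node_count = 3

max_replication_slots = node_count * 2        # one extra slot per node for replication setup
max_wal_senders = max_replication_slots + 1   # leave headroom so replication can't exhaust connections
```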
As previously mentioned, prepare the network subnets that need permission
to authenticate with the database.
You also need to have the IP addresses or DNS records of Consul
server nodes on hand.
You need the following password information for the application's database user:
- `POSTGRESQL_USERNAME`. The default user for Linux package installations is `gitlab`
- `POSTGRESQL_USER_PASSWORD`. The password for the database user
- `POSTGRESQL_PASSWORD_HASH`. This is a hash generated from the username/password pair.
It can be generated with:
```shell
sudo gitlab-ctl pg-password-md5 POSTGRESQL_USERNAME
```
#### Patroni information
You need the following password information for the Patroni API:
- `PATRONI_API_USERNAME`. A username for basic auth to the API
- `PATRONI_API_PASSWORD`. A password for basic auth to the API
#### PgBouncer information
When using a default setup, the minimum configuration requires:
- `PGBOUNCER_USERNAME`. The default user for Linux package installations is `pgbouncer`
- `PGBOUNCER_PASSWORD`. The password for the PgBouncer service.
- `PGBOUNCER_PASSWORD_HASH`. This is a hash generated from the PgBouncer username/password pair. It can be generated with:
```shell
sudo gitlab-ctl pg-password-md5 PGBOUNCER_USERNAME
```
- `PGBOUNCER_NODE`. The IP address or FQDN of the node running PgBouncer.
A few things to remember about the service itself:
- The service runs as the same system account as the database. In the package, this is `gitlab-psql` by default.
- If you use a non-default user account for PgBouncer service (by default `pgbouncer`), you need to specify this username.
- Passwords are stored in the following locations:
- `/etc/gitlab/gitlab.rb`: hashed, and in plain text
- `/var/opt/gitlab/pgbouncer/pg_auth`: hashed
### Installing the Linux package
First, make sure to [download and install](https://about.gitlab.com/install/) the Linux package on each node.
Make sure you install the necessary dependencies from step 1 and add the GitLab package repository from step 2.
When installing the GitLab package, do not supply the `EXTERNAL_URL` value.
### Configuring the Database nodes
1. Make sure to [configure the Consul nodes](../consul.md).
1. Make sure you collect [`CONSUL_SERVER_NODES`](#consul-information), [`PGBOUNCER_PASSWORD_HASH`](#pgbouncer-information), [`POSTGRESQL_PASSWORD_HASH`](#postgresql-information), the [number of db nodes](#postgresql-information), and the [network address](#network-information) before executing the next step.
#### Configuring Patroni cluster
You must enable Patroni explicitly to be able to use it (with `patroni['enable'] = true`).
Any PostgreSQL configuration item that controls replication, for example `wal_level`, `max_wal_senders`, or others, is strictly
controlled by Patroni. These settings override the original values that you set with the `postgresql[...]` configuration key, and
are therefore placed separately under `patroni['postgresql'][...]`. For example, `max_wal_senders` is set to `5` by default; to change it,
you must set the `patroni['postgresql']['max_wal_senders']` configuration key. This behavior is limited to replication:
Patroni honors any other PostgreSQL configuration made with the `postgresql[...]` configuration key.
Here is an example:
```ruby
# Disable all components except Patroni, PgBouncer and Consul
roles(['patroni_role', 'pgbouncer_role'])
# PostgreSQL configuration
postgresql['listen_address'] = '0.0.0.0'
# Disable automatic database migrations
gitlab_rails['auto_migrate'] = false
# Configure the Consul agent
consul['services'] = %w(postgresql)
# START user configuration
# Set the real values as explained in Required Information section
#
# Replace PGBOUNCER_PASSWORD_HASH with a generated md5 value
postgresql['pgbouncer_user_password'] = 'PGBOUNCER_PASSWORD_HASH'
# Replace POSTGRESQL_REPLICATION_PASSWORD_HASH with a generated md5 value
postgresql['sql_replication_password'] = 'POSTGRESQL_REPLICATION_PASSWORD_HASH'
# Replace POSTGRESQL_PASSWORD_HASH with a generated md5 value
postgresql['sql_user_password'] = 'POSTGRESQL_PASSWORD_HASH'
# Replace PATRONI_API_USERNAME with a username for Patroni Rest API calls (use the same username in all nodes)
patroni['username'] = 'PATRONI_API_USERNAME'
# Replace PATRONI_API_PASSWORD with a password for Patroni Rest API calls (use the same password in all nodes)
patroni['password'] = 'PATRONI_API_PASSWORD'
# Sets `max_replication_slots` to double the number of database nodes.
# Patroni uses one extra slot per node when initiating the replication.
patroni['postgresql']['max_replication_slots'] = X
# Set `max_wal_senders` to one more than the number of replication slots in the cluster.
# This is used to prevent replication from using up all of the
# available database connections.
patroni['postgresql']['max_wal_senders'] = X+1
# Replace XXX.XXX.XXX.XXX/YY with Network Addresses for your other patroni nodes
patroni['allowlist'] = %w(XXX.XXX.XXX.XXX/YY 127.0.0.1/32)
# Replace XXX.XXX.XXX.XXX/YY with Network Address
postgresql['trust_auth_cidr_addresses'] = %w(XXX.XXX.XXX.XXX/YY 127.0.0.1/32)
# Local PgBouncer service for Database Load Balancing
pgbouncer['databases'] = {
gitlabhq_production: {
host: "127.0.0.1",
user: "PGBOUNCER_USERNAME",
password: 'PGBOUNCER_PASSWORD_HASH'
}
}
# Replace placeholders:
#
# Y.Y.Y.Y consul1.gitlab.example.com Z.Z.Z.Z
# with the addresses gathered for CONSUL_SERVER_NODES
consul['configuration'] = {
retry_join: %w(Y.Y.Y.Y consul1.gitlab.example.com Z.Z.Z.Z)
}
#
# END user configuration
```
All database nodes use the same configuration. The leader node is not determined in configuration,
and there is no additional or different configuration for either leader or replica nodes.
After the configuration of a node is complete, you must [reconfigure GitLab](../restart_gitlab.md#reconfigure-a-linux-package-installation)
on each node for the changes to take effect.
Generally, when the Consul cluster is ready, the first node that [reconfigures](../restart_gitlab.md#reconfigure-a-linux-package-installation)
becomes the leader. You do not need to sequence the nodes' reconfiguration: you can run it in parallel or in any order.
If you choose an arbitrary order, there is no predetermined leader.
#### Enable Monitoring
If you enable Monitoring, it must be enabled on all database servers.
1. Create/edit `/etc/gitlab/gitlab.rb` and add the following configuration:
```ruby
# Enable service discovery for Prometheus
consul['monitoring_service_discovery'] = true
# Set the network addresses that the exporters must listen on
node_exporter['listen_address'] = '0.0.0.0:9100'
postgres_exporter['listen_address'] = '0.0.0.0:9187'
```
1. Run `sudo gitlab-ctl reconfigure` to compile the configuration.
#### Enable TLS support for the Patroni API
By default, the Patroni [REST API](https://patroni.readthedocs.io/en/latest/rest_api.html#rest-api) is served over HTTP.
You have the option to enable TLS and use HTTPS over the same [port](../package_information/defaults.md).
To enable TLS, you need PEM-formatted certificate and private key files. Both files must be readable by the PostgreSQL user (`gitlab-psql` by default, or the one set by `postgresql['username']`):
```ruby
patroni['tls_certificate_file'] = '/path/to/server/certificate.pem'
patroni['tls_key_file'] = '/path/to/server/key.pem'
```
If the server's private key is encrypted, specify the password to decrypt it:
```ruby
patroni['tls_key_password'] = 'private-key-password' # This is the plain-text password.
```
If you are using a self-signed certificate or an internal CA, you need to either disable TLS verification or pass the certificate of the
internal CA; otherwise, you may run into an unexpected error when using the `gitlab-ctl patroni ...` commands. The Linux package ensures that Patroni API
clients honor this configuration.
TLS certificate verification is enabled by default. To disable it:
```ruby
patroni['tls_verify'] = false
```
Alternatively, you can pass a PEM-formatted certificate of the internal CA. Again, the file must be readable by the PostgreSQL user:
```ruby
patroni['tls_ca_file'] = '/path/to/ca.pem'
```
When TLS is enabled, mutual authentication of the API server and client is possible for all endpoints, the extent of which depends on
the `patroni['tls_client_mode']` attribute:
- `none` (default): The API does not check for any client certificates.
- `optional`: Client certificates are required for all [unsafe](https://patroni.readthedocs.io/en/latest/security.html#protecting-the-rest-api) API calls.
- `required`: Client certificates are required for all API calls.
The client certificates are verified against the CA certificate that is specified with the `patroni['tls_ca_file']` attribute. Therefore,
this attribute is required for mutual TLS authentication. You also need to specify PEM-formatted client certificate and private key files.
Both files must be readable by the PostgreSQL user:
```ruby
patroni['tls_client_mode'] = 'required'
patroni['tls_ca_file'] = '/path/to/ca.pem'
patroni['tls_client_certificate_file'] = '/path/to/client/certificate.pem'
patroni['tls_client_key_file'] = '/path/to/client/key.pem'
```
You can use different certificates and keys for both API server and client on different Patroni nodes as long as they can be verified.
However, the CA certificate (`patroni['tls_ca_file']`), TLS certificate verification (`patroni['tls_verify']`), and client TLS
authentication mode (`patroni['tls_client_mode']`), must each have the same value on all nodes.
### Configure PgBouncer nodes
1. Make sure you collect [`CONSUL_SERVER_NODES`](#consul-information), [`CONSUL_PASSWORD_HASH`](#consul-information), and [`PGBOUNCER_PASSWORD_HASH`](#pgbouncer-information) before executing the next step.
1. On each node, edit the `/etc/gitlab/gitlab.rb` configuration file and replace values noted in the `# START user configuration` section as below:
```ruby
# Disable all components except PgBouncer and Consul agent
roles(['pgbouncer_role'])
# Configure PgBouncer
pgbouncer['admin_users'] = %w(pgbouncer gitlab-consul)
# Configure Consul agent
consul['watchers'] = %w(postgresql)
# START user configuration
# Set the real values as explained in Required Information section
# Replace CONSUL_PASSWORD_HASH with a generated md5 value
# Replace PGBOUNCER_PASSWORD_HASH with a generated md5 value
pgbouncer['users'] = {
'gitlab-consul': {
password: 'CONSUL_PASSWORD_HASH'
},
'pgbouncer': {
password: 'PGBOUNCER_PASSWORD_HASH'
}
}
# Replace placeholders:
#
# Y.Y.Y.Y consul1.gitlab.example.com Z.Z.Z.Z
# with the addresses gathered for CONSUL_SERVER_NODES
consul['configuration'] = {
retry_join: %w(Y.Y.Y.Y consul1.gitlab.example.com Z.Z.Z.Z)
}
#
# END user configuration
```
1. Run `gitlab-ctl reconfigure`
1. Create a `.pgpass` file so Consul is able to
reload PgBouncer. Enter the `PGBOUNCER_PASSWORD` twice when asked:
```shell
gitlab-ctl write-pgpass --host 127.0.0.1 --database pgbouncer --user pgbouncer --hostuser gitlab-consul
```
1. [Enable monitoring](pgbouncer.md#enable-monitoring)
#### PgBouncer Checkpoint
1. Ensure each node is talking to the current leader node:
```shell
gitlab-ctl pgb-console # Supply PGBOUNCER_PASSWORD when prompted
```
If there is an error `psql: ERROR: Auth failed` after typing in the
password, ensure you have previously generated the MD5 password hashes with the correct
format. The correct format is to concatenate the password and the username:
`PASSWORDUSERNAME`. For example, `Sup3rS3cr3tpgbouncer` would be the text
needed to generate an MD5 password hash for the `pgbouncer` user.
1. After the console prompt has become available, run the following queries:
```shell
show databases ; show clients ;
```
The output should be similar to the following:
```plaintext
name | host | port | database | force_user | pool_size | reserve_pool | pool_mode | max_connections | current_connections
---------------------+-------------+------+---------------------+------------+-----------+--------------+-----------+-----------------+---------------------
gitlabhq_production | MASTER_HOST | 5432 | gitlabhq_production | | 20 | 0 | | 0 | 0
pgbouncer | | 6432 | pgbouncer | pgbouncer | 2 | 0 | statement | 0 | 0
(2 rows)
type | user | database | state | addr | port | local_addr | local_port | connect_time | request_time | ptr | link | remote_pid | tls
------+-----------+---------------------+---------+----------------+-------+------------+------------+---------------------+---------------------+-----------+------+------------+-----
C | pgbouncer | pgbouncer | active | 127.0.0.1 | 56846 | 127.0.0.1 | 6432 | 2017-08-21 18:09:59 | 2017-08-21 18:10:48 | 0x22b3880 | | 0 |
(2 rows)
```
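The MD5 hash format described in step 1 follows the standard PostgreSQL `md5` scheme: the literal prefix `md5` followed by the MD5 digest of the password concatenated with the username. A minimal sketch, assuming that scheme (the password value is illustrative):

```ruby
require 'digest'

# PostgreSQL-style MD5 hash: the literal prefix "md5" followed by
# md5(password + username). The password below is only an example.
def pg_md5_hash(password, username)
  'md5' + Digest::MD5.hexdigest("#{password}#{username}")
end

hash = pg_md5_hash('Sup3rS3cr3t', 'pgbouncer')
# The result is 35 characters: "md5" plus 32 hex digits.
```

`gitlab-ctl pg-password-md5` is the supported way to produce these hashes; the sketch only illustrates the expected shape of the value.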
#### Configure the internal load balancer
If you're running more than one PgBouncer node as recommended, you must set up a TCP internal load balancer to distribute requests across them. This can be accomplished with any reputable TCP load balancer.
As an example, here's how you could do it with [HAProxy](https://www.haproxy.org/):
```plaintext
global
log /dev/log local0
log localhost local1 notice
log stdout format raw local0
defaults
log global
default-server inter 10s fall 3 rise 2
balance leastconn
frontend internal-pgbouncer-tcp-in
bind *:6432
mode tcp
option tcplog
default_backend pgbouncer
backend pgbouncer
mode tcp
option tcp-check
server pgbouncer1 <ip>:6432 check
server pgbouncer2 <ip>:6432 check
server pgbouncer3 <ip>:6432 check
```
Refer to your preferred Load Balancer's documentation for further guidance.
### Configuring the Application nodes
Application nodes run the `gitlab-rails` service. You may have other
attributes set, but the following must be set.
1. Edit `/etc/gitlab/gitlab.rb`:
```ruby
# Disable PostgreSQL on the application node
postgresql['enable'] = false
   gitlab_rails['db_host'] = 'PGBOUNCER_NODE' # or the 'INTERNAL_LOAD_BALANCER' address
gitlab_rails['db_port'] = 6432
gitlab_rails['db_password'] = 'POSTGRESQL_USER_PASSWORD'
gitlab_rails['auto_migrate'] = false
gitlab_rails['db_load_balancing'] = { 'hosts' => ['POSTGRESQL_NODE_1', 'POSTGRESQL_NODE_2', 'POSTGRESQL_NODE_3'] }
```
1. [Reconfigure GitLab](../restart_gitlab.md#reconfigure-a-linux-package-installation) for the changes to take effect.
#### Application node post-configuration
Ensure that all migrations ran:
```shell
gitlab-rake gitlab:db:configure
```
{{< alert type="note" >}}
If you encounter a `rake aborted!` error stating that PgBouncer is failing to connect to PostgreSQL it may be that your PgBouncer node's IP address is missing from
PostgreSQL's `trust_auth_cidr_addresses` in `gitlab.rb` on your database nodes. See
[PgBouncer error `ERROR: pgbouncer cannot connect to server`](replication_and_failover_troubleshooting.md#pgbouncer-error-error-pgbouncer-cannot-connect-to-server) before you proceed.
{{< /alert >}}
### Backups
Do not back up or restore GitLab through a PgBouncer connection: this causes a GitLab outage.
[Read more about this and how to reconfigure backups](../backup_restore/backup_gitlab.md#back-up-and-restore-for-installations-using-pgbouncer).
### Ensure GitLab is running
At this point, your GitLab instance should be up and running. Verify you're able
to sign in, and create issues and merge requests. For more information, see [Troubleshooting replication and failover](replication_and_failover_troubleshooting.md).
## Example configuration
This section describes several fully expanded example configurations.
### Example recommended setup
This example uses three Consul servers, three PgBouncer servers (with an
associated internal load balancer), three PostgreSQL servers, and one
application node.
In this setup, all servers share the same `10.6.0.0/16` private network range.
The servers communicate freely over these addresses.
While you can use a different networking setup, it's recommended to ensure that it allows
for synchronous replication to occur across the cluster.
As a general rule, a latency of less than 2 ms ensures that replication operations are performant.
GitLab [reference architectures](../reference_architectures/_index.md) are sized to
assume that application database queries are shared by all three nodes.
Communication latency higher than 2 ms can lead to database locks and
impact the replica's ability to serve read-only queries in a timely fashion.
- `10.6.0.11`: Consul 1
- `10.6.0.12`: Consul 2
- `10.6.0.13`: Consul 3
- `10.6.0.20`: Internal load balancer
- `10.6.0.21`: PgBouncer 1
- `10.6.0.22`: PgBouncer 2
- `10.6.0.23`: PgBouncer 3
- `10.6.0.31`: PostgreSQL 1
- `10.6.0.32`: PostgreSQL 2
- `10.6.0.33`: PostgreSQL 3
- `10.6.0.41`: GitLab application
All passwords are set to `toomanysecrets`. Do not use this password or its derived hashes. The `external_url` for GitLab is `http://gitlab.example.com`.
After the initial configuration, if a failover occurs, the PostgreSQL leader node changes to one of the available secondaries until it is failed back.
#### Example recommended setup for Consul servers
On each server edit `/etc/gitlab/gitlab.rb`:
```ruby
# Disable all components except Consul
roles(['consul_role'])
consul['configuration'] = {
server: true,
retry_join: %w(10.6.0.11 10.6.0.12 10.6.0.13)
}
consul['monitoring_service_discovery'] = true
```
[Reconfigure GitLab](../restart_gitlab.md#reconfigure-a-linux-package-installation) for the changes to take effect.
#### Example recommended setup for PgBouncer servers
On each server edit `/etc/gitlab/gitlab.rb`:
```ruby
# Disable all components except Pgbouncer and Consul agent
roles(['pgbouncer_role'])
# Configure PgBouncer
pgbouncer['admin_users'] = %w(pgbouncer gitlab-consul)
pgbouncer['users'] = {
'gitlab-consul': {
password: '5e0e3263571e3704ad655076301d6ebe'
},
'pgbouncer': {
password: '771a8625958a529132abe6f1a4acb19c'
}
}
consul['watchers'] = %w(postgresql)
consul['configuration'] = {
retry_join: %w(10.6.0.11 10.6.0.12 10.6.0.13)
}
consul['monitoring_service_discovery'] = true
```
[Reconfigure GitLab](../restart_gitlab.md#reconfigure-a-linux-package-installation) for the changes to take effect.
#### Internal load balancer setup
An internal load balancer (TCP) must then be set up to serve each PgBouncer node (in this example on the IP `10.6.0.20`). An example of how to do this can be found in the [PgBouncer Configure Internal Load Balancer](#configure-the-internal-load-balancer) section.
#### Example recommended setup for PostgreSQL servers
On database nodes edit `/etc/gitlab/gitlab.rb`:
```ruby
# Disable all components except Patroni, PgBouncer and Consul
roles(['patroni_role', 'pgbouncer_role'])
# PostgreSQL configuration
postgresql['listen_address'] = '0.0.0.0'
postgresql['hot_standby'] = 'on'
postgresql['wal_level'] = 'replica'
# Disable automatic database migrations
gitlab_rails['auto_migrate'] = false
postgresql['pgbouncer_user_password'] = '771a8625958a529132abe6f1a4acb19c'
postgresql['sql_user_password'] = '450409b85a0223a214b5fb1484f34d0f'
patroni['username'] = 'PATRONI_API_USERNAME'
patroni['password'] = 'PATRONI_API_PASSWORD'
patroni['postgresql']['max_replication_slots'] = 6
patroni['postgresql']['max_wal_senders'] = 7
patroni['allowlist'] = %w(10.6.0.0/16 127.0.0.1/32)
postgresql['trust_auth_cidr_addresses'] = %w(10.6.0.0/16 127.0.0.1/32)
# Local PgBouncer service for Database Load Balancing
pgbouncer['databases'] = {
gitlabhq_production: {
host: "127.0.0.1",
user: "pgbouncer",
password: '771a8625958a529132abe6f1a4acb19c'
}
}
# Configure the Consul agent
consul['services'] = %w(postgresql)
consul['configuration'] = {
retry_join: %w(10.6.0.11 10.6.0.12 10.6.0.13)
}
consul['monitoring_service_discovery'] = true
```
[Reconfigure GitLab](../restart_gitlab.md#reconfigure-a-linux-package-installation) for the changes to take effect.
#### Example recommended setup manual steps
After deploying the configuration follow these steps:
1. Find the primary database node:
```shell
gitlab-ctl get-postgresql-primary
```
1. On `10.6.0.41`, our application server:
Set `gitlab-consul` user's PgBouncer password to `toomanysecrets`:
```shell
gitlab-ctl write-pgpass --host 127.0.0.1 --database pgbouncer --user pgbouncer --hostuser gitlab-consul
```
Run database migrations:
```shell
gitlab-rake gitlab:db:configure
```
## Patroni
Patroni is an opinionated solution for PostgreSQL high availability. It takes control of PostgreSQL, overrides its configuration, and manages its lifecycle (start, stop, restart). Patroni is the only option for PostgreSQL 12+ clustering and for cascading replication for Geo deployments.
The fundamental [architecture](#architecture) does not change for Patroni.
You do not need any special consideration for Patroni while provisioning your database nodes. Patroni heavily relies on Consul to store the state of the cluster and elect a leader. Any failure in the Consul cluster or its leader election propagates to the Patroni cluster as well.
Patroni monitors the cluster and handles any failover. When the primary node fails, it works with Consul to notify PgBouncer. On failure, Patroni handles the transitioning of the old primary to a replica and rejoins it to the cluster automatically.
With Patroni, the connection flow is slightly different. Patroni on each node connects to the Consul agent to join the cluster. Only after this point does it decide if the node is the primary or a replica. Based on this decision, it configures and starts PostgreSQL, which it communicates with directly over a Unix socket. This means that if the Consul cluster is not functional or does not have a leader, Patroni, and by extension PostgreSQL, does not start. Patroni also exposes a REST API which can be accessed via its [default port](../package_information/defaults.md)
on each node.
### Check replication status
Run `gitlab-ctl patroni members` to query Patroni for a summary of the cluster status:
```plaintext
+ Cluster: postgresql-ha (6970678148837286213) ------+---------+---------+----+-----------+
| Member | Host | Role | State | TL | Lag in MB |
+-------------------------------------+--------------+---------+---------+----+-----------+
| gitlab-database-1.example.com | 172.18.0.111 | Replica | running | 5 | 0 |
| gitlab-database-2.example.com | 172.18.0.112 | Replica | running | 5 | 100 |
| gitlab-database-3.example.com | 172.18.0.113 | Leader | running | 5 | |
+-------------------------------------+--------------+---------+---------+----+-----------+
```
To verify the status of replication:
```shell
echo -e 'select * from pg_stat_wal_receiver\x\g\x \n select * from pg_stat_replication\x\g\x' | gitlab-psql
```
The same command can be run on all three database servers. It returns whatever replication information is available, depending on the role the server is performing.
The leader should return one record per replica:
```plaintext
-[ RECORD 1 ]----+------------------------------
pid | 371
usesysid | 16384
usename | gitlab_replicator
application_name | gitlab-database-1.example.com
client_addr | 172.18.0.111
client_hostname |
client_port | 42900
backend_start | 2021-06-14 08:01:59.580341+00
backend_xmin |
state | streaming
sent_lsn | 0/EA13220
write_lsn | 0/EA13220
flush_lsn | 0/EA13220
replay_lsn | 0/EA13220
write_lag |
flush_lag |
replay_lag |
sync_priority | 0
sync_state | async
reply_time | 2021-06-18 19:17:14.915419+00
```
Investigate further if:
- There are missing or extra records.
- `reply_time` is not current.
The `lsn` fields relate to which write-ahead-log segments have been replicated.
Run the following on the leader to find out the current Log Sequence Number (LSN):
```shell
echo 'SELECT pg_current_wal_lsn();' | gitlab-psql
```
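To see how far behind a replica is in bytes, the two LSNs can be compared directly. A minimal sketch of that arithmetic (the LSN values are taken from, or modeled on, the example output above):

```ruby
# An LSN such as "0/EA13220" is a 64-bit WAL position written as two
# hexadecimal halves. Converting both LSNs to byte offsets and
# subtracting gives the replica's lag in bytes.
def lsn_to_bytes(lsn)
  high, low = lsn.split('/')
  (high.to_i(16) << 32) + low.to_i(16)
end

leader_lsn  = '0/EA13220' # pg_current_wal_lsn() on the leader
replica_lsn = '0/EA13000' # replay_lsn of a lagging replica (illustrative)

lag_bytes = lsn_to_bytes(leader_lsn) - lsn_to_bytes(replica_lsn)
```

PostgreSQL can perform the same calculation server-side with the `pg_wal_lsn_diff()` function.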
If a replica is not in sync, `gitlab-ctl patroni members` indicates the volume
of missing data, and the `lag` fields indicate the elapsed time.
Read more about the data returned by the leader
[in the PostgreSQL documentation](https://www.postgresql.org/docs/16/monitoring-stats.html#PG-STAT-REPLICATION-VIEW),
including other values for the `state` field.
The replicas should return:
```plaintext
-[ RECORD 1 ]---------+-------------------------------------------------------------------------------------------------
pid | 391
status | streaming
receive_start_lsn | 0/D000000
receive_start_tli | 5
received_lsn | 0/EA13220
received_tli | 5
last_msg_send_time | 2021-06-18 19:16:54.807375+00
last_msg_receipt_time | 2021-06-18 19:16:54.807512+00
latest_end_lsn | 0/EA13220
latest_end_time | 2021-06-18 19:07:23.844879+00
slot_name | gitlab-database-1.example.com
sender_host | 172.18.0.113
sender_port | 5432
conninfo | user=gitlab_replicator host=172.18.0.113 port=5432 application_name=gitlab-database-1.example.com
```
Read more about the data returned by the replica
[in the PostgreSQL documentation](https://www.postgresql.org/docs/16/monitoring-stats.html#PG-STAT-WAL-RECEIVER-VIEW).
### Selecting the appropriate Patroni replication method
[Review the Patroni documentation carefully](https://patroni.readthedocs.io/en/latest/yaml_configuration.html#postgresql)
before making changes as some of the options carry a risk of potential data
loss if not fully understood. The [replication mode](https://patroni.readthedocs.io/en/latest/replication_modes.html)
configured determines the amount of tolerable data loss.
{{< alert type="warning" >}}
Replication is not a backup strategy! There is no replacement for a well-considered and tested backup solution.
{{< /alert >}}
Linux package installations default [`synchronous_commit`](https://www.postgresql.org/docs/16/runtime-config-wal.html#GUC-SYNCHRONOUS-COMMIT) to `on`.
```ruby
postgresql['synchronous_commit'] = 'on'
gitlab['geo-postgresql']['synchronous_commit'] = 'on'
```
#### Customizing Patroni failover behavior
Linux package installations expose several options allowing more control over the [Patroni restoration process](#recovering-the-patroni-cluster).
Each option is shown below with its default value in `/etc/gitlab/gitlab.rb`.
```ruby
patroni['use_pg_rewind'] = true
patroni['remove_data_directory_on_rewind_failure'] = false
patroni['remove_data_directory_on_diverged_timelines'] = false
```
[The upstream documentation is always more up to date](https://patroni.readthedocs.io/en/latest/patroni_configuration.html), but the table below should provide a minimal overview of functionality.
| Setting | Overview |
|-----------------------------------------------|----------|
| `use_pg_rewind` | Try running `pg_rewind` on the former cluster leader before it rejoins the database cluster. |
| `remove_data_directory_on_rewind_failure` | If `pg_rewind` fails, remove the local PostgreSQL data directory and re-replicate from the current cluster leader. |
| `remove_data_directory_on_diverged_timelines` | If `pg_rewind` cannot be used and the former leader's timeline has diverged from the current one, delete the local data directory and re-replicate from the current cluster leader. |
### Database authorization for Patroni
Patroni uses a Unix socket to manage the PostgreSQL instance. Therefore, a connection from the `local` socket must be trusted.
Replicas use the replication user (`gitlab_replicator` by default) to communicate with the leader. For this user,
you can choose between `trust` and `md5` authentication. If you set `postgresql['sql_replication_password']`,
Patroni uses `md5` authentication, and otherwise falls back to `trust`.
Based on the authentication you choose, you must specify the cluster CIDR in the `postgresql['md5_auth_cidr_addresses']` or `postgresql['trust_auth_cidr_addresses']` settings.
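For example, a hypothetical `gitlab.rb` fragment that enables `md5` authentication for the replication user (the `10.0.0.0/24` subnet is an assumed value; use your own cluster CIDR):

```ruby
# Setting a replication password makes Patroni use md5 authentication.
# Replace POSTGRESQL_REPLICATION_PASSWORD_HASH with a generated md5 value.
postgresql['sql_replication_password'] = 'POSTGRESQL_REPLICATION_PASSWORD_HASH'

# Assumed cluster subnet; adjust to your environment.
postgresql['md5_auth_cidr_addresses'] = %w(10.0.0.0/24 127.0.0.1/32)
```

If you leave `sql_replication_password` unset, Patroni falls back to `trust`, and you would set `postgresql['trust_auth_cidr_addresses']` instead.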
### Interacting with Patroni cluster
You can use `gitlab-ctl patroni members` to check the status of the cluster members. To check the status of each node,
`gitlab-ctl patroni` provides two additional sub-commands, `check-leader` and `check-replica`, which indicate whether a node
is the primary or a replica.
When Patroni is enabled, it exclusively controls PostgreSQL's startup,
shutdown, and restart. This means that to shut down PostgreSQL on a certain node, you must shut down Patroni on the same node with:
```shell
sudo gitlab-ctl stop patroni
```
Stopping or restarting the Patroni service on the leader node triggers an automatic failover. If you need Patroni to reload its configuration or restart the PostgreSQL process without triggering the failover, you must use the `reload` or `restart` sub-commands of `gitlab-ctl patroni` instead. These two sub-commands are wrappers of the same `patronictl` commands.
### Manual failover procedure for Patroni
{{< alert type="warning" >}}
In GitLab 16.5 and earlier, PgBouncer nodes do not automatically fail over alongside
Patroni nodes. PgBouncer services
[must be restarted manually](replication_and_failover_troubleshooting.md#pgbouncer-error-error-pgbouncer-cannot-connect-to-server)
for a successful switchover.
{{< /alert >}}
While Patroni supports automatic failover, you can also perform a manual one,
where you have two slightly different options:
- Failover: allows you to perform a manual failover when there are no healthy nodes.
You can perform this action in any PostgreSQL node:
```shell
sudo gitlab-ctl patroni failover
```
- Switchover: only works when the cluster is healthy and allows you to schedule a switchover (it can happen immediately).
You can perform this action in any PostgreSQL node:
```shell
sudo gitlab-ctl patroni switchover
```
For further details on this subject, see the
[Patroni documentation](https://patroni.readthedocs.io/en/latest/rest_api.html#switchover-and-failover-endpoints).
#### Geo secondary site considerations
When a Geo secondary site is replicating from a primary site that uses `Patroni` and `PgBouncer`, replicating through PgBouncer is not supported. There is a feature request to add support, see [issue #8832](https://gitlab.com/gitlab-org/omnibus-gitlab/-/issues/8832).
Recommended. Introduce a load balancer in the primary site to automatically handle failovers in the `Patroni` cluster. For more information, see [Step 2: Configure the internal load balancer on the primary site](../geo/setup/database.md#step-2-configure-the-internal-load-balancer-on-the-primary-site).
##### Handling Patroni failover when replicating directly from the leader node
If your secondary site is configured to replicate directly from the leader node in the `Patroni` cluster, a failover in the `Patroni` cluster stops replication to the secondary site, even if the original node is re-added as a follower node.
In that scenario, you must manually point your secondary site to replicate from the new leader after a failover in the `Patroni` cluster:
```shell
sudo gitlab-ctl replicate-geo-database --host=<new_leader_ip> --replication-slot=<slot_name>
```
This re-syncs your secondary site database and may take a very long time depending on the amount of data to sync. You may also need to run `gitlab-ctl reconfigure` if replication is still not working after re-syncing.
### Recovering the Patroni cluster
To recover the old primary and rejoin it to the cluster as a replica, you can start Patroni with:
```shell
sudo gitlab-ctl start patroni
```
No further configuration or intervention is needed.
### Maintenance procedure for Patroni
With Patroni enabled, you can run planned maintenance on your nodes. To perform maintenance on one node without Patroni, you can put it into maintenance mode with:
```shell
sudo gitlab-ctl patroni pause
```
When Patroni runs in a paused mode, it does not change the state of PostgreSQL. After you are done, you can resume Patroni:
```shell
sudo gitlab-ctl patroni resume
```
For further details, see [Patroni documentation on this subject](https://patroni.readthedocs.io/en/latest/pause.html).
### Upgrading PostgreSQL major version in a Patroni cluster
For a list of the bundled PostgreSQL versions and the default version for each release, see the [PostgreSQL versions of the Linux package](../package_information/postgresql_versions.md).
Here are a few key facts that you must consider before upgrading PostgreSQL:
- The main point is that you have to shut down the Patroni cluster. This means that your
  GitLab deployment is down for the duration of the database upgrade or, at least, as long as your leader
  node is upgraded. This can be a significant downtime depending on the size of your database.
- Upgrading PostgreSQL creates a new data directory with new control data. From the perspective of Patroni, this is a new cluster that needs to be bootstrapped again. Therefore, as part of the upgrade procedure, the cluster state (stored in Consul) is wiped out. After the upgrade is complete, Patroni bootstraps a new cluster. This changes your cluster ID.
- The procedures for upgrading leader and replicas are not the same. That is why it is important to use the right procedure on each node.
- Upgrading a replica node deletes the data directory and resynchronizes it from the leader using the
  configured replication method (`pg_basebackup` is the only available option). It might take some
  time for the replica to catch up with the leader, depending on the size of your database.
- An overview of the upgrade procedure is outlined in [the Patroni documentation](https://patroni.readthedocs.io/en/latest/existing_data.html#major-upgrade-of-postgresql-version).
You can still use `gitlab-ctl pg-upgrade`, which implements this procedure with a few adjustments.
Considering these, you should carefully plan your PostgreSQL upgrade:
1. Find out which node is the leader and which node is a replica:
```shell
gitlab-ctl patroni members
```
{{< alert type="note" >}}
On a Geo secondary site, the Patroni leader node is called `standby leader`.
{{< /alert >}}
1. Stop Patroni only on replicas.
```shell
sudo gitlab-ctl stop patroni
```
1. Enable the maintenance mode on the application node:
```shell
sudo gitlab-ctl deploy-page up
```
1. Upgrade PostgreSQL on the leader node and make sure that the upgrade is completed successfully:
```shell
# Default command timeout is 600s, configurable with '--timeout'
sudo gitlab-ctl pg-upgrade
```
{{< alert type="note" >}}
`gitlab-ctl pg-upgrade` tries to detect the role of the node. If for any reason the auto-detection
does not work or you believe it did not detect the role correctly, you can use the `--leader` or
`--replica` arguments to manually override it. Use `gitlab-ctl pg-upgrade --help` for more details on available options.
{{< /alert >}}
1. Check the status of the leader and cluster. You can proceed only if you have a healthy leader:
```shell
gitlab-ctl patroni check-leader
# OR
gitlab-ctl patroni members
```
1. You can now disable the maintenance mode on the application node:
```shell
sudo gitlab-ctl deploy-page down
```
1. Upgrade PostgreSQL on replicas (you can do this in parallel on all of them):
```shell
sudo gitlab-ctl pg-upgrade
```
1. Ensure that the compatible versions of `pg_dump` and `pg_restore` are used
on the GitLab Rails instance to avoid version mismatch errors when performing
a backup or restore. You can do this by specifying the PostgreSQL version
in `/etc/gitlab/gitlab.rb` on the Rails instance:
```ruby
postgresql['version'] = 16
```
If issues are encountered upgrading the replicas,
[there is a troubleshooting section](replication_and_failover_troubleshooting.md#postgresql-major-version-upgrade-fails-on-a-patroni-replica) that might be the solution.
{{< alert type="note" >}}
Reverting the PostgreSQL upgrade with `gitlab-ctl revert-pg-upgrade` has the same considerations as
`gitlab-ctl pg-upgrade`. You should follow the same procedure by first stopping the replicas,
then reverting the leader, and finally reverting the replicas.
{{< /alert >}}
### Near-zero-downtime upgrade of PostgreSQL in a Patroni cluster
{{< details >}}
- Status: Experiment
{{< /details >}}
Patroni enables you to run a major PostgreSQL upgrade without shutting down the cluster. However, this
requires additional resources to host the new Patroni nodes with the upgraded PostgreSQL. In practice, with this
procedure, you are:
- Creating a new Patroni cluster with a new version of PostgreSQL.
- Migrating the data from the existing cluster.
This procedure is non-invasive, and does not impact your existing cluster before switching it off.
However, it can be both time- and resource-consuming. Consider these trade-offs against your availability requirements.
The steps, in order:
1. [Provision resources for the new cluster](#provision-resources-for-the-new-cluster).
1. [Preflight check](#preflight-check).
1. [Configure the leader of the new cluster](#configure-the-leader-of-the-new-cluster).
1. [Start publisher on the existing leader](#start-publisher-on-the-existing-leader).
1. [Copy the data from the existing cluster](#copy-the-data-from-the-existing-cluster).
1. [Replicate data from the existing cluster](#replicate-data-from-the-existing-cluster).
1. [Grow the new cluster](#grow-the-new-cluster).
1. [Switch the application to use the new cluster](#switch-the-application-to-use-the-new-cluster).
1. [Clean up](#clean-up).
#### Provision resources for the new cluster
You need a new set of resources for Patroni nodes. The new Patroni cluster does not require exactly the same number
of nodes as the existing cluster. You may choose a different number of nodes based on your requirements. The new
cluster uses the existing Consul cluster (with a different `patroni['scope']`) and PgBouncer nodes.
Make sure that at least the leader node of the existing cluster is accessible from the nodes of the new
cluster.
#### Preflight check
We rely on PostgreSQL [logical replication](https://www.postgresql.org/docs/16/logical-replication.html)
to support near-zero-downtime upgrades of Patroni clusters. The
[logical replication requirements](https://www.postgresql.org/docs/16/logical-replication-restrictions.html)
must be met. In particular, `wal_level` must be `logical`. To check the `wal_level`,
run the following command with `gitlab-psql` on any node of the existing cluster:
```sql
SHOW wal_level;
```
By default, Patroni sets `wal_level` to `replica`. You must increase it to `logical`.
Changing `wal_level` requires restarting PostgreSQL, so this step leads to a short
downtime (hence near-zero-downtime). To do this on the Patroni leader node:
1. Edit `gitlab.rb` by setting:
```ruby
patroni['postgresql']['wal_level'] = 'logical'
```
1. Run `gitlab-ctl reconfigure`. This writes the configuration but does not restart PostgreSQL service.
1. Run `gitlab-ctl patroni restart` to restart PostgreSQL and apply the new `wal_level` without triggering
   failover. For the duration of the restart cycle, the cluster leader is unavailable.
1. Verify the change by running `SHOW wal_level` with `gitlab-psql`.
#### Configure the leader of the new cluster
Configure the first node of the new cluster. It becomes the leader of the new cluster.
You can use the configuration of the existing cluster, if it is compatible with the new
PostgreSQL version. Refer to the documentation on [configuring Patroni clusters](#configuring-patroni-cluster).
In addition to the common configuration, you must apply the following in `gitlab.rb`:
1. Make sure that the new Patroni cluster uses a different scope. The scope is used to namespace the Patroni settings
in Consul, making it possible to use the same Consul cluster for the existing and the new clusters.
```ruby
patroni['scope'] = 'postgresql_new-ha'
```
1. Make sure that Consul agents don't mix PostgreSQL services offered by the existing and the new Patroni
clusters. For this purpose, you must use an internal attribute:
```ruby
consul['internal']['postgresql_service_name'] = 'postgresql_new'
```
#### Start publisher on the existing leader
On the existing leader, run this SQL statement with `gitlab-psql` to start a logical replication publisher:
```sql
CREATE PUBLICATION patroni_upgrade FOR ALL TABLES;
```
#### Copy the data from the existing cluster
To dump the current database from the existing cluster, run these commands on the
leader of the new cluster:
1. Optional. Copy global database objects:
```shell
pg_dumpall -h ${EXISTING_CLUSTER_LEADER} -U gitlab-psql -g | gitlab-psql
```
You can ignore the errors about existing database objects, such as roles. They are
created when the node is configured for the first time.
1. Copy the current database:
```shell
pg_dump -h ${EXISTING_CLUSTER_LEADER} -U gitlab-psql -d gitlabhq_production -s | gitlab-psql
```
Depending on the size of your database, this command may take a while to complete.
The `pg_dump` and `pg_dumpall` commands are in `/opt/gitlab/embedded/bin`. In these commands,
`EXISTING_CLUSTER_LEADER` is the host address of the leader node of the existing cluster.
{{< alert type="note" >}}
The `gitlab-psql` user must be able to authenticate with the existing leader from the new leader node.
{{< /alert >}}
#### Replicate data from the existing cluster
After taking the initial data dump, you must keep the new leader in sync with the
latest changes of your existing cluster. On the new leader, run this SQL statement
with `gitlab-psql` to subscribe to publication of the existing leader:
```sql
CREATE SUBSCRIPTION patroni_upgrade
CONNECTION 'host=EXISTING_CLUSTER_LEADER dbname=gitlabhq_production user=gitlab-psql'
PUBLICATION patroni_upgrade;
```
In this statement, `EXISTING_CLUSTER_LEADER` is the host address of the leader node
of the existing cluster. You can also use
[other parameters](https://www.postgresql.org/docs/16/libpq-connect.html#LIBPQ-PARAMKEYWORDS)
to change the connection string. For example, you can pass the authentication password.
To check the status of replication, run these queries:
- `SELECT * FROM pg_replication_slots WHERE slot_name = 'patroni_upgrade'` on the existing leader (the publisher).
- `SELECT * FROM pg_stat_subscription` on the new leader (the subscriber).
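As a sketch, the two checks above can be run with `gitlab-psql`, selecting a few informative columns (both views expose more fields):

```sql
-- On the existing leader (the publisher):
SELECT slot_name, active, confirmed_flush_lsn
  FROM pg_replication_slots
 WHERE slot_name = 'patroni_upgrade';

-- On the new leader (the subscriber):
SELECT subname, received_lsn, latest_end_lsn
  FROM pg_stat_subscription;
```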
#### Grow the new cluster
Configure other nodes of the new cluster in the way you
[configured the leader](#configure-the-leader-of-the-new-cluster).
Make sure that you use the same `patroni['scope']` and
`consul['internal']['postgresql_service_name']`.
What happens here:
- The application still uses the existing leader as its database backend.
- The logical replication ensures that the new leader keeps in sync.
- When other nodes are added to the new cluster, Patroni handles
the replication to the nodes.
It is a good idea to wait until the replica nodes of the new cluster are initialized and caught up on the replication
lag.
#### Switch the application to use the new cluster
Up to this point, you can stop the upgrade procedure without losing data on the
existing cluster. When you switch the database backend of the application and point
it to the new cluster, the old cluster does not receive new updates. It falls behind
the new cluster. After this point, any recovery must be done from the nodes of the new cluster.
To do the switch on all PgBouncer nodes:
1. Edit `gitlab.rb` by setting:
```ruby
consul['watchers'] = %w(postgresql_new)
consul['internal']['postgresql_service_name'] = 'postgresql_new'
```
1. Run `gitlab-ctl reconfigure`.
#### Clean up
After completing these steps, you can clean up the resources of the old Patroni cluster.
They are no longer needed. However, before removing the resources, remove the
logical replication subscription on the new leader by running `DROP SUBSCRIPTION patroni_upgrade`
with `gitlab-psql`.
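For example, with `gitlab-psql` on the new leader:

```sql
DROP SUBSCRIPTION patroni_upgrade;
```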
---
stage: Data Access
group: Database Operations
info: To determine the technical writer assigned to the Stage/Group associated with
this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: PostgreSQL replication and failover for Linux package installations
---
{{< details >}}
- Tier: Premium, Ultimate
- Offering: GitLab Self-Managed
{{< /details >}}
If you're a Free user of GitLab Self-Managed, consider using a cloud-hosted solution.
This document doesn't cover self-compiled installations.
If a setup with replication and failover isn't what you were looking for, see
the [database configuration document](https://docs.gitlab.com/omnibus/settings/database.html)
for the Linux packages.
It's recommended to read this document fully before attempting to configure PostgreSQL with
replication and failover for GitLab.
## Operating system upgrades
If you are failing over to a system with a different operating system,
read the [documentation on upgrading operating systems for PostgreSQL](upgrading_os.md).
Failing to account for local changes with operating system upgrades may result in data corruption.
## Architecture
The Linux package-recommended configuration for a PostgreSQL cluster with
replication failover requires:
- A minimum of three PostgreSQL nodes.
- A minimum of three Consul server nodes.
- A minimum of three PgBouncer nodes that track and handle primary database reads and writes.
- An internal load balancer (TCP) to balance requests between the PgBouncer nodes.
- [Database Load Balancing](database_load_balancing.md) enabled.
- A local PgBouncer service configured on each PostgreSQL node. This is separate from the main PgBouncer cluster that tracks the primary.
```plantuml
@startuml
card "**Internal Load Balancer**" as ilb #9370DB
skinparam linetype ortho
together {
collections "**GitLab Rails** x3" as gitlab #32CD32
collections "**Sidekiq** x4" as sidekiq #ff8dd1
}
collections "**Consul** x3" as consul #e76a9b
card "Database" as database {
collections "**PGBouncer x3**\n//Consul//" as pgbouncer #4EA7FF
card "**PostgreSQL** //Primary//\n//Patroni//\n//PgBouncer//\n//Consul//" as postgres_primary #4EA7FF
collections "**PostgreSQL** //Secondary// **x2**\n//Patroni//\n//PgBouncer//\n//Consul//" as postgres_secondary #4EA7FF
pgbouncer -[#4EA7FF]-> postgres_primary
postgres_primary .[#4EA7FF]r-> postgres_secondary
}
gitlab -[#32CD32]-> ilb
gitlab -[hidden]-> pgbouncer
gitlab .[#32CD32,norank]-> postgres_primary
gitlab .[#32CD32,norank]-> postgres_secondary
sidekiq -[#ff8dd1]-> ilb
sidekiq -[hidden]-> pgbouncer
sidekiq .[#ff8dd1,norank]-> postgres_primary
sidekiq .[#ff8dd1,norank]-> postgres_secondary
ilb -[#9370DB]-> pgbouncer
consul -[#e76a9b]r-> pgbouncer
consul .[#e76a9b,norank]r-> postgres_primary
consul .[#e76a9b,norank]r-> postgres_secondary
@enduml
```
You also need to take into consideration the underlying network topology, making
sure you have redundant connectivity between all Database and GitLab instances
to avoid the network becoming a single point of failure.
### Database node
Each database node runs four services:
- `PostgreSQL`: The database itself.
- `Patroni`: Communicates with other Patroni services in the cluster and handles failover when issues with the leader server occurs. The failover procedure consists of:
- Selecting a new leader for the cluster.
- Promoting the new node to leader.
- Instructing remaining servers to follow the new leader node.
- `PgBouncer`: A local pooler for the node. Used for _read_ queries as part of [Database Load Balancing](database_load_balancing.md).
- `Consul` agent: To communicate with Consul cluster which stores the current Patroni state. The agent monitors the status of each node in the database cluster and tracks its health in a service definition on the Consul cluster.
### Consul server node
The Consul server node runs the Consul server service. These nodes must reach quorum and elect a leader before the Patroni cluster bootstraps; otherwise, database nodes wait until a Consul leader is elected.
### PgBouncer node
Each PgBouncer node runs two services:
- `PgBouncer`: The database connection pooler itself.
- `Consul` agent: Watches the status of the PostgreSQL service definition on the Consul cluster. If that status changes, Consul runs a script which updates the PgBouncer configuration to point to the new PostgreSQL leader node and reloads the PgBouncer service.
### Connection flow
Each service in the package comes with a set of [default ports](../package_information/defaults.md#ports). You may need to make specific firewall rules for the connections listed below.
There are several connection flows in this setup:
- [Primary](#primary)
- [Database Load Balancing](#database-load-balancing)
- [Replication](#replication)
#### Primary
- Application servers connect to either PgBouncer directly via its [default port](../package_information/defaults.md) or via a configured Internal Load Balancer (TCP) that serves multiple PgBouncers.
- PgBouncer connects to the primary database server's [PostgreSQL default port](../package_information/defaults.md).
#### Database Load Balancing
For read queries against data that hasn't been recently changed and is up to date on all database nodes:
- Application servers connect to the local PgBouncer service via its [default port](../package_information/defaults.md) on each database node in a round-robin approach.
- Local PgBouncer connects to the local database server's [PostgreSQL default port](../package_information/defaults.md).
#### Replication
- Patroni actively manages the running PostgreSQL processes and configuration.
- PostgreSQL secondaries connect to the primary database server's [PostgreSQL default port](../package_information/defaults.md).
- Consul servers and agents connect to each other's [Consul default ports](../package_information/defaults.md).
## Setting it up
### Required information
Before proceeding with configuration, you need to collect all the necessary
information.
#### Network information
PostgreSQL doesn't listen on any network interface by default. It needs to know
which IP address to listen on to be accessible to other services. Similarly,
PostgreSQL access is controlled based on the network source.
This is why you need:
- The IP address of each node's network interface. This can be set to `0.0.0.0` to
listen on all interfaces. It cannot be set to the loopback address `127.0.0.1`.
- Network Address. This can be in subnet (that is, `192.168.0.0/255.255.255.0`)
or Classless Inter-Domain Routing (CIDR) (`192.168.0.0/24`) form.
#### Consul information
When using the default setup, the minimum configuration requires:
- `CONSUL_USERNAME`. The default user for Linux package installations is `gitlab-consul`
- `CONSUL_DATABASE_PASSWORD`. Password for the database user.
- `CONSUL_PASSWORD_HASH`. This is a hash generated out of Consul username/password pair. It can be generated with:
```shell
sudo gitlab-ctl pg-password-md5 CONSUL_USERNAME
```
- `CONSUL_SERVER_NODES`. The IP addresses or DNS records of the Consul server nodes.
A few notes on the service itself:
- The service runs under a system account, by default `gitlab-consul`.
- If you are using a different username, you have to specify it through the `CONSUL_USERNAME` variable.
- Passwords are stored in the following locations:
- `/etc/gitlab/gitlab.rb`: hashed
- `/var/opt/gitlab/pgbouncer/pg_auth`: hashed
- `/var/opt/gitlab/consul/.pgpass`: plaintext
#### PostgreSQL information
When configuring PostgreSQL, we do the following:
- Set `max_replication_slots` to double the number of database nodes. Patroni uses one extra slot per node when initiating the replication.
- Set `max_wal_senders` to one more than the allocated number of replication slots in the cluster. This prevents replication from using up all of the available database connections.
In this document we are assuming 3 database nodes, which makes this configuration:
```ruby
patroni['postgresql']['max_replication_slots'] = 6
patroni['postgresql']['max_wal_senders'] = 7
```
As previously mentioned, prepare the network subnets that need permission
to authenticate with the database.
You also need to have the IP addresses or DNS records of Consul
server nodes on hand.
You need the following password information for the application's database user:
- `POSTGRESQL_USERNAME`. The default user for Linux package installations is `gitlab`
- `POSTGRESQL_USER_PASSWORD`. The password for the database user
- `POSTGRESQL_PASSWORD_HASH`. This is a hash generated out of the username/password pair.
It can be generated with:
```shell
sudo gitlab-ctl pg-password-md5 POSTGRESQL_USERNAME
```
#### Patroni information
You need the following password information for the Patroni API:
- `PATRONI_API_USERNAME`. A username for basic auth to the API
- `PATRONI_API_PASSWORD`. A password for basic auth to the API
#### PgBouncer information
When using a default setup, the minimum configuration requires:
- `PGBOUNCER_USERNAME`. The default user for Linux package installations is `pgbouncer`
- `PGBOUNCER_PASSWORD`. This is a password for PgBouncer service.
- `PGBOUNCER_PASSWORD_HASH`. This is a hash generated out of PgBouncer username/password pair. It can be generated with:
```shell
sudo gitlab-ctl pg-password-md5 PGBOUNCER_USERNAME
```
- `PGBOUNCER_NODE`. The IP address or FQDN of the node running PgBouncer.
A few things to remember about the service itself:
- The service runs as the same system account as the database. In the package, this is by default `gitlab-psql`
- If you use a non-default user account for PgBouncer service (by default `pgbouncer`), you need to specify this username.
- Passwords are stored in the following locations:
- `/etc/gitlab/gitlab.rb`: hashed, and in plain text
- `/var/opt/gitlab/pgbouncer/pg_auth`: hashed
### Installing the Linux package
First, make sure to [download and install](https://about.gitlab.com/install/) the Linux package on each node.
Make sure you install the necessary dependencies from step 1
and add the GitLab package repository from step 2.
When installing the GitLab package, do not supply the `EXTERNAL_URL` value.
### Configuring the Database nodes
1. Make sure to [configure the Consul nodes](../consul.md).
1. Make sure you collect [`CONSUL_SERVER_NODES`](#consul-information), [`PGBOUNCER_PASSWORD_HASH`](#pgbouncer-information), [`POSTGRESQL_PASSWORD_HASH`](#postgresql-information), the [number of db nodes](#postgresql-information), and the [network address](#network-information) before executing the next step.
#### Configuring Patroni cluster
You must enable Patroni explicitly to be able to use it (with `patroni['enable'] = true`).
Any PostgreSQL configuration item that controls replication, for example `wal_level`, `max_wal_senders`, and others, is strictly
controlled by Patroni. These configurations override the original settings that you make with the `postgresql[...]` configuration key.
Hence, they are all separated and placed under `patroni['postgresql'][...]`. This behavior is limited to replication.
Patroni honors any other PostgreSQL configuration that was made with the `postgresql[...]` configuration key. For example,
`max_wal_senders` is set to `5` by default. If you wish to change this, you must set it with the `patroni['postgresql']['max_wal_senders']`
configuration key.
Here is an example:
```ruby
# Disable all components except Patroni, PgBouncer and Consul
roles(['patroni_role', 'pgbouncer_role'])
# PostgreSQL configuration
postgresql['listen_address'] = '0.0.0.0'
# Disable automatic database migrations
gitlab_rails['auto_migrate'] = false
# Configure the Consul agent
consul['services'] = %w(postgresql)
# START user configuration
# Set the real values as explained in Required Information section
#
# Replace PGBOUNCER_PASSWORD_HASH with a generated md5 value
postgresql['pgbouncer_user_password'] = 'PGBOUNCER_PASSWORD_HASH'
# Replace POSTGRESQL_REPLICATION_PASSWORD_HASH with a generated md5 value
postgresql['sql_replication_password'] = 'POSTGRESQL_REPLICATION_PASSWORD_HASH'
# Replace POSTGRESQL_PASSWORD_HASH with a generated md5 value
postgresql['sql_user_password'] = 'POSTGRESQL_PASSWORD_HASH'
# Replace PATRONI_API_USERNAME with a username for Patroni Rest API calls (use the same username in all nodes)
patroni['username'] = 'PATRONI_API_USERNAME'
# Replace PATRONI_API_PASSWORD with a password for Patroni Rest API calls (use the same password in all nodes)
patroni['password'] = 'PATRONI_API_PASSWORD'
# Sets `max_replication_slots` to double the number of database nodes.
# Patroni uses one extra slot per node when initiating the replication.
patroni['postgresql']['max_replication_slots'] = X
# Set `max_wal_senders` to one more than the number of replication slots in the cluster.
# This is used to prevent replication from using up all of the
# available database connections.
patroni['postgresql']['max_wal_senders'] = X+1
# Replace XXX.XXX.XXX.XXX/YY with Network Addresses for your other patroni nodes
patroni['allowlist'] = %w(XXX.XXX.XXX.XXX/YY 127.0.0.1/32)
# Replace XXX.XXX.XXX.XXX/YY with Network Address
postgresql['trust_auth_cidr_addresses'] = %w(XXX.XXX.XXX.XXX/YY 127.0.0.1/32)
# Local PgBouncer service for Database Load Balancing
pgbouncer['databases'] = {
gitlabhq_production: {
host: "127.0.0.1",
user: "PGBOUNCER_USERNAME",
password: 'PGBOUNCER_PASSWORD_HASH'
}
}
# Replace placeholders:
#
# Y.Y.Y.Y consul1.gitlab.example.com Z.Z.Z.Z
# with the addresses gathered for CONSUL_SERVER_NODES
consul['configuration'] = {
retry_join: %w(Y.Y.Y.Y consul1.gitlab.example.com Z.Z.Z.Z)
}
#
# END user configuration
```
All database nodes use the same configuration. The leader node is not determined in configuration,
and there is no additional or different configuration for either leader or replica nodes.
After the configuration of a node is complete, you must [reconfigure GitLab](../restart_gitlab.md#reconfigure-a-linux-package-installation)
on each node for the changes to take effect.
Generally, when the Consul cluster is ready, the first node that [reconfigures](../restart_gitlab.md#reconfigure-a-linux-package-installation)
becomes the leader. You do not need to sequence the reconfiguration of the nodes. You can run them in parallel or in any order.
If you choose an arbitrary order, you do not have a predetermined leader.
#### Enable Monitoring
If you enable Monitoring, it must be enabled on all database servers.
1. Create/edit `/etc/gitlab/gitlab.rb` and add the following configuration:
```ruby
# Enable service discovery for Prometheus
consul['monitoring_service_discovery'] = true
# Set the network addresses that the exporters must listen on
node_exporter['listen_address'] = '0.0.0.0:9100'
postgres_exporter['listen_address'] = '0.0.0.0:9187'
```
1. Run `sudo gitlab-ctl reconfigure` to compile the configuration.
#### Enable TLS support for the Patroni API
By default, the Patroni [REST API](https://patroni.readthedocs.io/en/latest/rest_api.html#rest-api) is served over HTTP.
You have the option to enable TLS and use HTTPS over the same [port](../package_information/defaults.md).
To enable TLS, you need PEM-formatted certificate and private key files. Both files must be readable by the PostgreSQL user (`gitlab-psql` by default, or the one set by `postgresql['username']`):
```ruby
patroni['tls_certificate_file'] = '/path/to/server/certificate.pem'
patroni['tls_key_file'] = '/path/to/server/key.pem'
```
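If you do not already have certificate files, a self-signed pair can be generated with `openssl`. This is a sketch for testing only; the subject name and output directory are example values:

```shell
# Sketch: create a throwaway self-signed certificate and key for the
# Patroni REST API. CN and output directory are example values only.
ssl_dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=patroni.gitlab.example.com" \
  -keyout "${ssl_dir}/key.pem" \
  -out "${ssl_dir}/certificate.pem" 2>/dev/null
ls -l "${ssl_dir}"
```

After moving the files to a permanent location, make them readable by the PostgreSQL user (for example, `chown gitlab-psql`) and point `patroni['tls_certificate_file']` and `patroni['tls_key_file']` at them.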
If the server's private key is encrypted, specify the password to decrypt it:
```ruby
patroni['tls_key_password'] = 'private-key-password' # This is the plain-text password.
```
If you are using a self-signed certificate or an internal CA, you must either disable TLS verification or pass the certificate of the
internal CA. Otherwise, you may run into unexpected errors when using the `gitlab-ctl patroni ...` commands. The Linux package ensures that Patroni API
clients honor this configuration.
TLS certificate verification is enabled by default. To disable it:
```ruby
patroni['tls_verify'] = false
```
Alternatively, you can pass a PEM-formatted certificate of the internal CA. Again, the file must be readable by the PostgreSQL user:
```ruby
patroni['tls_ca_file'] = '/path/to/ca.pem'
```
When TLS is enabled, mutual authentication of the API server and client is possible for all endpoints; the extent depends on
the `patroni['tls_client_mode']` attribute:
- `none` (default): The API does not check for any client certificates.
- `optional`: Client certificates are required for all [unsafe](https://patroni.readthedocs.io/en/latest/security.html#protecting-the-rest-api) API calls.
- `required`: Client certificates are required for all API calls.
The client certificates are verified against the CA certificate that is specified with the `patroni['tls_ca_file']` attribute. Therefore,
this attribute is required for mutual TLS authentication. You also need to specify PEM-formatted client certificate and private key files.
Both files must be readable by the PostgreSQL user:
```ruby
patroni['tls_client_mode'] = 'required'
patroni['tls_ca_file'] = '/path/to/ca.pem'
patroni['tls_client_certificate_file'] = '/path/to/client/certificate.pem'
patroni['tls_client_key_file'] = '/path/to/client/key.pem'
```
You can use different certificates and keys for both API server and client on different Patroni nodes as long as they can be verified.
However, the CA certificate (`patroni['tls_ca_file']`), TLS certificate verification (`patroni['tls_verify']`), and client TLS
authentication mode (`patroni['tls_client_mode']`), must each have the same value on all nodes.
### Configure PgBouncer nodes
1. Make sure you collect [`CONSUL_SERVER_NODES`](#consul-information), [`CONSUL_PASSWORD_HASH`](#consul-information), and [`PGBOUNCER_PASSWORD_HASH`](#pgbouncer-information) before executing the next step.
1. On each node, edit the `/etc/gitlab/gitlab.rb` configuration file and replace values noted in the `# START user configuration` section as below:
```ruby
# Disable all components except PgBouncer and Consul agent
roles(['pgbouncer_role'])
# Configure PgBouncer
pgbouncer['admin_users'] = %w(pgbouncer gitlab-consul)
# Configure Consul agent
consul['watchers'] = %w(postgresql)
# START user configuration
# Set the real values as explained in Required Information section
# Replace CONSUL_PASSWORD_HASH with a generated md5 value
# Replace PGBOUNCER_PASSWORD_HASH with a generated md5 value
pgbouncer['users'] = {
'gitlab-consul': {
password: 'CONSUL_PASSWORD_HASH'
},
'pgbouncer': {
password: 'PGBOUNCER_PASSWORD_HASH'
}
}
# Replace placeholders:
#
# Y.Y.Y.Y consul1.gitlab.example.com Z.Z.Z.Z
# with the addresses gathered for CONSUL_SERVER_NODES
consul['configuration'] = {
retry_join: %w(Y.Y.Y.Y consul1.gitlab.example.com Z.Z.Z.Z)
}
#
# END user configuration
```
1. Run `gitlab-ctl reconfigure`
1. Create a `.pgpass` file so Consul is able to
reload PgBouncer. Enter the `PGBOUNCER_PASSWORD` twice when asked:
```shell
gitlab-ctl write-pgpass --host 127.0.0.1 --database pgbouncer --user pgbouncer --hostuser gitlab-consul
```
1. [Enable monitoring](pgbouncer.md#enable-monitoring)
#### PgBouncer Checkpoint
1. Ensure each node is talking to the current node leader:
```shell
gitlab-ctl pgb-console # Supply PGBOUNCER_PASSWORD when prompted
```
If there is an error `psql: ERROR: Auth failed` after typing in the
password, ensure you have previously generated the MD5 password hashes with the correct
format. The correct format is to concatenate the password and the username:
`PASSWORDUSERNAME`. For example, `Sup3rS3cr3tpgbouncer` would be the text
needed to generate an MD5 password hash for the `pgbouncer` user.
1. After the console prompt has become available, run the following queries:
```shell
show databases ; show clients ;
```
The output should be similar to the following:
```plaintext
name | host | port | database | force_user | pool_size | reserve_pool | pool_mode | max_connections | current_connections
---------------------+-------------+------+---------------------+------------+-----------+--------------+-----------+-----------------+---------------------
gitlabhq_production | MASTER_HOST | 5432 | gitlabhq_production | | 20 | 0 | | 0 | 0
pgbouncer | | 6432 | pgbouncer | pgbouncer | 2 | 0 | statement | 0 | 0
(2 rows)
type | user | database | state | addr | port | local_addr | local_port | connect_time | request_time | ptr | link | remote_pid | tls
------+-----------+---------------------+---------+----------------+-------+------------+------------+---------------------+---------------------+-----------+------+------------+-----
C | pgbouncer | pgbouncer | active | 127.0.0.1 | 56846 | 127.0.0.1 | 6432 | 2017-08-21 18:09:59 | 2017-08-21 18:10:48 | 0x22b3880 | | 0 |
(2 rows)
```
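The MD5 hash format described above can also be produced manually. This is a sketch with example credentials only:

```shell
# Sketch: compute md5(password + username), the hash format referenced
# above. The password and username here are example values only.
password='Sup3rS3cr3t'
username='pgbouncer'
hash=$(printf '%s%s' "$password" "$username" | md5sum | awk '{print $1}')
echo "$hash"
```

On Linux package installations, `gitlab-ctl pg-password-md5 <username>` prompts for the password and prints the hash in the exact format expected by these settings.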
#### Configure the internal load balancer
If you're running more than one PgBouncer node as recommended, you must set up an internal TCP load balancer to serve them correctly. This can be accomplished with any reputable TCP load balancer.
As an example, here's how you could do it with [HAProxy](https://www.haproxy.org/):
```plaintext
global
log /dev/log local0
log localhost local1 notice
log stdout format raw local0
defaults
log global
default-server inter 10s fall 3 rise 2
balance leastconn
frontend internal-pgbouncer-tcp-in
bind *:6432
mode tcp
option tcplog
default_backend pgbouncer
backend pgbouncer
mode tcp
option tcp-check
server pgbouncer1 <ip>:6432 check
server pgbouncer2 <ip>:6432 check
server pgbouncer3 <ip>:6432 check
```
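The example above uses a plain TCP check. If you prefer a protocol-aware health check, HAProxy also offers `option pgsql-check`; a possible variant of the backend, assuming the `pgbouncer` user exists:

```plaintext
backend pgbouncer
    mode tcp
    option pgsql-check user pgbouncer
    server pgbouncer1 <ip>:6432 check
    server pgbouncer2 <ip>:6432 check
    server pgbouncer3 <ip>:6432 check
```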
Refer to your preferred Load Balancer's documentation for further guidance.
### Configuring the Application nodes
Application nodes run the `gitlab-rails` service. You may have other
attributes set, but the following need to be set.
1. Edit `/etc/gitlab/gitlab.rb`:
```ruby
# Disable PostgreSQL on the application node
postgresql['enable'] = false
gitlab_rails['db_host'] = 'PGBOUNCER_NODE' or 'INTERNAL_LOAD_BALANCER'
gitlab_rails['db_port'] = 6432
gitlab_rails['db_password'] = 'POSTGRESQL_USER_PASSWORD'
gitlab_rails['auto_migrate'] = false
gitlab_rails['db_load_balancing'] = { 'hosts' => ['POSTGRESQL_NODE_1', 'POSTGRESQL_NODE_2', 'POSTGRESQL_NODE_3'] }
```
1. [Reconfigure GitLab](../restart_gitlab.md#reconfigure-a-linux-package-installation) for the changes to take effect.
#### Application node post-configuration
Ensure that all migrations ran:
```shell
gitlab-rake gitlab:db:configure
```
{{< alert type="note" >}}
If you encounter a `rake aborted!` error stating that PgBouncer is failing to connect to PostgreSQL it may be that your PgBouncer node's IP address is missing from
PostgreSQL's `trust_auth_cidr_addresses` in `gitlab.rb` on your database nodes. See
[PgBouncer error `ERROR: pgbouncer cannot connect to server`](replication_and_failover_troubleshooting.md#pgbouncer-error-error-pgbouncer-cannot-connect-to-server) before you proceed.
{{< /alert >}}
### Backups
Do not back up or restore GitLab through a PgBouncer connection: doing so causes a GitLab outage.
[Read more about this and how to reconfigure backups](../backup_restore/backup_gitlab.md#back-up-and-restore-for-installations-using-pgbouncer).
### Ensure GitLab is running
At this point, your GitLab instance should be up and running. Verify you're able
to sign in, and create issues and merge requests. For more information, see [Troubleshooting replication and failover](replication_and_failover_troubleshooting.md).
## Example configuration
This section describes several fully expanded example configurations.
### Example recommended setup
This example uses three Consul servers, three PgBouncer servers (with an
associated internal load balancer), three PostgreSQL servers, and one
application node.
In this setup, all servers share the same `10.6.0.0/16` private network range.
The servers communicate freely over these addresses.
While you can use a different networking setup, it's recommended to ensure that it allows
for synchronous replication to occur across the cluster.
As a general rule, a latency of less than 2 ms keeps replication operations performant.
GitLab [reference architectures](../reference_architectures/_index.md) are sized to
assume that application database queries are shared by all three nodes.
Communication latency higher than 2 ms can lead to database locks and
impact the replica's ability to serve read-only queries in a timely fashion.
The machines and their assigned IPs in this example:

- `10.6.0.11`: Consul 1
- `10.6.0.12`: Consul 2
- `10.6.0.13`: Consul 3
- `10.6.0.20`: Internal load balancer
- `10.6.0.21`: PgBouncer 1
- `10.6.0.22`: PgBouncer 2
- `10.6.0.23`: PgBouncer 3
- `10.6.0.31`: PostgreSQL 1
- `10.6.0.32`: PostgreSQL 2
- `10.6.0.33`: PostgreSQL 3
- `10.6.0.41`: GitLab application
All passwords are set to `toomanysecrets`. Do not use this password or derived hashes. The `external_url` for GitLab is `http://gitlab.example.com`.
After the initial configuration, if a failover occurs, the PostgreSQL leader node changes to one of the available secondaries until it is failed back.
#### Example recommended setup for Consul servers
On each server edit `/etc/gitlab/gitlab.rb`:
```ruby
# Disable all components except Consul
roles(['consul_role'])
consul['configuration'] = {
server: true,
retry_join: %w(10.6.0.11 10.6.0.12 10.6.0.13)
}
consul['monitoring_service_discovery'] = true
```
[Reconfigure GitLab](../restart_gitlab.md#reconfigure-a-linux-package-installation) for the changes to take effect.
#### Example recommended setup for PgBouncer servers
On each server edit `/etc/gitlab/gitlab.rb`:
```ruby
# Disable all components except Pgbouncer and Consul agent
roles(['pgbouncer_role'])
# Configure PgBouncer
pgbouncer['admin_users'] = %w(pgbouncer gitlab-consul)
pgbouncer['users'] = {
'gitlab-consul': {
password: '5e0e3263571e3704ad655076301d6ebe'
},
'pgbouncer': {
password: '771a8625958a529132abe6f1a4acb19c'
}
}
consul['watchers'] = %w(postgresql)
consul['configuration'] = {
retry_join: %w(10.6.0.11 10.6.0.12 10.6.0.13)
}
consul['monitoring_service_discovery'] = true
```
[Reconfigure GitLab](../restart_gitlab.md#reconfigure-a-linux-package-installation) for the changes to take effect.
#### Internal load balancer setup
An internal TCP load balancer must then be set up to serve each PgBouncer node (in this example on the IP `10.6.0.20`). An example of how to do this can be found in the [PgBouncer Configure Internal Load Balancer](#configure-the-internal-load-balancer) section.
#### Example recommended setup for PostgreSQL servers
On database nodes edit `/etc/gitlab/gitlab.rb`:
```ruby
# Disable all components except Patroni, PgBouncer and Consul
roles(['patroni_role', 'pgbouncer_role'])
# PostgreSQL configuration
postgresql['listen_address'] = '0.0.0.0'
postgresql['hot_standby'] = 'on'
postgresql['wal_level'] = 'replica'
# Disable automatic database migrations
gitlab_rails['auto_migrate'] = false
postgresql['pgbouncer_user_password'] = '771a8625958a529132abe6f1a4acb19c'
postgresql['sql_user_password'] = '450409b85a0223a214b5fb1484f34d0f'
patroni['username'] = 'PATRONI_API_USERNAME'
patroni['password'] = 'PATRONI_API_PASSWORD'
patroni['postgresql']['max_replication_slots'] = 6
patroni['postgresql']['max_wal_senders'] = 7
patroni['allowlist'] = %w(10.6.0.0/16 127.0.0.1/32)
postgresql['trust_auth_cidr_addresses'] = %w(10.6.0.0/16 127.0.0.1/32)
# Local PgBouncer service for Database Load Balancing
pgbouncer['databases'] = {
gitlabhq_production: {
host: "127.0.0.1",
user: "pgbouncer",
password: '771a8625958a529132abe6f1a4acb19c'
}
}
# Configure the Consul agent
consul['services'] = %w(postgresql)
consul['configuration'] = {
retry_join: %w(10.6.0.11 10.6.0.12 10.6.0.13)
}
consul['monitoring_service_discovery'] = true
```
[Reconfigure GitLab](../restart_gitlab.md#reconfigure-a-linux-package-installation) for the changes to take effect.
#### Example recommended setup manual steps
After deploying the configuration, follow these steps:
1. Find the primary database node:
```shell
gitlab-ctl get-postgresql-primary
```
1. On `10.6.0.41`, our application server:
Set `gitlab-consul` user's PgBouncer password to `toomanysecrets`:
```shell
gitlab-ctl write-pgpass --host 127.0.0.1 --database pgbouncer --user pgbouncer --hostuser gitlab-consul
```
Run database migrations:
```shell
gitlab-rake gitlab:db:configure
```
## Patroni
Patroni is an opinionated solution for PostgreSQL high availability. It takes control of PostgreSQL, overrides its configuration, and manages its lifecycle (start, stop, restart). Patroni is the only option for PostgreSQL 12+ clustering and for cascading replication for Geo deployments.
The fundamental [architecture](#example-recommended-setup-manual-steps) does not change for Patroni.
You do not need any special consideration for Patroni while provisioning your database nodes. Patroni heavily relies on Consul to store the state of the cluster and elect a leader. Any failure in the Consul cluster or its leader election propagates to the Patroni cluster as well.
Patroni monitors the cluster and handles any failover. When the primary node fails, it works with Consul to notify PgBouncer. On failure, Patroni handles the transitioning of the old primary to a replica and rejoins it to the cluster automatically.
With Patroni, the connection flow is slightly different. Patroni on each node connects to the Consul agent to join the cluster. Only after this point does it decide if the node is the primary or a replica. Based on this decision, it configures and starts PostgreSQL, which it communicates with directly over a Unix socket. This means that if the Consul cluster is not functional or does not have a leader, Patroni, and by extension PostgreSQL, does not start. Patroni also exposes a REST API which can be accessed via its [default port](../package_information/defaults.md)
on each node.
### Check replication status
Run `gitlab-ctl patroni members` to query Patroni for a summary of the cluster status:
```plaintext
+ Cluster: postgresql-ha (6970678148837286213) ------+---------+---------+----+-----------+
| Member | Host | Role | State | TL | Lag in MB |
+-------------------------------------+--------------+---------+---------+----+-----------+
| gitlab-database-1.example.com | 172.18.0.111 | Replica | running | 5 | 0 |
| gitlab-database-2.example.com | 172.18.0.112 | Replica | running | 5 | 100 |
| gitlab-database-3.example.com | 172.18.0.113 | Leader | running | 5 | |
+-------------------------------------+--------------+---------+---------+----+-----------+
```
To verify the status of replication:
```shell
echo -e 'select * from pg_stat_wal_receiver\x\g\x \n select * from pg_stat_replication\x\g\x' | gitlab-psql
```
The same command can be run on all three database servers. The information returned
depends on the role the server is performing.
The leader should return one record per replica:
```sql
-[ RECORD 1 ]----+------------------------------
pid | 371
usesysid | 16384
usename | gitlab_replicator
application_name | gitlab-database-1.example.com
client_addr | 172.18.0.111
client_hostname |
client_port | 42900
backend_start | 2021-06-14 08:01:59.580341+00
backend_xmin |
state | streaming
sent_lsn | 0/EA13220
write_lsn | 0/EA13220
flush_lsn | 0/EA13220
replay_lsn | 0/EA13220
write_lag |
flush_lag |
replay_lag |
sync_priority | 0
sync_state | async
reply_time | 2021-06-18 19:17:14.915419+00
```
Investigate further if:
- There are missing or extra records.
- `reply_time` is not current.
The `lsn` fields relate to which write-ahead-log segments have been replicated.
Run the following on the leader to find out the current Log Sequence Number (LSN):
```shell
echo 'SELECT pg_current_wal_lsn();' | gitlab-psql
```
If a replica is not in sync, `gitlab-ctl patroni members` indicates the volume
of missing data, and the `lag` fields indicate the elapsed time.
Read more about the data returned by the leader
[in the PostgreSQL documentation](https://www.postgresql.org/docs/16/monitoring-stats.html#PG-STAT-REPLICATION-VIEW),
including other values for the `state` field.
The replicas should return:
```sql
-[ RECORD 1 ]---------+-------------------------------------------------------------------------------------------------
pid | 391
status | streaming
receive_start_lsn | 0/D000000
receive_start_tli | 5
received_lsn | 0/EA13220
received_tli | 5
last_msg_send_time | 2021-06-18 19:16:54.807375+00
last_msg_receipt_time | 2021-06-18 19:16:54.807512+00
latest_end_lsn | 0/EA13220
latest_end_time | 2021-06-18 19:07:23.844879+00
slot_name | gitlab-database-1.example.com
sender_host | 172.18.0.113
sender_port | 5432
conninfo | user=gitlab_replicator host=172.18.0.113 port=5432 application_name=gitlab-database-1.example.com
```
Read more about the data returned by the replica
[in the PostgreSQL documentation](https://www.postgresql.org/docs/16/monitoring-stats.html#PG-STAT-WAL-RECEIVER-VIEW).
### Selecting the appropriate Patroni replication method
[Review the Patroni documentation carefully](https://patroni.readthedocs.io/en/latest/yaml_configuration.html#postgresql)
before making changes as some of the options carry a risk of potential data
loss if not fully understood. The [replication mode](https://patroni.readthedocs.io/en/latest/replication_modes.html)
configured determines the amount of tolerable data loss.
{{< alert type="warning" >}}
Replication is not a backup strategy! There is no replacement for a well-considered and tested backup solution.
{{< /alert >}}
Linux package installations default [`synchronous_commit`](https://www.postgresql.org/docs/16/runtime-config-wal.html#GUC-SYNCHRONOUS-COMMIT) to `on`.
```ruby
postgresql['synchronous_commit'] = 'on'
gitlab['geo-postgresql']['synchronous_commit'] = 'on'
```
#### Customizing Patroni failover behavior
Linux package installations expose several options allowing more control over the [Patroni restoration process](#recovering-the-patroni-cluster).
Each option is shown below with its default value in `/etc/gitlab/gitlab.rb`.
```ruby
patroni['use_pg_rewind'] = true
patroni['remove_data_directory_on_rewind_failure'] = false
patroni['remove_data_directory_on_diverged_timelines'] = false
```
[The upstream documentation is always more up to date](https://patroni.readthedocs.io/en/latest/patroni_configuration.html), but the table below should provide a minimal overview of functionality.
| Setting | Overview |
|-----------------------------------------------|----------|
| `use_pg_rewind` | Try running `pg_rewind` on the former cluster leader before it rejoins the database cluster. |
| `remove_data_directory_on_rewind_failure` | If `pg_rewind` fails, remove the local PostgreSQL data directory and re-replicate from the current cluster leader. |
| `remove_data_directory_on_diverged_timelines` | If `pg_rewind` cannot be used and the former leader's timeline has diverged from the current one, delete the local data directory and re-replicate from the current cluster leader. |
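For example, an operator who prefers automatic recovery over preserving a diverged node's local data might set the following. This is a sketch; weigh it against the data-loss considerations above:

```ruby
# A more aggressive recovery policy: discard the old leader's local data
# whenever pg_rewind fails or the timelines have diverged.
patroni['use_pg_rewind'] = true
patroni['remove_data_directory_on_rewind_failure'] = true
patroni['remove_data_directory_on_diverged_timelines'] = true
```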
### Database authorization for Patroni
Patroni uses a Unix socket to manage the PostgreSQL instance. Therefore, a connection from the `local` socket must be trusted.
Replicas use the replication user (`gitlab_replicator` by default) to communicate with the leader. For this user,
you can choose between `trust` and `md5` authentication. If you set `postgresql['sql_replication_password']`,
Patroni uses `md5` authentication; otherwise, it falls back to `trust`.
Based on the authentication you choose, you must specify the cluster CIDR in the `postgresql['md5_auth_cidr_addresses']` or `postgresql['trust_auth_cidr_addresses']` settings.
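For example, to use `md5` authentication for replication, a sketch (the password hash placeholder and CIDR are example values):

```ruby
# Setting a replication password switches replication authentication
# from trust to md5.
postgresql['sql_replication_password'] = 'REPLICATION_PASSWORD_HASH'
postgresql['md5_auth_cidr_addresses'] = %w(10.6.0.0/16 127.0.0.1/32)
```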
### Interacting with Patroni cluster
You can use `gitlab-ctl patroni members` to check the status of the cluster members. To check the status of each node
`gitlab-ctl patroni` provides two additional sub-commands, `check-leader` and `check-replica` which indicate if a node
is the primary or a replica.
When Patroni is enabled, it exclusively controls PostgreSQL's startup,
shutdown, and restart. This means that to shut down PostgreSQL on a given node, you must shut down Patroni on the same node with:
```shell
sudo gitlab-ctl stop patroni
```
Stopping or restarting the Patroni service on the leader node triggers an automatic failover. If you need Patroni to reload its configuration or restart the PostgreSQL process without triggering the failover, you must use the `reload` or `restart` sub-commands of `gitlab-ctl patroni` instead. These two sub-commands are wrappers of the same `patronictl` commands.
### Manual failover procedure for Patroni
{{< alert type="warning" >}}
In GitLab 16.5 and earlier, PgBouncer nodes do not automatically fail over alongside
Patroni nodes. PgBouncer services
[must be restarted manually](replication_and_failover_troubleshooting.md#pgbouncer-error-error-pgbouncer-cannot-connect-to-server)
for a successful switchover.
{{< /alert >}}
While Patroni supports automatic failover, you also have the ability to perform
a manual one, where you have two slightly different options:
- Failover: allows you to perform a manual failover when there are no healthy nodes.
You can perform this action in any PostgreSQL node:
```shell
sudo gitlab-ctl patroni failover
```
- Switchover: only works when the cluster is healthy and allows you to schedule a switchover (it can happen immediately).
You can perform this action in any PostgreSQL node:
```shell
sudo gitlab-ctl patroni switchover
```
For further details on this subject, see the
[Patroni documentation](https://patroni.readthedocs.io/en/latest/rest_api.html#switchover-and-failover-endpoints).
#### Geo secondary site considerations
When a Geo secondary site is replicating from a primary site that uses `Patroni` and `PgBouncer`, replicating through PgBouncer is not supported. There is a feature request to add support, see [issue #8832](https://gitlab.com/gitlab-org/omnibus-gitlab/-/issues/8832).
Recommended. Introduce a load balancer in the primary site to automatically handle failovers in the `Patroni` cluster. For more information, see [Step 2: Configure the internal load balancer on the primary site](../geo/setup/database.md#step-2-configure-the-internal-load-balancer-on-the-primary-site).
##### Handling Patroni failover when replicating directly from the leader node
If your secondary site is configured to replicate directly from the leader node in the `Patroni` cluster, a failover in the `Patroni` cluster stops replication to the secondary site, even if the original node is re-added as a follower node.
In that scenario, you must manually point your secondary site to replicate from the new leader after a failover in the `Patroni` cluster:
```shell
sudo gitlab-ctl replicate-geo-database --host=<new_leader_ip> --replication-slot=<slot_name>
```
This re-syncs your secondary site database and may take a very long time depending on the amount of data to sync. You may also need to run `gitlab-ctl reconfigure` if replication is still not working after re-syncing.
### Recovering the Patroni cluster
To recover the old primary and rejoin it to the cluster as a replica, you can start Patroni with:
```shell
sudo gitlab-ctl start patroni
```
No further configuration or intervention is needed.
### Maintenance procedure for Patroni
With Patroni enabled, you can run planned maintenance on your nodes. To perform maintenance on one node without Patroni interfering, put the cluster into maintenance mode with:
```shell
sudo gitlab-ctl patroni pause
```
When Patroni runs in paused mode, it does not change the state of PostgreSQL. After you are done, you can resume Patroni:
```shell
sudo gitlab-ctl patroni resume
```
For further details, see [Patroni documentation on this subject](https://patroni.readthedocs.io/en/latest/pause.html).
### Upgrading PostgreSQL major version in a Patroni cluster
For a list of the bundled PostgreSQL versions and the default version for each release, see the [PostgreSQL versions of the Linux package](../package_information/postgresql_versions.md).
Here are a few key facts that you must consider before upgrading PostgreSQL:
- The main point is that you have to shut down the Patroni cluster. This means that your
GitLab deployment is down for the duration of the database upgrade or, at least, as long as your leader
node is upgraded. This can be a significant downtime depending on the size of your database.
- Upgrading PostgreSQL creates a new data directory with a new control data. From the perspective of Patroni, this is a new cluster that needs to be bootstrapped again. Therefore, as part of the upgrade procedure, the cluster state (stored in Consul) is wiped out. After the upgrade is complete, Patroni bootstraps a new cluster. This changes your cluster ID.
- The procedures for upgrading leader and replicas are not the same. That is why it is important to use the right procedure on each node.
- Upgrading a replica node deletes the data directory and resynchronizes it from the leader using the
configured replication method (`pg_basebackup` is the only available option). It might take some
time for the replica to catch up with the leader, depending on the size of your database.
- An overview of the upgrade procedure is outlined in [the Patroni documentation](https://patroni.readthedocs.io/en/latest/existing_data.html#major-upgrade-of-postgresql-version).
You can still use `gitlab-ctl pg-upgrade` which implements this procedure with a few adjustments.
Considering these, you should carefully plan your PostgreSQL upgrade:
1. Find out which node is the leader and which node is a replica:
```shell
gitlab-ctl patroni members
```
{{< alert type="note" >}}
On a Geo secondary site, the Patroni leader node is called `standby leader`.
{{< /alert >}}
1. Stop Patroni only on replicas.
```shell
sudo gitlab-ctl stop patroni
```
1. Enable the maintenance mode on the application node:
```shell
sudo gitlab-ctl deploy-page up
```
1. Upgrade PostgreSQL on the leader node and make sure that the upgrade is completed successfully:
```shell
# Default command timeout is 600s, configurable with '--timeout'
sudo gitlab-ctl pg-upgrade
```
{{< alert type="note" >}}
`gitlab-ctl pg-upgrade` tries to detect the role of the node. If for any reason the auto-detection
does not work or you believe it did not detect the role correctly, you can use the `--leader` or
`--replica` arguments to manually override it. Use `gitlab-ctl pg-upgrade --help` for more details on available options.
{{< /alert >}}
1. Check the status of the leader and cluster. You can proceed only if you have a healthy leader:
```shell
gitlab-ctl patroni check-leader
# OR
gitlab-ctl patroni members
```
1. You can now disable the maintenance mode on the application node:
```shell
sudo gitlab-ctl deploy-page down
```
1. Upgrade PostgreSQL on replicas (you can do this in parallel on all of them):
```shell
sudo gitlab-ctl pg-upgrade
```
1. Ensure that the compatible versions of `pg_dump` and `pg_restore` are used
on the GitLab Rails instance to avoid version mismatch errors when performing
a backup or restore. You can do this by specifying the PostgreSQL version
in `/etc/gitlab/gitlab.rb` on the Rails instance:
```ruby
postgresql['version'] = 16
```
If issues are encountered upgrading the replicas,
[there is a troubleshooting section](replication_and_failover_troubleshooting.md#postgresql-major-version-upgrade-fails-on-a-patroni-replica) that might be the solution.
{{< alert type="note" >}}
Reverting the PostgreSQL upgrade with `gitlab-ctl revert-pg-upgrade` has the same considerations as
`gitlab-ctl pg-upgrade`. You should follow the same procedure by first stopping the replicas,
then reverting the leader, and finally reverting the replicas.
{{< /alert >}}
### Near-zero-downtime upgrade of PostgreSQL in a Patroni cluster
{{< details >}}
- Status: Experiment
{{< /details >}}
Patroni enables you to run a major PostgreSQL upgrade without shutting down the cluster. However, this
requires additional resources to host the new Patroni nodes with the upgraded PostgreSQL. In practice, with this
procedure, you are:
- Creating a new Patroni cluster with a new version of PostgreSQL.
- Migrating the data from the existing cluster.
This procedure is non-invasive, and does not impact your existing cluster before switching it off.
However, it can be both time- and resource-consuming. Consider the trade-offs against your availability requirements.
The steps, in order:
1. [Provision resources for the new cluster](#provision-resources-for-the-new-cluster).
1. [Preflight check](#preflight-check).
1. [Configure the leader of the new cluster](#configure-the-leader-of-the-new-cluster).
1. [Start publisher on the existing leader](#start-publisher-on-the-existing-leader).
1. [Copy the data from the existing cluster](#copy-the-data-from-the-existing-cluster).
1. [Replicate data from the existing cluster](#replicate-data-from-the-existing-cluster).
1. [Grow the new cluster](#grow-the-new-cluster).
1. [Switch the application to use the new cluster](#switch-the-application-to-use-the-new-cluster).
1. [Clean up](#clean-up).
#### Provision resources for the new cluster
You need a new set of resources for Patroni nodes. The new Patroni cluster does not require exactly the same number
of nodes as the existing cluster. You may choose a different number of nodes based on your requirements. The new
cluster uses the existing Consul cluster (with a different `patroni['scope']`) and PgBouncer nodes.
Make sure that at least the leader node of the existing cluster is accessible from the nodes of the new
cluster.
#### Preflight check
We rely on PostgreSQL [logical replication](https://www.postgresql.org/docs/16/logical-replication.html)
to support near-zero-downtime upgrades of Patroni clusters. The
[logical replication requirements](https://www.postgresql.org/docs/16/logical-replication-restrictions.html)
must be met. In particular, `wal_level` must be `logical`. To check the `wal_level`,
run the following command with `gitlab-psql` on any node of the existing cluster:
```sql
SHOW wal_level;
```
By default, Patroni sets `wal_level` to `replica`. You must increase it to `logical`.
Changing `wal_level` requires restarting PostgreSQL, so this step leads to a short
downtime (hence near-zero-downtime). To do this on the Patroni leader node:
1. Edit `gitlab.rb` by setting:
```ruby
patroni['postgresql']['wal_level'] = 'logical'
```
1. Run `gitlab-ctl reconfigure`. This writes the configuration but does not restart the PostgreSQL service.
1. Run `gitlab-ctl patroni restart` to restart PostgreSQL and apply the new `wal_level` without triggering
failover. For the duration of the restart cycle, the cluster leader is unavailable.
1. Verify the change by running `SHOW wal_level` with `gitlab-psql`.
#### Configure the leader of the new cluster
Configure the first node of the new cluster. It becomes the leader of the new cluster.
You can use the configuration of the existing cluster, if it is compatible with the new
PostgreSQL version. Refer to the documentation on [configuring Patroni clusters](#configuring-patroni-cluster).
In addition to the common configuration, you must apply the following in `gitlab.rb`:
1. Make sure that the new Patroni cluster uses a different scope. The scope is used to namespace the Patroni settings
in Consul, making it possible to use the same Consul cluster for the existing and the new clusters.
```ruby
patroni['scope'] = 'postgresql_new-ha'
```
1. Make sure that Consul agents don't mix PostgreSQL services offered by the existing and the new Patroni
clusters. For this purpose, you must use an internal attribute:
```ruby
consul['internal']['postgresql_service_name'] = 'postgresql_new'
```
#### Start publisher on the existing leader
On the existing leader, run this SQL statement with `gitlab-psql` to start a logical replication publisher:
```sql
CREATE PUBLICATION patroni_upgrade FOR ALL TABLES;
```
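To confirm the publication exists, you can run a quick sanity check against the standard PostgreSQL catalogs:

```sql
-- Run with gitlab-psql on the existing leader.
SELECT pubname, puballtables FROM pg_publication;
```

The `patroni_upgrade` row should show `puballtables` as `t`, because the publication was created `FOR ALL TABLES`.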
#### Copy the data from the existing cluster
To dump the current database from the existing cluster, run these commands on the
leader of the new cluster:
1. Optional. Copy global database objects:
```shell
pg_dumpall -h ${EXISTING_CLUSTER_LEADER} -U gitlab-psql -g | gitlab-psql
```
You can ignore the errors about existing database objects, such as roles. They are
created when the node is configured for the first time.
1. Copy the current database:
```shell
pg_dump -h ${EXISTING_CLUSTER_LEADER} -U gitlab-psql -d gitlabhq_production -s | gitlab-psql
```
Depending on the size of your database, this command may take a while to complete.
The `pg_dump` and `pg_dumpall` commands are in `/opt/gitlab/embedded/bin`. In these commands,
`EXISTING_CLUSTER_LEADER` is the host address of the leader node of the existing cluster.
{{< alert type="note" >}}
The `gitlab-psql` user must be able to authenticate with the existing leader from the new leader node.
{{< /alert >}}
#### Replicate data from the existing cluster
After taking the initial data dump, you must keep the new leader in sync with the
latest changes of your existing cluster. On the new leader, run this SQL statement
with `gitlab-psql` to subscribe to publication of the existing leader:
```sql
CREATE SUBSCRIPTION patroni_upgrade
CONNECTION 'host=EXISTING_CLUSTER_LEADER dbname=gitlabhq_production user=gitlab-psql'
PUBLICATION patroni_upgrade;
```
In this statement, `EXISTING_CLUSTER_LEADER` is the host address of the leader node
of the existing cluster. You can also use
[other parameters](https://www.postgresql.org/docs/16/libpq-connect.html#LIBPQ-PARAMKEYWORDS)
to change the connection string. For example, you can pass the authentication password.
To check the status of replication, run these queries:
- `SELECT * FROM pg_replication_slots WHERE slot_name = 'patroni_upgrade'` on the existing leader (the publisher).
- `SELECT * FROM pg_stat_subscription` on the new leader (the subscriber).
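To estimate how far the subscriber is behind, you can also compare WAL positions on the existing leader (a sketch using standard PostgreSQL views):

```sql
-- Run with gitlab-psql on the existing leader (the publisher).
SELECT application_name,
       state,
       pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn) AS replay_lag_bytes
FROM pg_stat_replication;
```

A `replay_lag_bytes` value near zero means the new leader has caught up with the existing cluster.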
#### Grow the new cluster
Configure other nodes of the new cluster in the way you
[configured the leader](#configure-the-leader-of-the-new-cluster).
Make sure that you use the same `patroni['scope']` and
`consul['internal']['postgresql_service_name']`.
What happens here:
- The application still uses the existing leader as its database backend.
- The logical replication ensures that the new leader keeps in sync.
- When other nodes are added to the new cluster, Patroni handles
the replication to the nodes.
It is a good idea to wait until the replica nodes of the new cluster are initialized and caught up on the replication
lag.
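To verify that the replicas of the new cluster are initialized and streaming from the new leader, you can check `pg_stat_replication` there (a sketch):

```sql
-- Run with gitlab-psql on the new leader.
SELECT client_addr, state, sync_state FROM pg_stat_replication;
```

Each replica of the new cluster should appear with `state` as `streaming`.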
#### Switch the application to use the new cluster
Up to this point, you can stop the upgrade procedure without losing data on the
existing cluster. When you switch the database backend of the application and point
it to the new cluster, the old cluster does not receive new updates. It falls behind
the new cluster. After this point, any recovery must be done from the nodes of the new cluster.
To do the switch on all PgBouncer nodes:
1. Edit `gitlab.rb` by setting:
```ruby
consul['watchers'] = %w(postgresql_new)
consul['internal']['postgresql_service_name'] = 'postgresql_new'
```
1. Run `gitlab-ctl reconfigure`.
#### Clean up
After completing these steps, you can clean up the resources of the old Patroni cluster.
They are no longer needed. However, before removing the resources, remove the
logical replication subscription on the new leader by running `DROP SUBSCRIPTION patroni_upgrade`
with `gitlab-psql`.
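The cleanup statements can be sketched as follows (the `DROP PUBLICATION` on the old leader is optional, because the old cluster is decommissioned anyway):

```sql
-- On the new leader: stop consuming changes from the old cluster.
DROP SUBSCRIPTION patroni_upgrade;

-- On the old leader, optionally, before decommissioning it:
DROP PUBLICATION patroni_upgrade;
```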
---
stage: Data Access
group: Database Operations
info: To determine the technical writer assigned to the Stage/Group associated with
  this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Working with the bundled PgBouncer service
---
{{< details >}}
- Tier: Free, Premium, Ultimate
- Offering: GitLab Self-Managed
{{< /details >}}
{{< alert type="note" >}}
PgBouncer is bundled in the `gitlab-ee` package, but is free to use.
For support, you need a [Premium subscription](https://about.gitlab.com/pricing/).
{{< /alert >}}
[PgBouncer](https://www.pgbouncer.org/) is used to seamlessly migrate database
connections between servers in a failover scenario. Additionally, it can be used
in a non-fault-tolerant setup to pool connections, speeding up response time
while reducing resource usage.
GitLab Premium includes a bundled version of PgBouncer that can be managed
through `/etc/gitlab/gitlab.rb`.
## PgBouncer as part of a fault-tolerant GitLab installation
This content has been moved to a [new location](replication_and_failover.md#configure-pgbouncer-nodes).
## PgBouncer as part of a non-fault-tolerant GitLab installation
1. Generate `PGBOUNCER_USER_PASSWORD_HASH` with the command `gitlab-ctl pg-password-md5 pgbouncer`.
1. Generate `SQL_USER_PASSWORD_HASH` with the command `gitlab-ctl pg-password-md5 gitlab`. You enter the plaintext `SQL_USER_PASSWORD` later.
1. On your database node, ensure the following is set in your `/etc/gitlab/gitlab.rb`
```ruby
postgresql['pgbouncer_user_password'] = 'PGBOUNCER_USER_PASSWORD_HASH'
postgresql['sql_user_password'] = 'SQL_USER_PASSWORD_HASH'
postgresql['listen_address'] = 'XX.XX.XX.Y' # Where XX.XX.XX.Y is the IP address the PostgreSQL node should listen on
postgresql['md5_auth_cidr_addresses'] = %w(AA.AA.AA.B/32) # Where AA.AA.AA.B is the IP address of the pgbouncer node
```
1. Run `gitlab-ctl reconfigure`
{{< alert type="note" >}}
If the database was already running, it needs to be restarted after reconfigure by running `gitlab-ctl restart postgresql`.
{{< /alert >}}
1. On the node you are running PgBouncer on, make sure the following is set in `/etc/gitlab/gitlab.rb`
```ruby
pgbouncer['enable'] = true
pgbouncer['databases'] = {
gitlabhq_production: {
host: 'DATABASE_HOST',
user: 'pgbouncer',
password: 'PGBOUNCER_USER_PASSWORD_HASH'
}
}
```
You can pass additional configuration parameters per database, for example:
```ruby
pgbouncer['databases'] = {
gitlabhq_production: {
...
pool_mode: 'transaction'
}
}
```
Use these parameters with caution. For the complete list of parameters refer to the
[PgBouncer documentation](https://www.pgbouncer.org/config.html#section-databases).
1. Run `gitlab-ctl reconfigure`
1. On the node running Puma, make sure the following is set in `/etc/gitlab/gitlab.rb`
```ruby
gitlab_rails['db_host'] = 'PGBOUNCER_HOST'
gitlab_rails['db_port'] = '6432'
gitlab_rails['db_password'] = 'SQL_USER_PASSWORD'
```
1. Run `gitlab-ctl reconfigure`
1. At this point, your instance should connect to the database through PgBouncer. If you are having issues, see the [Troubleshooting](#troubleshooting) section
## Backups
Do not back up or restore GitLab through a PgBouncer connection: doing so causes a GitLab outage.
[Read more about this and how to reconfigure backups](../backup_restore/backup_gitlab.md#back-up-and-restore-for-installations-using-pgbouncer).
## Enable Monitoring
If you enable Monitoring, it must be enabled on all PgBouncer servers.
1. Create/edit `/etc/gitlab/gitlab.rb` and add the following configuration:
```ruby
# Enable service discovery for Prometheus
consul['enable'] = true
consul['monitoring_service_discovery'] = true
# Replace placeholders
# Y.Y.Y.Y consul1.gitlab.example.com Z.Z.Z.Z
# with the addresses of the Consul server nodes
consul['configuration'] = {
retry_join: %w(Y.Y.Y.Y consul1.gitlab.example.com Z.Z.Z.Z),
}
# Set the network addresses that the exporters will listen on
node_exporter['listen_address'] = '0.0.0.0:9100'
pgbouncer_exporter['listen_address'] = '0.0.0.0:9188'
```
1. Run `sudo gitlab-ctl reconfigure` to compile the configuration.
## Administrative console
In Linux package installations, a command is provided to automatically connect to the
PgBouncer administrative console. See the
[PgBouncer documentation](https://www.pgbouncer.org/usage.html#admin-console)
for detailed instructions on how to interact with the console.
To start a session run the following and provide the password for the `pgbouncer`
user:
```shell
sudo gitlab-ctl pgb-console
```
To get some basic information about the instance:
```shell
pgbouncer=# show databases; show clients; show servers;
name | host | port | database | force_user | pool_size | reserve_pool | pool_mode | max_connections | current_connections
---------------------+-----------+------+---------------------+------------+-----------+--------------+-----------+-----------------+---------------------
gitlabhq_production | 127.0.0.1 | 5432 | gitlabhq_production | | 100 | 5 | | 0 | 1
pgbouncer | | 6432 | pgbouncer | pgbouncer | 2 | 0 | statement | 0 | 0
(2 rows)
type | user | database | state | addr | port | local_addr | local_port | connect_time | request_time | ptr | link
| remote_pid | tls
------+-----------+---------------------+--------+-----------+-------+------------+------------+---------------------+---------------------+-----------+------
+------------+-----
C | gitlab | gitlabhq_production | active | 127.0.0.1 | 44590 | 127.0.0.1 | 6432 | 2018-04-24 22:13:10 | 2018-04-24 22:17:10 | 0x12444c0 |
| 0 |
C | gitlab | gitlabhq_production | active | 127.0.0.1 | 44592 | 127.0.0.1 | 6432 | 2018-04-24 22:13:10 | 2018-04-24 22:17:10 | 0x12447c0 |
| 0 |
C | gitlab | gitlabhq_production | active | 127.0.0.1 | 44594 | 127.0.0.1 | 6432 | 2018-04-24 22:13:10 | 2018-04-24 22:17:10 | 0x1244940 |
| 0 |
C | gitlab | gitlabhq_production | active | 127.0.0.1 | 44706 | 127.0.0.1 | 6432 | 2018-04-24 22:14:22 | 2018-04-24 22:16:31 | 0x1244ac0 |
| 0 |
C | gitlab | gitlabhq_production | active | 127.0.0.1 | 44708 | 127.0.0.1 | 6432 | 2018-04-24 22:14:22 | 2018-04-24 22:15:15 | 0x1244c40 |
| 0 |
C | gitlab | gitlabhq_production | active | 127.0.0.1 | 44794 | 127.0.0.1 | 6432 | 2018-04-24 22:15:15 | 2018-04-24 22:15:15 | 0x1244dc0 |
| 0 |
C | gitlab | gitlabhq_production | active | 127.0.0.1 | 44798 | 127.0.0.1 | 6432 | 2018-04-24 22:15:15 | 2018-04-24 22:16:31 | 0x1244f40 |
| 0 |
C | pgbouncer | pgbouncer | active | 127.0.0.1 | 44660 | 127.0.0.1 | 6432 | 2018-04-24 22:13:51 | 2018-04-24 22:17:12 | 0x1244640 |
| 0 |
(8 rows)
type | user | database | state | addr | port | local_addr | local_port | connect_time | request_time | ptr | link | rem
ote_pid | tls
------+--------+---------------------+-------+-----------+------+------------+------------+---------------------+---------------------+-----------+------+----
--------+-----
S | gitlab | gitlabhq_production | idle | 127.0.0.1 | 5432 | 127.0.0.1 | 35646 | 2018-04-24 22:15:15 | 2018-04-24 22:17:10 | 0x124dca0 | |
19980 |
(1 row)
```
## Procedure for bypassing PgBouncer
### Linux package installations
Some database changes have to be done directly, and not through PgBouncer.
The main affected tasks are [database restores](../backup_restore/backup_gitlab.md#back-up-and-restore-for-installations-using-pgbouncer)
and [GitLab upgrades with database migrations](../../update/zero_downtime.md).
1. To find the primary node, run the following on a database node:
```shell
sudo gitlab-ctl patroni members
```
1. Edit `/etc/gitlab/gitlab.rb` on the application node you're performing the task on, and update
`gitlab_rails['db_host']` and `gitlab_rails['db_port']` with the database
primary's host and port.
1. Run reconfigure:
```shell
sudo gitlab-ctl reconfigure
```
After you've performed the tasks or procedure, switch back to using PgBouncer:
1. Change back `/etc/gitlab/gitlab.rb` to point to PgBouncer.
1. Run reconfigure:
```shell
sudo gitlab-ctl reconfigure
```
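The temporary change to `/etc/gitlab/gitlab.rb` can be sketched as follows (the host value is a placeholder; take it from the `gitlab-ctl patroni members` output):

```ruby
# Temporary direct connection that bypasses PgBouncer.
gitlab_rails['db_host'] = 'DATABASE_PRIMARY_HOST'
gitlab_rails['db_port'] = 5432 # PostgreSQL port, not the PgBouncer port (6432)
```

Revert these values and run reconfigure again when the task is done.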
### Helm chart installations
High-availability deployments also need to bypass PgBouncer for the same reasons as Linux package-based ones.
For Helm chart installations:
- Database backup and restore tasks are performed by the toolbox container.
- Migration tasks are performed by the migrations container.
You should override the PostgreSQL port on each subchart, so these tasks can execute and connect to PostgreSQL directly:
- [Toolbox](https://gitlab.com/gitlab-org/charts/gitlab/-/blob/master/charts/gitlab/charts/toolbox/values.yaml#L40)
- [Migrations](https://gitlab.com/gitlab-org/charts/gitlab/-/blob/master/charts/gitlab/charts/migrations/values.yaml#L46)
## Fine tuning
PgBouncer's default settings suit the majority of installations.
In specific cases you may want to change the performance-specific and resource-specific variables to either increase possible
throughput or to limit resource utilization that could cause memory exhaustion on the database.
You can find the parameters and respective documentation on the [official PgBouncer documentation](https://www.pgbouncer.org/config.html).
Listed below are the most relevant ones and their defaults on a Linux package installation:
- `pgbouncer['max_client_conn']` (default: `2048`, depends on server file descriptor limits)

  This is the "frontend" pool in PgBouncer: connections from Rails to PgBouncer.

- `pgbouncer['default_pool_size']` (default: `100`)

  This is the "backend" pool in PgBouncer: connections from PgBouncer to the database.
The ideal number for `default_pool_size` must be enough to handle all provisioned services that need to access
the database. Each of the services listed below uses the following formula to define database pool size:
- `puma`: `max_threads + headroom` (default: `14`)
  - `max_threads` is configured via `gitlab['puma']['max_threads']` (default: `4`)
  - `headroom` can be configured via the `DB_POOL_HEADROOM` environment variable (defaults to `10`)
- `sidekiq`: `max_concurrency + 1 + headroom` (default: `31`)
  - `max_concurrency` is configured via `sidekiq['max_concurrency']` (default: `20`)
  - `headroom` can be configured via the `DB_POOL_HEADROOM` environment variable (defaults to `10`)
- `geo-logcursor`: `1 + headroom` (default: `11`)
  - `headroom` can be configured via the `DB_POOL_HEADROOM` environment variable (defaults to `10`)
To calculate the `default_pool_size`, multiply the number of instances of `puma`, `sidekiq`, and `geo-logcursor` by the
number of connections each can consume as listed previously. The total is the suggested `default_pool_size`.
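As a hypothetical worked example (the node counts are assumptions, not recommendations): with 2 Puma nodes and 4 Sidekiq nodes at the defaults above, the suggested pool size works out as:

```ruby
# Per-node connection demand, using the default formulas above.
puma_per_node    = 4 + 10      # max_threads + headroom = 14
sidekiq_per_node = 20 + 1 + 10 # max_concurrency + 1 + headroom = 31

# Hypothetical deployment: 2 Puma nodes, 4 Sidekiq nodes, no Geo log cursor.
default_pool_size = 2 * puma_per_node + 4 * sidekiq_per_node
puts default_pool_size # => 152
```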
If you are using more than one PgBouncer with an internal Load Balancer, you may be able to divide the
`default_pool_size` by the number of instances to guarantee an evenly distributed load between them.
The `pgbouncer['max_client_conn']` is the hard limit of connections PgBouncer can accept. It's unlikely you need
to change this. If you are hitting that limit, you may want to consider adding additional PgBouncers with an internal
Load Balancer.
When setting up the limits for a PgBouncer that points to the Geo Tracking Database,
you can likely exclude `puma` from the equation, as it only accesses that database sporadically.
## Troubleshooting
In case you are experiencing any issues connecting through PgBouncer, the first
place to check is always the logs:
```shell
sudo gitlab-ctl tail pgbouncer
```
Additionally, you can check the output from `show databases` in the
[administrative console](#administrative-console). In the output, you would expect
to see values in the `host` field for the `gitlabhq_production` database.
Also, `current_connections` should be greater than 1.
### Message: `LOG: invalid CIDR mask in address`
See the suggested fix [in Geo documentation](../geo/replication/troubleshooting/postgresql_replication.md#message-log--invalid-cidr-mask-in-address).
### Message: `LOG: invalid IP mask "md5": Name or service not known`
See the suggested fix [in Geo documentation](../geo/replication/troubleshooting/postgresql_replication.md#message-log--invalid-ip-mask-md5-name-or-service-not-known).
---
stage: Data Access
group: Database Operations
info: To determine the technical writer assigned to the Stage/Group associated with
  this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Monitoring and logging setup for external databases
---
External PostgreSQL database systems have different logging options for monitoring performance and troubleshooting; however, they are not enabled by default. This section provides recommendations for self-managed PostgreSQL, and recommendations for some major providers of PostgreSQL managed services.
## Recommended PostgreSQL Logging settings
You should enable the following logging settings:
- `log_statement=ddl`: log changes of database model definition (DDL), such as `CREATE`, `ALTER` or `DROP` of objects. This helps track recent model changes that could be causing performance issues and identify security breaches and human errors.
- `log_lock_waits=on`: log processes holding [locks](https://www.postgresql.org/docs/16/explicit-locking.html) for long periods, a common cause of poor query performance.
- `log_temp_files=0`: log usage of intense and unusual temporary files that can indicate poor query performance.
- `log_autovacuum_min_duration=0`: log all autovacuum executions. Autovacuum is a key component for overall PostgreSQL engine performance. Essential for troubleshooting and tuning if dead tuples are not being removed from tables.
- `log_min_duration_statement=1000`: log slow queries (slower than 1 second).
The full description of these parameter settings can be found in
[PostgreSQL error reporting and logging documentation](https://www.postgresql.org/docs/16/runtime-config-logging.html#RUNTIME-CONFIG-LOGGING-WHAT).
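On a self-managed PostgreSQL instance where you have superuser access, the settings above can be applied with `ALTER SYSTEM` (a sketch; the same parameters can also be set in `postgresql.conf`):

```sql
ALTER SYSTEM SET log_statement = 'ddl';
ALTER SYSTEM SET log_lock_waits = on;
ALTER SYSTEM SET log_temp_files = 0;
ALTER SYSTEM SET log_autovacuum_min_duration = 0;
ALTER SYSTEM SET log_min_duration_statement = 1000;

-- These parameters are all reloadable; no restart is needed.
SELECT pg_reload_conf();
```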
## Amazon RDS
The Amazon Relational Database Service (RDS) provides a large number of [monitoring metrics](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Monitoring.html) and [logging interfaces](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Monitor_Logs_Events.html). Here are a few you should configure:
- Change all [recommended PostgreSQL Logging settings](#recommended-postgresql-logging-settings) through [RDS Parameter Groups](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_WorkingWithDBInstanceParamGroups.html).
- As the recommended logging parameters are [dynamic in RDS](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Appendix.PostgreSQL.CommonDBATasks.Parameters.html), you don't need a reboot after changing these settings.
- The PostgreSQL logs can be observed through the [RDS console](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/logs-events-streams-console.html).
- Enable [RDS performance insight](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_PerfInsights.html) allows you to visualise your database load with many important performance metrics of a PostgreSQL database engine.
- Enable [RDS Enhanced Monitoring](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Monitoring.OS.html) to monitor the operating system metrics. These metrics can indicate bottlenecks in your underlying hardware and OS that are impacting your database performance.
- In production environments set the monitoring interval to 10 seconds (or less) to capture micro bursts of resource usage that can be the cause of many performance issues. Set `Granularity=10` in the console or `monitoring-interval=10` in the CLI.
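As a sketch of applying one of these settings from the AWS CLI (the parameter group name `my-gitlab-pg` is a placeholder; the group must already exist and be attached to your instance):

```shell
aws rds modify-db-parameter-group \
  --db-parameter-group-name my-gitlab-pg \
  --parameters "ParameterName=log_min_duration_statement,ParameterValue=1000,ApplyMethod=immediate"
```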
# Find relevant log entries with a correlation ID
{{< details >}}
- Tier: Free, Premium, Ultimate
- Offering: GitLab Self-Managed
{{< /details >}}
GitLab instances log a unique request tracking ID (known as the
"correlation ID") for most requests. Each individual request to GitLab gets
its own correlation ID, which then gets logged in each GitLab component's logs for that
request. This makes it easier to trace behavior in a
distributed system. Without this ID, it can be difficult or
impossible to correlate log entries across components.
## Identify the correlation ID for a request
The correlation ID is logged in structured logs under the key `correlation_id`
and in all response headers GitLab sends under the header `x-request-id`.
You can find your correlation ID by searching in either place.
### Getting the correlation ID in your browser
You can use your browser's developer tools to monitor and inspect network
activity with the site that you're visiting. See the links below for network monitoring
documentation for some popular browsers.
- [Network Monitor - Firefox Developer Tools](https://firefox-source-docs.mozilla.org/devtools-user/network_monitor/index.html)
- [Inspect Network Activity In Chrome DevTools](https://developer.chrome.com/docs/devtools/network/)
- [Safari Web Development Tools](https://developer.apple.com/safari/tools/)
- [Microsoft Edge Network panel](https://learn.microsoft.com/en-us/microsoft-edge/devtools-guide-chromium/network/)
To locate a relevant request and view its correlation ID:
1. Enable persistent logging in your network monitor. Some actions in GitLab redirect you quickly after you submit a form, so this helps capture all relevant activity.
1. To help isolate the requests you are looking for, you can filter for `document` requests.
1. Select the request of interest to view further detail.
1. Go to the **Headers** section and look for **Response Headers**. There you should find an `x-request-id` header with a
value that was randomly generated by GitLab for the request.
See the following example:

### Getting the correlation ID from your logs
Another approach to finding the correct correlation ID is to search or tail
your logs and find the `correlation_id` value for the log entry that you're
interested in.
For example, if you want to learn what's happening or breaking when
you reproduce an action in GitLab, you could tail the GitLab logs, filtering
to requests by your user, and then watch the requests until you see what you're
interested in.
### Getting the correlation ID from curl
If you're using `curl`, you can use the `--verbose` option to show request and response headers, as well as other debug information.
```shell
curl --verbose "https://gitlab.example.com/api/v4/projects"
# look for a line that looks like this
< x-request-id: 4rAMkV3gof4
```
#### Using jq
This example uses [jq](https://stedolan.github.io/jq/) to filter results and
display values we most likely care about.
```shell
sudo gitlab-ctl tail gitlab-rails/production_json.log | jq 'select(.username == "bob") | "User: \(.username), \(.method) \(.path), \(.controller)#\(.action), ID: \(.correlation_id)"'
```
```plaintext
"User: bob, GET /root/linux, ProjectsController#show, ID: U7k7fh6NpW3"
"User: bob, GET /root/linux/commits/master/signatures, Projects::CommitsController#signatures, ID: XPIHpctzEg1"
"User: bob, GET /root/linux/blob/master/README, Projects::BlobController#show, ID: LOt9hgi1TV4"
```
#### Using grep
This example uses `grep`, `tr`, and `egrep`, which are more likely to be installed than `jq`.
```shell
sudo gitlab-ctl tail gitlab-rails/production_json.log | grep '"username":"bob"' | tr ',' '\n' | egrep 'method|path|correlation_id'
```
```plaintext
{"method":"GET"
"path":"/root/linux"
"username":"bob"
"correlation_id":"U7k7fh6NpW3"}
{"method":"GET"
"path":"/root/linux/commits/master/signatures"
"username":"bob"
"correlation_id":"XPIHpctzEg1"}
{"method":"GET"
"path":"/root/linux/blob/master/README"
"username":"bob"
"correlation_id":"LOt9hgi1TV4"}
```
## Searching your logs for the correlation ID
When you have the correlation ID you can start searching for relevant log
entries. You can filter the lines by the correlation ID itself.
Combining a `find` and `grep` should be sufficient to find the entries you are looking for.
```shell
# find <gitlab log directory> -type f -mtime 0 -exec grep '<correlation ID>' '{}' '+'
find /var/log/gitlab -type f -mtime 0 -exec grep 'LOt9hgi1TV4' '{}' '+'
```
```plaintext
/var/log/gitlab/gitlab-workhorse/current:{"correlation_id":"LOt9hgi1TV4","duration_ms":2478,"host":"gitlab.domain.tld","level":"info","method":"GET","msg":"access","proto":"HTTP/1.1","referrer":"https://gitlab.domain.tld/root/linux","remote_addr":"68.0.116.160:0","remote_ip":"[filtered]","status":200,"system":"http","time":"2019-09-17T22:17:19Z","uri":"/root/linux/blob/master/README?format=json\u0026viewer=rich","user_agent":"Mozilla/5.0 (Mac) Gecko Firefox/69.0","written_bytes":1743}
/var/log/gitlab/gitaly/current:{"correlation_id":"LOt9hgi1TV4","grpc.code":"OK","grpc.meta.auth_version":"v2","grpc.meta.client_name":"gitlab-web","grpc.method":"FindCommits","grpc.request.deadline":"2019-09-17T22:17:47Z","grpc.request.fullMethod":"/gitaly.CommitService/FindCommits","grpc.request.glProjectPath":"root/linux","grpc.request.glRepository":"project-1","grpc.request.repoPath":"@hashed/6b/86/6b86b273ff34fce19d6b804eff5a3f5747ada4eaa22f1d49c01e52ddb7875b4b.git","grpc.request.repoStorage":"default","grpc.request.topLevelGroup":"@hashed","grpc.service":"gitaly.CommitService","grpc.start_time":"2019-09-17T22:17:17Z","grpc.time_ms":2319.161,"level":"info","msg":"finished streaming call with code OK","peer.address":"@","span.kind":"server","system":"grpc","time":"2019-09-17T22:17:19Z"}
/var/log/gitlab/gitlab-rails/production_json.log:{"method":"GET","path":"/root/linux/blob/master/README","format":"json","controller":"Projects::BlobController","action":"show","status":200,"duration":2448.77,"view":0.49,"db":21.63,"time":"2019-09-17T22:17:19.800Z","params":[{"key":"viewer","value":"rich"},{"key":"namespace_id","value":"root"},{"key":"project_id","value":"linux"},{"key":"id","value":"master/README"}],"remote_ip":"[filtered]","user_id":2,"username":"bob","ua":"Mozilla/5.0 (Mac) Gecko Firefox/69.0","queue_duration":3.38,"gitaly_calls":1,"gitaly_duration":0.77,"rugged_calls":4,"rugged_duration_ms":28.74,"correlation_id":"LOt9hgi1TV4"}
```
### Searching in distributed architectures
If you have done some horizontal scaling in your GitLab infrastructure, then
you must search across all of your GitLab nodes. You can do this with
some sort of log aggregation software like Loki, ELK, Splunk, or others.
You can use a tool like Ansible or PSSH (parallel SSH) that can execute identical commands across your servers in
parallel, or craft your own solution.
### Viewing the request in the Performance Bar
You can use the [performance bar](../monitoring/performance/performance_bar.md) to view interesting data including calls made to SQL and Gitaly.
To view the data, the correlation ID of the request must match the same session as the user
viewing the performance bar. For API requests, this means that you must perform the request
using the session cookie of the authenticated user.
For example, if you want to view the database queries executed for the following API endpoint:
```plaintext
https://gitlab.com/api/v4/groups/2564205/projects?with_security_reports=true&page=1&per_page=1
```
First, enable the **Developer Tools** panel. See [Getting the correlation ID in your browser](#getting-the-correlation-id-in-your-browser) for details on how to do this.
After developer tools have been enabled, obtain a session cookie as follows:
1. Visit <https://gitlab.com> while logged in.
1. Optional. Select the **Fetch/XHR** request filter in the **Developer Tools** panel. This step is described for Google Chrome developer tools and is not strictly necessary; it just makes it easier to find the correct request.
1. Select the `results?request_id=<some-request-id>` request on the left hand side.
1. The session cookie is displayed under the `Request Headers` section of the `Headers` panel. Right-click on the cookie value and select `Copy value`.

You have the value of the session cookie copied to your clipboard, for example:
```plaintext
experimentation_subject_id=<subject-id>; _gitlab_session=<session-id>; event_filter=all; visitor_id=<visitor-id>; perf_bar_enabled=true; sidebar_collapsed=true; diff_view=inline; sast_entry_point_dismissed=true; auto_devops_settings_dismissed=true; cf_clearance=<cf-clearance>; collapsed_gutter=false
```
Use the value of the session cookie to craft an API request by pasting it into a custom header of a `curl` request:
```shell
$ curl --include "https://gitlab.com/api/v4/groups/2564205/projects?with_security_reports=true&page=1&per_page=1" \
--header 'cookie: experimentation_subject_id=<subject-id>; _gitlab_session=<session-id>; event_filter=all; visitor_id=<visitor-id>; perf_bar_enabled=true; sidebar_collapsed=true; diff_view=inline; sast_entry_point_dismissed=true; auto_devops_settings_dismissed=true; cf_clearance=<cf-clearance>; collapsed_gutter=false'
date: Tue, 28 Sep 2021 03:55:33 GMT
content-type: application/json
...
x-request-id: 01FGN8P881GF2E5J91JYA338Y3
...
[
  {
    "id": 27497069,
    "description": "Analyzer for images used on live K8S containers based on Starboard",
    "container_registry_image_prefix": "registry.gitlab.com/gitlab-org/security-products/analyzers/cluster-image-scanning",
    ...
  }
]
```
The response contains the data from the API endpoint, and a `correlation_id` value, returned in the `x-request-id` header, as described in the [Identify the correlation ID for a request](#identify-the-correlation-id-for-a-request) section.
You can then view the database details for this request:
1. Paste the `x-request-id` value into the `request details` field of the [performance bar](../monitoring/performance/performance_bar.md) and press <kbd>Enter/Return</kbd>. This example uses the `x-request-id` value `01FGN8P881GF2E5J91JYA338Y3`, returned by the previous response:

1. A new request is inserted into the `Request Selector` dropdown list on the right-hand side of the Performance Bar. Select the new request to view the metrics of the API request:

1. Select the `pg` link in the Progress Bar to view the database queries executed by the API request:

The database query dialog is displayed:

# Parsing GitLab logs with `jq`
{{< details >}}
- Tier: Free, Premium, Ultimate
- Offering: GitLab Self-Managed
{{< /details >}}
We recommend using log aggregation and search tools like Kibana and Splunk whenever possible,
but if they are not available, you can still quickly parse
[GitLab logs](_index.md) in JSON format
using [`jq`](https://stedolan.github.io/jq/).
{{< alert type="note" >}}
Specifically for summarizing error events and basic usage statistics,
the GitLab Support Team provides the specialised
[`fast-stats` tool](https://gitlab.com/gitlab-com/support/toolbox/fast-stats/#when-to-use-it).
{{< /alert >}}
## What is `jq`?
As noted in its [manual](https://stedolan.github.io/jq/manual/), `jq` is a command-line JSON processor. The following examples
include use cases targeted for parsing GitLab log files.
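As a quick self-contained illustration (the JSON line below is sample data, not a real GitLab log entry), `jq` extracts fields from one JSON object per input line:

```shell
echo '{"status":500,"path":"/api/v4/projects"}' | jq '.path'
# → "/api/v4/projects"
```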
## Parsing Logs
The examples listed below address their respective log files by
their relative Linux package installation paths and default filenames.
Find the respective full paths in the [GitLab logs sections](_index.md#production_jsonlog).
### Compressed logs
When [log files are rotated](https://smarden.org/runit/svlogd.8), they are renamed in
Unix timestamp format and compressed with `gzip`. The resulting file name looks like
`@40000000624492fa18da6f34.s`. Before parsing, these files must be handled differently
from the more recent log files:
- To uncompress the file, use `gunzip -S .s @40000000624492fa18da6f34.s`, replacing
the filename with your compressed log file's name.
- To read or pipe the file directly, use `zcat` or `zless`.
- To search file contents, use `zgrep`.
### General Commands
#### Pipe colorized `jq` output into `less`
```shell
jq . <FILE> -C | less -R
```
#### Search for a term and pretty-print all matching lines
```shell
grep <TERM> <FILE> | jq .
```
#### Skip invalid lines of JSON
```shell
jq -cR 'fromjson?' file.json | jq <COMMAND>
```
By default `jq` errors out when it encounters a line that is not valid JSON.
This skips over all invalid lines and parses the rest.
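For example, this self-contained sketch (sample data and a hypothetical `/tmp/sample.json` path) shows the guard skipping a malformed line while the valid entries are still parsed:

```shell
# Build a three-line sample file where the middle line is not valid JSON
printf '%s\n' '{"status":200}' 'garbled line' '{"status":500}' > /tmp/sample.json

# Without the guard, jq aborts on the bad line; with it, the bad line is skipped
jq -cR 'fromjson?' /tmp/sample.json | jq -c 'select(.status >= 500)'
# → {"status":500}
```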
#### Print a JSON log's time range
```shell
cat log.json | (head -1; tail -1) | jq '.time'
```
Use `zcat` if the file has been rotated and compressed:
```shell
zcat @400000006026b71d1a7af804.s | (head -1; tail -1) | jq '.time'
zcat some_json.log.25.gz | (head -1; tail -1) | jq '.time'
```
#### Get activity for correlation ID across multiple JSON logs in chronological order
```shell
grep -hR <correlationID> | jq -c -R 'fromjson?' | jq -C -s 'sort_by(.time)' | less -R
```
### Parsing `gitlab-rails/production_json.log` and `gitlab-rails/api_json.log`
#### Find all requests with a 5XX status code
```shell
jq 'select(.status >= 500)' <FILE>
```
#### Top 10 slowest requests
```shell
jq -s 'sort_by(-.duration_s) | limit(10; .[])' <FILE>
```
#### Find and pretty print all requests related to a project
```shell
grep <PROJECT_NAME> <FILE> | jq .
```
#### Find all requests with a total duration > 5 seconds
```shell
jq 'select(.duration_s > 5)' <FILE>
```
#### Find all project requests with more than 5 Gitaly calls
```shell
grep <PROJECT_NAME> <FILE> | jq 'select(.gitaly_calls > 5)'
```
#### Find all requests with a Gitaly duration > 10 seconds
```shell
jq 'select(.gitaly_duration_s > 10)' <FILE>
```
#### Find all requests with a queue duration > 10 seconds
```shell
jq 'select(.queue_duration_s > 10)' <FILE>
```
#### Top 10 requests by # of Gitaly calls
```shell
jq -s 'map(select(.gitaly_calls != null)) | sort_by(-.gitaly_calls) | limit(10; .[])' <FILE>
```
#### Output a specific time range
```shell
jq 'select(.time >= "2023-01-10T00:00:00Z" and .time <= "2023-01-10T12:00:00Z")' <FILE>
```
### Parsing `gitlab-rails/production_json.log`
#### Print the top three controller methods by request volume and their three longest durations
```shell
jq -s -r 'group_by(.controller+.action) | sort_by(-length) | limit(3; .[]) | sort_by(-.duration_s) | "CT: \(length)\tMETHOD: \(.[0].controller)#\(.[0].action)\tDURS: \(.[0].duration_s), \(.[1].duration_s), \(.[2].duration_s)"' production_json.log
```
**Example output**
```plaintext
CT: 2721 METHOD: SessionsController#new DURS: 844.06, 713.81, 704.66
CT: 2435 METHOD: MetricsController#index DURS: 299.29, 284.01, 158.57
CT: 1328 METHOD: Projects::NotesController#index DURS: 403.99, 386.29, 384.39
```
### Parsing `gitlab-rails/api_json.log`
#### Print top three routes with request count and their three longest durations
```shell
jq -s -r 'group_by(.route) | sort_by(-length) | limit(3; .[]) | sort_by(-.duration_s) | "CT: \(length)\tROUTE: \(.[0].route)\tDURS: \(.[0].duration_s), \(.[1].duration_s), \(.[2].duration_s)"' api_json.log
```
**Example output**
```plaintext
CT: 2472 ROUTE: /api/:version/internal/allowed DURS: 56402.65, 38411.43, 19500.41
CT: 297 ROUTE: /api/:version/projects/:id/repository/tags DURS: 731.39, 685.57, 480.86
CT: 190 ROUTE: /api/:version/projects/:id/repository/commits DURS: 1079.02, 979.68, 958.21
```
#### Print top API user agents
```shell
jq --raw-output '
select(.remote_ip != "127.0.0.1") | [
(.time | split(".")[0] | strptime("%Y-%m-%dT%H:%M:%S") | strftime("…%m-%dT%H…")),
."meta.caller_id", .username, .ua
] | @tsv' api_json.log | sort | uniq -c \
| grep --invert-match --extended-regexp '^\s+[0-9]{1,3}\b'
```
**Example output**:
```plaintext
1234 …01-12T01… GET /api/:version/projects/:id/pipelines some_user # plus browser details; OK
54321 …01-12T01… POST /api/:version/projects/:id/repository/files/:file_path/raw some_bot
5678 …01-12T01… PATCH /api/:version/jobs/:id/trace gitlab-runner # plus version details; OK
```
This example shows a custom tool or script causing an unexpectedly high [request rate (>15 RPS)](../reference_architectures/_index.md#available-reference-architectures).
User agents in this situation can be specialized [third-party clients](../../api/rest/third_party_clients.md),
or general tools like `curl`.
The hourly aggregation helps to:
- Correlate spikes of bot or user activity to data from monitoring tools such as [Prometheus](../monitoring/prometheus/_index.md).
- Evaluate [rate limit settings](../settings/user_and_ip_rate_limits.md).
You can also use `fast-stats top` (see top of page) to extract performance statistics for those users or bots.
### Parsing `gitlab-rails/importer.log`
To troubleshoot [project imports](../raketasks/project_import_export.md) or
[migrations](../../user/project/import/_index.md), run this command:
```shell
jq 'select(.project_path == "<namespace>/<project>").error_messages' importer.log
```
For common issues, see [troubleshooting](../raketasks/import_export_rake_tasks_troubleshooting.md).
### Parsing `gitlab-workhorse/current`
#### Print top Workhorse user agents
```shell
jq --raw-output '
select(.remote_ip != "127.0.0.1") | [
(.time | split(".")[0] | strptime("%Y-%m-%dT%H:%M:%S") | strftime("…%m-%dT%H…")),
.remote_ip, .uri, .user_agent
] | @tsv' current |
sort | uniq -c
```
Similar to the [API `ua` example](#print-top-api-user-agents),
many unexpected user agents in this output indicate unoptimized scripts.
Expected user agents include `gitlab-runner`, `GitLab-Shell`, and browsers.
The performance impact of runners checking for new jobs can be reduced by increasing
[the `check_interval` setting](https://docs.gitlab.com/runner/configuration/advanced-configuration.html#the-global-section),
for example.
### Parsing `gitlab-rails/geo.log`
#### Find most common Geo sync errors
If [the `geo:status` Rake task](../geo/replication/troubleshooting/common.md#sync-status-rake-task)
repeatedly reports that some items never reach 100%,
the following command helps to focus on the most common errors.
```shell
jq --raw-output 'select(.severity == "ERROR") | [
(.time | split(".")[0] | strptime("%Y-%m-%dT%H:%M:%S") | strftime("…%m-%dT%H:%M…")),
.class, .id, .message, .error
] | @tsv' geo.log \
| sort | uniq -c
```
Refer to our [Geo troubleshooting page](../geo/replication/troubleshooting/_index.md)
for advice about specific error messages.
### Parsing `gitaly/current`
Use the following examples to [troubleshoot Gitaly](../gitaly/troubleshooting.md).
#### Find all Gitaly requests sent from web UI
```shell
jq 'select(."grpc.meta.client_name" == "gitlab-web")' current
```
#### Find all failed Gitaly requests
```shell
jq 'select(."grpc.code" != null and ."grpc.code" != "OK")' current
```
#### Find all requests that took longer than 30 seconds
```shell
jq 'select(."grpc.time_ms" > 30000)' current
```
#### Print top ten projects by request volume and their three longest durations
```shell
jq --raw-output --slurp '
map(
select(
."grpc.request.glProjectPath" != null
and ."grpc.request.glProjectPath" != ""
and ."grpc.time_ms" != null
)
)
| group_by(."grpc.request.glProjectPath")
| sort_by(-length)
| limit(10; .[])
| sort_by(-."grpc.time_ms")
| [
length,
.[0]."grpc.time_ms",
.[1]."grpc.time_ms",
.[2]."grpc.time_ms",
.[0]."grpc.request.glProjectPath"
]
| @sh' current |
awk 'BEGIN { printf "%7s %10s %10s %10s\t%s\n", "CT", "MAX DURS", "", "", "PROJECT" }
{ printf "%7u %7u ms, %7u ms, %7u ms\t%s\n", $1, $2, $3, $4, $5 }'
```
**Example output**
```plaintext
CT MAX DURS PROJECT
206 4898 ms, 1101 ms, 1032 ms 'groupD/project4'
109 1420 ms, 962 ms, 875 ms 'groupEF/project56'
663 106 ms, 96 ms, 94 ms 'groupABC/project123'
...
```
#### Types of user and project activity overview
```shell
jq --raw-output '[
(.time | split(".")[0] | strptime("%Y-%m-%dT%H:%M:%S") | strftime("…%m-%dT%H…")),
.username, ."grpc.method", ."grpc.request.glProjectPath"
] | @tsv' current | sort | uniq -c \
| grep --invert-match --extended-regexp '^\s+\d{1,3}\b'
```
**Example output**:
```plaintext
5678 …01-12T01… ReferenceTransactionHook # Praefect operation; OK
54321 …01-12T01… some_bot GetBlobs namespace/subgroup/project
1234 …01-12T01… some_user FindCommit namespace/subgroup/project
```
This example shows a custom tool or script causing unexpectedly high of [request rate (>15 RPS)](../reference_architectures/_index.md#available-reference-architectures) on Gitaly.
The hourly aggregation helps to:
- Correlate spikes of bot or user activity to data from monitoring tools such as [Prometheus](../monitoring/prometheus/_index.md).
- Evaluate [rate limit settings](../settings/user_and_ip_rate_limits.md).
You can also use `fast-stats top` (see top of page) to extract performance statistics for those users or bots.
#### Find all projects affected by a fatal Git problem
```shell
grep "fatal: " current |
jq '."grpc.request.glProjectPath"' |
sort | uniq
```
### Parsing `gitlab-shell/gitlab-shell.log`
For investigating Git calls through SSH.
Find the top 20 calls by project and user:
```shell
jq --raw-output --slurp '
map(
select(
.username != null and
.gl_project_path !=null
)
)
| group_by(.username+.gl_project_path)
| sort_by(-length)
| limit(20; .[])
| "count: \(length)\tuser: \(.[0].username)\tproject: \(.[0].gl_project_path)" ' \
gitlab-shell.log
```
Find the top 20 calls by project, user, and command:
```shell
jq --raw-output --slurp '
map(
select(
.command != null and
.username != null and
.gl_project_path !=null
)
)
| group_by(.username+.gl_project_path+.command)
| sort_by(-length)
| limit(20; .[])
| "count: \(length)\tcommand: \(.[0].command)\tuser: \(.[0].username)\tproject: \(.[0].gl_project_path)" ' \
gitlab-shell.log
```
|
---
stage: GitLab Delivery
group: Self Managed
info: To determine the technical writer assigned to the Stage/Group associated with
this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Parsing GitLab logs with `jq`
breadcrumbs:
- doc
- administration
- logs
---
{{< details >}}
- Tier: Free, Premium, Ultimate
- Offering: GitLab Self-Managed
{{< /details >}}
We recommend using log aggregation and search tools like Kibana and Splunk whenever possible,
but if they are not available you can still quickly parse
[GitLab logs](_index.md) in JSON format
using [`jq`](https://stedolan.github.io/jq/).
{{< alert type="note" >}}
Specifically for summarizing error events and basic usage statistics,
the GitLab Support Team provides the specialized
[`fast-stats` tool](https://gitlab.com/gitlab-com/support/toolbox/fast-stats/#when-to-use-it).
{{< /alert >}}
## What is JQ?
As noted in its [manual](https://stedolan.github.io/jq/manual/), `jq` is a command-line JSON processor. The following examples
include use cases targeted for parsing GitLab log files.
## Parsing Logs
The examples listed below address their respective log files by
their relative Linux package installation paths and default filenames.
Find the respective full paths in the [GitLab logs sections](_index.md#production_jsonlog).
### Compressed logs
When [log files are rotated](https://smarden.org/runit/svlogd.8), they are renamed with
a Unix timestamp and compressed with `gzip`. The resulting file name looks like
`@40000000624492fa18da6f34.s`. Before parsing, these files must be handled differently
than more recent log files:
- To uncompress the file, use `gunzip -S .s @40000000624492fa18da6f34.s`, replacing
the filename with your compressed log file's name.
- To read or pipe the file directly, use `zcat` or `zless`.
- To search file contents, use `zgrep`.
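As a quick sketch of that workflow (the rotated file name below is made up for illustration):

```shell
# Create a sample rotated log (normally produced by svlogd) for illustration.
printf '{"time":"2023-01-10T00:00:00Z","severity":"INFO"}\n' > @40000000624492fa18da6f34
gzip -S .s @40000000624492fa18da6f34       # produces @40000000624492fa18da6f34.s

# Search the compressed file in place:
zgrep '"severity":"INFO"' @40000000624492fa18da6f34.s

# Decompress, restoring the original file name:
gunzip -S .s @40000000624492fa18da6f34.s
cat @40000000624492fa18da6f34
```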
### General Commands
#### Pipe colorized `jq` output into `less`
```shell
jq . <FILE> -C | less -R
```
#### Search for a term and pretty-print all matching lines
```shell
grep <TERM> <FILE> | jq .
```
#### Skip invalid lines of JSON
```shell
jq -cR 'fromjson?' file.json | jq <COMMAND>
```
By default `jq` errors out when it encounters a line that is not valid JSON.
This skips over all invalid lines and parses the rest.
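For example, with one corrupt line in the stream (inline input, assuming `jq` is installed):

```shell
# The middle line is not JSON; 'fromjson?' silently drops it,
# so the second jq only sees the two valid objects.
printf '%s\n' '{"status":200}' 'not-json' '{"status":500}' |
  jq -cR 'fromjson?' |
  jq -c 'select(.status >= 500)'
# {"status":500}
```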
#### Print a JSON log's time range
```shell
cat log.json | (head -1; tail -1) | jq '.time'
```
Use `zcat` if the file has been rotated and compressed:
```shell
zcat @400000006026b71d1a7af804.s | (head -1; tail -1) | jq '.time'
zcat some_json.log.25.gz | (head -1; tail -1) | jq '.time'
```
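If `jq` is not available, the same two timestamps can be pulled out with `sed`, and GNU `date` (an assumption; BSD `date` parses differently) can turn them into a span in seconds:

```shell
# Extract the first and last "time" values and compute the covered span.
first=$(head -1 log.json | sed -n 's/.*"time":"\([^"]*\)".*/\1/p')
last=$(tail -1 log.json | sed -n 's/.*"time":"\([^"]*\)".*/\1/p')
echo "log spans $(( $(date -d "$last" +%s) - $(date -d "$first" +%s) )) seconds"
```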
#### Get activity for correlation ID across multiple JSON logs in chronological order
```shell
grep -hR <correlationID> | jq -c -R 'fromjson?' | jq -C -s 'sort_by(.time)' | less -R
```
### Parsing `gitlab-rails/production_json.log` and `gitlab-rails/api_json.log`
#### Find all requests with a 5XX status code
```shell
jq 'select(.status >= 500)' <FILE>
```
#### Top 10 slowest requests
```shell
jq -s 'sort_by(-.duration_s) | limit(10; .[])' <FILE>
```
#### Find and pretty print all requests related to a project
```shell
grep <PROJECT_NAME> <FILE> | jq .
```
#### Find all requests with a total duration > 5 seconds
```shell
jq 'select(.duration_s > 5)' <FILE>
```
#### Find all project requests with more than 5 Gitaly calls
```shell
grep <PROJECT_NAME> <FILE> | jq 'select(.gitaly_calls > 5)'
```
#### Find all requests with a Gitaly duration > 10 seconds
```shell
jq 'select(.gitaly_duration_s > 10)' <FILE>
```
#### Find all requests with a queue duration > 10 seconds
```shell
jq 'select(.queue_duration_s > 10)' <FILE>
```
#### Top 10 requests by # of Gitaly calls
```shell
jq -s 'map(select(.gitaly_calls != null)) | sort_by(-.gitaly_calls) | limit(10; .[])' <FILE>
```
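The `-s` (slurp) flag is what makes `sort_by` and `limit` work here: it reads the whole file into a single array instead of streaming one object at a time. A minimal illustration with inline input (assuming `jq` is installed):

```shell
# Slurp three objects into one array, drop nulls, sort descending, keep top two.
printf '%s\n' '{"gitaly_calls":2}' '{"gitaly_calls":9}' '{"gitaly_calls":5}' |
  jq -s -c 'map(select(.gitaly_calls != null)) | sort_by(-.gitaly_calls) | limit(2; .[])'
# {"gitaly_calls":9}
# {"gitaly_calls":5}
```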
#### Output a specific time range
```shell
jq 'select(.time >= "2023-01-10T00:00:00Z" and .time <= "2023-01-10T12:00:00Z")' <FILE>
```
### Parsing `gitlab-rails/production_json.log`
#### Print the top three controller methods by request volume and their three longest durations
```shell
jq -s -r 'group_by(.controller+.action) | sort_by(-length) | limit(3; .[]) | sort_by(-.duration_s) | "CT: \(length)\tMETHOD: \(.[0].controller)#\(.[0].action)\tDURS: \(.[0].duration_s), \(.[1].duration_s), \(.[2].duration_s)"' production_json.log
```
**Example output**
```plaintext
CT: 2721 METHOD: SessionsController#new DURS: 844.06, 713.81, 704.66
CT: 2435 METHOD: MetricsController#index DURS: 299.29, 284.01, 158.57
CT: 1328 METHOD: Projects::NotesController#index DURS: 403.99, 386.29, 384.39
```
### Parsing `gitlab-rails/api_json.log`
#### Print top three routes with request count and their three longest durations
```shell
jq -s -r 'group_by(.route) | sort_by(-length) | limit(3; .[]) | sort_by(-.duration_s) | "CT: \(length)\tROUTE: \(.[0].route)\tDURS: \(.[0].duration_s), \(.[1].duration_s), \(.[2].duration_s)"' api_json.log
```
**Example output**
```plaintext
CT: 2472 ROUTE: /api/:version/internal/allowed DURS: 56402.65, 38411.43, 19500.41
CT: 297 ROUTE: /api/:version/projects/:id/repository/tags DURS: 731.39, 685.57, 480.86
CT: 190 ROUTE: /api/:version/projects/:id/repository/commits DURS: 1079.02, 979.68, 958.21
```
#### Print top API user agents
```shell
jq --raw-output '
select(.remote_ip != "127.0.0.1") | [
(.time | split(".")[0] | strptime("%Y-%m-%dT%H:%M:%S") | strftime("…%m-%dT%H…")),
."meta.caller_id", .username, .ua
] | @tsv' api_json.log | sort | uniq -c \
| grep --invert-match --extended-regexp '^\s+\d{1,3}\b'
```
**Example output**:
```plaintext
1234 …01-12T01… GET /api/:version/projects/:id/pipelines some_user # plus browser details; OK
54321 …01-12T01… POST /api/:version/projects/:id/repository/files/:file_path/raw some_bot
5678 …01-12T01… PATCH /api/:version/jobs/:id/trace gitlab-runner # plus version details; OK
```
This example shows a custom tool or script causing an unexpectedly high [request rate (>15 RPS)](../reference_architectures/_index.md#available-reference-architectures).
User agents in this situation can be specialized [third-party clients](../../api/rest/third_party_clients.md),
or general tools like `curl`.
The hourly aggregation helps to:
- Correlate spikes of bot or user activity to data from monitoring tools such as [Prometheus](../monitoring/prometheus/_index.md).
- Evaluate [rate limit settings](../settings/user_and_ip_rate_limits.md).
You can also use `fast-stats top` (see top of page) to extract performance statistics for those users or bots.
### Parsing `gitlab-rails/importer.log`
To troubleshoot [project imports](../raketasks/project_import_export.md) or
[migrations](../../user/project/import/_index.md), run this command:
```shell
jq 'select(.project_path == "<namespace>/<project>").error_messages' importer.log
```
For common issues, see [troubleshooting](../raketasks/import_export_rake_tasks_troubleshooting.md).
### Parsing `gitlab-workhorse/current`
#### Print top Workhorse user agents
```shell
jq --raw-output '
select(.remote_ip != "127.0.0.1") | [
(.time | split(".")[0] | strptime("%Y-%m-%dT%H:%M:%S") | strftime("…%m-%dT%H…")),
.remote_ip, .uri, .user_agent
] | @tsv' current |
sort | uniq -c
```
Similar to the [API `ua` example](#print-top-api-user-agents),
many unexpected user agents in this output indicate unoptimized scripts.
Expected user agents include `gitlab-runner`, `GitLab-Shell`, and browsers.
The performance impact of runners checking for new jobs can be reduced by increasing
[the `check_interval` setting](https://docs.gitlab.com/runner/configuration/advanced-configuration.html#the-global-section),
for example.
### Parsing `gitlab-rails/geo.log`
#### Find most common Geo sync errors
If [the `geo:status` Rake task](../geo/replication/troubleshooting/common.md#sync-status-rake-task)
repeatedly reports that some items never reach 100%,
the following command helps to focus on the most common errors.
```shell
jq --raw-output 'select(.severity == "ERROR") | [
(.time | split(".")[0] | strptime("%Y-%m-%dT%H:%M:%S") | strftime("…%m-%dT%H:%M…")),
.class, .id, .message, .error
] | @tsv' geo.log \
| sort | uniq -c
```
Refer to our [Geo troubleshooting page](../geo/replication/troubleshooting/_index.md)
for advice about specific error messages.
### Parsing `gitaly/current`
Use the following examples to [troubleshoot Gitaly](../gitaly/troubleshooting.md).
#### Find all Gitaly requests sent from web UI
```shell
jq 'select(."grpc.meta.client_name" == "gitlab-web")' current
```
#### Find all failed Gitaly requests
```shell
jq 'select(."grpc.code" != null and ."grpc.code" != "OK")' current
```
#### Find all requests that took longer than 30 seconds
```shell
jq 'select(."grpc.time_ms" > 30000)' current
```
#### Print top ten projects by request volume and their three longest durations
```shell
jq --raw-output --slurp '
map(
select(
."grpc.request.glProjectPath" != null
and ."grpc.request.glProjectPath" != ""
and ."grpc.time_ms" != null
)
)
| group_by(."grpc.request.glProjectPath")
| sort_by(-length)
| limit(10; .[])
| sort_by(-."grpc.time_ms")
| [
length,
.[0]."grpc.time_ms",
.[1]."grpc.time_ms",
.[2]."grpc.time_ms",
.[0]."grpc.request.glProjectPath"
]
| @sh' current |
awk 'BEGIN { printf "%7s %10s %10s %10s\t%s\n", "CT", "MAX DURS", "", "", "PROJECT" }
{ printf "%7u %7u ms, %7u ms, %7u ms\t%s\n", $1, $2, $3, $4, $5 }'
```
**Example output**
```plaintext
CT MAX DURS PROJECT
206 4898 ms, 1101 ms, 1032 ms 'groupD/project4'
109 1420 ms, 962 ms, 875 ms 'groupEF/project56'
663 106 ms, 96 ms, 94 ms 'groupABC/project123'
...
```
#### Types of user and project activity overview
```shell
jq --raw-output '[
(.time | split(".")[0] | strptime("%Y-%m-%dT%H:%M:%S") | strftime("…%m-%dT%H…")),
.username, ."grpc.method", ."grpc.request.glProjectPath"
] | @tsv' current | sort | uniq -c \
| grep --invert-match --extended-regexp '^\s+\d{1,3}\b'
```
**Example output**:
```plaintext
5678 …01-12T01… ReferenceTransactionHook # Praefect operation; OK
54321 …01-12T01… some_bot GetBlobs namespace/subgroup/project
1234 …01-12T01… some_user FindCommit namespace/subgroup/project
```
This example shows a custom tool or script causing an unexpectedly high [request rate (>15 RPS)](../reference_architectures/_index.md#available-reference-architectures) on Gitaly.
The hourly aggregation helps to:
- Correlate spikes of bot or user activity to data from monitoring tools such as [Prometheus](../monitoring/prometheus/_index.md).
- Evaluate [rate limit settings](../settings/user_and_ip_rate_limits.md).
You can also use `fast-stats top` (see top of page) to extract performance statistics for those users or bots.
#### Find all projects affected by a fatal Git problem
```shell
grep "fatal: " current |
jq '."grpc.request.glProjectPath"' |
sort | uniq
```
### Parsing `gitlab-shell/gitlab-shell.log`
Use the following examples to investigate Git calls made through SSH.
Find the top 20 calls by project and user:
```shell
jq --raw-output --slurp '
map(
select(
.username != null and
      .gl_project_path != null
)
)
| group_by(.username+.gl_project_path)
| sort_by(-length)
| limit(20; .[])
| "count: \(length)\tuser: \(.[0].username)\tproject: \(.[0].gl_project_path)" ' \
gitlab-shell.log
```
Find the top 20 calls by project, user, and command:
```shell
jq --raw-output --slurp '
map(
select(
.command != null and
.username != null and
      .gl_project_path != null
)
)
| group_by(.username+.gl_project_path+.command)
| sort_by(-length)
| limit(20; .[])
| "count: \(length)\tcommand: \(.[0].command)\tuser: \(.[0].username)\tproject: \(.[0].gl_project_path)" ' \
gitlab-shell.log
```
---
stage: Monitor
group: Platform Insights
info: To determine the technical writer assigned to the Stage/Group associated with
  this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Log system
description: Access comprehensive logging and monitoring capabilities.
breadcrumbs:
- doc
- administration
- logs
---
{{< details >}}
- Tier: Free, Premium, Ultimate
- Offering: GitLab Self-Managed
{{< /details >}}
The log system in GitLab provides comprehensive logging and monitoring capabilities for analyzing your GitLab instance.
You can use logs to identify system issues, investigate security events, and analyze application performance.
A log entry exists for every action, so when issues occur, these logs provide the data needed to quickly diagnose and resolve problems.
The log system:
- Tracks all application activity across GitLab components in structured log files.
- Records performance metrics, errors, and security events in standardized formats.
- Integrates with log analysis tools like Elasticsearch and Splunk through JSON logging.
- Maintains separate log files for different GitLab services and components.
- Includes correlation IDs to trace requests across the entire system.
System log files are typically plain text in a standard log file format.
The log system is similar to [audit events](../compliance/audit_event_reports.md).
For more information, see also:
- [Customizing logging on Linux package installations](https://docs.gitlab.com/omnibus/settings/logs.html)
- [Parsing and analyzing GitLab logs in JSON format](log_parsing.md)
## Log Levels
Each log message has an assigned log level that indicates its importance and verbosity.
Each logger has an assigned minimum log level.
A logger emits a log message only if its log level is equal to or above the minimum log level.
The following log levels are supported:
| Level | Name |
|:------|:----------|
| 0 | `DEBUG` |
| 1 | `INFO` |
| 2 | `WARN` |
| 3 | `ERROR` |
| 4 | `FATAL` |
| 5 | `UNKNOWN` |
GitLab loggers emit all log messages because they are set to `DEBUG` by default.
### Override default log level
You can override the minimum log level for GitLab loggers using the `GITLAB_LOG_LEVEL` environment variable.
Valid values are either a number from `0` to `5` or the name of a log level.
Example:
```shell
GITLAB_LOG_LEVEL=info
```
For some services, other log levels are in place that are not affected by this setting.
Some of these services have their own environment variables to override the log level. For example:
| Service | Log level | Environment variable |
|:--------------------------|:----------|:---------------------|
| GitLab Cleanup | `INFO` | `DEBUG` |
| GitLab Doctor | `INFO` | `VERBOSE` |
| GitLab Export | `INFO` | `EXPORT_DEBUG` |
| GitLab Import | `INFO` | `IMPORT_DEBUG` |
| GitLab QA Runtime | `INFO` | `QA_LOG_LEVEL` |
| GitLab Product Usage Data | `INFO` | |
| Google APIs | `INFO` | |
| Rack Timeout | `ERROR` | |
| Snowplow Tracker | `FATAL` | |
| gRPC Client (Gitaly) | `WARN` | `GRPC_LOG_LEVEL` |
| LLM | `INFO` | `LLM_DEBUG` |
## Log Rotation
The logs for a given service may be managed and rotated by:
- `logrotate`
- `svlogd` (`runit`'s service logging daemon)
- `logrotate` and `svlogd`
- Or not at all
The following table includes information about what's responsible for managing and rotating logs for
the included services. Logs
[managed by `svlogd`](https://docs.gitlab.com/omnibus/settings/logs.html#runit-logs)
are written to a file called `current`. The `logrotate` service built into GitLab
[manages all logs](https://docs.gitlab.com/omnibus/settings/logs.html#logrotate)
except those captured by `runit`.
| Log type | Managed by logrotate | Managed by svlogd/runit |
|:------------------------------------------------|:------------------------|:------------------------|
| [Alertmanager logs](#alertmanager-logs) | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes |
| [crond logs](#crond-logs) | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes |
| [Gitaly](#gitaly-logs) | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes |
| [GitLab Exporter for Linux package installations](#gitlab-exporter) | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes |
| [GitLab Pages logs](#pages-logs) | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes |
| GitLab Rails | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No |
| [GitLab Shell logs](#gitlab-shelllog) | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No |
| [Grafana logs](#grafana-logs) | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes |
| [LogRotate logs](#logrotate-logs) | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes |
| [Mailroom](#mail_room_jsonlog-default) | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes |
| [NGINX](#nginx-logs) | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes |
| [Patroni logs](#patroni-logs) | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes |
| [PgBouncer logs](#pgbouncer-logs) | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes |
| [PostgreSQL logs](#postgresql-logs) | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes |
| [Praefect logs](#praefect-logs) | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes |
| [Prometheus logs](#prometheus-logs) | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes |
| [Puma](#puma-logs) | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes |
| [Redis logs](#redis-logs) | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes |
| [Registry logs](#registry-logs) | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes |
| [Sentinel logs](#sentinel-logs) | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes |
| [Workhorse logs](#workhorse-logs) | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes |
## Accessing logs on Helm chart installations
On Helm chart installations, GitLab components send logs to `stdout`, which can be accessed by using `kubectl logs`.
Logs are also available in the pod at `/var/log/gitlab` for the lifetime of the pod.
### Pods with structured logs (subcomponent filtering)
Some pods include a `subcomponent` field that identifies the specific log type:
```shell
# Webservice pod logs (Rails application)
kubectl logs -l app=webservice -c webservice | jq 'select(."subcomponent"=="<subcomponent-key>")'
# Sidekiq pod logs (background jobs)
kubectl logs -l app=sidekiq | jq 'select(."subcomponent"=="<subcomponent-key>")'
```
The following log sections indicate the appropriate pod and subcomponent key where applicable.
### Other pods
For other GitLab components that don't use structured logs with subcomponents, you can access logs directly.
To find available pod selectors:
```shell
# List all unique app labels in use
kubectl get pods -o jsonpath='{range .items[*]}{.metadata.labels.app}{"\n"}{end}' | grep -v '^$' | sort | uniq
# For pods with app labels
kubectl logs -l app=<pod-selector>
# For specific pods (when app labels aren't available)
kubectl get pods
kubectl logs <pod-name>
```
For more Kubernetes troubleshooting commands, see the [Kubernetes cheat sheet](https://docs.gitlab.com/charts/troubleshooting/kubernetes_cheat_sheet/).
## `production_json.log`
This log is located:
- In the `/var/log/gitlab/gitlab-rails/production_json.log` file on Linux package installations.
- In the `/home/git/gitlab/log/production_json.log` file on self-compiled installations.
- On the Webservice pods under the `subcomponent="production_json"` key on Helm chart installations.
It contains a structured log for Rails controller requests received from
GitLab, thanks to [Lograge](https://github.com/roidrage/lograge/).
Requests from the API are logged to a separate file in `api_json.log`.
Each line contains JSON that can be ingested by services like Elasticsearch and Splunk.
Line breaks were added to examples for legibility:
```json
{
"method":"GET",
"path":"/gitlab/gitlab-foss/issues/1234",
"format":"html",
"controller":"Projects::IssuesController",
"action":"show",
"status":200,
"time":"2017-08-08T20:15:54.821Z",
"params":[{"key":"param_key","value":"param_value"}],
"remote_ip":"18.245.0.1",
"user_id":1,
"username":"admin",
"queue_duration_s":0.0,
"gitaly_calls":16,
"gitaly_duration_s":0.16,
"redis_calls":115,
"redis_duration_s":0.13,
"redis_read_bytes":1507378,
"redis_write_bytes":2920,
"correlation_id":"O1SdybnnIq7",
"cpu_s":17.50,
"db_duration_s":0.08,
"view_duration_s":2.39,
"duration_s":20.54,
"pid": 81836,
"worker_id":"puma_0"
}
```
This example was a GET request for a specific
issue. Each line also contains performance data, with times in
seconds:
- `duration_s`: Total time to retrieve the request
- `queue_duration_s`: Total time the request was queued inside GitLab Workhorse
- `view_duration_s`: Total time inside the Rails views
- `db_duration_s`: Total time to retrieve data from PostgreSQL
- `cpu_s`: Total time spent on CPU
- `gitaly_duration_s`: Total time by Gitaly calls
- `gitaly_calls`: Total number of calls made to Gitaly
- `redis_calls`: Total number of calls made to Redis
- `redis_cross_slot_calls`: Total number of cross-slot calls made to Redis
- `redis_allowed_cross_slot_calls`: Total number of allowed cross-slot calls made to Redis
- `redis_duration_s`: Total time to retrieve data from Redis
- `redis_read_bytes`: Total bytes read from Redis
- `redis_write_bytes`: Total bytes written to Redis
- `redis_<instance>_calls`: Total number of calls made to a Redis instance
- `redis_<instance>_cross_slot_calls`: Total number of cross-slot calls made to a Redis instance
- `redis_<instance>_allowed_cross_slot_calls`: Total number of allowed cross-slot calls made to a Redis instance
- `redis_<instance>_duration_s`: Total time to retrieve data from a Redis instance
- `redis_<instance>_read_bytes`: Total bytes read from a Redis instance
- `redis_<instance>_write_bytes`: Total bytes written to a Redis instance
- `pid`: The worker's Linux process ID (changes when workers restart)
- `worker_id`: The worker's logical ID (does not change when workers restart)
User clone and fetch activity using HTTP transport appears in the log as `action: git_upload_pack`.
In addition, the log contains the originating IP address
(`remote_ip`), the user's ID (`user_id`), and username (`username`).
Some endpoints (such as `/search`) may make requests to Elasticsearch if using
[advanced search](../../user/search/advanced_search.md). These
additionally log `elasticsearch_calls` and `elasticsearch_duration_s`,
which correspond to:
- `elasticsearch_calls`: Total number of calls to Elasticsearch
- `elasticsearch_duration_s`: Total time taken by Elasticsearch calls
- `elasticsearch_timed_out_count`: Total number of calls to Elasticsearch that
timed out and therefore returned partial results
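These per-component timings can be combined to see where a slow request spent its time. The following sketch subtracts the documented component durations from the total; `other_s` is a name chosen here for illustration, not a log field, and missing fields default to zero:

```shell
# For slow requests, break the total duration into documented components
# plus a computed remainder ("other_s").
jq 'select(.duration_s > 5)
    | {path,
       duration_s,
       db_s:     (.db_duration_s // 0),
       gitaly_s: (.gitaly_duration_s // 0),
       redis_s:  (.redis_duration_s // 0),
       other_s:  (.duration_s - ((.db_duration_s // 0)
                                 + (.gitaly_duration_s // 0)
                                 + (.redis_duration_s // 0)))}' \
  production_json.log
```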
ActionCable connection and subscription events are also logged to this file and they follow the
previous format. The `method`, `path`, and `format` fields are not applicable, and are always empty.
The ActionCable connection or channel class is used as the `controller`.
```json
{
"method":null,
"path":null,
"format":null,
"controller":"IssuesChannel",
"action":"subscribe",
"status":200,
"time":"2020-05-14T19:46:22.008Z",
"params":[{"key":"project_path","value":"gitlab/gitlab-foss"},{"key":"iid","value":"1"}],
"remote_ip":"127.0.0.1",
"user_id":1,
"username":"admin",
"ua":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:76.0) Gecko/20100101 Firefox/76.0",
"correlation_id":"jSOIEynHCUa",
"duration_s":0.32566
}
```
{{< alert type="note" >}}
If an error occurs, an
`exception` field is included with `class`, `message`, and
`backtrace`. Previous versions included an `error` field instead of
`exception.class` and `exception.message`. For example:
{{< /alert >}}
```json
{
"method": "GET",
"path": "/admin",
"format": "html",
"controller": "Admin::DashboardController",
"action": "index",
"status": 500,
"time": "2019-11-14T13:12:46.156Z",
"params": [],
"remote_ip": "127.0.0.1",
"user_id": 1,
"username": "root",
"ua": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:70.0) Gecko/20100101 Firefox/70.0",
"queue_duration": 274.35,
"correlation_id": "KjDVUhNvvV3",
"queue_duration_s":0.0,
"gitaly_calls":16,
"gitaly_duration_s":0.16,
"redis_calls":115,
"redis_duration_s":0.13,
"correlation_id":"O1SdybnnIq7",
"cpu_s":17.50,
"db_duration_s":0.08,
"view_duration_s":2.39,
"duration_s":20.54,
"pid": 81836,
"worker_id": "puma_0",
"exception.class": "NameError",
"exception.message": "undefined local variable or method `adsf' for #<Admin::DashboardController:0x00007ff3c9648588>",
"exception.backtrace": [
"app/controllers/admin/dashboard_controller.rb:11:in `index'",
"ee/app/controllers/ee/admin/dashboard_controller.rb:14:in `index'",
"ee/lib/gitlab/ip_address_state.rb:10:in `with'",
"ee/app/controllers/ee/application_controller.rb:43:in `set_current_ip_address'",
"lib/gitlab/session.rb:11:in `with_session'",
"app/controllers/application_controller.rb:450:in `set_session_storage'",
"app/controllers/application_controller.rb:444:in `set_locale'",
"ee/lib/gitlab/jira/middleware.rb:19:in `call'"
]
}
```
## `production.log`
This log is located:
- In the `/var/log/gitlab/gitlab-rails/production.log` file on Linux package installations.
- In the `/home/git/gitlab/log/production.log` file on self-compiled installations.
It contains information about all performed requests. You can see the
URL and type of request, the IP address, and which parts of the code were
involved in servicing the request. You can also see all SQL
queries performed and how much time each took. This log is
mostly useful for GitLab contributors and developers. Include the relevant part of this log
file when you're reporting bugs. For example:
```plaintext
Started GET "/gitlabhq/yaml_db/tree/master" for 168.111.56.1 at 2015-02-12 19:34:53 +0200
Processing by Projects::TreeController#show as HTML
Parameters: {"project_id"=>"gitlabhq/yaml_db", "id"=>"master"}
... [CUT OUT]
Namespaces"."created_at" DESC, "namespaces"."id" DESC LIMIT 1 [["id", 26]]
CACHE (0.0ms) SELECT "members".* FROM "members" WHERE "members"."source_type" = 'Project' AND "members"."type" IN ('ProjectMember') AND "members"."source_id" = $1 AND "members"."source_type" = $2 AND "members"."user_id" = 1 ORDER BY "members"."created_at" DESC, "members"."id" DESC LIMIT 1 [["source_id", 18], ["source_type", "Project"]]
CACHE (0.0ms) SELECT "members".* FROM "members" WHERE "members"."source_type" = 'Project' AND "members".
(1.4ms) SELECT COUNT(*) FROM "merge_requests" WHERE "merge_requests"."target_project_id" = $1 AND ("merge_requests"."state" IN ('opened','reopened')) [["target_project_id", 18]]
Rendered layouts/nav/_project.html.haml (28.0ms)
Rendered layouts/_collapse_button.html.haml (0.2ms)
Rendered layouts/_flash.html.haml (0.1ms)
Rendered layouts/_page.html.haml (32.9ms)
Completed 200 OK in 166ms (Views: 117.4ms | ActiveRecord: 27.2ms)
```
In this example, the server processed an HTTP request with URL
`/gitlabhq/yaml_db/tree/master` from IP `168.111.56.1` at `2015-02-12 19:34:53 +0200`.
The request was processed by `Projects::TreeController`.
## `api_json.log`
This log is located:
- In the `/var/log/gitlab/gitlab-rails/api_json.log` file on Linux package installations.
- In the `/home/git/gitlab/log/api_json.log` file on self-compiled installations.
- On the Webservice pods under the `subcomponent="api_json"` key on Helm chart installations.
It helps you see requests made directly to the API. For example:
```json
{
"time":"2018-10-29T12:49:42.123Z",
"severity":"INFO",
"duration":709.08,
"db":14.59,
"view":694.49,
"status":200,
"method":"GET",
"path":"/api/v4/projects",
"params":[{"key":"action","value":"git-upload-pack"},{"key":"changes","value":"_any"},{"key":"key_id","value":"secret"},{"key":"secret_token","value":"[FILTERED]"}],
"host":"localhost",
"remote_ip":"::1",
"ua":"Ruby",
"route":"/api/:version/projects",
"user_id":1,
"username":"root",
"queue_duration":100.31,
"gitaly_calls":30,
"gitaly_duration":5.36,
"pid": 81836,
"worker_id": "puma_0",
...
}
```
This entry shows an internal endpoint accessed to check whether an
associated SSH key can download the project in question by using a `git fetch` or
`git clone`. In this example, you can see:
- `duration`: Total time in milliseconds taken to process the request
- `queue_duration`: Total time in milliseconds the request was queued inside GitLab Workhorse
- `method`: The HTTP method used to make the request
- `path`: The relative path of the query
- `params`: Key-value pairs passed in a query string or HTTP body (sensitive parameters, such as passwords and tokens, are filtered out)
- `ua`: The User-Agent of the requester
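Because each line is a self-contained JSON object, you can triage slow API requests with a short script. The following Python sketch is illustrative only (the `slow_requests` helper and the threshold are not part of GitLab); it filters entries whose `duration` exceeds a threshold in milliseconds:

```python
import json

def slow_requests(lines, threshold_ms=1000):
    """Return (path, duration) pairs for API log entries slower than threshold_ms."""
    slow = []
    for line in lines:
        entry = json.loads(line)
        if entry.get("duration", 0) > threshold_ms:
            slow.append((entry["path"], entry["duration"]))
    return slow

# One entry shaped like the example above
sample = '{"duration":709.08,"status":200,"method":"GET","path":"/api/v4/projects"}'
print(slow_requests([sample], threshold_ms=500))
```

You could feed this the lines of `/var/log/gitlab/gitlab-rails/api_json.log`; for one-off queries, a tool like `jq` works equally well.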
{{< alert type="note" >}}
As of [`Grape Logging`](https://github.com/aserafin/grape_logging) v1.8.4,
the `view_duration_s` is calculated by [`duration_s - db_duration_s`](https://github.com/aserafin/grape_logging/blob/v1.8.4/lib/grape_logging/middleware/request_logger.rb#L117-L119).
Therefore, `view_duration_s` can be affected by multiple factors, like Redis
read-write operations or external HTTP requests, not only the serialization process.
{{< /alert >}}
## `application.log` (deprecated)
{{< history >}}
- [Deprecated](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/111046) in GitLab 15.10.
{{< /history >}}
This log is located:
- In the `/var/log/gitlab/gitlab-rails/application.log` file on Linux package installations.
- In the `/home/git/gitlab/log/application.log` file on self-compiled installations.
It contains a less structured version of the logs in
[`application_json.log`](#application_jsonlog), like this example:
```plaintext
October 06, 2014 11:56: User "Administrator" (admin@example.com) was created
October 06, 2014 11:56: Documentcloud created a new project "Documentcloud / Underscore"
October 06, 2014 11:56: Gitlab Org created a new project "Gitlab Org / Gitlab Ce"
October 07, 2014 11:25: User "Claudie Hodkiewicz" (nasir_stehr@olson.co.uk) was removed
October 07, 2014 11:25: Project "project133" was removed
```
## `application_json.log`
This log is located:
- In the `/var/log/gitlab/gitlab-rails/application_json.log` file on Linux package installations.
- In the `/home/git/gitlab/log/application_json.log` file on self-compiled installations.
- On the Sidekiq and Webservice pods under the `subcomponent="application_json"` key on Helm chart installations.
It helps you discover events happening in your instance, such as user creation
and project deletion. For example:
```json
{
"severity":"INFO",
"time":"2020-01-14T13:35:15.466Z",
"correlation_id":"3823a1550b64417f9c9ed8ee0f48087e",
"message":"User \"Administrator\" (admin@example.com) was created"
}
{
"severity":"INFO",
"time":"2020-01-14T13:35:15.466Z",
"correlation_id":"78e3df10c9a18745243d524540bd5be4",
"message":"Project \"project133\" was removed"
}
```
## `integrations_json.log`
This log is located:
- In the `/var/log/gitlab/gitlab-rails/integrations_json.log` file on Linux package installations.
- In the `/home/git/gitlab/log/integrations_json.log` file on self-compiled installations.
- On the Sidekiq and Webservice pods under the `subcomponent="integrations_json"` key on Helm chart installations.
It contains information about [integration](../../user/project/integrations/_index.md)
activities, such as Jira, Asana, and irker services. It uses JSON format,
like this example:
```json
{
"severity":"ERROR",
"time":"2018-09-06T14:56:20.439Z",
"service_class":"Integrations::Jira",
"project_id":8,
"project_path":"h5bp/html5-boilerplate",
"message":"Error sending message",
"client_url":"http://jira.gitlab.com:8080",
"error":"execution expired"
}
{
"severity":"INFO",
"time":"2018-09-06T17:15:16.365Z",
"service_class":"Integrations::Jira",
"project_id":3,
"project_path":"namespace2/project2",
"message":"Successfully posted",
"client_url":"http://jira.example.com"
}
```
## `kubernetes.log` (deprecated)
{{< history >}}
- [Deprecated](https://gitlab.com/groups/gitlab-org/configure/-/epics/8) in GitLab 14.5.
{{< /history >}}
This log is located:
- In the `/var/log/gitlab/gitlab-rails/kubernetes.log` file on Linux package installations.
- In the `/home/git/gitlab/log/kubernetes.log` file on self-compiled installations.
- On the Sidekiq pods under the `subcomponent="kubernetes"` key on Helm chart installations.
It logs information related to [certificate-based clusters](../../user/project/clusters/_index.md), such as connectivity errors. Each line contains JSON that can be ingested by services like Elasticsearch and Splunk.
## `git_json.log`
This log is located:
- In the `/var/log/gitlab/gitlab-rails/git_json.log` file on Linux package installations.
- In the `/home/git/gitlab/log/git_json.log` file on self-compiled installations.
- On the Sidekiq pods under the `subcomponent="git_json"` key on Helm chart installations.
GitLab has to interact with Git repositories, but in some rare cases
something can go wrong. If this happens, you need to know exactly what
happened. This log file contains all failed requests from GitLab to Git
repositories. In the majority of cases this file is useful for developers
only. For example:
```json
{
"severity":"ERROR",
"time":"2019-07-19T22:16:12.528Z",
"correlation_id":"FeGxww5Hj64",
"message":"Command failed [1]: /usr/bin/git --git-dir=/Users/vsizov/gitlab-development-kit/gitlab/tmp/tests/gitlab-satellites/group184/gitlabhq/.git --work-tree=/Users/vsizov/gitlab-development-kit/gitlab/tmp/tests/gitlab-satellites/group184/gitlabhq merge --no-ff -mMerge branch 'feature_conflict' into 'feature' source/feature_conflict\n\nerror: failed to push some refs to '/Users/vsizov/gitlab-development-kit/repositories/gitlabhq/gitlab_git.git'"
}
```
## `audit_json.log`
{{< details >}}
- Tier: Free, Premium, Ultimate
- Offering: GitLab Self-Managed, GitLab Dedicated
{{< /details >}}
{{< alert type="note" >}}
GitLab Free tracks a small number of different audit events.
GitLab Premium tracks many more.
{{< /alert >}}
This log is located:
- In the `/var/log/gitlab/gitlab-rails/audit_json.log` file on Linux package installations.
- In the `/home/git/gitlab/log/audit_json.log` file on self-compiled installations.
- On the Sidekiq and Webservice pods under the `subcomponent="audit_json"` key on Helm chart installations.
Changes to group or project settings and memberships (`target_details`)
are logged to this file. For example:
```json
{
"severity":"INFO",
"time":"2018-10-17T17:38:22.523Z",
"author_id":3,
"entity_id":2,
"entity_type":"Project",
"change":"visibility",
"from":"Private",
"to":"Public",
"author_name":"John Doe4",
"target_id":2,
"target_type":"Project",
"target_details":"namespace2/project2"
}
```
## Sidekiq logs
For Linux package installations, some Sidekiq logs are in `/var/log/gitlab/sidekiq/current`,
as described in the following sections.
### `sidekiq.log`
{{< history >}}
- The default log format for Helm chart installations [changed from `text` to `json`](https://gitlab.com/gitlab-org/charts/gitlab/-/merge_requests/3169) in GitLab 16.0 and later.
{{< /history >}}
This log is located:
- In the `/var/log/gitlab/sidekiq/current` file on Linux package installations.
- In the `/home/git/gitlab/log/sidekiq.log` file on self-compiled installations.
GitLab uses background jobs for processing tasks that can take a long
time. All information about processing these jobs is written to this
file. For example:
```json
{
"severity":"INFO",
"time":"2018-04-03T22:57:22.071Z",
"queue":"cronjob:update_all_mirrors",
"args":[],
"class":"UpdateAllMirrorsWorker",
"retry":false,
"queue_namespace":"cronjob",
"jid":"06aeaa3b0aadacf9981f368e",
"created_at":"2018-04-03T22:57:21.930Z",
"enqueued_at":"2018-04-03T22:57:21.931Z",
"pid":10077,
"worker_id":"sidekiq_0",
"message":"UpdateAllMirrorsWorker JID-06aeaa3b0aadacf9981f368e: done: 0.139 sec",
"job_status":"done",
"duration":0.139,
"completed_at":"2018-04-03T22:57:22.071Z",
"db_duration":0.05,
"db_duration_s":0.0005,
"gitaly_duration":0,
"gitaly_calls":0
}
```
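Because each completed job records its `class` and `duration`, you can rank the slowest background jobs with a small script. This Python sketch is illustrative only (the `slowest_jobs` helper is not part of GitLab):

```python
import json

def slowest_jobs(lines, top=5):
    """Rank completed Sidekiq jobs by their `duration` field, slowest first."""
    done = []
    for line in lines:
        entry = json.loads(line)
        if entry.get("job_status") == "done":
            done.append((entry["class"], entry["duration"]))
    return sorted(done, key=lambda item: item[1], reverse=True)[:top]

sample = '{"class":"UpdateAllMirrorsWorker","job_status":"done","duration":0.139}'
print(slowest_jobs([sample]))
```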
Instead of JSON logs, you can opt to generate text logs for Sidekiq. For example:
```plaintext
2023-05-16T16:08:55.272Z pid=82525 tid=23rl INFO: Initializing websocket
2023-05-16T16:08:55.279Z pid=82525 tid=23rl INFO: Booted Rails 6.1.7.2 application in production environment
2023-05-16T16:08:55.279Z pid=82525 tid=23rl INFO: Running in ruby 3.0.5p211 (2022-11-24 revision ba5cf0f7c5) [arm64-darwin22]
2023-05-16T16:08:55.279Z pid=82525 tid=23rl INFO: See LICENSE and the LGPL-3.0 for licensing details.
2023-05-16T16:08:55.279Z pid=82525 tid=23rl INFO: Upgrade to Sidekiq Pro for more features and support: https://sidekiq.org
2023-05-16T16:08:55.286Z pid=82525 tid=7p4t INFO: Cleaning working queues
2023-05-16T16:09:06.043Z pid=82525 tid=7p7d class=ScheduleMergeRequestCleanupRefsWorker jid=efcc73f169c09a514b06da3f INFO: start
2023-05-16T16:09:06.050Z pid=82525 tid=7p7d class=ScheduleMergeRequestCleanupRefsWorker jid=efcc73f169c09a514b06da3f INFO: arguments: []
2023-05-16T16:09:06.065Z pid=82525 tid=7p81 class=UserStatusCleanup::BatchWorker jid=e279aa6409ac33031a314822 INFO: start
2023-05-16T16:09:06.066Z pid=82525 tid=7p81 class=UserStatusCleanup::BatchWorker jid=e279aa6409ac33031a314822 INFO: arguments: []
```
For Linux package installations, add this configuration option to `/etc/gitlab/gitlab.rb`:
```ruby
sidekiq['log_format'] = 'text'
```
For self-compiled installations, edit the `gitlab.yml` and set the Sidekiq
`log_format` configuration option:
```yaml
## Sidekiq
sidekiq:
log_format: text
```
### `sidekiq_client.log`
This log is located:
- In the `/var/log/gitlab/gitlab-rails/sidekiq_client.log` file on Linux package installations.
- In the `/home/git/gitlab/log/sidekiq_client.log` file on self-compiled installations.
- On the Webservice pods under the `subcomponent="sidekiq_client"` key on Helm chart installations.
This file contains logging information about jobs before Sidekiq starts
processing them, such as when they are enqueued.
This log file follows the same structure as
[`sidekiq.log`](#sidekiqlog), so it is structured as JSON if
you've configured this for Sidekiq as mentioned previously.
## `gitlab-shell.log`
GitLab Shell is used by GitLab for executing Git commands and providing SSH
access to Git repositories.
Information containing `git-{upload-pack,receive-pack}` requests is at
`/var/log/gitlab/gitlab-shell/gitlab-shell.log`. Information about hooks to
GitLab Shell from Gitaly is at `/var/log/gitlab/gitaly/current`.
Example log entries for `/var/log/gitlab/gitlab-shell/gitlab-shell.log`:
```json
{
"duration_ms": 74.104,
"level": "info",
"method": "POST",
"msg": "Finished HTTP request",
"time": "2020-04-17T20:28:46Z",
"url": "http://127.0.0.1:8080/api/v4/internal/allowed"
}
{
"command": "git-upload-pack",
"git_protocol": "",
"gl_project_path": "root/example",
"gl_repository": "project-1",
"level": "info",
"msg": "executing git command",
"time": "2020-04-17T20:28:46Z",
"user_id": "user-1",
"username": "root"
}
```
Example log entries for `/var/log/gitlab/gitaly/current`:
```json
{
"method": "POST",
"url": "http://127.0.0.1:8080/api/v4/internal/allowed",
"duration": 0.058012959,
"gitaly_embedded": true,
"pid": 16636,
"level": "info",
"msg": "finished HTTP request",
"time": "2020-04-17T20:29:08+00:00"
}
{
"method": "POST",
"url": "http://127.0.0.1:8080/api/v4/internal/pre_receive",
"duration": 0.031022552,
"gitaly_embedded": true,
"pid": 16636,
"level": "info",
"msg": "finished HTTP request",
"time": "2020-04-17T20:29:08+00:00"
}
```
## Gitaly logs
This file is in `/var/log/gitlab/gitaly/current` and is produced by [runit](https://smarden.org/runit/).
`runit` is packaged with the Linux package and a brief explanation of its purpose
is available [in the Linux package documentation](https://docs.gitlab.com/omnibus/architecture/#runit).
[Log files are rotated](https://smarden.org/runit/svlogd.8), renamed in
Unix timestamp format, and `gzip`-compressed (like `@1584057562.s`).
### `grpc.log`
This file is at `/var/log/gitlab/gitlab-rails/grpc.log` for Linux
package installations. It contains the native [gRPC](https://grpc.io/) logging used by Gitaly.
### `gitaly_hooks.log`
This file is at `/var/log/gitlab/gitaly/gitaly_hooks.log` and is
produced by the `gitaly-hooks` command. It also contains records about
failures encountered while processing responses from the GitLab API.
## Puma logs
### `puma_stdout.log`
This log is located:
- In the `/var/log/gitlab/puma/puma_stdout.log` file on Linux package installations.
- In the `/home/git/gitlab/log/puma_stdout.log` file on self-compiled installations.
### `puma_stderr.log`
This log is located:
- In the `/var/log/gitlab/puma/puma_stderr.log` file on Linux package installations.
- In the `/home/git/gitlab/log/puma_stderr.log` file on self-compiled installations.
## `repocheck.log`
This log is located:
- In the `/var/log/gitlab/gitlab-rails/repocheck.log` file on Linux package installations.
- In the `/home/git/gitlab/log/repocheck.log` file on self-compiled installations.
It logs information whenever a [repository check is run](../repository_checks.md)
on a project.
## `importer.log`
This log is located:
- In the `/var/log/gitlab/gitlab-rails/importer.log` file on Linux package installations.
- In the `/home/git/gitlab/log/importer.log` file on self-compiled installations.
- On the Sidekiq pods under the `subcomponent="importer"` key on Helm chart installations.
This file logs the progress of [project imports and migrations](../../user/project/import/_index.md).
## `exporter.log`
This log is located:
- In the `/var/log/gitlab/gitlab-rails/exporter.log` file on Linux package installations.
- In the `/home/git/gitlab/log/exporter.log` file on self-compiled installations.
- On the Sidekiq and Webservice pods under the `subcomponent="exporter"` key on Helm chart installations.
It logs the progress of the export process.
## `features_json.log`
This log is located:
- In the `/var/log/gitlab/gitlab-rails/features_json.log` file on Linux package installations.
- In the `/home/git/gitlab/log/features_json.log` file on self-compiled installations.
- On the Sidekiq and Webservice pods under the `subcomponent="features_json"` key on Helm chart installations.
Modification events for feature flags used in the development of GitLab
are recorded in this file. For example:
```json
{"severity":"INFO","time":"2020-11-24T02:30:59.860Z","correlation_id":null,"key":"cd_auto_rollback","action":"enable","extra.thing":"true"}
{"severity":"INFO","time":"2020-11-24T02:31:29.108Z","correlation_id":null,"key":"cd_auto_rollback","action":"enable","extra.thing":"true"}
{"severity":"INFO","time":"2020-11-24T02:31:29.129Z","correlation_id":null,"key":"cd_auto_rollback","action":"disable","extra.thing":"false"}
{"severity":"INFO","time":"2020-11-24T02:31:29.177Z","correlation_id":null,"key":"cd_auto_rollback","action":"enable","extra.thing":"Project:1"}
{"severity":"INFO","time":"2020-11-24T02:31:29.183Z","correlation_id":null,"key":"cd_auto_rollback","action":"disable","extra.thing":"Project:1"}
{"severity":"INFO","time":"2020-11-24T02:31:29.188Z","correlation_id":null,"key":"cd_auto_rollback","action":"enable_percentage_of_time","extra.percentage":"50"}
{"severity":"INFO","time":"2020-11-24T02:31:29.193Z","correlation_id":null,"key":"cd_auto_rollback","action":"disable_percentage_of_time"}
{"severity":"INFO","time":"2020-11-24T02:31:29.198Z","correlation_id":null,"key":"cd_auto_rollback","action":"enable_percentage_of_actors","extra.percentage":"50"}
{"severity":"INFO","time":"2020-11-24T02:31:29.203Z","correlation_id":null,"key":"cd_auto_rollback","action":"disable_percentage_of_actors"}
{"severity":"INFO","time":"2020-11-24T02:31:29.329Z","correlation_id":null,"key":"cd_auto_rollback","action":"remove"}
```
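To audit how often a flag was toggled, you could tally entries by `key` and `action`. A minimal Python sketch (the `action_counts` helper is illustrative, not part of GitLab):

```python
import json
from collections import Counter

def action_counts(lines):
    """Count feature flag modification events by (key, action)."""
    counts = Counter()
    for line in lines:
        entry = json.loads(line)
        counts[(entry["key"], entry["action"])] += 1
    return counts

samples = [
    '{"key":"cd_auto_rollback","action":"enable"}',
    '{"key":"cd_auto_rollback","action":"disable"}',
    '{"key":"cd_auto_rollback","action":"enable"}',
]
print(action_counts(samples))
```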
## `ci_resource_groups_json.log`
{{< history >}}
- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/384180) in GitLab 15.9.
{{< /history >}}
This log is located:
- In the `/var/log/gitlab/gitlab-rails/ci_resource_groups_json.log` file on Linux package installations.
- In the `/home/git/gitlab/log/ci_resource_group_json.log` file on self-compiled installations.
- On the Sidekiq and Webservice pods under the `subcomponent="ci_resource_groups_json"` key on Helm chart installations.
It contains information about [resource group](../../ci/resource_groups/_index.md) acquisition. For example:
```json
{"severity":"INFO","time":"2023-02-10T23:02:06.095Z","correlation_id":"01GRYS10C2DZQ9J1G12ZVAD4YD","resource_group_id":1,"processable_id":288,"message":"attempted to assign resource to processable","success":true}
{"severity":"INFO","time":"2023-02-10T23:02:08.945Z","correlation_id":"01GRYS138MYEG32C0QEWMC4BDM","resource_group_id":1,"processable_id":288,"message":"attempted to release resource from processable","success":true}
```
The examples show the `resource_group_id`, `processable_id`, `message`, and `success` fields for each entry.
## `auth.log`
This log is located:
- In the `/var/log/gitlab/gitlab-rails/auth.log` file on Linux package installations.
- In the `/home/git/gitlab/log/auth.log` file on self-compiled installations.
This log records:
- Requests over the [Rate Limit](../settings/rate_limits_on_raw_endpoints.md) on raw endpoints.
- [Protected paths](../settings/protected_paths.md) abusive requests.
- User ID and username, if available.
## `auth_json.log`
This log is located:
- In the `/var/log/gitlab/gitlab-rails/auth_json.log` file on Linux package installations.
- In the `/home/git/gitlab/log/auth_json.log` file on self-compiled installations.
- On the Sidekiq and Webservice pods under the `subcomponent="auth_json"` key on Helm chart installations.
This file contains the JSON version of the logs in `auth.log`, for example:
```json
{
"severity":"ERROR",
"time":"2023-04-19T22:14:25.893Z",
"correlation_id":"01GYDSAKAN2SPZPAMJNRWW5H8S",
"message":"Rack_Attack",
"env":"blocklist",
"remote_ip":"x.x.x.x",
"request_method":"GET",
"path":"/group/project.git/info/refs?service=git-upload-pack"
}
```
## `graphql_json.log`
This log is located:
- In the `/var/log/gitlab/gitlab-rails/graphql_json.log` file on Linux package installations.
- In the `/home/git/gitlab/log/graphql_json.log` file on self-compiled installations.
- On the Sidekiq and Webservice pods under the `subcomponent="graphql_json"` key on Helm chart installations.
GraphQL queries are recorded in this file. For example:
```json
{"query_string":"query IntrospectionQuery{__schema {queryType { name },mutationType { name }}}...(etc)","variables":{"a":1,"b":2},"complexity":181,"depth":1,"duration_s":7}
```
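Because each entry records `complexity`, `depth`, and `duration_s`, you can surface expensive queries with a short script. This Python sketch is illustrative only (the `expensive_queries` helper and its thresholds are not part of GitLab):

```python
import json

def expensive_queries(lines, min_complexity=150, min_duration_s=5):
    """Return query strings whose complexity or duration exceeds a threshold."""
    hits = []
    for line in lines:
        entry = json.loads(line)
        if (entry.get("complexity", 0) >= min_complexity
                or entry.get("duration_s", 0) >= min_duration_s):
            hits.append(entry["query_string"])
    return hits

sample = '{"query_string":"query IntrospectionQuery{...}","complexity":181,"depth":1,"duration_s":7}'
print(expensive_queries([sample]))
```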
## `clickhouse.log`
{{< history >}}
- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/133371) in GitLab 16.5.
{{< /history >}}
This log is located:
- In the `/var/log/gitlab/gitlab-rails/clickhouse.log` file on Linux package installations.
- In the `/home/git/gitlab/log/clickhouse.log` file on self-compiled installations.
- On the Sidekiq and Webservice pods under the `subcomponent="clickhouse"` key.
The `clickhouse.log` file logs information related to the
[ClickHouse database client](../../integration/clickhouse.md) in GitLab.
## `migrations.log`
This log is located:
- In the `/var/log/gitlab/gitlab-rails/migrations.log` file on Linux package installations.
- In the `/home/git/gitlab/log/migrations.log` file on self-compiled installations.
This file logs the progress of [database migrations](../raketasks/maintenance.md#display-status-of-database-migrations).
## `mail_room_json.log` (default)
This log is located:
- In the `/var/log/gitlab/mailroom/current` file on Linux package installations.
- In the `/home/git/gitlab/log/mail_room_json.log` file on self-compiled installations.
This structured log file records internal activity in the `mail_room` gem.
Its name and path are configurable, so they may not match the paths
documented previously.
## `web_hooks.log`
{{< history >}}
- Introduced in GitLab 16.3.
{{< /history >}}
This log is located:
- In the `/var/log/gitlab/gitlab-rails/web_hooks.log` file on Linux package installations.
- In the `/home/git/gitlab/log/web_hooks.log` file on self-compiled installations.
- On the Sidekiq pods under the `subcomponent="web_hooks"` key on Helm chart installations.
The back-off, disablement, and re-enablement events for webhooks are recorded in this file. For example:
```json
{"severity":"INFO","time":"2020-11-24T02:30:59.860Z","hook_id":12,"action":"backoff","disabled_until":"2020-11-24T04:30:59.860Z","recent_failures":2}
{"severity":"INFO","time":"2020-11-24T02:30:59.860Z","hook_id":12,"action":"disable","disabled_until":null,"recent_failures":100}
{"severity":"INFO","time":"2020-11-24T02:30:59.860Z","hook_id":12,"action":"enable","disabled_until":null,"recent_failures":0}
```
## Reconfigure logs
Reconfigure log files are in `/var/log/gitlab/reconfigure` for Linux package installations. Self-compiled installations
don't have reconfigure logs. A reconfigure log is populated whenever `gitlab-ctl reconfigure` is run manually or as part
of an upgrade.
Reconfigure log files are named according to the UNIX timestamp of when the reconfigure
was initiated, such as `1509705644.log`.
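To find the reconfigure log for a given incident, you can convert a filename's UNIX timestamp back to a date. For example, in Python:

```python
from datetime import datetime, timezone

def reconfigure_log_time(filename):
    """Convert a reconfigure log filename such as '1509705644.log' to UTC time."""
    stamp = int(filename.split(".")[0])
    return datetime.fromtimestamp(stamp, tz=timezone.utc)

print(reconfigure_log_time("1509705644.log"))
```

On most Linux systems, `date -d @1509705644` achieves the same result.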
## `sidekiq_exporter.log` and `web_exporter.log`
If Prometheus metrics and the Sidekiq Exporter are both enabled, Sidekiq
starts a web server that listens on the defined port (default:
`8082`). By default, Sidekiq Exporter access logs are disabled, but they can
be enabled:
- Use the `sidekiq['exporter_log_enabled'] = true` option in `/etc/gitlab/gitlab.rb` on Linux package installations.
- Use the `sidekiq_exporter.log_enabled` option in `gitlab.yml` on self-compiled installations.
When enabled, depending on your installation method, this file is located at:
- `/var/log/gitlab/gitlab-rails/sidekiq_exporter.log` on Linux package installations.
- `/home/git/gitlab/log/sidekiq_exporter.log` on self-compiled installations.
If Prometheus metrics and the Web Exporter are both enabled, Puma
starts a web server that listens on the defined port (default: `8083`), and access logs
are generated in a location based on your installation method:
- `/var/log/gitlab/gitlab-rails/web_exporter.log` on Linux package installations.
- `/home/git/gitlab/log/web_exporter.log` on self-compiled installations.
## `database_load_balancing.log`
{{< details >}}
- Tier: Premium, Ultimate
- Offering: GitLab Self-Managed
{{< /details >}}
Contains details of GitLab [Database Load Balancing](../postgresql/database_load_balancing.md).
This log is located:
- In the `/var/log/gitlab/gitlab-rails/database_load_balancing.log` file on Linux package installations.
- In the `/home/git/gitlab/log/database_load_balancing.log` file on self-compiled installations.
- On the Sidekiq and Webservice pods under the `subcomponent="database_load_balancing"` key on Helm chart installations.
## `zoekt.log`
{{< details >}}
- Tier: Premium, Ultimate
- Offering: GitLab Self-Managed
{{< /details >}}
{{< history >}}
- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/110980) in GitLab 15.9.
{{< /history >}}
This file logs information related to [exact code search](../../user/search/exact_code_search.md).
This log is located:
- In the `/var/log/gitlab/gitlab-rails/zoekt.log` file on Linux package installations.
- In the `/home/git/gitlab/log/zoekt.log` file on self-compiled installations.
- On the Sidekiq and Webservice pods under the `subcomponent="zoekt"` key on Helm chart installations.
## `elasticsearch.log`
{{< details >}}
- Tier: Premium, Ultimate
- Offering: GitLab Self-Managed
{{< /details >}}
This file logs information related to the Elasticsearch Integration, including
errors during indexing or searching Elasticsearch.
This log is located:
- In the `/var/log/gitlab/gitlab-rails/elasticsearch.log` file on Linux package installations.
- In the `/home/git/gitlab/log/elasticsearch.log` file on self-compiled installations.
- On the Sidekiq and Webservice pods under the `subcomponent="elasticsearch"` key on Helm chart installations.
Each line contains JSON that can be ingested by services like Elasticsearch and Splunk.
Line breaks have been added to the following example line for clarity:
```json
{
"severity":"DEBUG",
"time":"2019-10-17T06:23:13.227Z",
"correlation_id":null,
"message":"redacted_search_result",
"class_name":"Milestone",
"id":2,
"ability":"read_milestone",
"current_user_id":2,
"query":"project"
}
```
## `exceptions_json.log`
This file logs the information about exceptions being tracked by
`Gitlab::ErrorTracking`, which provides a standard and consistent way of
processing rescued exceptions.
This log is located:
- In the `/var/log/gitlab/gitlab-rails/exceptions_json.log` file on Linux package installations.
- In the `/home/git/gitlab/log/exceptions_json.log` file on self-compiled installations.
- On the Sidekiq and Webservice pods under the `subcomponent="exceptions_json"` key on Helm chart installations.
Each line contains JSON that can be ingested by Elasticsearch. For example:
```json
{
"severity": "ERROR",
"time": "2019-12-17T11:49:29.485Z",
"correlation_id": "AbDVUrrTvM1",
"extra.project_id": 55,
"extra.relation_key": "milestones",
"extra.relation_index": 1,
"exception.class": "NoMethodError",
"exception.message": "undefined method `strong_memoize' for #<Gitlab::ImportExport::RelationFactory:0x00007fb5d917c4b0>",
"exception.backtrace": [
"lib/gitlab/import_export/relation_factory.rb:329:in `unique_relation?'",
"lib/gitlab/import_export/relation_factory.rb:345:in `find_or_create_object!'"
]
}
```
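To see which exceptions occur most often, you could tally entries by `exception.class`. A minimal Python sketch (the `top_exception_classes` helper is illustrative, not part of GitLab):

```python
import json
from collections import Counter

def top_exception_classes(lines):
    """Tally tracked exceptions by their `exception.class` field."""
    counts = Counter()
    for line in lines:
        entry = json.loads(line)
        counts[entry.get("exception.class", "unknown")] += 1
    return counts.most_common()

sample = '{"severity":"ERROR","exception.class":"NoMethodError"}'
print(top_exception_classes([sample]))
```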
## `service_measurement.log`
This log is located:
- In the `/var/log/gitlab/gitlab-rails/service_measurement.log` file on Linux package installations.
- In the `/home/git/gitlab/log/service_measurement.log` file on self-compiled installations.
- On the Sidekiq and Webservice pods under the `subcomponent="service_measurement"` key on Helm chart installations.
It contains a single structured log entry with measurements for each service execution,
such as the number of SQL calls, `execution_time`, `gc_stats`, and `memory_usage`.
For example:
```json
{ "severity":"INFO", "time":"2020-04-22T16:04:50.691Z","correlation_id":"04f1366e-57a1-45b8-88c1-b00b23dc3616","class":"Projects::ImportExport::ExportService","current_user":"John Doe","project_full_path":"group1/test-export","file_path":"/path/to/archive","gc_stats":{"count":{"before":127,"after":127,"diff":0},"heap_allocated_pages":{"before":10369,"after":10369,"diff":0},"heap_sorted_length":{"before":10369,"after":10369,"diff":0},"heap_allocatable_pages":{"before":0,"after":0,"diff":0},"heap_available_slots":{"before":4226409,"after":4226409,"diff":0},"heap_live_slots":{"before":2542709,"after":2641420,"diff":98711},"heap_free_slots":{"before":1683700,"after":1584989,"diff":-98711},"heap_final_slots":{"before":0,"after":0,"diff":0},"heap_marked_slots":{"before":2542704,"after":2542704,"diff":0},"heap_eden_pages":{"before":10369,"after":10369,"diff":0},"heap_tomb_pages":{"before":0,"after":0,"diff":0},"total_allocated_pages":{"before":10369,"after":10369,"diff":0},"total_freed_pages":{"before":0,"after":0,"diff":0},"total_allocated_objects":{"before":24896308,"after":24995019,"diff":98711},"total_freed_objects":{"before":22353599,"after":22353599,"diff":0},"malloc_increase_bytes":{"before":140032,"after":6650240,"diff":6510208},"malloc_increase_bytes_limit":{"before":25804104,"after":25804104,"diff":0},"minor_gc_count":{"before":94,"after":94,"diff":0},"major_gc_count":{"before":33,"after":33,"diff":0},"remembered_wb_unprotected_objects":{"before":34284,"after":34284,"diff":0},"remembered_wb_unprotected_objects_limit":{"before":68568,"after":68568,"diff":0},"old_objects":{"before":2404725,"after":2404725,"diff":0},"old_objects_limit":{"before":4809450,"after":4809450,"diff":0},"oldmalloc_increase_bytes":{"before":140032,"after":6650240,"diff":6510208},"oldmalloc_increase_bytes_limit":{"before":68537556,"after":68537556,"diff":0}},"time_to_finish":0.12298400001600385,"number_of_sql_calls":70,"memory_usage":"0.0 MiB","label":"process_48616"}
```
## `geo.log`
{{< details >}}
- Tier: Premium, Ultimate
- Offering: GitLab Self-Managed
{{< /details >}}
This log is located:
- In the `/var/log/gitlab/gitlab-rails/geo.log` file on Linux package installations.
- In the `/home/git/gitlab/log/geo.log` file on self-compiled installations.
- On the Sidekiq and Webservice pods under the `subcomponent="geo"` key on Helm chart installations.
This file contains information about when Geo attempts to sync repositories
and files. Each line in the file contains a separate JSON entry that can be
ingested by services such as Elasticsearch or Splunk.
For example:
```json
{"severity":"INFO","time":"2017-08-06T05:40:16.104Z","message":"Repository update","project_id":1,"source":"repository","resync_repository":true,"resync_wiki":true,"class":"Gitlab::Geo::LogCursor::Daemon","cursor_delay_s":0.038}
```
This message shows that Geo detected that a repository update was needed for project `1`.
## `update_mirror_service_json.log`
This log is located:
- In the `/var/log/gitlab/gitlab-rails/update_mirror_service_json.log` file on Linux package installations.
- In the `/home/git/gitlab/log/update_mirror_service_json.log` file on self-compiled installations.
- On the Sidekiq pods under the `subcomponent="update_mirror_service_json"` key on Helm chart installations.
This file contains information about LFS errors that occurred during project mirroring.
While we work to move other project mirroring errors into this log, the [general log](#productionlog)
can be used.
```json
{
"severity":"ERROR",
"time":"2020-07-28T23:29:29.473Z",
"correlation_id":"5HgIkCJsO53",
"user_id":"x",
"project_id":"x",
"import_url":"https://mirror-source/group/project.git",
"error_message":"The LFS objects download list couldn't be imported. Error: Unauthorized"
}
```
## `llm.log`
{{< details >}}
- Tier: Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated
{{< /details >}}
{{< history >}}
- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/120506) in GitLab 16.0.
{{< /history >}}
The `llm.log` file logs information related to
[AI features](../../user/gitlab_duo/_index.md). Logging includes information about AI events.
### LLM input and output logging
{{< history >}}
- [Introduced](https://gitlab.com/groups/gitlab-org/-/epics/13401) in GitLab 17.2 [with a flag](../feature_flags/_index.md) named `expanded_ai_logging`. Disabled by default.
{{< /history >}}
{{< alert type="flag" >}}
The availability of this feature is controlled by a feature flag.
For more information, see the history.
This feature is available for testing, but not ready for production use.
{{< /alert >}}
To log the LLM prompt input and response output, enable the `expanded_ai_logging` feature flag. This flag is intended for use on GitLab.com only, not on GitLab Self-Managed instances.
The flag is disabled by default, and can be enabled for GitLab.com only after you provide consent through a GitLab [Support Ticket](https://about.gitlab.com/support/portal/).
By default, the log does not contain LLM prompt input and response output to support [data retention policies](../../user/gitlab_duo/data_usage.md#data-retention) of AI feature data.
The log file is located at:
- In the `/var/log/gitlab/gitlab-rails/llm.log` file on Linux package installations.
- In the `/home/git/gitlab/log/llm.log` file on self-compiled installations.
- On the Webservice pods under the `subcomponent="llm"` key on Helm chart installations.
## `epic_work_item_sync.log`
{{< details >}}
- Tier: Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated
{{< /details >}}
{{< history >}}
- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/120506) in GitLab 16.9.
{{< /history >}}
The `epic_work_item_sync.log` file logs information related to syncing and migrating epics as work items.
This log is located:
- In the `/var/log/gitlab/gitlab-rails/epic_work_item_sync.log` file on Linux package installations.
- In the `/home/git/gitlab/log/epic_work_item_sync.log` file on self-compiled installations.
- On the Sidekiq and Webservice pods under the `subcomponent="epic_work_item_sync"` key on Helm chart installations.
## `secret_push_protection.log`
{{< details >}}
- Tier: Ultimate
- Offering: GitLab.com, GitLab Dedicated
{{< /details >}}
{{< history >}}
- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/137812) in GitLab 16.7.
{{< /history >}}
The `secret_push_protection.log` file logs information related to the [Secret Push Protection](../../user/application_security/secret_detection/secret_push_protection/_index.md) feature.
This log is located:
- In the `/var/log/gitlab/gitlab-rails/secret_push_protection.log` file on Linux package installations.
- In the `/home/git/gitlab/log/secret_push_protection.log` file on self-compiled installations.
- On the Webservice pods under the `subcomponent="secret_push_protection"` key on Helm chart installations.
## `active_context.log`
{{< details >}}
- Tier: Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed
{{< /details >}}
{{< history >}}
- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/work_items/554925) in GitLab 18.3.
{{< /history >}}
The `active_context.log` file logs information related to embedding pipelines through the
[`ActiveContext` layer](https://handbook.gitlab.com/handbook/engineering/architecture/design-documents/ai_context_abstraction_layer/).
GitLab supports `ActiveContext` code embeddings; this pipeline generates embeddings for project code files.
For more information, see the [architecture design](https://handbook.gitlab.com/handbook/engineering/architecture/design-documents/codebase_as_chat_context/code_embeddings/).
This log is located:
- In the `/var/log/gitlab/gitlab-rails/active_context.log` file on Linux package installations.
- In the `/home/git/gitlab/log/active_context.log` file on self-compiled installations.
- On the Sidekiq pods under the `subcomponent="activecontext"` key on Helm chart installations.
## Registry logs
For Linux package installations, container registry logs are in `/var/log/gitlab/registry/current`.
## NGINX logs
For Linux package installations, NGINX logs are in:
- `/var/log/gitlab/nginx/gitlab_access.log`: A log of requests made to GitLab
- `/var/log/gitlab/nginx/gitlab_error.log`: A log of NGINX errors for GitLab
- `/var/log/gitlab/nginx/gitlab_pages_access.log`: A log of requests made to Pages static sites
- `/var/log/gitlab/nginx/gitlab_pages_error.log`: A log of NGINX errors for Pages static sites
- `/var/log/gitlab/nginx/gitlab_registry_access.log`: A log of requests made to the container registry
- `/var/log/gitlab/nginx/gitlab_registry_error.log`: A log of NGINX errors for the container registry
- `/var/log/gitlab/nginx/gitlab_mattermost_access.log`: A log of requests made to Mattermost
- `/var/log/gitlab/nginx/gitlab_mattermost_error.log`: A log of NGINX errors for Mattermost
Below is the default GitLab NGINX access log format:
```plaintext
'$remote_addr - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent"'
```
The `$request` and `$http_referer` are
[filtered](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/support/nginx/gitlab)
for sensitive query string parameters such as secret tokens.
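Because this format is fixed, standard text tools can summarize the access log. The sketch below counts requests per HTTP status, assuming the request line has the usual three tokens (method, path, protocol), which puts `$status` in field 9. The `printf` lines are illustrative sample input; in practice, point the same `awk` program at `/var/log/gitlab/nginx/gitlab_access.log`:

```shell
# Count requests per HTTP status. Demo input below; in practice run:
#   awk '{ counts[$9]++ } END { for (s in counts) print counts[s], s }' /var/log/gitlab/nginx/gitlab_access.log
printf '%s\n' \
  '203.0.113.5 - - [22/Apr/2020:17:53:12 +0000] "GET /group/project HTTP/1.1" 200 1234 "-" "git/2.26.0"' \
  '198.51.100.7 - - [22/Apr/2020:17:53:13 +0000] "GET /missing HTTP/1.1" 404 153 "-" "curl/7.68.0"' |
awk '{ counts[$9]++ } END { for (s in counts) print counts[s], s }'
```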
## Pages logs
For Linux package installations, Pages logs are in `/var/log/gitlab/gitlab-pages/current`.
For example:
```json
{
"level": "info",
"msg": "GitLab Pages Daemon",
"revision": "52b2899",
"time": "2020-04-22T17:53:12Z",
"version": "1.17.0"
}
{
"level": "info",
"msg": "URL: https://gitlab.com/gitlab-org/gitlab-pages",
"time": "2020-04-22T17:53:12Z"
}
{
"gid": 998,
"in-place": false,
"level": "info",
"msg": "running the daemon as unprivileged user",
"time": "2020-04-22T17:53:12Z",
"uid": 998
}
```
## Product Usage Data log
{{< alert type="note" >}}
We recommend against using the raw logs to analyze feature usage, because the data quality has not yet been certified for accuracy.
The list of events can change in each version based on new features or changes to existing features. Certified in-product adoption reports will be available after the data is ready for analysis.
{{< /alert >}}
This log is located:
- In the `/var/log/gitlab/gitlab-rails/product_usage_data.log` file on Linux package installations.
- In the `/home/git/gitlab/log/product_usage_data.log` file on self-compiled installations.
- On the Webservice pods under the `subcomponent="product_usage_data"` key on Helm chart installations.
It contains JSON-formatted logs of product usage events tracked through Snowplow. Each line in the file contains a separate JSON entry that can be ingested by services like Elasticsearch or Splunk. Line breaks were added to examples for legibility:
```json
{
"severity":"INFO",
"time":"2025-04-09T13:43:40.254Z",
"message":"sending event",
"payload":"{
\"e\":\"se\",
\"se_ca\":\"projects:merge_requests:diffs\",
\"se_ac\":\"i_code_review_user_searches_diff\",
\"cx\":\"eyJzY2hlbWEiOiJpZ2x1OmNvbS5zbm93cGxvd2FuYWx5dGljcy5zbm93cGxvdy9jb250ZXh0cy9qc29uc2NoZW1hLzEtMC0xIiwiZGF0YSI6W3sic2NoZW1hIjoiaWdsdTpjb20uZ2l0bGFiL2dpdGxhYl9zdGFuZGFyZC9qc29uc2NoZW1hLzEtMS0xIiwiZGF0YSI6eyJlbnZpcm9ubWVudCI6ImRldmVsb3BtZW50Iiwic291cmNlIjoiZ2l0bGFiLXJhaWxzIiwiY29ycmVsYXRpb25faWQiOiJlNDk2NzNjNWI2MGQ5ODc0M2U4YWI0MjZiMTZmMTkxMiIsInBsYW4iOiJkZWZhdWx0IiwiZXh0cmEiOnt9LCJ1c2VyX2lkIjpudWxsLCJnbG9iYWxfdXNlcl9pZCI6bnVsbCwiaXNfZ2l0bGFiX3RlYW1fbWVtYmVyIjpudWxsLCJuYW1lc3BhY2VfaWQiOjMxLCJwcm9qZWN0X2lkIjo2LCJmZWF0dXJlX2VuYWJsZWRfYnlfbmFtZXNwYWNlX2lkcyI6bnVsbCwicmVhbG0iOiJzZWxmLW1hbmFnZWQiLCJpbnN0YW5jZV9pZCI6IjJkMDg1NzBkLWNmZGItNDFmMy1iODllLWM3MTM5YmFjZTI3NSIsImhvc3RfbmFtZSI6ImpsYXJzZW4tLTIwMjIxMjE0LVBWWTY5IiwiaW5zdGFuY2VfdmVyc2lvbiI6IjE3LjExLjAiLCJjb250ZXh0X2dlbmVyYXRlZF9hdCI6IjIwMjUtMDQtMDkgMTM6NDM6NDAgVVRDIn19LHsic2NoZW1hIjoiaWdsdTpjb20uZ2l0bGFiL2dpdGxhYl9zZXJ2aWNlX3BpbmcvanNvbnNjaGVtYS8xLTAtMSIsImRhdGEiOnsiZGF0YV9zb3VyY2UiOiJyZWRpc19obGwiLCJldmVudF9uYW1lIjoiaV9jb2RlX3Jldmlld191c2VyX3NlYXJjaGVzX2RpZmYifX1dfQ==\",
\"p\":\"srv\",
\"dtm\":\"1744206220253\",
\"tna\":\"gl\",
\"tv\":\"rb-0.8.0\",
\"eid\":\"4f067989-d10d-40b0-9312-ad9d7355be7f\"
}
```
To inspect these logs, you can use the `product_usage_data:format` [Rake task](../raketasks/_index.md), which formats the JSON output and decodes Base64-encoded context data for better readability:
```shell
gitlab-rake "product_usage_data:format[log/product_usage_data.log]"
# or pipe the logs directly
cat log/product_usage_data.log | gitlab-rake product_usage_data:format
# or tail the logs in real-time
tail -f log/product_usage_data.log | gitlab-rake product_usage_data:format
```
You can disable this log by setting the `GITLAB_DISABLE_PRODUCT_USAGE_EVENT_LOGGING` environment variable to any value.
## Let's Encrypt logs
For Linux package installations, Let's Encrypt [auto-renew](https://docs.gitlab.com/omnibus/settings/ssl/#renew-the-certificates-automatically) logs are in `/var/log/gitlab/lets-encrypt/`.
## Mattermost logs
For Linux package installations, Mattermost logs are in these locations:
- `/var/log/gitlab/mattermost/mattermost.log`
- `/var/log/gitlab/mattermost/current`
## Workhorse logs
For Linux package installations, Workhorse logs are in `/var/log/gitlab/gitlab-workhorse/current`.
## Patroni logs
For Linux package installations, Patroni logs are in `/var/log/gitlab/patroni/current`.
## PgBouncer logs
For Linux package installations, PgBouncer logs are in `/var/log/gitlab/pgbouncer/current`.
## PostgreSQL logs
For Linux package installations, PostgreSQL logs are in `/var/log/gitlab/postgresql/current`.
If Patroni is being used, the PostgreSQL logs are stored in the [Patroni logs](#patroni-logs) instead.
## Prometheus logs
For Linux package installations, Prometheus logs are in `/var/log/gitlab/prometheus/current`.
## Redis logs
For Linux package installations, Redis logs are in `/var/log/gitlab/redis/current`.
## Sentinel logs
For Linux package installations, Sentinel logs are in `/var/log/gitlab/sentinel/current`.
## Alertmanager logs
For Linux package installations, Alertmanager logs are in `/var/log/gitlab/alertmanager/current`.
<!-- vale gitlab_base.Spelling = NO -->
## crond logs
For Linux package installations, crond logs are in `/var/log/gitlab/crond/`.
<!-- vale gitlab_base.Spelling = YES -->
## Grafana logs
For Linux package installations, Grafana logs are in `/var/log/gitlab/grafana/current`.
## LogRotate logs
For Linux package installations, `logrotate` logs are in `/var/log/gitlab/logrotate/current`.
## GitLab Monitor logs
For Linux package installations, GitLab Monitor logs are in `/var/log/gitlab/gitlab-monitor/`.
## GitLab Exporter
For Linux package installations, GitLab Exporter logs are in `/var/log/gitlab/gitlab-exporter/current`.
## GitLab agent server for Kubernetes
For Linux package installations, GitLab agent server for Kubernetes logs are
in `/var/log/gitlab/gitlab-kas/current`.
## Praefect logs
For Linux package installations, Praefect logs are in `/var/log/gitlab/praefect/`.
GitLab also tracks [Prometheus metrics for Gitaly Cluster (Praefect)](../gitaly/praefect/monitoring.md).
## Backup log
For Linux package installations, the backup log is located at `/var/log/gitlab/gitlab-rails/backup_json.log`.
On Helm chart installations, the backup log is stored in the Toolbox pod, at `/var/log/gitlab/backup_json.log`.
This log is populated when a [GitLab backup is created](../backup_restore/_index.md). You can use this log to understand how the backup process performed.
## Performance bar stats
This log is located:
- In the `/var/log/gitlab/gitlab-rails/performance_bar_json.log` file on Linux package installations.
- In the `/home/git/gitlab/log/performance_bar_json.log` file on self-compiled installations.
- On the Sidekiq pods under the `subcomponent="performance_bar_json"` key on Helm chart installations.
Performance bar statistics (currently only the duration of SQL queries) are recorded
in this file. For example:
```json
{"severity":"INFO","time":"2020-12-04T09:29:44.592Z","correlation_id":"33680b1490ccd35981b03639c406a697","filename":"app/models/ci/pipeline.rb","method_path":"app/models/ci/pipeline.rb:each_with_object","request_id":"rYHomD0VJS4","duration_ms":26.889,"count":2,"query_type": "active-record"}
```
These statistics are logged on GitLab.com only, and are disabled on GitLab Self-Managed instances.
## Gathering logs
When [troubleshooting](../troubleshooting/_index.md) issues that aren't localized to one of the
previously listed components, it's helpful to simultaneously gather multiple logs and statistics
from a GitLab instance.
{{< alert type="note" >}}
GitLab Support often asks for one of these, and maintains the required tools.
{{< /alert >}}
### Briefly tail the main logs
If the bug or error is readily reproducible, save the main GitLab logs
[to a file](../troubleshooting/linux_cheat_sheet.md#files-and-directories) while reproducing the
problem a few times:
```shell
sudo gitlab-ctl tail | tee /tmp/<case-ID-and-keywords>.log
```
Conclude the log gathering with <kbd>Control</kbd> + <kbd>C</kbd>.
### Gathering SOS logs
If performance degradations or cascading errors occur that can't readily be attributed to one
of the previously listed GitLab components, [use our SOS scripts](../troubleshooting/diagnostics_tools.md#sos-scripts).
### Fast-stats
[Fast-stats](https://gitlab.com/gitlab-com/support/toolbox/fast-stats) is a tool
for creating and comparing performance statistics from GitLab logs.
For more details and instructions to run it, read the
[documentation for fast-stats](https://gitlab.com/gitlab-com/support/toolbox/fast-stats#usage).
## Find relevant log entries with a correlation ID
Most requests have a log ID that can be used to [find relevant log entries](tracing_correlation_id.md).
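Because the Rails JSON logs all include a `correlation_id` field, a plain `grep` can collect every entry for a single request. The sample entries below are illustrative, reusing the example ID `O1SdybnnIq7` from this page:

```shell
# Demo input below; in practice run:
#   grep -h '"correlation_id":"O1SdybnnIq7"' /var/log/gitlab/gitlab-rails/*.log
printf '%s\n' \
  '{"severity":"INFO","correlation_id":"O1SdybnnIq7","path":"/admin"}' \
  '{"severity":"INFO","correlation_id":"aBcDeF12345","path":"/help"}' |
grep '"correlation_id":"O1SdybnnIq7"'
```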
---
stage: Monitor
group: Platform Insights
info: To determine the technical writer assigned to the Stage/Group associated with
this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Log system
description: Access comprehensive logging and monitoring capabilities.
---
{{< details >}}
- Tier: Free, Premium, Ultimate
- Offering: GitLab Self-Managed
{{< /details >}}
The log system in GitLab provides comprehensive logging and monitoring capabilities for analyzing your GitLab instance.
You can use logs to identify system issues, investigate security events, and analyze application performance.
A log entry exists for every action, so when issues occur, these logs provide the data needed to quickly diagnose and resolve problems.
The log system:
- Tracks all application activity across GitLab components in structured log files.
- Records performance metrics, errors, and security events in standardized formats.
- Integrates with log analysis tools like Elasticsearch and Splunk through JSON logging.
- Maintains separate log files for different GitLab services and components.
- Includes correlation IDs to trace requests across the entire system.
System log files are typically plain text in a standard log file format.
The log system is similar to [audit events](../compliance/audit_event_reports.md).
For more information, see also:
- [Customizing logging on Linux package installations](https://docs.gitlab.com/omnibus/settings/logs.html)
- [Parsing and analyzing GitLab logs in JSON format](log_parsing.md)
## Log levels
Each log message has an assigned log level that indicates its importance and verbosity.
Each logger has an assigned minimum log level.
A logger emits a log message only if its log level is equal to or above the minimum log level.
The following log levels are supported:
| Level | Name |
|:------|:----------|
| 0 | `DEBUG` |
| 1 | `INFO` |
| 2 | `WARN` |
| 3 | `ERROR` |
| 4 | `FATAL` |
| 5 | `UNKNOWN` |
GitLab loggers emit all log messages because they are set to `DEBUG` by default.
### Override default log level
You can override the minimum log level for GitLab loggers using the `GITLAB_LOG_LEVEL` environment variable.
Valid values are either a value of `0` to `5`, or the name of the log level.
Example:
```shell
GITLAB_LOG_LEVEL=info
```
For some services, other log levels are in place that are not affected by this setting.
Some of these services have their own environment variables to override the log level. For example:
| Service | Log level | Environment variable |
|:--------------------------|:----------|:---------------------|
| GitLab Cleanup | `INFO` | `DEBUG` |
| GitLab Doctor | `INFO` | `VERBOSE` |
| GitLab Export | `INFO` | `EXPORT_DEBUG` |
| GitLab Import | `INFO` | `IMPORT_DEBUG` |
| GitLab QA Runtime | `INFO` | `QA_LOG_LEVEL` |
| GitLab Product Usage Data | `INFO` | |
| Google APIs | `INFO` | |
| Rack Timeout | `ERROR` | |
| Snowplow Tracker | `FATAL` | |
| gRPC Client (Gitaly) | `WARN` | `GRPC_LOG_LEVEL` |
| LLM | `INFO` | `LLM_DEBUG` |
## Log rotation
The logs for a given service may be managed and rotated by:
- `logrotate`
- `svlogd` (`runit`'s service logging daemon)
- `logrotate` and `svlogd`
- Or not at all
The following table includes information about what's responsible for managing and rotating logs for
the included services. Logs
[managed by `svlogd`](https://docs.gitlab.com/omnibus/settings/logs.html#runit-logs)
are written to a file called `current`. The `logrotate` service built into GitLab
[manages all logs](https://docs.gitlab.com/omnibus/settings/logs.html#logrotate)
except those captured by `runit`.
| Log type | Managed by logrotate | Managed by svlogd/runit |
|:------------------------------------------------|:------------------------|:------------------------|
| [Alertmanager logs](#alertmanager-logs) | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes |
| [crond logs](#crond-logs) | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes |
| [Gitaly](#gitaly-logs) | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes |
| [GitLab Exporter for Linux package installations](#gitlab-exporter) | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes |
| [GitLab Pages logs](#pages-logs) | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes |
| GitLab Rails | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No |
| [GitLab Shell logs](#gitlab-shelllog) | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No |
| [Grafana logs](#grafana-logs) | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes |
| [LogRotate logs](#logrotate-logs) | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes |
| [Mailroom](#mail_room_jsonlog-default) | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes |
| [NGINX](#nginx-logs) | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes |
| [Patroni logs](#patroni-logs) | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes |
| [PgBouncer logs](#pgbouncer-logs) | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes |
| [PostgreSQL logs](#postgresql-logs) | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes |
| [Praefect logs](#praefect-logs)                  | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes |
| [Prometheus logs](#prometheus-logs) | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes |
| [Puma](#puma-logs) | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes |
| [Redis logs](#redis-logs) | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes |
| [Registry logs](#registry-logs) | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes |
| [Sentinel logs](#sentinel-logs) | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes |
| [Workhorse logs](#workhorse-logs) | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes |
## Accessing logs on Helm chart installations
On Helm chart installations, GitLab components send logs to `stdout`, which can be accessed by using `kubectl logs`.
Logs are also available in the pod at `/var/log/gitlab` for the lifetime of the pod.
### Pods with structured logs (subcomponent filtering)
Some pods include a `subcomponent` field that identifies the specific log type:
```shell
# Webservice pod logs (Rails application)
kubectl logs -l app=webservice -c webservice | jq 'select(."subcomponent"=="<subcomponent-key>")'
# Sidekiq pod logs (background jobs)
kubectl logs -l app=sidekiq | jq 'select(."subcomponent"=="<subcomponent-key>")'
```
The following log sections indicate the appropriate pod and subcomponent key where applicable.
### Other pods
For other GitLab components that don't use structured logs with subcomponents, you can access logs directly.
To find available pod selectors:
```shell
# List all unique app labels in use
kubectl get pods -o jsonpath='{range .items[*]}{.metadata.labels.app}{"\n"}{end}' | grep -v '^$' | sort | uniq
# For pods with app labels
kubectl logs -l app=<pod-selector>
# For specific pods (when app labels aren't available)
kubectl get pods
kubectl logs <pod-name>
```
For more Kubernetes troubleshooting commands, see the [Kubernetes cheat sheet](https://docs.gitlab.com/charts/troubleshooting/kubernetes_cheat_sheet/).
## `production_json.log`
This log is located:
- In the `/var/log/gitlab/gitlab-rails/production_json.log` file on Linux package installations.
- In the `/home/git/gitlab/log/production_json.log` file on self-compiled installations.
- On the Webservice pods under the `subcomponent="production_json"` key on Helm chart installations.
It contains a structured log for Rails controller requests received from
GitLab, thanks to [Lograge](https://github.com/roidrage/lograge/).
Requests from the API are logged to a separate file in `api_json.log`.
Each line contains JSON that can be ingested by services like Elasticsearch and Splunk.
Line breaks were added to examples for legibility:
```json
{
"method":"GET",
"path":"/gitlab/gitlab-foss/issues/1234",
"format":"html",
"controller":"Projects::IssuesController",
"action":"show",
"status":200,
"time":"2017-08-08T20:15:54.821Z",
"params":[{"key":"param_key","value":"param_value"}],
"remote_ip":"18.245.0.1",
"user_id":1,
"username":"admin",
"queue_duration_s":0.0,
"gitaly_calls":16,
"gitaly_duration_s":0.16,
"redis_calls":115,
"redis_duration_s":0.13,
"redis_read_bytes":1507378,
"redis_write_bytes":2920,
"correlation_id":"O1SdybnnIq7",
"cpu_s":17.50,
"db_duration_s":0.08,
"view_duration_s":2.39,
"duration_s":20.54,
"pid": 81836,
"worker_id":"puma_0"
}
```
This example was a GET request for a specific
issue. Each line also contains performance data, with times in
seconds:
- `duration_s`: Total time to retrieve the request
- `queue_duration_s`: Total time the request was queued inside GitLab Workhorse
- `view_duration_s`: Total time inside the Rails views
- `db_duration_s`: Total time to retrieve data from PostgreSQL
- `cpu_s`: Total time spent on CPU
- `gitaly_duration_s`: Total time by Gitaly calls
- `gitaly_calls`: Total number of calls made to Gitaly
- `redis_calls`: Total number of calls made to Redis
- `redis_cross_slot_calls`: Total number of cross-slot calls made to Redis
- `redis_allowed_cross_slot_calls`: Total number of allowed cross-slot calls made to Redis
- `redis_duration_s`: Total time to retrieve data from Redis
- `redis_read_bytes`: Total bytes read from Redis
- `redis_write_bytes`: Total bytes written to Redis
- `redis_<instance>_calls`: Total number of calls made to a Redis instance
- `redis_<instance>_cross_slot_calls`: Total number of cross-slot calls made to a Redis instance
- `redis_<instance>_allowed_cross_slot_calls`: Total number of allowed cross-slot calls made to a Redis instance
- `redis_<instance>_duration_s`: Total time to retrieve data from a Redis instance
- `redis_<instance>_read_bytes`: Total bytes read from a Redis instance
- `redis_<instance>_write_bytes`: Total bytes written to a Redis instance
- `pid`: The worker's Linux process ID (changes when workers restart)
- `worker_id`: The worker's logical ID (does not change when workers restart)
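These timings make the log useful for spotting slow requests. The sketch below extracts `duration_s` with `awk` alone, assuming one JSON entry per line as in this log; a `jq` filter such as `jq 'select(.duration_s > 10)'` does the same with proper JSON parsing. The `printf` lines are demo input; in practice, point the program at `/var/log/gitlab/gitlab-rails/production_json.log`:

```shell
# Print entries whose duration_s exceeds 10 seconds. Demo input below; in
# practice, run the same awk program against production_json.log.
printf '%s\n' \
  '{"path":"/fast","duration_s":0.20}' \
  '{"path":"/slow","duration_s":20.54}' |
awk 'match($0, /"duration_s":[0-9.]+/) {
  if (substr($0, RSTART + 13, RLENGTH - 13) + 0 > 10) print
}'
```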
User clone and fetch activity using HTTP transport appears in the log as `action: git_upload_pack`.
In addition, the log contains the originating IP address
(`remote_ip`), the user's ID (`user_id`), and username (`username`).
Some endpoints (such as `/search`) may make requests to Elasticsearch if using
[advanced search](../../user/search/advanced_search.md). These
additionally log `elasticsearch_calls` and `elasticsearch_duration_s`,
which correspond to:
- `elasticsearch_calls`: Total number of calls to Elasticsearch
- `elasticsearch_duration_s`: Total time taken by Elasticsearch calls
- `elasticsearch_timed_out_count`: Total number of calls to Elasticsearch that
timed out and therefore returned partial results
ActionCable connection and subscription events are also logged to this file and they follow the
previous format. The `method`, `path`, and `format` fields are not applicable, and are always empty.
The ActionCable connection or channel class is used as the `controller`.
```json
{
"method":null,
"path":null,
"format":null,
"controller":"IssuesChannel",
"action":"subscribe",
"status":200,
"time":"2020-05-14T19:46:22.008Z",
"params":[{"key":"project_path","value":"gitlab/gitlab-foss"},{"key":"iid","value":"1"}],
"remote_ip":"127.0.0.1",
"user_id":1,
"username":"admin",
"ua":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:76.0) Gecko/20100101 Firefox/76.0",
"correlation_id":"jSOIEynHCUa",
"duration_s":0.32566
}
```
{{< alert type="note" >}}
If an error occurs, an
`exception` field is included with `class`, `message`, and
`backtrace`. Previous versions included an `error` field instead of
`exception.class` and `exception.message`. For example:
{{< /alert >}}
```json
{
"method": "GET",
"path": "/admin",
"format": "html",
"controller": "Admin::DashboardController",
"action": "index",
"status": 500,
"time": "2019-11-14T13:12:46.156Z",
"params": [],
"remote_ip": "127.0.0.1",
"user_id": 1,
"username": "root",
"ua": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:70.0) Gecko/20100101 Firefox/70.0",
"queue_duration": 274.35,
"correlation_id": "KjDVUhNvvV3",
"queue_duration_s":0.0,
"gitaly_calls":16,
"gitaly_duration_s":0.16,
"redis_calls":115,
"redis_duration_s":0.13,
"correlation_id":"O1SdybnnIq7",
"cpu_s":17.50,
"db_duration_s":0.08,
"view_duration_s":2.39,
"duration_s":20.54,
"pid": 81836,
"worker_id": "puma_0",
"exception.class": "NameError",
"exception.message": "undefined local variable or method `adsf' for #<Admin::DashboardController:0x00007ff3c9648588>",
"exception.backtrace": [
"app/controllers/admin/dashboard_controller.rb:11:in `index'",
"ee/app/controllers/ee/admin/dashboard_controller.rb:14:in `index'",
"ee/lib/gitlab/ip_address_state.rb:10:in `with'",
"ee/app/controllers/ee/application_controller.rb:43:in `set_current_ip_address'",
"lib/gitlab/session.rb:11:in `with_session'",
"app/controllers/application_controller.rb:450:in `set_session_storage'",
"app/controllers/application_controller.rb:444:in `set_locale'",
"ee/lib/gitlab/jira/middleware.rb:19:in `call'"
]
}
```
## `production.log`
This log is located:
- In the `/var/log/gitlab/gitlab-rails/production.log` file on Linux package installations.
- In the `/home/git/gitlab/log/production.log` file on self-compiled installations.
It contains information about all performed requests. You can see the
URL and type of request, IP address, and which parts of code were
involved in serving the request. You can also see all SQL
queries performed, and how much time each took. This log is mostly
useful for GitLab contributors and developers. Include the relevant part of this log
file when you're reporting bugs. For example:
```plaintext
Started GET "/gitlabhq/yaml_db/tree/master" for 168.111.56.1 at 2015-02-12 19:34:53 +0200
Processing by Projects::TreeController#show as HTML
Parameters: {"project_id"=>"gitlabhq/yaml_db", "id"=>"master"}
... [CUT OUT]
Namespaces"."created_at" DESC, "namespaces"."id" DESC LIMIT 1 [["id", 26]]
CACHE (0.0ms) SELECT "members".* FROM "members" WHERE "members"."source_type" = 'Project' AND "members"."type" IN ('ProjectMember') AND "members"."source_id" = $1 AND "members"."source_type" = $2 AND "members"."user_id" = 1 ORDER BY "members"."created_at" DESC, "members"."id" DESC LIMIT 1 [["source_id", 18], ["source_type", "Project"]]
CACHE (0.0ms) SELECT "members".* FROM "members" WHERE "members"."source_type" = 'Project' AND "members".
(1.4ms) SELECT COUNT(*) FROM "merge_requests" WHERE "merge_requests"."target_project_id" = $1 AND ("merge_requests"."state" IN ('opened','reopened')) [["target_project_id", 18]]
Rendered layouts/nav/_project.html.haml (28.0ms)
Rendered layouts/_collapse_button.html.haml (0.2ms)
Rendered layouts/_flash.html.haml (0.1ms)
Rendered layouts/_page.html.haml (32.9ms)
Completed 200 OK in 166ms (Views: 117.4ms | ActiveRecord: 27.2ms)
```
In this example, the server processed an HTTP request with URL
`/gitlabhq/yaml_db/tree/master` from IP `168.111.56.1` at `2015-02-12 19:34:53 +0200`.
The request was processed by `Projects::TreeController`.
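Because this log is plain text, `grep` works on it directly. As a sketch, the pipeline below finds requests that ended in a server error by matching the `Completed <status>` summary lines shown above. The `printf` lines are demo input; in practice, point `grep` at `/var/log/gitlab/gitlab-rails/production.log`:

```shell
# Demo input below; in practice run:
#   grep '^Completed 5' /var/log/gitlab/gitlab-rails/production.log
printf '%s\n' \
  'Completed 200 OK in 166ms (Views: 117.4ms | ActiveRecord: 27.2ms)' \
  'Completed 500 Internal Server Error in 2005ms (ActiveRecord: 971.6ms)' |
grep '^Completed 5'
```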
## `api_json.log`
This log is located:
- In the `/var/log/gitlab/gitlab-rails/api_json.log` file on Linux package installations.
- In the `/home/git/gitlab/log/api_json.log` file on self-compiled installations.
- On the Webservice pods under the `subcomponent="api_json"` key on Helm chart installations.
It helps you see requests made directly to the API. For example:
```json
{
"time":"2018-10-29T12:49:42.123Z",
"severity":"INFO",
"duration":709.08,
"db":14.59,
"view":694.49,
"status":200,
"method":"GET",
"path":"/api/v4/projects",
"params":[{"key":"action","value":"git-upload-pack"},{"key":"changes","value":"_any"},{"key":"key_id","value":"secret"},{"key":"secret_token","value":"[FILTERED]"}],
"host":"localhost",
"remote_ip":"::1",
"ua":"Ruby",
"route":"/api/:version/projects",
"user_id":1,
"username":"root",
"queue_duration":100.31,
"gitaly_calls":30,
"gitaly_duration":5.36,
"pid": 81836,
"worker_id": "puma_0",
...
}
```
This entry shows an internal endpoint accessed to check whether an
associated SSH key can download the project in question by using a `git fetch` or
`git clone`. In this example, we see:
- `duration`: Total time in milliseconds to retrieve the request
- `queue_duration`: Total time in milliseconds the request was queued inside GitLab Workhorse
- `method`: The HTTP method used to make the request
- `path`: The relative path of the query
- `params`: Key-value pairs passed in a query string or HTTP body (sensitive parameters, such as passwords and tokens, are filtered out)
- `ua`: The User-Agent of the requester
{{< alert type="note" >}}
As of [`Grape Logging`](https://github.com/aserafin/grape_logging) v1.8.4,
the `view_duration_s` is calculated by [`duration_s - db_duration_s`](https://github.com/aserafin/grape_logging/blob/v1.8.4/lib/grape_logging/middleware/request_logger.rb#L117-L119).
Therefore, `view_duration_s` can be affected by multiple different factors, like read-write
process on Redis or external HTTP, not only the serialization process.
{{< /alert >}}
## `application.log` (deprecated)
{{< history >}}
- [Deprecated](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/111046) in GitLab 15.10.
{{< /history >}}
This log is located:
- In the `/var/log/gitlab/gitlab-rails/application.log` file on Linux package installations.
- In the `/home/git/gitlab/log/application.log` file on self-compiled installations.
It contains a less structured version of the logs in
[`application_json.log`](#application_jsonlog), like this example:
```plaintext
October 06, 2014 11:56: User "Administrator" (admin@example.com) was created
October 06, 2014 11:56: Documentcloud created a new project "Documentcloud / Underscore"
October 06, 2014 11:56: Gitlab Org created a new project "Gitlab Org / Gitlab Ce"
October 07, 2014 11:25: User "Claudie Hodkiewicz" (nasir_stehr@olson.co.uk) was removed
October 07, 2014 11:25: Project "project133" was removed
```
## `application_json.log`
This log is located:
- In the `/var/log/gitlab/gitlab-rails/application_json.log` file on Linux package installations.
- In the `/home/git/gitlab/log/application_json.log` file on self-compiled installations.
- On the Sidekiq and Webservice pods under the `subcomponent="application_json"` key on Helm chart installations.
It helps you discover events happening in your instance such as user creation
and project deletion. For example:
```json
{
"severity":"INFO",
"time":"2020-01-14T13:35:15.466Z",
"correlation_id":"3823a1550b64417f9c9ed8ee0f48087e",
"message":"User \"Administrator\" (admin@example.com) was created"
}
{
"severity":"INFO",
"time":"2020-01-14T13:35:15.466Z",
"correlation_id":"78e3df10c9a18745243d524540bd5be4",
"message":"Project \"project133\" was removed"
}
```
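Because each line is a standalone JSON object, you can filter entries with a few lines of scripting. The following sketch is illustrative only; the sample lines mirror the example entries shown previously.

```python
import json

# Sample lines mirroring the application_json.log example entries.
log_lines = [
    '{"severity":"INFO","time":"2020-01-14T13:35:15.466Z","correlation_id":"3823a1550b64417f9c9ed8ee0f48087e","message":"User \\"Administrator\\" (admin@example.com) was created"}',
    '{"severity":"INFO","time":"2020-01-14T13:35:15.466Z","correlation_id":"78e3df10c9a18745243d524540bd5be4","message":"Project \\"project133\\" was removed"}',
]

# Find the message for a given correlation ID.
target = "78e3df10c9a18745243d524540bd5be4"
for line in log_lines:
    entry = json.loads(line)
    if entry["correlation_id"] == target:
        print(entry["message"])
```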
## `integrations_json.log`
This log is located:
- In the `/var/log/gitlab/gitlab-rails/integrations_json.log` file on Linux package installations.
- In the `/home/git/gitlab/log/integrations_json.log` file on self-compiled installations.
- On the Sidekiq and Webservice pods under the `subcomponent="integrations_json"` key on Helm chart installations.
It contains information about [integration](../../user/project/integrations/_index.md)
activities, such as Jira, Asana, and irker services. It uses JSON format,
like this example:
```json
{
"severity":"ERROR",
"time":"2018-09-06T14:56:20.439Z",
"service_class":"Integrations::Jira",
"project_id":8,
"project_path":"h5bp/html5-boilerplate",
"message":"Error sending message",
"client_url":"http://jira.gitlab.com:8080",
"error":"execution expired"
}
{
"severity":"INFO",
"time":"2018-09-06T17:15:16.365Z",
"service_class":"Integrations::Jira",
"project_id":3,
"project_path":"namespace2/project2",
"message":"Successfully posted",
"client_url":"http://jira.example.com"
}
```
## `kubernetes.log` (deprecated)
{{< history >}}
- [Deprecated](https://gitlab.com/groups/gitlab-org/configure/-/epics/8) in GitLab 14.5.
{{< /history >}}
This log is located:
- In the `/var/log/gitlab/gitlab-rails/kubernetes.log` file on Linux package installations.
- In the `/home/git/gitlab/log/kubernetes.log` file on self-compiled installations.
- On the Sidekiq pods under the `subcomponent="kubernetes"` key on Helm chart installations.
It logs information related to [certificate-based clusters](../../user/project/clusters/_index.md), such as connectivity errors. Each line contains JSON that can be ingested by services like Elasticsearch and Splunk.
## `git_json.log`
This log is located:
- In the `/var/log/gitlab/gitlab-rails/git_json.log` file on Linux package installations.
- In the `/home/git/gitlab/log/git_json.log` file on self-compiled installations.
- On the Sidekiq pods under the `subcomponent="git_json"` key on Helm chart installations.
GitLab has to interact with Git repositories, but in some rare cases
something can go wrong. If this happens, you need to know exactly what
happened. This log file contains all failed requests from GitLab to Git
repositories. In the majority of cases this file is useful for developers
only. For example:
```json
{
"severity":"ERROR",
"time":"2019-07-19T22:16:12.528Z",
"correlation_id":"FeGxww5Hj64",
"message":"Command failed [1]: /usr/bin/git --git-dir=/Users/vsizov/gitlab-development-kit/gitlab/tmp/tests/gitlab-satellites/group184/gitlabhq/.git --work-tree=/Users/vsizov/gitlab-development-kit/gitlab/tmp/tests/gitlab-satellites/group184/gitlabhq merge --no-ff -mMerge branch 'feature_conflict' into 'feature' source/feature_conflict\n\nerror: failed to push some refs to '/Users/vsizov/gitlab-development-kit/repositories/gitlabhq/gitlab_git.git'"
}
```
## `audit_json.log`
{{< details >}}
- Tier: Free, Premium, Ultimate
- Offering: GitLab Self-Managed, GitLab Dedicated
{{< /details >}}
{{< alert type="note" >}}
GitLab Free tracks a small number of different audit events.
GitLab Premium tracks many more.
{{< /alert >}}
This log is located:
- In the `/var/log/gitlab/gitlab-rails/audit_json.log` file on Linux package installations.
- In the `/home/git/gitlab/log/audit_json.log` file on self-compiled installations.
- On the Sidekiq and Webservice pods under the `subcomponent="audit_json"` key on Helm chart installations.
Changes to group or project settings and memberships (`target_details`)
are logged to this file. For example:
```json
{
"severity":"INFO",
"time":"2018-10-17T17:38:22.523Z",
"author_id":3,
"entity_id":2,
"entity_type":"Project",
"change":"visibility",
"from":"Private",
"to":"Public",
"author_name":"John Doe4",
"target_id":2,
"target_type":"Project",
"target_details":"namespace2/project2"
}
```
## Sidekiq logs
For Linux package installations, some Sidekiq logs are in `/var/log/gitlab/sidekiq/current`,
described in the following sections.
### `sidekiq.log`
{{< history >}}
- The default log format for Helm chart installations [changed from `text` to `json`](https://gitlab.com/gitlab-org/charts/gitlab/-/merge_requests/3169) in GitLab 16.0 and later.
{{< /history >}}
This log is located:
- In the `/var/log/gitlab/sidekiq/current` file on Linux package installations.
- In the `/home/git/gitlab/log/sidekiq.log` file on self-compiled installations.
GitLab uses background jobs for processing tasks which can take a long
time. All information about processing these jobs is written to this
file. For example:
```json
{
"severity":"INFO",
"time":"2018-04-03T22:57:22.071Z",
"queue":"cronjob:update_all_mirrors",
"args":[],
"class":"UpdateAllMirrorsWorker",
"retry":false,
"queue_namespace":"cronjob",
"jid":"06aeaa3b0aadacf9981f368e",
"created_at":"2018-04-03T22:57:21.930Z",
"enqueued_at":"2018-04-03T22:57:21.931Z",
"pid":10077,
"worker_id":"sidekiq_0",
"message":"UpdateAllMirrorsWorker JID-06aeaa3b0aadacf9981f368e: done: 0.139 sec",
"job_status":"done",
"duration":0.139,
"completed_at":"2018-04-03T22:57:22.071Z",
"db_duration":0.05,
"db_duration_s":0.0005,
"gitaly_duration":0,
"gitaly_calls":0
}
```
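Because each completed job is logged as a JSON object with a `duration` field, you can scan the log for slow jobs with a short script. This is an illustrative sketch; the `SlowExampleWorker` entry and the threshold value are made up for the example.

```python
import json

# Sample entries, abbreviated from the sidekiq.log format shown previously.
# The second entry is a hypothetical slow job.
log_lines = [
    '{"class":"UpdateAllMirrorsWorker","jid":"06aeaa3b0aadacf9981f368e","job_status":"done","duration":0.139}',
    '{"class":"SlowExampleWorker","jid":"ffffffffffffffffffffffff","job_status":"done","duration":42.5}',
]

# Report completed jobs that took longer than a chosen threshold (seconds).
threshold = 10.0
slow = [
    entry["class"]
    for entry in map(json.loads, log_lines)
    if entry.get("job_status") == "done" and entry.get("duration", 0) > threshold
]
print(slow)
```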
Instead of JSON logs, you can opt to generate text logs for Sidekiq. For example:
```plaintext
2023-05-16T16:08:55.272Z pid=82525 tid=23rl INFO: Initializing websocket
2023-05-16T16:08:55.279Z pid=82525 tid=23rl INFO: Booted Rails 6.1.7.2 application in production environment
2023-05-16T16:08:55.279Z pid=82525 tid=23rl INFO: Running in ruby 3.0.5p211 (2022-11-24 revision ba5cf0f7c5) [arm64-darwin22]
2023-05-16T16:08:55.279Z pid=82525 tid=23rl INFO: See LICENSE and the LGPL-3.0 for licensing details.
2023-05-16T16:08:55.279Z pid=82525 tid=23rl INFO: Upgrade to Sidekiq Pro for more features and support: https://sidekiq.org
2023-05-16T16:08:55.286Z pid=82525 tid=7p4t INFO: Cleaning working queues
2023-05-16T16:09:06.043Z pid=82525 tid=7p7d class=ScheduleMergeRequestCleanupRefsWorker jid=efcc73f169c09a514b06da3f INFO: start
2023-05-16T16:09:06.050Z pid=82525 tid=7p7d class=ScheduleMergeRequestCleanupRefsWorker jid=efcc73f169c09a514b06da3f INFO: arguments: []
2023-05-16T16:09:06.065Z pid=82525 tid=7p81 class=UserStatusCleanup::BatchWorker jid=e279aa6409ac33031a314822 INFO: start
2023-05-16T16:09:06.066Z pid=82525 tid=7p81 class=UserStatusCleanup::BatchWorker jid=e279aa6409ac33031a314822 INFO: arguments: []
```
For Linux package installations, add the configuration option:
```ruby
sidekiq['log_format'] = 'text'
```
For self-compiled installations, edit the `gitlab.yml` and set the Sidekiq
`log_format` configuration option:
```yaml
## Sidekiq
sidekiq:
log_format: text
```
### `sidekiq_client.log`
This log is located:
- In the `/var/log/gitlab/gitlab-rails/sidekiq_client.log` file on Linux package installations.
- In the `/home/git/gitlab/log/sidekiq_client.log` file on self-compiled installations.
- On the Webservice pods under the `subcomponent="sidekiq_client"` key on Helm chart installations.
This file contains logging information about jobs before Sidekiq starts
processing them, for example while they are being enqueued.
This log file follows the same structure as
[`sidekiq.log`](#sidekiqlog), so it is structured as JSON unless
you configured text logging for Sidekiq as described previously.
## `gitlab-shell.log`
GitLab Shell is used by GitLab for executing Git commands and providing SSH
access to Git repositories.
Information containing `git-{upload-pack,receive-pack}` requests is at
`/var/log/gitlab/gitlab-shell/gitlab-shell.log`. Information about hooks to
GitLab Shell from Gitaly is at `/var/log/gitlab/gitaly/current`.
Example log entries for `/var/log/gitlab/gitlab-shell/gitlab-shell.log`:
```json
{
"duration_ms": 74.104,
"level": "info",
"method": "POST",
"msg": "Finished HTTP request",
"time": "2020-04-17T20:28:46Z",
"url": "http://127.0.0.1:8080/api/v4/internal/allowed"
}
{
"command": "git-upload-pack",
"git_protocol": "",
"gl_project_path": "root/example",
"gl_repository": "project-1",
"level": "info",
"msg": "executing git command",
"time": "2020-04-17T20:28:46Z",
"user_id": "user-1",
"username": "root"
}
```
Example log entries for `/var/log/gitlab/gitaly/current`:
```json
{
"method": "POST",
"url": "http://127.0.0.1:8080/api/v4/internal/allowed",
"duration": 0.058012959,
"gitaly_embedded": true,
"pid": 16636,
"level": "info",
"msg": "finished HTTP request",
"time": "2020-04-17T20:29:08+00:00"
}
{
"method": "POST",
"url": "http://127.0.0.1:8080/api/v4/internal/pre_receive",
"duration": 0.031022552,
"gitaly_embedded": true,
"pid": 16636,
"level": "info",
"msg": "finished HTTP request",
"time": "2020-04-17T20:29:08+00:00"
}
```
## Gitaly logs
This file is in `/var/log/gitlab/gitaly/current` and is produced by [runit](https://smarden.org/runit/).
`runit` is packaged with the Linux package, and a brief explanation of its purpose
is available [in the Linux package documentation](https://docs.gitlab.com/omnibus/architecture/#runit).
[Log files are rotated](https://smarden.org/runit/svlogd.8), renamed in
Unix timestamp format, and `gzip`-compressed (like `@1584057562.s`).
### `grpc.log`
This file is at `/var/log/gitlab/gitlab-rails/grpc.log` for Linux
package installations. It contains native [gRPC](https://grpc.io/) logging used by Gitaly.
### `gitaly_hooks.log`
This file is at `/var/log/gitlab/gitaly/gitaly_hooks.log` and is
produced by the `gitaly-hooks` command. It also records failures
encountered while processing responses from the GitLab API.
## Puma logs
### `puma_stdout.log`
This log is located:
- In the `/var/log/gitlab/puma/puma_stdout.log` file on Linux package installations.
- In the `/home/git/gitlab/log/puma_stdout.log` file on self-compiled installations.
### `puma_stderr.log`
This log is located:
- In the `/var/log/gitlab/puma/puma_stderr.log` file on Linux package installations.
- In the `/home/git/gitlab/log/puma_stderr.log` file on self-compiled installations.
## `repocheck.log`
This log is located:
- In the `/var/log/gitlab/gitlab-rails/repocheck.log` file on Linux package installations.
- In the `/home/git/gitlab/log/repocheck.log` file on self-compiled installations.
It logs information whenever a [repository check is run](../repository_checks.md)
on a project.
## `importer.log`
This log is located:
- In the `/var/log/gitlab/gitlab-rails/importer.log` file on Linux package installations.
- In the `/home/git/gitlab/log/importer.log` file on self-compiled installations.
- On the Sidekiq pods under the `subcomponent="importer"` key on Helm chart installations.
This file logs the progress of [project imports and migrations](../../user/project/import/_index.md).
## `exporter.log`
This log is located:
- In the `/var/log/gitlab/gitlab-rails/exporter.log` file on Linux package installations.
- In the `/home/git/gitlab/log/exporter.log` file on self-compiled installations.
- On the Sidekiq and Webservice pods under the `subcomponent="exporter"` key on Helm chart installations.
It logs the progress of the export process.
## `features_json.log`
This log is located:
- In the `/var/log/gitlab/gitlab-rails/features_json.log` file on Linux package installations.
- In the `/home/git/gitlab/log/features_json.log` file on self-compiled installations.
- On the Sidekiq and Webservice pods under the `subcomponent="features_json"` key on Helm chart installations.
Modification events for GitLab feature flags in development
are recorded in this file. For example:
```json
{"severity":"INFO","time":"2020-11-24T02:30:59.860Z","correlation_id":null,"key":"cd_auto_rollback","action":"enable","extra.thing":"true"}
{"severity":"INFO","time":"2020-11-24T02:31:29.108Z","correlation_id":null,"key":"cd_auto_rollback","action":"enable","extra.thing":"true"}
{"severity":"INFO","time":"2020-11-24T02:31:29.129Z","correlation_id":null,"key":"cd_auto_rollback","action":"disable","extra.thing":"false"}
{"severity":"INFO","time":"2020-11-24T02:31:29.177Z","correlation_id":null,"key":"cd_auto_rollback","action":"enable","extra.thing":"Project:1"}
{"severity":"INFO","time":"2020-11-24T02:31:29.183Z","correlation_id":null,"key":"cd_auto_rollback","action":"disable","extra.thing":"Project:1"}
{"severity":"INFO","time":"2020-11-24T02:31:29.188Z","correlation_id":null,"key":"cd_auto_rollback","action":"enable_percentage_of_time","extra.percentage":"50"}
{"severity":"INFO","time":"2020-11-24T02:31:29.193Z","correlation_id":null,"key":"cd_auto_rollback","action":"disable_percentage_of_time"}
{"severity":"INFO","time":"2020-11-24T02:31:29.198Z","correlation_id":null,"key":"cd_auto_rollback","action":"enable_percentage_of_actors","extra.percentage":"50"}
{"severity":"INFO","time":"2020-11-24T02:31:29.203Z","correlation_id":null,"key":"cd_auto_rollback","action":"disable_percentage_of_actors"}
{"severity":"INFO","time":"2020-11-24T02:31:29.329Z","correlation_id":null,"key":"cd_auto_rollback","action":"remove"}
```
## `ci_resource_groups_json.log`
{{< history >}}
- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/384180) in GitLab 15.9.
{{< /history >}}
This log is located:
- In the `/var/log/gitlab/gitlab-rails/ci_resource_groups_json.log` file on Linux package installations.
- In the `/home/git/gitlab/log/ci_resource_group_json.log` file on self-compiled installations.
- On the Sidekiq and Webservice pods under the `subcomponent="ci_resource_groups_json"` key on Helm chart installations.
It contains information about [resource group](../../ci/resource_groups/_index.md) acquisition. For example:
```json
{"severity":"INFO","time":"2023-02-10T23:02:06.095Z","correlation_id":"01GRYS10C2DZQ9J1G12ZVAD4YD","resource_group_id":1,"processable_id":288,"message":"attempted to assign resource to processable","success":true}
{"severity":"INFO","time":"2023-02-10T23:02:08.945Z","correlation_id":"01GRYS138MYEG32C0QEWMC4BDM","resource_group_id":1,"processable_id":288,"message":"attempted to release resource from processable","success":true}
```
The examples show the `resource_group_id`, `processable_id`, `message`, and `success` fields for each entry.
## `auth.log`
This log is located:
- In the `/var/log/gitlab/gitlab-rails/auth.log` file on Linux package installations.
- In the `/home/git/gitlab/log/auth.log` file on self-compiled installations.
This log records:
- Requests over the [Rate Limit](../settings/rate_limits_on_raw_endpoints.md) on raw endpoints.
- [Protected paths](../settings/protected_paths.md) abusive requests.
- User ID and username, if available.
## `auth_json.log`
This log is located:
- In the `/var/log/gitlab/gitlab-rails/auth_json.log` file on Linux package installations.
- In the `/home/git/gitlab/log/auth_json.log` file on self-compiled installations.
- On the Sidekiq and Webservice pods under the `subcomponent="auth_json"` key on Helm chart installations.
This file contains the JSON version of the logs in `auth.log`, for example:
```json
{
"severity":"ERROR",
"time":"2023-04-19T22:14:25.893Z",
"correlation_id":"01GYDSAKAN2SPZPAMJNRWW5H8S",
"message":"Rack_Attack",
"env":"blocklist",
"remote_ip":"x.x.x.x",
"request_method":"GET",
"path":"/group/project.git/info/refs?service=git-upload-pack"
}
```
## `graphql_json.log`
This log is located:
- In the `/var/log/gitlab/gitlab-rails/graphql_json.log` file on Linux package installations.
- In the `/home/git/gitlab/log/graphql_json.log` file on self-compiled installations.
- On the Sidekiq and Webservice pods under the `subcomponent="graphql_json"` key on Helm chart installations.
GraphQL queries are recorded in the file. For example:
```json
{"query_string":"query IntrospectionQuery{__schema {queryType { name },mutationType { name }}}...(etc)","variables":{"a":1,"b":2},"complexity":181,"depth":1,"duration_s":7}
```
## `clickhouse.log`
{{< history >}}
- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/133371) in GitLab 16.5.
{{< /history >}}
This log is located:
- In the `/var/log/gitlab/gitlab-rails/clickhouse.log` file on Linux package installations.
- In the `/home/git/gitlab/log/clickhouse.log` file on self-compiled installations.
- On the Sidekiq and Webservice pods under the `subcomponent="clickhouse"` key.
The `clickhouse.log` file logs information related to the
[ClickHouse database client](../../integration/clickhouse.md) in GitLab.
## `migrations.log`
This log is located:
- In the `/var/log/gitlab/gitlab-rails/migrations.log` file on Linux package installations.
- In the `/home/git/gitlab/log/migrations.log` file on self-compiled installations.
This file logs the progress of [database migrations](../raketasks/maintenance.md#display-status-of-database-migrations).
## `mail_room_json.log` (default)
This log is located:
- In the `/var/log/gitlab/mailroom/current` file on Linux package installations.
- In the `/home/git/gitlab/log/mail_room_json.log` file on self-compiled installations.
This structured log file records internal activity in the `mail_room` gem.
Its name and path are configurable, so they might not match the ones
documented previously.
## `web_hooks.log`
{{< history >}}
- Introduced in GitLab 16.3.
{{< /history >}}
This log is located:
- In the `/var/log/gitlab/gitlab-rails/web_hooks.log` file on Linux package installations.
- In the `/home/git/gitlab/log/web_hooks.log` file on self-compiled installations.
- On the Sidekiq pods under the `subcomponent="web_hooks"` key on Helm chart installations.
Back-off, disablement, and re-enablement events for webhooks are recorded in this file. For example:
```json
{"severity":"INFO","time":"2020-11-24T02:30:59.860Z","hook_id":12,"action":"backoff","disabled_until":"2020-11-24T04:30:59.860Z","recent_failures":2}
{"severity":"INFO","time":"2020-11-24T02:30:59.860Z","hook_id":12,"action":"disable","disabled_until":null,"recent_failures":100}
{"severity":"INFO","time":"2020-11-24T02:30:59.860Z","hook_id":12,"action":"enable","disabled_until":null,"recent_failures":0}
```
## Reconfigure logs
Reconfigure log files are in `/var/log/gitlab/reconfigure` for Linux package installations. Self-compiled installations
don't have reconfigure logs. A reconfigure log is populated whenever `gitlab-ctl reconfigure` is run manually or as part
of an upgrade.
Reconfigure log files are named according to the UNIX timestamp of when the reconfigure
was initiated, such as `1509705644.log`.
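To find out when a given reconfigure ran, convert the timestamp in the file name. A minimal sketch, using the example file name:

```python
from datetime import datetime, timezone

# The reconfigure log file name encodes the start time as a UNIX timestamp.
filename = "1509705644.log"
timestamp = int(filename.removesuffix(".log"))
started_at = datetime.fromtimestamp(timestamp, tz=timezone.utc)
print(started_at.strftime("%Y-%m-%d %H:%M:%S UTC"))  # 2017-11-03 10:40:44 UTC
```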
## `sidekiq_exporter.log` and `web_exporter.log`
If Prometheus metrics and the Sidekiq Exporter are both enabled, Sidekiq
starts a web server and listens on the defined port (default:
`8082`). By default, Sidekiq Exporter access logs are disabled but can
be enabled:
- Use the `sidekiq['exporter_log_enabled'] = true` option in `/etc/gitlab/gitlab.rb` on Linux package installations.
- Use the `sidekiq_exporter.log_enabled` option in `gitlab.yml` on self-compiled installations.
When enabled, depending on your installation method, this file is located at:
- `/var/log/gitlab/gitlab-rails/sidekiq_exporter.log` on Linux package installations.
- `/home/git/gitlab/log/sidekiq_exporter.log` on self-compiled installations.
If Prometheus metrics and the Web Exporter are both enabled, Puma
starts a web server and listens on the defined port (default: `8083`), and access logs
are generated in a location based on your installation method:
- `/var/log/gitlab/gitlab-rails/web_exporter.log` on Linux package installations.
- `/home/git/gitlab/log/web_exporter.log` on self-compiled installations.
## `database_load_balancing.log`
{{< details >}}
- Tier: Premium, Ultimate
- Offering: GitLab Self-Managed
{{< /details >}}
Contains details of GitLab [Database Load Balancing](../postgresql/database_load_balancing.md).
This log is located:
- In the `/var/log/gitlab/gitlab-rails/database_load_balancing.log` file on Linux package installations.
- In the `/home/git/gitlab/log/database_load_balancing.log` file on self-compiled installations.
- On the Sidekiq and Webservice pods under the `subcomponent="database_load_balancing"` key on Helm chart installations.
## `zoekt.log`
{{< details >}}
- Tier: Premium, Ultimate
- Offering: GitLab Self-Managed
{{< /details >}}
{{< history >}}
- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/110980) in GitLab 15.9.
{{< /history >}}
This file logs information related to [exact code search](../../user/search/exact_code_search.md).
This log is located:
- In the `/var/log/gitlab/gitlab-rails/zoekt.log` file on Linux package installations.
- In the `/home/git/gitlab/log/zoekt.log` file on self-compiled installations.
- On the Sidekiq and Webservice pods under the `subcomponent="zoekt"` key on Helm chart installations.
## `elasticsearch.log`
{{< details >}}
- Tier: Premium, Ultimate
- Offering: GitLab Self-Managed
{{< /details >}}
This file logs information related to the Elasticsearch Integration, including
errors during indexing or searching Elasticsearch.
This log is located:
- In the `/var/log/gitlab/gitlab-rails/elasticsearch.log` file on Linux package installations.
- In the `/home/git/gitlab/log/elasticsearch.log` file on self-compiled installations.
- On the Sidekiq and Webservice pods under the `subcomponent="elasticsearch"` key on Helm chart installations.
Each line contains JSON that can be ingested by services like Elasticsearch and Splunk.
Line breaks have been added to the following example line for clarity:
```json
{
"severity":"DEBUG",
"time":"2019-10-17T06:23:13.227Z",
"correlation_id":null,
"message":"redacted_search_result",
"class_name":"Milestone",
"id":2,
"ability":"read_milestone",
"current_user_id":2,
"query":"project"
}
```
## `exceptions_json.log`
This file logs the information about exceptions being tracked by
`Gitlab::ErrorTracking`, which provides a standard and consistent way of
processing rescued exceptions.
This log is located:
- In the `/var/log/gitlab/gitlab-rails/exceptions_json.log` file on Linux package installations.
- In the `/home/git/gitlab/log/exceptions_json.log` file on self-compiled installations.
- On the Sidekiq and Webservice pods under the `subcomponent="exceptions_json"` key on Helm chart installations.
Each line contains JSON that can be ingested by Elasticsearch. For example:
```json
{
"severity": "ERROR",
"time": "2019-12-17T11:49:29.485Z",
"correlation_id": "AbDVUrrTvM1",
"extra.project_id": 55,
"extra.relation_key": "milestones",
"extra.relation_index": 1,
"exception.class": "NoMethodError",
"exception.message": "undefined method `strong_memoize' for #<Gitlab::ImportExport::RelationFactory:0x00007fb5d917c4b0>",
"exception.backtrace": [
"lib/gitlab/import_export/relation_factory.rb:329:in `unique_relation?'",
"lib/gitlab/import_export/relation_factory.rb:345:in `find_or_create_object!'"
]
}
```
## `service_measurement.log`
This log is located:
- In the `/var/log/gitlab/gitlab-rails/service_measurement.log` file on Linux package installations.
- In the `/home/git/gitlab/log/service_measurement.log` file on self-compiled installations.
- On the Sidekiq and Webservice pods under the `subcomponent="service_measurement"` key on Helm chart installations.
It contains a single structured log entry for each service execution,
with measurements such as the number of SQL calls, `execution_time`, `gc_stats`, and memory usage.
For example:
```json
{ "severity":"INFO", "time":"2020-04-22T16:04:50.691Z","correlation_id":"04f1366e-57a1-45b8-88c1-b00b23dc3616","class":"Projects::ImportExport::ExportService","current_user":"John Doe","project_full_path":"group1/test-export","file_path":"/path/to/archive","gc_stats":{"count":{"before":127,"after":127,"diff":0},"heap_allocated_pages":{"before":10369,"after":10369,"diff":0},"heap_sorted_length":{"before":10369,"after":10369,"diff":0},"heap_allocatable_pages":{"before":0,"after":0,"diff":0},"heap_available_slots":{"before":4226409,"after":4226409,"diff":0},"heap_live_slots":{"before":2542709,"after":2641420,"diff":98711},"heap_free_slots":{"before":1683700,"after":1584989,"diff":-98711},"heap_final_slots":{"before":0,"after":0,"diff":0},"heap_marked_slots":{"before":2542704,"after":2542704,"diff":0},"heap_eden_pages":{"before":10369,"after":10369,"diff":0},"heap_tomb_pages":{"before":0,"after":0,"diff":0},"total_allocated_pages":{"before":10369,"after":10369,"diff":0},"total_freed_pages":{"before":0,"after":0,"diff":0},"total_allocated_objects":{"before":24896308,"after":24995019,"diff":98711},"total_freed_objects":{"before":22353599,"after":22353599,"diff":0},"malloc_increase_bytes":{"before":140032,"after":6650240,"diff":6510208},"malloc_increase_bytes_limit":{"before":25804104,"after":25804104,"diff":0},"minor_gc_count":{"before":94,"after":94,"diff":0},"major_gc_count":{"before":33,"after":33,"diff":0},"remembered_wb_unprotected_objects":{"before":34284,"after":34284,"diff":0},"remembered_wb_unprotected_objects_limit":{"before":68568,"after":68568,"diff":0},"old_objects":{"before":2404725,"after":2404725,"diff":0},"old_objects_limit":{"before":4809450,"after":4809450,"diff":0},"oldmalloc_increase_bytes":{"before":140032,"after":6650240,"diff":6510208},"oldmalloc_increase_bytes_limit":{"before":68537556,"after":68537556,"diff":0}},"time_to_finish":0.12298400001600385,"number_of_sql_calls":70,"memory_usage":"0.0 MiB","label":"process_48616"}
```
## `geo.log`
{{< details >}}
- Tier: Premium, Ultimate
- Offering: GitLab Self-Managed
{{< /details >}}
This log is located:
- In the `/var/log/gitlab/gitlab-rails/geo.log` file on Linux package installations.
- In the `/home/git/gitlab/log/geo.log` file on self-compiled installations.
- On the Sidekiq and Webservice pods under the `subcomponent="geo"` key on Helm chart installations.
This file contains information about when Geo attempts to sync repositories
and files. Each line in the file contains a separate JSON entry that can be
ingested into services such as Elasticsearch or Splunk.
For example:
```json
{"severity":"INFO","time":"2017-08-06T05:40:16.104Z","message":"Repository update","project_id":1,"source":"repository","resync_repository":true,"resync_wiki":true,"class":"Gitlab::Geo::LogCursor::Daemon","cursor_delay_s":0.038}
```
This message shows that Geo detected that a repository update was needed for project `1`.
## `update_mirror_service_json.log`
This log is located:
- In the `/var/log/gitlab/gitlab-rails/update_mirror_service_json.log` file on Linux package installations.
- In the `/home/git/gitlab/log/update_mirror_service_json.log` file on self-compiled installations.
- On the Sidekiq pods under the `subcomponent="update_mirror_service_json"` key on Helm chart installations.
This file contains information about LFS errors that occurred during project mirroring.
While we work to move other project mirroring errors into this log, the [general log](#productionlog)
can be used.
```json
{
"severity":"ERROR",
"time":"2020-07-28T23:29:29.473Z",
"correlation_id":"5HgIkCJsO53",
"user_id":"x",
"project_id":"x",
"import_url":"https://mirror-source/group/project.git",
"error_message":"The LFS objects download list couldn't be imported. Error: Unauthorized"
}
```
## `llm.log`
{{< details >}}
- Tier: Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated
{{< /details >}}
{{< history >}}
- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/120506) in GitLab 16.0.
{{< /history >}}
The `llm.log` file logs information related to
[AI features](../../user/gitlab_duo/_index.md). Logging includes information about AI events.
### LLM input and output logging
{{< history >}}
- [Introduced](https://gitlab.com/groups/gitlab-org/-/epics/13401) in GitLab 17.2 [with a flag](../feature_flags/_index.md) named `expanded_ai_logging`. Disabled by default.
{{< /history >}}
{{< alert type="flag" >}}
The availability of this feature is controlled by a feature flag.
For more information, see the history.
This feature is available for testing, but not ready for production use.
{{< /alert >}}
To log the LLM prompt input and response output, enable the `expanded_ai_logging` feature flag. This flag is intended for use on GitLab.com only, and not on GitLab Self-Managed instances.
This flag is disabled by default and can be enabled only for GitLab.com,
when you provide consent through a GitLab [Support Ticket](https://about.gitlab.com/support/portal/).
By default, the log does not contain LLM prompt input and response output to support [data retention policies](../../user/gitlab_duo/data_usage.md#data-retention) of AI feature data.
The log file is located at:
- In the `/var/log/gitlab/gitlab-rails/llm.log` file on Linux package installations.
- In the `/home/git/gitlab/log/llm.log` file on self-compiled installations.
- On the Webservice pods under the `subcomponent="llm"` key on Helm chart installations.
## `epic_work_item_sync.log`
{{< details >}}
- Tier: Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated
{{< /details >}}
{{< history >}}
- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/120506) in GitLab 16.9.
{{< /history >}}
The `epic_work_item_sync.log` file logs information related to syncing and migrating epics as work items.
This log is located:
- In the `/var/log/gitlab/gitlab-rails/epic_work_item_sync.log` file on Linux package installations.
- In the `/home/git/gitlab/log/epic_work_item_sync.log` file on self-compiled installations.
- On the Sidekiq and Webservice pods under the `subcomponent="epic_work_item_sync"` key on Helm chart installations.
## `secret_push_protection.log`
{{< details >}}
- Tier: Ultimate
- Offering: GitLab.com, GitLab Dedicated
{{< /details >}}
{{< history >}}
- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/137812) in GitLab 16.7.
{{< /history >}}
The `secret_push_protection.log` file logs information related to the [Secret Push Protection](../../user/application_security/secret_detection/secret_push_protection/_index.md) feature.
This log is located:
- In the `/var/log/gitlab/gitlab-rails/secret_push_protection.log` file on Linux package installations.
- In the `/home/git/gitlab/log/secret_push_protection.log` file on self-compiled installations.
- On the Webservice pods under the `subcomponent="secret_push_protection"` key on Helm chart installations.
## `active_context.log`
{{< details >}}
- Tier: Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed
{{< /details >}}
{{< history >}}
- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/work_items/554925) in GitLab 18.3.
{{< /history >}}
The `active_context.log` file logs information related to embedding pipelines through the
[`ActiveContext` layer](https://handbook.gitlab.com/handbook/engineering/architecture/design-documents/ai_context_abstraction_layer/).
GitLab supports `ActiveContext` code embeddings.
This pipeline handles embedding generation for project code files.
For more information, see [architecture design](https://handbook.gitlab.com/handbook/engineering/architecture/design-documents/codebase_as_chat_context/code_embeddings/).
This log is located:
- In the `/var/log/gitlab/gitlab-rails/active_context.log` file on Linux package installations.
- In the `/home/git/gitlab/log/active_context.log` file on self-compiled installations.
- On the Sidekiq pods under the `subcomponent="activecontext"` key on Helm chart installations.
## Registry logs
For Linux package installations, container registry logs are in `/var/log/gitlab/registry/current`.
## NGINX logs
For Linux package installations, NGINX logs are in:
- `/var/log/gitlab/nginx/gitlab_access.log`: A log of requests made to GitLab
- `/var/log/gitlab/nginx/gitlab_error.log`: A log of NGINX errors for GitLab
- `/var/log/gitlab/nginx/gitlab_pages_access.log`: A log of requests made to Pages static sites
- `/var/log/gitlab/nginx/gitlab_pages_error.log`: A log of NGINX errors for Pages static sites
- `/var/log/gitlab/nginx/gitlab_registry_access.log`: A log of requests made to the container registry
- `/var/log/gitlab/nginx/gitlab_registry_error.log`: A log of NGINX errors for the container registry
- `/var/log/gitlab/nginx/gitlab_mattermost_access.log`: A log of requests made to Mattermost
- `/var/log/gitlab/nginx/gitlab_mattermost_error.log`: A log of NGINX errors for Mattermost
Below is the default GitLab NGINX access log format:
```plaintext
'$remote_addr - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent"'
```
The `$request` and `$http_referer` are
[filtered](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/support/nginx/gitlab)
for sensitive query string parameters such as secret tokens.
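The access log format above can be parsed with a regular expression. The following is a rough sketch (not an official GitLab tool); the field names and the sample line are illustrative, and it assumes the default format shown above has not been customized:

```python
import re

# Regex for the default GitLab NGINX access log format shown above.
ACCESS_LOG_RE = re.compile(
    r'(?P<remote_addr>\S+) - (?P<remote_user>\S+) '
    r'\[(?P<time_local>[^\]]+)\] "(?P<request>[^"]*)" '
    r'(?P<status>\d{3}) (?P<body_bytes_sent>\d+) '
    r'"(?P<http_referer>[^"]*)" "(?P<http_user_agent>[^"]*)"'
)

def parse_access_line(line):
    """Return the parsed fields as a dict, or None if the line does not match."""
    match = ACCESS_LOG_RE.match(line)
    return match.groupdict() if match else None

# Illustrative sample line in the default format.
sample = ('203.0.113.7 - - [22/Apr/2020:17:53:12 +0000] '
          '"GET /users/sign_in HTTP/1.1" 200 312 "-" "curl/7.68.0"')
print(parse_access_line(sample)["status"])  # prints 200
```

Because `$request` and `$http_referer` are filtered as described above, tokens do not appear in the parsed fields.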
## Pages logs
For Linux package installations, Pages logs are in `/var/log/gitlab/gitlab-pages/current`.
For example:
```json
{
"level": "info",
"msg": "GitLab Pages Daemon",
"revision": "52b2899",
"time": "2020-04-22T17:53:12Z",
"version": "1.17.0"
}
{
"level": "info",
"msg": "URL: https://gitlab.com/gitlab-org/gitlab-pages",
"time": "2020-04-22T17:53:12Z"
}
{
"gid": 998,
"in-place": false,
"level": "info",
"msg": "running the daemon as unprivileged user",
"time": "2020-04-22T17:53:12Z",
"uid": 998
}
```
## Product Usage Data log
{{< alert type="note" >}}
We recommend against using the raw logs to analyze feature usage, because the data quality has not yet been certified for accuracy.
The list of events can change in each version based on new features or changes to existing features. Certified in-product adoption reports will be available after the data is ready for analysis.
{{< /alert >}}
This log is located:
- In the `/var/log/gitlab/gitlab-rails/product_usage_data.log` file on Linux package installations.
- In the `/home/git/gitlab/log/product_usage_data.log` file on self-compiled installations.
- On the Webservice pods under the `subcomponent="product_usage_data"` key on Helm chart installations.
It contains JSON-formatted logs of product usage events tracked through Snowplow. Each line in the file contains a separate JSON entry that can be ingested by services like Elasticsearch or Splunk. Line breaks were added to examples for legibility:
```json
{
"severity":"INFO",
"time":"2025-04-09T13:43:40.254Z",
"message":"sending event",
"payload":"{
\"e\":\"se\",
\"se_ca\":\"projects:merge_requests:diffs\",
\"se_ac\":\"i_code_review_user_searches_diff\",
\"cx\":\"eyJzY2hlbWEiOiJpZ2x1OmNvbS5zbm93cGxvd2FuYWx5dGljcy5zbm93cGxvdy9jb250ZXh0cy9qc29uc2NoZW1hLzEtMC0xIiwiZGF0YSI6W3sic2NoZW1hIjoiaWdsdTpjb20uZ2l0bGFiL2dpdGxhYl9zdGFuZGFyZC9qc29uc2NoZW1hLzEtMS0xIiwiZGF0YSI6eyJlbnZpcm9ubWVudCI6ImRldmVsb3BtZW50Iiwic291cmNlIjoiZ2l0bGFiLXJhaWxzIiwiY29ycmVsYXRpb25faWQiOiJlNDk2NzNjNWI2MGQ5ODc0M2U4YWI0MjZiMTZmMTkxMiIsInBsYW4iOiJkZWZhdWx0IiwiZXh0cmEiOnt9LCJ1c2VyX2lkIjpudWxsLCJnbG9iYWxfdXNlcl9pZCI6bnVsbCwiaXNfZ2l0bGFiX3RlYW1fbWVtYmVyIjpudWxsLCJuYW1lc3BhY2VfaWQiOjMxLCJwcm9qZWN0X2lkIjo2LCJmZWF0dXJlX2VuYWJsZWRfYnlfbmFtZXNwYWNlX2lkcyI6bnVsbCwicmVhbG0iOiJzZWxmLW1hbmFnZWQiLCJpbnN0YW5jZV9pZCI6IjJkMDg1NzBkLWNmZGItNDFmMy1iODllLWM3MTM5YmFjZTI3NSIsImhvc3RfbmFtZSI6ImpsYXJzZW4tLTIwMjIxMjE0LVBWWTY5IiwiaW5zdGFuY2VfdmVyc2lvbiI6IjE3LjExLjAiLCJjb250ZXh0X2dlbmVyYXRlZF9hdCI6IjIwMjUtMDQtMDkgMTM6NDM6NDAgVVRDIn19LHsic2NoZW1hIjoiaWdsdTpjb20uZ2l0bGFiL2dpdGxhYl9zZXJ2aWNlX3BpbmcvanNvbnNjaGVtYS8xLTAtMSIsImRhdGEiOnsiZGF0YV9zb3VyY2UiOiJyZWRpc19obGwiLCJldmVudF9uYW1lIjoiaV9jb2RlX3Jldmlld191c2VyX3NlYXJjaGVzX2RpZmYifX1dfQ==\",
\"p\":\"srv\",
\"dtm\":\"1744206220253\",
\"tna\":\"gl\",
\"tv\":\"rb-0.8.0\",
\"eid\":\"4f067989-d10d-40b0-9312-ad9d7355be7f\"
}
```
To inspect these logs, you can use the [Rake task](../raketasks/_index.md) `product_usage_data:format` which formats the JSON output and decodes base64-encoded context data for better readability:
```shell
gitlab-rake "product_usage_data:format[log/product_usage_data.log]"
# or pipe the logs directly
cat log/product_usage_data.log | gitlab-rake product_usage_data:format
# or tail the logs in real-time
tail -f log/product_usage_data.log | gitlab-rake product_usage_data:format
```
You can disable this log by setting the `GITLAB_DISABLE_PRODUCT_USAGE_EVENT_LOGGING` environment variable to any value.
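If the Rake task is not available, the decoding it performs can be approximated in a few lines. This is a minimal sketch, not the official `product_usage_data:format` implementation; it assumes each line is a JSON entry whose `payload` field is a JSON string containing a base64-encoded `cx` context, as in the example above:

```python
import base64
import json

def decode_event(log_line):
    """Decode one product_usage_data.log line, expanding the base64 `cx` context."""
    entry = json.loads(log_line)
    payload = json.loads(entry["payload"])
    if "cx" in payload:
        # The Snowplow context is a base64-encoded JSON document.
        payload["cx"] = json.loads(base64.b64decode(payload["cx"]))
    entry["payload"] = payload
    return entry

# Illustrative sample entry mirroring the structure shown above.
sample = json.dumps({
    "severity": "INFO",
    "message": "sending event",
    "payload": json.dumps({
        "e": "se",
        "se_ac": "i_code_review_user_searches_diff",
        "cx": base64.b64encode(b'{"schema": "iglu:example", "data": []}').decode(),
    }),
})
print(decode_event(sample)["payload"]["cx"]["data"])  # prints []
```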
## Let's Encrypt logs
For Linux package installations, Let's Encrypt [auto-renew](https://docs.gitlab.com/omnibus/settings/ssl/#renew-the-certificates-automatically) logs are in `/var/log/gitlab/lets-encrypt/`.
## Mattermost logs
For Linux package installations, Mattermost logs are in these locations:
- `/var/log/gitlab/mattermost/mattermost.log`
- `/var/log/gitlab/mattermost/current`
## Workhorse logs
For Linux package installations, Workhorse logs are in `/var/log/gitlab/gitlab-workhorse/current`.
## Patroni logs
For Linux package installations, Patroni logs are in `/var/log/gitlab/patroni/current`.
## PgBouncer logs
For Linux package installations, PgBouncer logs are in `/var/log/gitlab/pgbouncer/current`.
## PostgreSQL logs
For Linux package installations, PostgreSQL logs are in `/var/log/gitlab/postgresql/current`.
If Patroni is being used, the PostgreSQL logs are stored in the [Patroni logs](#patroni-logs) instead.
## Prometheus logs
For Linux package installations, Prometheus logs are in `/var/log/gitlab/prometheus/current`.
## Redis logs
For Linux package installations, Redis logs are in `/var/log/gitlab/redis/current`.
## Sentinel logs
For Linux package installations, Sentinel logs are in `/var/log/gitlab/sentinel/current`.
## Alertmanager logs
For Linux package installations, Alertmanager logs are in `/var/log/gitlab/alertmanager/current`.
<!-- vale gitlab_base.Spelling = NO -->
## crond logs
For Linux package installations, crond logs are in `/var/log/gitlab/crond/`.
<!-- vale gitlab_base.Spelling = YES -->
## Grafana logs
For Linux package installations, Grafana logs are in `/var/log/gitlab/grafana/current`.
## LogRotate logs
For Linux package installations, `logrotate` logs are in `/var/log/gitlab/logrotate/current`.
## GitLab Monitor logs
For Linux package installations, GitLab Monitor logs are in `/var/log/gitlab/gitlab-monitor/`.
## GitLab Exporter
For Linux package installations, GitLab Exporter logs are in `/var/log/gitlab/gitlab-exporter/current`.
## GitLab agent server for Kubernetes
For Linux package installations, GitLab agent server for Kubernetes logs are
in `/var/log/gitlab/gitlab-kas/current`.
## Praefect logs
For Linux package installations, Praefect logs are in `/var/log/gitlab/praefect/`.
GitLab also tracks [Prometheus metrics for Gitaly Cluster (Praefect)](../gitaly/praefect/monitoring.md).
## Backup log
For Linux package installations, the backup log is located at `/var/log/gitlab/gitlab-rails/backup_json.log`.
On Helm chart installations, the backup log is stored in the Toolbox pod, at `/var/log/gitlab/backup_json.log`.
This log is populated when a [GitLab backup is created](../backup_restore/_index.md). You can use this log to understand how the backup process performed.
## Performance bar stats
This log is located:
- In the `/var/log/gitlab/gitlab-rails/performance_bar_json.log` file on Linux package installations.
- In the `/home/git/gitlab/log/performance_bar_json.log` file on self-compiled installations.
- On the Sidekiq pods under the `subcomponent="performance_bar_json"` key on Helm chart installations.
Performance bar statistics (currently only the duration of SQL queries) are recorded
in that file. For example:
```json
{"severity":"INFO","time":"2020-12-04T09:29:44.592Z","correlation_id":"33680b1490ccd35981b03639c406a697","filename":"app/models/ci/pipeline.rb","method_path":"app/models/ci/pipeline.rb:each_with_object","request_id":"rYHomD0VJS4","duration_ms":26.889,"count":2,"query_type": "active-record"}
```
These statistics are logged on GitLab.com only, and are disabled on self-managed deployments.
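Because each line is a self-contained JSON entry, the log lends itself to quick ad-hoc aggregation. As a rough sketch (not an official GitLab tool), assuming entries with the `filename` and `duration_ms` fields shown above:

```python
import json
from collections import defaultdict

def summarize_durations(lines):
    """Sum `duration_ms` per `filename` across performance bar log entries."""
    totals = defaultdict(float)
    for line in lines:
        entry = json.loads(line)
        totals[entry["filename"]] += entry["duration_ms"]
    return dict(totals)

# Illustrative sample entries in the format shown above.
sample = [
    '{"filename":"app/models/ci/pipeline.rb","duration_ms":26.889,"count":2}',
    '{"filename":"app/models/ci/pipeline.rb","duration_ms":3.1,"count":1}',
]
print(summarize_durations(sample))
```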
## Gathering logs
When [troubleshooting](../troubleshooting/_index.md) issues that aren't localized to one of the
previously listed components, it's helpful to simultaneously gather multiple logs and statistics
from a GitLab instance.
{{< alert type="note" >}}
GitLab Support often asks for one of these, and maintains the required tools.
{{< /alert >}}
### Briefly tail the main logs
If the bug or error is readily reproducible, save the main GitLab logs
[to a file](../troubleshooting/linux_cheat_sheet.md#files-and-directories) while reproducing the
problem a few times:
```shell
sudo gitlab-ctl tail | tee /tmp/<case-ID-and-keywords>.log
```
Conclude the log gathering with <kbd>Control</kbd> + <kbd>C</kbd>.
### Gathering SOS logs
If performance degradations or cascading errors occur that can't readily be attributed to one
of the previously listed GitLab components, [use our SOS scripts](../troubleshooting/diagnostics_tools.md#sos-scripts).
### Fast-stats
[Fast-stats](https://gitlab.com/gitlab-com/support/toolbox/fast-stats) is a tool
for creating and comparing performance statistics from GitLab logs.
For more details and instructions to run it, read the
[documentation for fast-stats](https://gitlab.com/gitlab-com/support/toolbox/fast-stats#usage).
## Find relevant log entries with a correlation ID
Most requests have a log ID that can be used to [find relevant log entries](tracing_correlation_id.md).
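For JSON-structured logs, filtering by correlation ID is a simple matter of parsing each line. The following is a hedged sketch, not an official GitLab utility; it assumes the `correlation_id` field used by GitLab's JSON logs and skips non-JSON lines (some logs are plain text):

```python
import json

def entries_with_correlation_id(lines, correlation_id):
    """Yield parsed JSON log entries whose `correlation_id` matches."""
    for line in lines:
        try:
            entry = json.loads(line)
        except json.JSONDecodeError:
            continue  # Skip plain-text lines.
        if entry.get("correlation_id") == correlation_id:
            yield entry

# Illustrative sample lines.
sample = [
    '{"severity":"INFO","correlation_id":"33680b1490ccd35981b03639c406a697","duration_ms":26.889}',
    'plain text line',
    '{"severity":"INFO","correlation_id":"other"}',
]
matches = list(entries_with_correlation_id(sample, "33680b1490ccd35981b03639c406a697"))
print(len(matches))  # prints 1
```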
---
url: https://docs.gitlab.com/administration/gateway
repo_url: https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/administration/gateway.md
date_extracted: 2025-08-13
filename: gateway.md
stage: AI-powered
group: AI Framework
title: AI gateway
---
The AI gateway is a standalone service that gives access to AI-native GitLab Duo features.
GitLab operates an instance of the AI gateway, based in the cloud. This instance is used by:
- GitLab.com.
- GitLab Self-Managed. For more information,
see how to [configure GitLab Duo on a GitLab Self-Managed instance](setup.md).
- GitLab Dedicated.
There is also a self-hosted AI gateway instance. You can use this instance on
GitLab Self-Managed through [GitLab Duo Self-Hosted](../../administration/gitlab_duo_self_hosted/_index.md).
This page describes where the AI gateway is deployed, and answers questions about region selection, data routing, and data sovereignty.
## Region support
### GitLab Self-Managed and GitLab Dedicated
For GitLab Self-Managed and GitLab Dedicated customers, region selection
is managed internally by GitLab.
[View the available regions](https://gitlab-com.gitlab.io/gl-infra/platform/runway/runwayctl/manifest.schema.html#spec_regions) in the [Runway](https://gitlab.com/gitlab-com/gl-infra/platform/runway) service manifest.
Runway is the GitLab internal developer platform. It is not available to external
customers. Support for improvements to GitLab Self-Managed instances is proposed in
[epic 1330](https://gitlab.com/groups/gitlab-com/gl-infra/-/epics/1330).
### GitLab.com
For GitLab.com customers, the routing mechanism is based on the location of the
GitLab instance, not the location of the individual user.
Because GitLab.com is single-homed in `us-east1`, requests to the AI gateway
are routed to `us-east4` in almost all cases. This means that the routing might
not always result in the absolute nearest deployment for every user.
### Direct and indirect connections
The IDE communicates directly with the AI gateway by default, bypassing the GitLab
monolith. This direct connection improves routing efficiency. To change this, you can
[configure direct and indirect connections](../../user/project/repository/code_suggestions/_index.md#direct-and-indirect-connections).
### Automatic routing
GitLab leverages Cloudflare and Google Cloud Platform (GCP) load balancers to route AI
gateway requests to the nearest available deployment automatically. This routing
mechanism prioritizes low latency and efficient processing of user requests.
You cannot manually control this routing process. The system dynamically selects the
optimal region based on factors like network conditions and server load.
### Tracing requests to specific regions
You cannot directly trace your AI requests to specific regions at this time.
If you need assistance with tracing a particular request, GitLab Support can access and
analyze logs that contain Cloudflare headers and instance UUIDs. These logs provide
insights into the routing path and can help identify the region where a request was processed.
## Data sovereignty
It's important to acknowledge the current limitations regarding strict data sovereignty enforcement in our multi-region AI gateway deployment. Currently, we cannot guarantee requests will go to or remain within a particular region. Therefore, this is not a data residency solution.
### Factors that influence data routing
The following factors influence where data is routed.
- **Network latency**: The primary routing mechanism focuses on minimizing latency, meaning data might be processed in a region other than the nearest one if network conditions dictate.
- **Service availability**: In case of regional outages or service disruptions, requests might be automatically rerouted to ensure uninterrupted service.
- **Third-party dependencies**: The GitLab AI infrastructure relies on third-party model providers, like Google Vertex AI, which have their own data handling practices.
### AI gateway deployment regions
For the most up-to-date information on AI gateway deployment regions, refer to the [AI-assist runway configuration file](https://gitlab.com/gitlab-org/modelops/applied-ml/code-suggestions/ai-assist/-/blob/main/.runway/runway.yml?ref_type=heads#L12).
As of the last update (2023-11-21), GitLab deploys the AI gateway in the following regions:
- North America (`us-east4`)
- Europe (`europe-west2`, `europe-west3`, `europe-west9`)
- Asia Pacific (`asia-northeast1`, `asia-northeast3`)
Deployment regions may change frequently. For the most current information, always check the
previously linked configuration file.
The exact location of the LLM models used by the AI gateway is determined by the third-party model providers. There is no guarantee that the models reside in the same geographical regions as the AI gateway deployments. This implies that data may flow back to the US or other regions where the model provider operates, even if the AI gateway processes the initial request in a different region.
### Data flow and LLM model locations
GitLab is working closely with LLM providers to understand their regional data handling practices fully.
There might be instances where data is transmitted to regions outside the one closest to the user due to the factors mentioned in the previous section.
### Future enhancements
GitLab is actively working to let customers specify data residency requirements more granularly in the future. The proposed functionality can provide greater control over data processing locations and help meet specific compliance needs.
## Specific regional questions
### Data routing post-Brexit
The UK's exit from the EU does not directly impact data routing preferences or decisions for AI gateway. Data is routed to the most optimal region based on performance and availability. Data can still flow freely between the EU and UK.
---
url: https://docs.gitlab.com/administration/setup
repo_url: https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/administration/setup.md
date_extracted: 2025-08-13
filename: setup.md
stage: AI-powered
group: AI Framework
title: Configure GitLab Duo on a GitLab Self-Managed instance
description: Ensure GitLab Duo is configured and operating correctly.
---
{{< details >}}
- Offering: GitLab Self-Managed
{{< /details >}}
GitLab Duo is powered by large language models (LLMs), with data sent through an AI gateway.
To use GitLab Duo on a GitLab Self-Managed instance, you can do either of the following:
- Use the GitLab AI vendor models and the cloud-based AI gateway that's hosted by
GitLab. This is the default option.
- [Use GitLab Duo Self-Hosted to self-host the AI gateway, with a supported self-hosted LLM](../../administration/gitlab_duo_self_hosted/_index.md#set-up-a-gitlab-duo-self-hosted-infrastructure).
This option provides full control over your data and security.
{{< alert type="note" >}}
You must have a Premium or Ultimate subscription with the GitLab Duo Enterprise add-on to use GitLab Duo Self-Hosted.
{{< /alert >}}
This page focuses on how to configure a GitLab Self-Managed instance if you're using the default, GitLab-hosted option.
## Prerequisites
- You must ensure both [outbound](#allow-outbound-connections-from-the-gitlab-instance)
and [inbound](#allow-inbound-connections-from-clients-to-the-gitlab-instance) connectivity exists.
Network firewalls can cause lag or delay.
- [Silent Mode](../../administration/silent_mode/_index.md) must not be turned on.
- You must [activate your instance with an activation code](../../administration/license.md#activate-gitlab-ee).
{{< alert type="note" >}}
You cannot use an [offline license](https://about.gitlab.com/pricing/licensing-faq/cloud-licensing/#what-is-an-offline-cloud-license) or a legacy license.
{{< /alert >}}
- GitLab Duo requires GitLab 17.2 and later for the best user experience and results. Earlier versions might continue to work; however, the experience may be degraded.
GitLab Duo features that are experimental or beta are turned off by default
and [must be turned on](../../user/gitlab_duo/turn_on_off.md#turn-on-beta-and-experimental-features).
## Allow outbound connections from the GitLab instance
Check both your outbound and inbound settings:
- Your firewalls and HTTP/S proxy servers must allow outbound connections
to `cloud.gitlab.com` and `customers.gitlab.com` on port `443` both with `https://`.
These hosts are protected by Cloudflare. Update your firewall settings to allow traffic to
all IP addresses in the [list of IP ranges Cloudflare publishes](https://www.cloudflare.com/ips/).
- To use an HTTP/S proxy, both `gitlab_workhorse` and `gitlab_rails` must have the necessary
[web proxy environment variables](https://docs.gitlab.com/omnibus/settings/environment-variables.html) set.
- In multi-node GitLab installations, configure the HTTP/S proxy on all **Rails** and **Sidekiq** nodes.
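A quick way to verify the outbound requirement is to attempt a TCP connection to each host. The following is a minimal sketch, not an official GitLab health check; it tests direct connectivity only and does not go through an HTTP/S proxy, so a passing result here does not confirm your proxy configuration:

```python
import socket

# Hosts listed in the outbound requirements above.
REQUIRED_HOSTS = ("cloud.gitlab.com", "customers.gitlab.com")

def can_connect(host, port=443, timeout=5.0):
    """Return True if a direct TCP connection to host:port succeeds in time."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for host in REQUIRED_HOSTS:
        status = "ok" if can_connect(host, timeout=3.0) else "BLOCKED"
        print(f"{host}:443 {status}")
```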
## Allow inbound connections from clients to the GitLab instance
- GitLab instances must allow inbound connections from Duo clients ([IDEs](../../editor_extensions/_index.md),
Code Editors, and GitLab Web Frontend) on port 443 with `https://` and `wss://`.
- Both `HTTP2` and the `'upgrade'` header must be allowed, because GitLab Duo
uses both REST and WebSockets.
- Check for restrictions on WebSocket (`wss://`) traffic to `wss://gitlab.example.com/-/cable` and other `.com` domains.
Network policy restrictions on `wss://` traffic can cause issues with some GitLab Duo Chat
services. Consider policy updates to allow these services.
- If you use reverse proxies, such as Apache, you might see GitLab Duo Chat connection issues in your
logs, like **WebSocket connection to .... failures**.
To resolve this problem, try editing your Apache proxy settings:
```apache
# Enable WebSocket reverse Proxy
# Needs proxy_wstunnel enabled
RewriteCond %{HTTP:Upgrade} websocket [NC]
RewriteCond %{HTTP:Connection} upgrade [NC]
RewriteRule ^/?(.*) "ws://127.0.0.1:8181/$1" [P,L]
```
## Run a health check for GitLab Duo
{{< details >}}
- Status: Beta
{{< /details >}}
{{< history >}}
- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/161997) in GitLab 17.3.
- [Download health check report added](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/165032) in GitLab 17.5.
{{< /history >}}
You can determine if your instance meets the requirements to use GitLab Duo.
When the health check completes, it displays a pass or fail result and the types of issues.
If the health check fails any of the tests, users might not be able to use GitLab Duo features in your instance.
This is a [beta](../../policy/development_stages_support.md) feature.
Prerequisites:
- You must be an administrator.
To run a health check:
1. On the left sidebar, at the bottom, select **Admin**.
1. Select **GitLab Duo**.
1. On the upper-right corner, select **Run health check**.
1. Optional. In GitLab 17.5 and later, after the health check is complete, you can select **Download report** to save a detailed report of the health check results.
These tests are performed:
| Test | Description |
|-----------------|-------------|
| Network | Tests whether your instance can connect to `customers.gitlab.com` and `cloud.gitlab.com`.<br><br>If your instance cannot connect to either destination, ensure that your firewall or proxy server settings [allow connection](setup.md). |
| Synchronization | Tests whether your subscription: <br>- Has been activated with an activation code and can be synchronized with `customers.gitlab.com`.<br>- Has correct access credentials.<br>- Has been synchronized recently. If it hasn't or the access credentials are missing or expired, you can [manually synchronize](../../subscriptions/manage_subscription.md#manually-synchronize-subscription-data) your subscription data. |
| System exchange | Tests whether Code Suggestions can be used in your instance. If the system exchange assessment fails, users might not be able to use GitLab Duo features. |
For GitLab instances earlier than version 17.10, if you encounter any issues with the health check for GitLab-hosted Duo, see the [troubleshooting page](../../user/gitlab_duo/troubleshooting.md).
For GitLab instances earlier than version 17.10, if you encounter issues with the health check for:
- GitLab-hosted Duo, see the [troubleshooting page](../../user/gitlab_duo/troubleshooting.md).
---
stage: Data access
group: Durability
title: Reducing memory use
---
The Sidekiq memory killer automatically manages background job processes that
consume too much memory. This feature monitors worker processes and restarts them before
the Linux memory killer steps in, which allows background jobs to run to completion
before gracefully shutting down. By logging these events, we make it easier to
identify jobs that lead to high memory use.
## How we monitor Sidekiq memory
GitLab monitors the available RSS limit by default only for Linux package or Docker installations. The reason for this
is that GitLab relies on runit to restart Sidekiq after a memory-induced shutdown, and self-compiled and Helm chart
installations don't use runit or an equivalent tool.
With the default settings, Sidekiq restarts no
more often than once every 15 minutes, with the restart causing about one
minute of delay for incoming background jobs.
Some background jobs rely on long-running external processes. To ensure these
are cleanly terminated when Sidekiq is restarted, each Sidekiq process should be
run as a process group leader (for example, using `chpst -P`). If using a Linux package installation or the
`bin/background_jobs` script with `runit` installed, this is handled for you.
## Configuring the limits
Sidekiq memory limits are controlled using [environment variables](https://docs.gitlab.com/omnibus/settings/environment-variables.html#setting-custom-environment-variables):
- `SIDEKIQ_MEMORY_KILLER_MAX_RSS` (KB): defines the Sidekiq process soft limit for allowed RSS.
If the Sidekiq process RSS (expressed in kilobytes) exceeds `SIDEKIQ_MEMORY_KILLER_MAX_RSS`,
for longer than `SIDEKIQ_MEMORY_KILLER_GRACE_TIME`, the graceful restart is triggered.
If `SIDEKIQ_MEMORY_KILLER_MAX_RSS` is not set, or its value is set to 0, the soft limit is not monitored.
`SIDEKIQ_MEMORY_KILLER_MAX_RSS` defaults to `2000000`.
- `SIDEKIQ_MEMORY_KILLER_GRACE_TIME`: defines the grace time period in seconds for which the Sidekiq process is allowed to run
above the allowed RSS soft limit. If the Sidekiq process goes below the allowed RSS (soft limit)
within `SIDEKIQ_MEMORY_KILLER_GRACE_TIME`, the restart is aborted. Default value is 900 seconds (15 minutes).
- `SIDEKIQ_MEMORY_KILLER_HARD_LIMIT_RSS` (KB): defines the Sidekiq process hard limit for allowed RSS.
If the Sidekiq process RSS (expressed in kilobytes) exceeds `SIDEKIQ_MEMORY_KILLER_HARD_LIMIT_RSS`,
an immediate graceful restart of Sidekiq is triggered. If this value is not set, or set to 0,
the hard limit is not monitored.
- `SIDEKIQ_MEMORY_KILLER_CHECK_INTERVAL`: defines how often to check the process RSS. Defaults to 3 seconds.
- `SIDEKIQ_MEMORY_KILLER_SHUTDOWN_WAIT`: defines the maximum time allowed for all Sidekiq jobs to finish.
No new jobs are accepted during that time. Defaults to 30 seconds.
If the process restart is not performed by Sidekiq, the Sidekiq process is forcefully terminated after
[Sidekiq shutdown timeout](https://github.com/mperham/sidekiq/wiki/Signals#term) (defaults to 25 seconds) +2 seconds.
If jobs do not finish during that time, all currently running jobs are interrupted with a `SIGTERM` signal
sent to the Sidekiq process.
- `GITLAB_MEMORY_WATCHDOG_ENABLED`: enabled by default. Set `GITLAB_MEMORY_WATCHDOG_ENABLED` to `false` to disable the Watchdog.
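The interaction of the soft limit, grace time, and hard limit can be sketched as follows. This is an illustrative model of the documented rules, not GitLab's actual implementation, and the hard-limit value in the example assertions is made up (the hard limit has no default):

```ruby
# Illustrative model of the memory killer's restart decision.
# rss_kb: current RSS in kilobytes.
# soft_exceeded_for_s: seconds spent continuously above the soft limit.
def restart_action(rss_kb, soft_exceeded_for_s,
                   max_rss: 2_000_000, hard_limit_rss: 0, grace_time: 900)
  # Hard limit (0 means unmonitored): immediate graceful restart.
  return :immediate_restart if hard_limit_rss > 0 && rss_kb > hard_limit_rss

  # Soft limit: restart only after the grace time has elapsed.
  return :graceful_restart if max_rss > 0 && rss_kb > max_rss &&
                              soft_exceeded_for_s > grace_time

  :keep_running
end
```

For example, a process at roughly 2.5 GB RSS that has been above the soft limit for 100 seconds is left alone; the same process after 16 minutes is gracefully restarted.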
### Monitor worker restarts
GitLab emits log events if workers are restarted due to high memory usage.
The following is an example of one of these log events in `/var/log/gitlab/gitlab-rails/sidekiq_client.log`:
```json
{
  "severity": "WARN",
  "time": "2023-02-04T09:45:16.173Z",
  "correlation_id": null,
  "pid": 2725,
  "worker_id": "sidekiq_1",
  "memwd_handler_class": "Gitlab::Memory::Watchdog::SidekiqHandler",
  "memwd_sleep_time_s": 3,
  "memwd_rss_bytes": 1079683247,
  "memwd_max_rss_bytes": 629145600,
  "memwd_max_strikes": 5,
  "memwd_cur_strikes": 6,
  "message": "rss memory limit exceeded",
  "running_jobs": [
    {
      "jid": "83efb701c59547ee42ff7068",
      "worker_class": "Ci::DeleteObjectsWorker"
    },
    {
      "jid": "c3a74503dc2637f8f9445dd3",
      "worker_class": "Ci::ArchiveTraceWorker"
    }
  ]
}
```
Where:
- `memwd_rss_bytes` is the actual amount of memory consumed.
- `memwd_max_rss_bytes` is the RSS limit set through `per_worker_max_memory_mb`.
- `running_jobs` lists the jobs that were running at the time when the process
exceeded the RSS limit and started a graceful restart.
---
stage: Data access
group: Durability
title: Run multiple Sidekiq processes
---
{{< details >}}
- Tier: Free, Premium, Ultimate
- Offering: GitLab Self-Managed
{{< /details >}}
GitLab allows you to start multiple Sidekiq processes to process background jobs
at a higher rate on a single instance. By default, Sidekiq starts one worker
process and only uses a single core.
{{< alert type="note" >}}
The information in this page applies only to Linux package installations.
{{< /alert >}}
## Start multiple processes
When starting multiple processes, the number of processes should at most equal
(and **not** exceed) the number of CPU cores you want to dedicate to Sidekiq.
The Sidekiq worker process uses no more than one CPU core.
To start multiple processes, use the `sidekiq['queue_groups']` array setting to
specify how many processes to create using `sidekiq-cluster` and which queues
they should handle. Each item in the array equates to one additional Sidekiq
process, and values in each item determine the queues it works on. In the vast
majority of cases, all processes should listen to all queues (see
[processing specific job classes](processing_specific_job_classes.md) for more
details).
For example, to create four Sidekiq processes, each listening
to all available queues:
1. Edit `/etc/gitlab/gitlab.rb`:
```ruby
sidekiq['queue_groups'] = ['*'] * 4
```
1. Save the file and reconfigure GitLab:
```shell
sudo gitlab-ctl reconfigure
```
To view the Sidekiq processes in GitLab:
1. On the left sidebar, at the bottom, select **Admin**.
1. Select **Monitoring > Background jobs**.
## Concurrency
By default, each process defined under `sidekiq` starts with a number of threads
that equals the number of queues, plus one spare thread, up to a maximum of 50.
For example, a process that handles all queues uses 50 threads by default.
These threads run inside a single Ruby process, and each process can only use a
single CPU core. The usefulness of threading depends on the work having some
external dependencies to wait on, like database queries or HTTP requests. Most
Sidekiq deployments benefit from this threading.
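The default sizing described above can be expressed as a simple formula. This is a sketch of the documented behavior only, not the actual implementation:

```ruby
# Default Sidekiq thread count: one thread per queue plus one spare,
# capped at a maximum of 50.
def default_thread_count(queue_count)
  [queue_count + 1, 50].min
end
```

A process listening to 10 queues gets 11 threads; a process listening to all queues hits the 50-thread cap.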
### Manage thread counts explicitly
The correct maximum thread count (also called concurrency) depends on the
workload. Typical values range from `5` for highly CPU-bound tasks to `15` or
higher for mixed low-priority work. A reasonable starting range is `15` to `25`
for a non-specialized deployment.
The values vary according to the work each specific deployment of Sidekiq does.
Any other specialized deployments with processes dedicated to specific queues
should have the concurrency tuned according to:
- The CPU usage of each type of process.
- The throughput achieved.
Each thread requires a Redis connection, so adding threads may increase Redis
latency and potentially cause client timeouts. See the
[Sidekiq documentation about Redis](https://github.com/mperham/sidekiq/wiki/Using-Redis)
for more details.
#### Manage thread counts with concurrency field
{{< history >}}
- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/439687) in GitLab 16.9.
{{< /history >}}
In GitLab 16.9 and later, you can set the concurrency by setting `concurrency`. This value explicitly sets each process
with this amount of concurrency.
For example, to set the concurrency to `20`:
1. Edit `/etc/gitlab/gitlab.rb`:
```ruby
sidekiq['concurrency'] = 20
```
1. Save the file and reconfigure GitLab:
```shell
sudo gitlab-ctl reconfigure
```
## Modify the check interval
To modify the Sidekiq health check interval for the additional Sidekiq
processes:
1. Edit `/etc/gitlab/gitlab.rb`:
```ruby
sidekiq['interval'] = 5
```
The value can be any integer number of seconds.
1. Save the file and reconfigure GitLab:
```shell
sudo gitlab-ctl reconfigure
```
## Troubleshoot using the CLI
{{< alert type="warning" >}}
It's recommended to use `/etc/gitlab/gitlab.rb` to configure the Sidekiq processes.
If you experience a problem, you should contact GitLab support. Use the command
line at your own risk.
{{< /alert >}}
For debugging purposes, you can start extra Sidekiq processes by using the command
`/opt/gitlab/embedded/service/gitlab-rails/bin/sidekiq-cluster`. This command
takes arguments using the following syntax:
```shell
/opt/gitlab/embedded/service/gitlab-rails/bin/sidekiq-cluster [QUEUE,QUEUE,...] [QUEUE, ...]
```
The `--dryrun` argument allows viewing the command to be executed without
actually starting it.
Each separate argument denotes a group of queues that have to be processed by a
Sidekiq process. Multiple queues can be processed by the same process by
separating them with a comma instead of a space.
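The mapping from command-line arguments to processes can be sketched as follows (illustrative only; the queue names in the assertions are examples, not a recommended configuration):

```ruby
# Each CLI argument becomes one Sidekiq process; commas within an argument
# list the queues that process listens to.
def parse_queue_groups(args)
  args.map { |group| group.split(',') }
end
```

For example, `parse_queue_groups(['process_commit,post_receive', 'gitlab_shell'])` describes two processes: one listening to two queues, and one listening to a single queue.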
Instead of a queue, a queue namespace can also be provided, to have the process
automatically listen on all queues in that namespace without needing to
explicitly list all the queue names. For more information about queue namespaces,
see the relevant section in the
Sidekiq development part of the GitLab development documentation.
### Monitor the `sidekiq-cluster` command
The `sidekiq-cluster` command does not terminate once it has started the desired
number of Sidekiq processes. Instead, the process continues running and
forwards any signals to the child processes. This allows you to stop all
Sidekiq processes by sending a single signal to the `sidekiq-cluster` process,
instead of having to send one to each individual process.
If the `sidekiq-cluster` process crashes or receives a `SIGKILL`, the child
processes terminate themselves after a few seconds. This ensures you don't
end up with zombie Sidekiq processes.
This allows you to monitor the processes by hooking up
`sidekiq-cluster` to your supervisor of choice (for example, runit).
If a child process dies, the `sidekiq-cluster` command signals all remaining
processes to terminate, and then terminates itself. This removes the need for
`sidekiq-cluster` to re-implement complex process monitoring/restarting code.
Instead, you should make sure your supervisor restarts the `sidekiq-cluster`
process whenever necessary.
### PID files
The `sidekiq-cluster` command can store its PID in a file. By default, no PID
file is written, but this can be changed by passing the `--pidfile` option to
`sidekiq-cluster`. For example:
```shell
/opt/gitlab/embedded/service/gitlab-rails/bin/sidekiq-cluster --pidfile /var/run/gitlab/sidekiq_cluster.pid process_commit
```
Keep in mind that the PID file contains the PID of the `sidekiq-cluster`
command and not the PIDs of the started Sidekiq processes.
### Environment
The Rails environment can be set by passing the `--environment` flag to the
`sidekiq-cluster` command, or by setting `RAILS_ENV` to a non-empty value. The
default value can be found in `/opt/gitlab/etc/gitlab-rails/env/RAILS_ENV`.
---
stage: Data access
group: Durability
title: Troubleshooting Sidekiq
---
{{< details >}}
- Tier: Free, Premium, Ultimate
- Offering: GitLab Self-Managed
{{< /details >}}
Sidekiq is the background job processor GitLab uses to asynchronously run
tasks. When things go wrong, it can be difficult to troubleshoot. These
situations also tend to be high-pressure because a production system job queue
may be filling up. Users notice when this happens because new branches
may not show up and merge requests may not be updated. The following are some
troubleshooting steps to help you diagnose the bottleneck.
GitLab administrators/users should consider working through these
debug steps with GitLab Support so the backtraces can be analyzed by our team.
It may reveal a bug or necessary improvement in GitLab.
In any of the backtraces, be wary of suspecting cases where every
thread appears to be waiting in the database, Redis, or waiting to acquire
a mutex. This **may** mean there's contention in the database, for example,
but look for one thread that is different from the rest. This other thread
may be using all available CPU, or holding the Ruby Global Interpreter Lock (GIL),
preventing other threads from continuing.
## Log arguments to Sidekiq jobs
Some arguments passed to Sidekiq jobs are logged by default.
To avoid logging sensitive information (for instance, password reset tokens),
GitLab logs numeric arguments for all workers, with overrides for some specific
workers where their arguments are not sensitive.
Example log output:
```json
{"severity":"INFO","time":"2020-06-08T14:37:37.892Z","class":"AdminEmailsWorker","args":["[FILTERED]","[FILTERED]","[FILTERED]"],"retry":3,"queue":"admin_emails","backtrace":true,"jid":"9e35e2674ac7b12d123e13cc","created_at":"2020-06-08T14:37:37.373Z","meta.user":"root","meta.caller_id":"Admin::EmailsController#create","correlation_id":"37D3lArJmT1","uber-trace-id":"2d942cc98cc1b561:6dc94409cfdd4d77:9fbe19bdee865293:1","enqueued_at":"2020-06-08T14:37:37.410Z","pid":65011,"message":"AdminEmailsWorker JID-9e35e2674ac7b12d123e13cc: done: 0.48085 sec","job_status":"done","scheduling_latency_s":0.001012,"redis_calls":9,"redis_duration_s":0.004608,"redis_read_bytes":696,"redis_write_bytes":6141,"duration_s":0.48085,"cpu_s":0.308849,"completed_at":"2020-06-08T14:37:37.892Z","db_duration_s":0.010742}
{"severity":"INFO","time":"2020-06-08T14:37:37.894Z","class":"ActiveJob::QueueAdapters::SidekiqAdapter::JobWrapper","wrapped":"ActionMailer::MailDeliveryJob","queue":"mailers","args":["[FILTERED]"],"retry":3,"backtrace":true,"jid":"e47a4f6793d475378432e3c8","created_at":"2020-06-08T14:37:37.884Z","meta.user":"root","meta.caller_id":"AdminEmailsWorker","correlation_id":"37D3lArJmT1","uber-trace-id":"2d942cc98cc1b561:29344de0f966446d:5c3b0e0e1bef987b:1","enqueued_at":"2020-06-08T14:37:37.885Z","pid":65011,"message":"ActiveJob::QueueAdapters::SidekiqAdapter::JobWrapper JID-e47a4f6793d475378432e3c8: start","job_status":"start","scheduling_latency_s":0.009473}
{"severity":"INFO","time":"2020-06-08T14:39:50.648Z","class":"NewIssueWorker","args":["455","1"],"retry":3,"queue":"new_issue","backtrace":true,"jid":"a24af71f96fd129ec47f5d1e","created_at":"2020-06-08T14:39:50.643Z","meta.user":"root","meta.project":"h5bp/html5-boilerplate","meta.root_namespace":"h5bp","meta.caller_id":"Projects::IssuesController#create","correlation_id":"f9UCZHqhuP7","uber-trace-id":"28f65730f99f55a3:a5d2b62dec38dffc:48ddd092707fa1b7:1","enqueued_at":"2020-06-08T14:39:50.646Z","pid":65011,"message":"NewIssueWorker JID-a24af71f96fd129ec47f5d1e: start","job_status":"start","scheduling_latency_s":0.001144}
```
When using [Sidekiq JSON logging](../logs/_index.md#sidekiqlog),
argument logs are limited to a maximum size of 10 kilobytes of text;
any arguments after this limit are discarded and replaced with a
single argument containing the string `"..."`.
You can set the `SIDEKIQ_LOG_ARGUMENTS` [environment variable](https://docs.gitlab.com/omnibus/settings/environment-variables.html)
to `0` (false) to disable argument logging.
Example:
```ruby
gitlab_rails['env'] = {"SIDEKIQ_LOG_ARGUMENTS" => "0"}
```
## Investigating Sidekiq queue backlogs or slow performance
Symptoms of slow Sidekiq performance include problems with merge request status updates,
and delays before CI pipelines start running.
Potential causes include:
- The GitLab instance may need more Sidekiq workers. By default, a single-node Linux package installation
runs one worker, restricting the execution of Sidekiq jobs to a maximum of one CPU core.
[Read more about running multiple Sidekiq workers](extra_sidekiq_processes.md).
- The instance is configured with more Sidekiq workers, but most of the extra workers are
not configured to run any job that is queued. This can result in a backlog of jobs
when the instance is busy, if the workload has changed in the months or years since
the workers were configured, or as a result of GitLab product changes.
Gather data on the state of the Sidekiq workers with the following Ruby script.
1. Create the script:
```ruby
cat > /var/opt/gitlab/sidekiqcheck.rb <<EOF
require 'sidekiq/monitor'
Sidekiq::Monitor::Status.new.display('overview')
Sidekiq::Monitor::Status.new.display('processes'); nil
Sidekiq::Monitor::Status.new.display('queues'); nil
puts "----------- workers ----------- "
workers = Sidekiq::Workers.new
workers.each do |_process_id, _thread_id, work|
pp work
end
puts "----------- Queued Jobs ----------- "
Sidekiq::Queue.all.each do |queue|
queue.each do |job|
pp job
end
end ;nil
puts "----------- done! ----------- "
EOF
```
1. Execute and capture the output:
```shell
sudo gitlab-rails runner /var/opt/gitlab/sidekiqcheck.rb > /tmp/sidekiqcheck_$(date '+%Y%m%d-%H:%M').out
```
If the performance issue is intermittent:
- Run this in a cron job every five minutes. Write the files to a location with enough space: allow for at least 500 KB per file.
```shell
cat > /etc/cron.d/sidekiqcheck <<EOF
*/5 * * * * root /opt/gitlab/bin/gitlab-rails runner /var/opt/gitlab/sidekiqcheck.rb > /tmp/sidekiqcheck_$(date '+\%Y\%m\%d-\%H:\%M').out 2>&1
EOF
```
- Refer back to the data to see what went wrong.
1. Analyze the output. The following commands assume that you have a directory of output files.
1. `grep 'Busy: ' *` shows how many jobs were being run. `grep 'Enqueued: ' *`
shows the backlog of work at that time.
1. Look at the number of busy threads across the workers in samples where Sidekiq is under load:
```shell
ls | while read f ; do if grep -q 'Enqueued: 0' $f; then :
else echo $f; egrep 'Busy:|Enqueued:|---- Processes' $f
grep 'Threads:' $f ; fi
done | more
```
Example output:
```plaintext
sidekiqcheck_20221024-14:00.out
Busy: 47
Enqueued: 363
---- Processes (13) ----
Threads: 30 (0 busy)
Threads: 30 (0 busy)
Threads: 30 (0 busy)
Threads: 30 (0 busy)
Threads: 23 (0 busy)
Threads: 30 (0 busy)
Threads: 30 (0 busy)
Threads: 30 (0 busy)
Threads: 30 (0 busy)
Threads: 30 (0 busy)
Threads: 30 (0 busy)
Threads: 30 (24 busy)
Threads: 30 (23 busy)
```
- In this output file, 47 threads were busy, and there was a backlog of 363 jobs.
- Of the 13 worker processes, only two were busy.
- This indicates that the other workers are configured too specifically.
- Look at the full output to work out which workers were busy.
Correlate with your `sidekiq_queues` configuration in `gitlab.rb`.
- An overloaded single-worker environment might look like this:
```plaintext
sidekiqcheck_20221024-14:00.out
Busy: 25
Enqueued: 363
---- Processes (1) ----
Threads: 25 (25 busy)
```
1. Look at the `---- Queues (xxx) ----` section of the output file to
determine what jobs were queued up at the time.
1. The files also include low-level details about the state of Sidekiq at the time.
This can be useful for identifying where spikes in workload are coming from.
- The `----------- workers -----------` section details the jobs that make up the
`Busy` count in the summary.
- The `----------- Queued Jobs -----------` section provides details on
jobs that are `Enqueued`.
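If the analysis shows mostly idle, queue-pinned worker processes, a common remediation is to let every process listen to all queues. The following is a sketch for a Linux package installation (four processes is an illustrative count, not a recommendation; see the [multiple Sidekiq workers documentation](extra_sidekiq_processes.md)):

```ruby
# /etc/gitlab/gitlab.rb
# Run four Sidekiq processes that each listen to every queue,
# rather than dedicating processes to rarely-used queues.
sidekiq['queue_groups'] = ['*'] * 4
```

Run `sudo gitlab-ctl reconfigure` after changing this setting.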
## Thread dump
Send the Sidekiq process ID the `TTIN` signal to output thread
backtraces in the log file.
```shell
kill -TTIN <sidekiq_pid>
```
Check in `/var/log/gitlab/sidekiq/current` or `$GITLAB_HOME/log/sidekiq.log` for
the backtrace output. The backtraces are lengthy and generally start with
several `WARN` level messages. Here's an example of a single thread's backtrace:
```plaintext
2016-04-13T06:21:20.022Z 31517 TID-orn4urby0 WARN: ActiveRecord::RecordNotFound: Couldn't find Note with 'id'=3375386
2016-04-13T06:21:20.022Z 31517 TID-orn4urby0 WARN: /opt/gitlab/embedded/service/gem/ruby/2.1.0/gems/activerecord-4.2.5.2/lib/active_record/core.rb:155:in `find'
/opt/gitlab/embedded/service/gitlab-rails/app/workers/new_note_worker.rb:7:in `perform'
/opt/gitlab/embedded/service/gem/ruby/2.1.0/gems/sidekiq-4.0.1/lib/sidekiq/processor.rb:150:in `execute_job'
/opt/gitlab/embedded/service/gem/ruby/2.1.0/gems/sidekiq-4.0.1/lib/sidekiq/processor.rb:132:in `block (2 levels) in process'
/opt/gitlab/embedded/service/gem/ruby/2.1.0/gems/sidekiq-4.0.1/lib/sidekiq/middleware/chain.rb:127:in `block in invoke'
/opt/gitlab/embedded/service/gitlab-rails/lib/gitlab/sidekiq_middleware/memory_killer.rb:17:in `call'
/opt/gitlab/embedded/service/gem/ruby/2.1.0/gems/sidekiq-4.0.1/lib/sidekiq/middleware/chain.rb:129:in `block in invoke'
/opt/gitlab/embedded/service/gitlab-rails/lib/gitlab/sidekiq_middleware/arguments_logger.rb:6:in `call'
...
```
In some cases Sidekiq may be hung and unable to respond to the `TTIN` signal.
Move on to other troubleshooting methods if this happens.
## Ruby profiling with `rbspy`
[rbspy](https://rbspy.github.io) is an easy-to-use, low-overhead Ruby profiler that can create
flamegraph-style diagrams of CPU usage by Ruby processes.
No changes to GitLab are required to use it, and it has no dependencies. To install it:
1. Download the binary from the [`rbspy` releases page](https://github.com/rbspy/rbspy/releases).
1. Make the binary executable.
To profile a Sidekiq worker for one minute, run:
```shell
sudo ./rbspy record --pid <sidekiq_pid> --duration 60 --file /tmp/sidekiq_profile.svg
```

In this example of a flamegraph generated by `rbspy`, almost all of the Sidekiq process's time is spent in `rev_parse`, a native C
function in Rugged. In the stack, we can see `rev_parse` is being called by the `ExpirePipelineCacheWorker`.
`rbspy` requires additional [capabilities](https://man7.org/linux/man-pages/man7/capabilities.7.html)
in [containerized environments](https://rbspy.github.io/using-rbspy/index.html#containers).
It requires at least the `SYS_PTRACE` capability, otherwise it terminates with a `permission denied` error.
{{< tabs >}}
{{< tab title="Kubernetes" >}}
```yaml
securityContext:
capabilities:
add:
- SYS_PTRACE
```
{{< /tab >}}
{{< tab title="Docker" >}}
```shell
docker run --cap-add SYS_PTRACE [...]
```
{{< /tab >}}
{{< tab title="Docker Compose" >}}
```yaml
services:
ruby_container_name:
# ...
cap_add:
- SYS_PTRACE
```
{{< /tab >}}
{{< /tabs >}}
## Process profiling with `perf`
Linux has a process profiling tool called `perf` that is helpful when a certain
process is eating up a lot of CPU. If you see high CPU usage and Sidekiq isn't
responding to the `TTIN` signal, this is a good next step.
If `perf` is not installed on your system, install it with `apt-get` or `yum`:
```shell
# Debian
sudo apt-get install linux-tools
# Ubuntu (may require these additional Kernel packages)
sudo apt-get install linux-tools-common linux-tools-generic linux-tools-`uname -r`
# Red Hat/CentOS
sudo yum install perf
```
Run `perf` against the Sidekiq PID:
```shell
sudo perf record -p <sidekiq_pid>
```
Let this run for 30-60 seconds and then press Ctrl-C. Then view the `perf` report:
```shell
$ sudo perf report
# Sample output
Samples: 348K of event 'cycles', Event count (approx.): 280908431073
97.69% ruby nokogiri.so [.] xmlXPathNodeSetMergeAndClear
0.18% ruby libruby.so.2.1.0 [.] objspace_malloc_increase
0.12% ruby libc-2.12.so [.] _int_malloc
0.10% ruby libc-2.12.so [.] _int_free
```
The sample output from the `perf` report shows that 97% of the CPU is
being spent inside Nokogiri and `xmlXPathNodeSetMergeAndClear`. For something
this obvious you should then go investigate what job in GitLab would use
Nokogiri and XPath. Combine with `TTIN` or `gdb` output to show the
corresponding Ruby code where this is happening.
## The GNU Project Debugger (`gdb`)
`gdb` can be another effective tool for debugging Sidekiq. It gives you a
more interactive way to look at each thread and see what's causing problems.
Attaching to a process with `gdb` suspends the standard operation
of the process (Sidekiq does not process jobs while `gdb` is attached).
Start by attaching to the Sidekiq PID:
```shell
gdb -p <sidekiq_pid>
```
Then gather information on all the threads:
```plaintext
info threads
# Example output
30 Thread 0x7fe5fbd63700 (LWP 26060) 0x0000003f7cadf113 in poll () from /lib64/libc.so.6
29 Thread 0x7fe5f2b3b700 (LWP 26533) 0x0000003f7ce0b68c in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
28 Thread 0x7fe5f2a3a700 (LWP 26534) 0x0000003f7ce0ba5e in pthread_cond_timedwait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
27 Thread 0x7fe5f2939700 (LWP 26535) 0x0000003f7ce0b68c in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
26 Thread 0x7fe5f2838700 (LWP 26537) 0x0000003f7ce0b68c in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
25 Thread 0x7fe5f2737700 (LWP 26538) 0x0000003f7ce0b68c in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
24 Thread 0x7fe5f2535700 (LWP 26540) 0x0000003f7ce0b68c in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
23 Thread 0x7fe5f2434700 (LWP 26541) 0x0000003f7ce0b68c in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
22 Thread 0x7fe5f2232700 (LWP 26543) 0x0000003f7ce0b68c in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
21 Thread 0x7fe5f2131700 (LWP 26544) 0x00007fe5f7b570f0 in xmlXPathNodeSetMergeAndClear ()
from /opt/gitlab/embedded/service/gem/ruby/2.1.0/gems/nokogiri-1.6.7.2/lib/nokogiri/nokogiri.so
...
```
If you see a suspicious thread, like the Nokogiri one in the example, you may want
to get more information:
```plaintext
thread 21
bt
# Example output
#0 0x00007ff0d6afe111 in xmlXPathNodeSetMergeAndClear () from /opt/gitlab/embedded/service/gem/ruby/2.1.0/gems/nokogiri-1.6.7.2/lib/nokogiri/nokogiri.so
#1 0x00007ff0d6b0b836 in xmlXPathNodeCollectAndTest () from /opt/gitlab/embedded/service/gem/ruby/2.1.0/gems/nokogiri-1.6.7.2/lib/nokogiri/nokogiri.so
#2 0x00007ff0d6b09037 in xmlXPathCompOpEval () from /opt/gitlab/embedded/service/gem/ruby/2.1.0/gems/nokogiri-1.6.7.2/lib/nokogiri/nokogiri.so
#3 0x00007ff0d6b09017 in xmlXPathCompOpEval () from /opt/gitlab/embedded/service/gem/ruby/2.1.0/gems/nokogiri-1.6.7.2/lib/nokogiri/nokogiri.so
#4 0x00007ff0d6b092e0 in xmlXPathCompOpEval () from /opt/gitlab/embedded/service/gem/ruby/2.1.0/gems/nokogiri-1.6.7.2/lib/nokogiri/nokogiri.so
#5 0x00007ff0d6b0bc37 in xmlXPathRunEval () from /opt/gitlab/embedded/service/gem/ruby/2.1.0/gems/nokogiri-1.6.7.2/lib/nokogiri/nokogiri.so
#6 0x00007ff0d6b0be5f in xmlXPathEvalExpression () from /opt/gitlab/embedded/service/gem/ruby/2.1.0/gems/nokogiri-1.6.7.2/lib/nokogiri/nokogiri.so
#7 0x00007ff0d6a97dc3 in evaluate (argc=2, argv=0x1022d058, self=<value optimized out>) at xml_xpath_context.c:221
#8 0x00007ff0daeab0ea in vm_call_cfunc_with_frame (th=0x1022a4f0, reg_cfp=0x1032b810, ci=<value optimized out>) at vm_insnhelper.c:1510
```
To output a backtrace from all threads at once:
```plaintext
set pagination off
thread apply all bt
```
Once you're done debugging with `gdb`, be sure to detach from the process and
exit:
```plaintext
detach
exit
```
## Sidekiq kill signals
`TTIN` was described previously as the signal to print backtraces for logging. However,
Sidekiq responds to other signals as well. For example, `TSTP` and `TERM` can be used
to gracefully shut Sidekiq down. See
[the Sidekiq signals documentation](https://github.com/mperham/sidekiq/wiki/Signals#ttin).
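For example, to quiet a Sidekiq process (stop it fetching new jobs) and then shut it down gracefully, giving in-flight jobs up to the shutdown timeout to finish:

```shell
kill -TSTP <sidekiq_pid>   # stop fetching new jobs
kill -TERM <sidekiq_pid>   # graceful shutdown
```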
## Check for blocking queries
Sometimes Sidekiq processes jobs so quickly that it causes database contention.
Check for blocking queries when the backtraces described previously
show that many threads are stuck in the database adapter.
The PostgreSQL wiki has details on the query you can run to see blocking
queries. The query is different based on PostgreSQL version. See
[Lock Monitoring](https://wiki.postgresql.org/wiki/Lock_Monitoring) for
the query details.
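As a minimal sketch (assuming PostgreSQL 9.6 or later, which provides `pg_blocking_pids()`), the following query lists sessions that are currently blocked and the PIDs blocking them. You can run it from `sudo gitlab-psql`:

```sql
SELECT pid,
       pg_blocking_pids(pid) AS blocked_by,
       state,
       wait_event_type,
       query
FROM pg_stat_activity
WHERE cardinality(pg_blocking_pids(pid)) > 0;
```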
## Managing Sidekiq queues
You can use the [Sidekiq API](https://github.com/mperham/sidekiq/wiki/API)
to perform a number of troubleshooting steps on Sidekiq.
These are administrative commands and should only be used when the
administration interface is not suitable, for example due to the scale of the installation.
All of these commands should be run using `gitlab-rails console`.
### View the queue size
```ruby
Sidekiq::Queue.new("pipeline_processing:build_queue").size
```
### Enumerate all enqueued jobs
```ruby
queue = Sidekiq::Queue.new("chaos:chaos_sleep")
queue.each do |job|
# job.klass # => 'MyWorker'
# job.args # => [1, 2, 3]
# job.jid # => jid
# job.queue # => chaos:chaos_sleep
# job["retry"] # => 3
# job.item # => {
# "class"=>"Chaos::SleepWorker",
# "args"=>[1000],
# "retry"=>3,
# "queue"=>"chaos:chaos_sleep",
# "backtrace"=>true,
# "queue_namespace"=>"chaos",
# "jid"=>"39bc482b823cceaf07213523",
# "created_at"=>1566317076.266069,
# "correlation_id"=>"c323b832-a857-4858-b695-672de6f0e1af",
# "enqueued_at"=>1566317076.26761},
# }
# job.delete if job.jid == 'abcdef1234567890'
end
```
### Enumerate currently running jobs
```ruby
workers = Sidekiq::Workers.new
workers.each do |process_id, thread_id, work|
# process_id is a unique identifier per Sidekiq process
# thread_id is a unique identifier per thread
# work is a Hash which looks like:
# {"queue"=>"chaos:chaos_sleep",
# "payload"=>
# { "class"=>"Chaos::SleepWorker",
# "args"=>[1000],
# "retry"=>3,
# "queue"=>"chaos:chaos_sleep",
# "backtrace"=>true,
# "queue_namespace"=>"chaos",
# "jid"=>"b2a31e3eac7b1a99ff235869",
# "created_at"=>1566316974.9215662,
# "correlation_id"=>"e484fb26-7576-45f9-bf21-b99389e1c53c",
# "enqueued_at"=>1566316974.9229589},
# "run_at"=>1566316974}],
end
```
### Remove Sidekiq jobs for given parameters (destructive)
The general method to remove jobs conditionally is the following command, which
deletes jobs that are queued but not yet started. Running jobs cannot be removed this way.
```ruby
queue = Sidekiq::Queue.new('<queue-name>')
queue.each { |job| job.delete if <condition>}
```
For canceling running jobs, see the section below.
In this method, `<queue-name>` is the name of the queue containing the jobs you want to delete, and `<condition>` determines which jobs are deleted.
Commonly, `<condition>` references the job arguments, which depend on the type of job in question. To find the arguments for a specific queue, look at the `perform` function of the related worker file, commonly found at `/app/workers/<queue-name>_worker.rb`.
For example, `repository_import` has `project_id` as its job argument, while `update_merge_requests` has `project_id, user_id, oldrev, newrev, ref`.
Arguments must be referenced by their sequence ID using `job.args[<id>]`, because `job.args` is a list of all arguments provided to the Sidekiq job.
Here are some examples:
```ruby
queue = Sidekiq::Queue.new('update_merge_requests')
# In this example, we want to remove any update_merge_requests jobs
# for the Project with ID 125 and ref `ref/heads/my_branch`
queue.each { |job| job.delete if job.args[0] == 125 and job.args[4] == 'ref/heads/my_branch' }
```
```ruby
# Cancelling jobs like: `RepositoryImportWorker.new.perform_async(100)`
id_list = [100]
queue = Sidekiq::Queue.new('repository_import')
queue.each do |job|
job.delete if id_list.include?(job.args[0])
end
```
### Remove specific job ID (destructive)
```ruby
queue = Sidekiq::Queue.new('repository_import')
queue.each do |job|
job.delete if job.jid == 'my-job-id'
end
```
### Remove Sidekiq jobs for a specific worker (destructive)
```ruby
queue = Sidekiq::Queue.new("default")
queue.each do |job|
if job.klass == "TodosDestroyer::PrivateFeaturesWorker"
# Uncomment the line below to actually delete matching jobs
# job.delete
puts "Matched job ID #{job.jid}"
end
end
```
## Canceling running jobs (destructive)
This is a highly risky operation; use it only as a last resort.
It might result in data corruption, because the job
is interrupted mid-execution and there is no guarantee
that transactions are properly rolled back.
```ruby
Gitlab::SidekiqDaemon::Monitor.cancel_job('job-id')
```
This requires Sidekiq to be run with the `SIDEKIQ_MONITOR_WORKER=1`
environment variable.
The interrupt is performed using `Thread.raise`, which
has a number of drawbacks, as mentioned in [Why Ruby's Timeout is dangerous (and `Thread.raise` is terrifying)](https://jvns.ca/blog/2015/11/27/why-rubys-timeout-is-dangerous-and-thread-dot-raise-is-terrifying/#timeout-how-it-works-and-why-thread-raise-is-terrifying).
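The core risk can be demonstrated with plain Ruby (a standalone illustration, not GitLab code): the interrupted thread never reaches the line that would restore a consistent state.

```ruby
# A thread simulating a job: it enters an inconsistent state, "works",
# and only restores consistency when it completes normally.
state = :consistent

job = Thread.new do
  state = :in_progress
  sleep               # simulate long-running work
  state = :consistent # never reached when the thread is interrupted
end
job.report_on_exception = false # keep the demo output clean

sleep 0.01 until job.stop? # wait until the job thread is blocked
job.raise(Interrupt)       # interrupt mid-execution, as cancel_job does

begin
  job.join
rescue Interrupt
  # the exception raised into the thread surfaces when joining it
end

puts state # => in_progress -- the "rollback" line never ran
```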
## Manually trigger a cron job
By visiting `/admin/background_jobs`, you can see what jobs are scheduled, running, or pending on your instance.
You can trigger a cron job from the UI by selecting **Enqueue Now**. To trigger a cron job programmatically, first open a [Rails console](../operations/rails_console.md).
To find the cron job you want to test:
```ruby
job = Sidekiq::Cron::Job.find('job-name')
# get status of job:
job.status
# enqueue job right now!
job.enque!
```
For example, to trigger the `update_all_mirrors_worker` cron job that updates the repository mirrors:
```ruby
irb(main):001:0> job = Sidekiq::Cron::Job.find('update_all_mirrors_worker')
=>
#<Sidekiq::Cron::Job:0x00007f147f84a1d0
...
irb(main):002:0> job.status
=> "enabled"
irb(main):003:0> job.enque!
=> 257
```
The list of available jobs can be found in the [workers](https://gitlab.com/gitlab-org/gitlab/-/tree/master/app/workers) directory.
For more information about Sidekiq jobs, see the [Sidekiq-cron](https://github.com/sidekiq-cron/sidekiq-cron#work-with-job) documentation.
## Disabling cron jobs
You can disable any Sidekiq cron jobs by visiting the [Monitoring section in the **Admin** area](../admin_area.md#monitoring-section). You can also perform the same action using the command line and [Rails Runner](../operations/rails_console.md#using-the-rails-runner).
To disable all cron jobs:
```shell
sudo gitlab-rails runner 'Sidekiq::Cron::Job.all.map(&:disable!)'
```
To enable all cron jobs:
```shell
sudo gitlab-rails runner 'Sidekiq::Cron::Job.all.map(&:enable!)'
```
If you wish to enable only a subset of the jobs at a time, you can use name matching. For example, to enable only jobs with `geo` in the name:
```shell
sudo gitlab-rails runner 'Sidekiq::Cron::Job.all.select{ |j| j.name.match("geo") }.map(&:enable!)'
```
## Clearing a Sidekiq job deduplication idempotency key
Occasionally, jobs that are expected to run (for example, cron jobs) are observed not to run at all. When checking the logs, you might find jobs that did not run logged with `"job_status": "deduplicated"`.
This can happen when a job fails and the idempotency key is not cleared properly. For example, [stopping Sidekiq kills any remaining jobs after 25 seconds](https://gitlab.com/gitlab-org/omnibus-gitlab/-/issues/4918).
[By default, the key expires after 6 hours](https://gitlab.com/gitlab-org/gitlab/-/blob/87c92f06eb92716a26679cd339f3787ae7edbdc3/lib/gitlab/sidekiq_middleware/duplicate_jobs/duplicate_job.rb#L23),
but if you want to clear the idempotency key immediately, follow these steps (the example provided is for `Geo::VerificationBatchWorker`):
1. Find the worker class and `args` of the job in the Sidekiq logs:
```plaintext
{ ... "class":"Geo::VerificationBatchWorker","args":["container_repository"] ... }
```
1. Start a [Rails console session](../operations/rails_console.md#starting-a-rails-console-session).
1. Run the following snippet:
```ruby
worker_class = Geo::VerificationBatchWorker
args = ["container_repository"]
dj = Gitlab::SidekiqMiddleware::DuplicateJobs::DuplicateJob.new({ 'class' => worker_class.name, 'args' => args }, worker_class.queue)
dj.send(:idempotency_key)
dj.delete!
```
## CPU saturation in Redis caused by Sidekiq BRPOP calls
Sidekiq `BRPOP` calls can cause CPU usage to increase on Redis.
Increase the [`SIDEKIQ_SEMI_RELIABLE_FETCH_TIMEOUT` environment variable](../environment_variables.md) to reduce CPU usage on Redis.
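For example, on a Linux package installation you could raise the timeout in `/etc/gitlab/gitlab.rb`, following the same pattern as other Sidekiq environment variables (the value is in seconds; `10` is an illustrative choice, not a tuned recommendation):

```ruby
gitlab_rails['env'] = {"SIDEKIQ_SEMI_RELIABLE_FETCH_TIMEOUT" => "10"}
```

Reconfigure and restart Sidekiq afterwards for the change to take effect.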
## Error: `OpenSSL::Cipher::CipherError`
If you receive error messages like:
```plaintext
"OpenSSL::Cipher::CipherError","exception.message":"","exception.backtrace":["encryptor (3.0.0) lib/encryptor.rb:98:in `final'","encryptor (3.0.0) lib/encryptor.rb:98:in `crypt'","encryptor (3.0.0) lib/encryptor.rb:49:in `decrypt'"
```
This error means that the processes are unable to decrypt encrypted data stored in the GitLab database, which indicates a problem with your `/etc/gitlab/gitlab-secrets.json` file. Ensure that you copied the file from your main GitLab node to your Sidekiq nodes.
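To confirm the file matches across nodes, compare its checksum on the main GitLab node and on each Sidekiq node; the output must be identical everywhere:

```shell
sudo sha256sum /etc/gitlab/gitlab-secrets.json
```

After copying a corrected file into place, run `sudo gitlab-ctl reconfigure` and restart Sidekiq.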
## Related topics
- [Elasticsearch workers overload Sidekiq](../../integration/elasticsearch/troubleshooting/migrations.md#elasticsearch-workers-overload-sidekiq).
---
stage: Data access
group: Durability
info: To determine the technical writer assigned to the Stage/Group associated with
this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Troubleshooting Sidekiq
---
{{< details >}}
- Tier: Free, Premium, Ultimate
- Offering: GitLab Self-Managed
{{< /details >}}
Sidekiq is the background job processor GitLab uses to asynchronously run
tasks. When things go wrong it can be difficult to troubleshoot. These
situations also tend to be high-pressure because a production system job queue
may be filling up. Users notice when this happens because new branches
may not show up and merge requests may not be updated. The following are some
troubleshooting steps to help you diagnose the bottleneck.
GitLab administrators/users should consider working through these
debug steps with GitLab Support so the backtraces can be analyzed by our team.
It may reveal a bug or necessary improvement in GitLab.
In any of the backtraces, be wary of suspecting cases where every
thread appears to be waiting in the database, in Redis, or on a mutex.
This **may** mean there's contention in the database, for example,
but look for one thread that is different from the rest. This other thread
may be using all available CPU, or have a Ruby Global Interpreter Lock,
preventing other threads from continuing.
## Log arguments to Sidekiq jobs
Some arguments passed to Sidekiq jobs are logged by default.
To avoid logging sensitive information (for instance, password reset tokens),
GitLab logs numeric arguments for all workers, with overrides for some specific
workers where their arguments are not sensitive.
Example log output:
```json
{"severity":"INFO","time":"2020-06-08T14:37:37.892Z","class":"AdminEmailsWorker","args":["[FILTERED]","[FILTERED]","[FILTERED]"],"retry":3,"queue":"admin_emails","backtrace":true,"jid":"9e35e2674ac7b12d123e13cc","created_at":"2020-06-08T14:37:37.373Z","meta.user":"root","meta.caller_id":"Admin::EmailsController#create","correlation_id":"37D3lArJmT1","uber-trace-id":"2d942cc98cc1b561:6dc94409cfdd4d77:9fbe19bdee865293:1","enqueued_at":"2020-06-08T14:37:37.410Z","pid":65011,"message":"AdminEmailsWorker JID-9e35e2674ac7b12d123e13cc: done: 0.48085 sec","job_status":"done","scheduling_latency_s":0.001012,"redis_calls":9,"redis_duration_s":0.004608,"redis_read_bytes":696,"redis_write_bytes":6141,"duration_s":0.48085,"cpu_s":0.308849,"completed_at":"2020-06-08T14:37:37.892Z","db_duration_s":0.010742}
{"severity":"INFO","time":"2020-06-08T14:37:37.894Z","class":"ActiveJob::QueueAdapters::SidekiqAdapter::JobWrapper","wrapped":"ActionMailer::MailDeliveryJob","queue":"mailers","args":["[FILTERED]"],"retry":3,"backtrace":true,"jid":"e47a4f6793d475378432e3c8","created_at":"2020-06-08T14:37:37.884Z","meta.user":"root","meta.caller_id":"AdminEmailsWorker","correlation_id":"37D3lArJmT1","uber-trace-id":"2d942cc98cc1b561:29344de0f966446d:5c3b0e0e1bef987b:1","enqueued_at":"2020-06-08T14:37:37.885Z","pid":65011,"message":"ActiveJob::QueueAdapters::SidekiqAdapter::JobWrapper JID-e47a4f6793d475378432e3c8: start","job_status":"start","scheduling_latency_s":0.009473}
{"severity":"INFO","time":"2020-06-08T14:39:50.648Z","class":"NewIssueWorker","args":["455","1"],"retry":3,"queue":"new_issue","backtrace":true,"jid":"a24af71f96fd129ec47f5d1e","created_at":"2020-06-08T14:39:50.643Z","meta.user":"root","meta.project":"h5bp/html5-boilerplate","meta.root_namespace":"h5bp","meta.caller_id":"Projects::IssuesController#create","correlation_id":"f9UCZHqhuP7","uber-trace-id":"28f65730f99f55a3:a5d2b62dec38dffc:48ddd092707fa1b7:1","enqueued_at":"2020-06-08T14:39:50.646Z","pid":65011,"message":"NewIssueWorker JID-a24af71f96fd129ec47f5d1e: start","job_status":"start","scheduling_latency_s":0.001144}
```
When using [Sidekiq JSON logging](../logs/_index.md#sidekiqlog),
arguments logs are limited to a maximum size of 10 kilobytes of text;
any arguments after this limit are discarded and replaced with a
single argument containing the string `"..."`.
You can set `SIDEKIQ_LOG_ARGUMENTS` [environment variable](https://docs.gitlab.com/omnibus/settings/environment-variables.html)
to `0` (false) to disable argument logging.
Example:
```ruby
gitlab_rails['env'] = {"SIDEKIQ_LOG_ARGUMENTS" => "0"}
```
## Investigating Sidekiq queue backlogs or slow performance
Symptoms of slow Sidekiq performance include problems with merge request status updates,
and delays before CI pipelines start running.
Potential causes include:
- The GitLab instance may need more Sidekiq workers. By default, a single-node Linux package installation
runs one worker, restricting the execution of Sidekiq jobs to a maximum of one CPU core.
[Read more about running multiple Sidekiq workers](extra_sidekiq_processes.md).
- The instance is configured with more Sidekiq workers, but most of the extra workers are
not configured to run any job that is queued. This can result in a backlog of jobs
when the instance is busy, if the workload has changed in the months or years since
the workers were configured, or as a result of GitLab product changes.
Gather data on the state of the Sidekiq workers with the following Ruby script.
1. Create the script:
```ruby
cat > /var/opt/gitlab/sidekiqcheck.rb <<EOF
require 'sidekiq/monitor'
Sidekiq::Monitor::Status.new.display('overview')
Sidekiq::Monitor::Status.new.display('processes'); nil
Sidekiq::Monitor::Status.new.display('queues'); nil
puts "----------- workers ----------- "
workers = Sidekiq::Workers.new
workers.each do |_process_id, _thread_id, work|
pp work
end
puts "----------- Queued Jobs ----------- "
Sidekiq::Queue.all.each do |queue|
queue.each do |job|
pp job
end
end ;nil
puts "----------- done! ----------- "
EOF
```
1. Execute and capture the output:
```shell
sudo gitlab-rails runner /var/opt/gitlab/sidekiqcheck.rb > /tmp/sidekiqcheck_$(date '+%Y%m%d-%H:%M').out
```
If the performance issue is intermittent:
- Run this in a cron job every five minutes. Write the files to a location with enough space: allow for at least 500 KB per file.
```shell
cat > /etc/cron.d/sidekiqcheck <<EOF
*/5 * * * * root /opt/gitlab/bin/gitlab-rails runner /var/opt/gitlab/sidekiqcheck.rb > /tmp/sidekiqcheck_$(date '+\%Y\%m\%d-\%H:\%M').out 2>&1
EOF
```
- Refer back to the data to see what went wrong.
1. Analyze the output. The following commands assume that you have a directory of output files.
1. `grep 'Busy: ' *` shows how many jobs were being run. `grep 'Enqueued: ' *`
shows the backlog of work at that time.
1. Look at the number of busy threads across the workers in samples where Sidekiq is under load:
```shell
ls | while read f ; do if grep -q 'Enqueued: 0' $f; then :
else echo $f; egrep 'Busy:|Enqueued:|---- Processes' $f
grep 'Threads:' $f ; fi
done | more
```
Example output:
```plaintext
sidekiqcheck_20221024-14:00.out
Busy: 47
Enqueued: 363
---- Processes (13) ----
Threads: 30 (0 busy)
Threads: 30 (0 busy)
Threads: 30 (0 busy)
Threads: 30 (0 busy)
Threads: 23 (0 busy)
Threads: 30 (0 busy)
Threads: 30 (0 busy)
Threads: 30 (0 busy)
Threads: 30 (0 busy)
Threads: 30 (0 busy)
Threads: 30 (0 busy)
Threads: 30 (24 busy)
Threads: 30 (23 busy)
```
- In this output file, 47 threads were busy, and there was a backlog of 363 jobs.
- Of the 13 worker processes, only two were busy.
- This indicates that the other workers are configured too specifically.
- Look at the full output to work out which workers were busy.
Correlate with your `sidekiq_queues` configuration in `gitlab.rb`.
- An overloaded single-worker environment might look like this:
```plaintext
sidekiqcheck_20221024-14:00.out
Busy: 25
Enqueued: 363
---- Processes (1) ----
Threads: 25 (25 busy)
```
1. Look at the `---- Queues (xxx) ----` section of the output file to
determine what jobs were queued up at the time.
1. The files also include low level details about the state of Sidekiq at the time.
This could be useful for identifying where spikes in workload are coming from.
- The `----------- workers -----------` section details the jobs that make up the
`Busy` count in the summary.
- The `----------- Queued Jobs -----------` section provides details on
jobs that are `Enqueued`.
## Thread dump
Send the Sidekiq process ID the `TTIN` signal to output thread
backtraces in the log file.
```shell
kill -TTIN <sidekiq_pid>
```
Check in `/var/log/gitlab/sidekiq/current` or `$GITLAB_HOME/log/sidekiq.log` for
the backtrace output. The backtraces are lengthy and generally start with
several `WARN` level messages. Here's an example of a single thread's backtrace:
```plaintext
2016-04-13T06:21:20.022Z 31517 TID-orn4urby0 WARN: ActiveRecord::RecordNotFound: Couldn't find Note with 'id'=3375386
2016-04-13T06:21:20.022Z 31517 TID-orn4urby0 WARN: /opt/gitlab/embedded/service/gem/ruby/2.1.0/gems/activerecord-4.2.5.2/lib/active_record/core.rb:155:in `find'
/opt/gitlab/embedded/service/gitlab-rails/app/workers/new_note_worker.rb:7:in `perform'
/opt/gitlab/embedded/service/gem/ruby/2.1.0/gems/sidekiq-4.0.1/lib/sidekiq/processor.rb:150:in `execute_job'
/opt/gitlab/embedded/service/gem/ruby/2.1.0/gems/sidekiq-4.0.1/lib/sidekiq/processor.rb:132:in `block (2 levels) in process'
/opt/gitlab/embedded/service/gem/ruby/2.1.0/gems/sidekiq-4.0.1/lib/sidekiq/middleware/chain.rb:127:in `block in invoke'
/opt/gitlab/embedded/service/gitlab-rails/lib/gitlab/sidekiq_middleware/memory_killer.rb:17:in `call'
/opt/gitlab/embedded/service/gem/ruby/2.1.0/gems/sidekiq-4.0.1/lib/sidekiq/middleware/chain.rb:129:in `block in invoke'
/opt/gitlab/embedded/service/gitlab-rails/lib/gitlab/sidekiq_middleware/arguments_logger.rb:6:in `call'
...
```
In some cases Sidekiq may be hung and unable to respond to the `TTIN` signal.
Move on to other troubleshooting methods if this happens.
## Ruby profiling with `rbspy`
[rbspy](https://rbspy.github.io) is an easy to use and low-overhead Ruby profiler that can be used to create
flamegraph-style diagrams of CPU usage by Ruby processes.
No changes to GitLab are required to use it and it has no dependencies. To install it:
1. Download the binary from the [`rbspy` releases page](https://github.com/rbspy/rbspy/releases).
1. Make the binary executable.
To profile a Sidekiq worker for one minute, run:
```shell
sudo ./rbspy record --pid <sidekiq_pid> --duration 60 --file /tmp/sidekiq_profile.svg
```

In this example of a flamegraph generated by `rbspy`, almost all of the Sidekiq process's time is spent in `rev_parse`, a native C
function in Rugged. In the stack, we can see `rev_parse` is being called by the `ExpirePipelineCacheWorker`.
`rbspy` requires additional [capabilities](https://man7.org/linux/man-pages/man7/capabilities.7.html)
in [containerized environments](https://rbspy.github.io/using-rbspy/index.html#containers).
It requires at least the `SYS_PTRACE` capability, otherwise it terminates with a `permission denied` error.
{{< tabs >}}
{{< tab title="Kubernetes" >}}
```yaml
securityContext:
capabilities:
add:
- SYS_PTRACE
```
{{< /tab >}}
{{< tab title="Docker" >}}
```shell
docker run --cap-add SYS_PTRACE [...]
```
{{< /tab >}}
{{< tab title="Docker Compose" >}}
```yaml
services:
ruby_container_name:
# ...
cap_add:
- SYS_PTRACE
```
{{< /tab >}}
{{< /tabs >}}
## Process profiling with `perf`
Linux has a process profiling tool called `perf` that is helpful when a certain
process is eating up a lot of CPU. If you see high CPU usage and Sidekiq isn't
responding to the `TTIN` signal, this is a good next step.
If `perf` is not installed on your system, install it with `apt-get` or `yum`:
```shell
# Debian
sudo apt-get install linux-tools
# Ubuntu (may require these additional Kernel packages)
sudo apt-get install linux-tools-common linux-tools-generic linux-tools-`uname -r`
# Red Hat/CentOS
sudo yum install perf
```
Run `perf` against the Sidekiq PID:
```shell
sudo perf record -p <sidekiq_pid>
```
Let this run for 30-60 seconds and then press Ctrl-C. Then view the `perf` report:
```shell
$ sudo perf report
# Sample output
Samples: 348K of event 'cycles', Event count (approx.): 280908431073
97.69% ruby nokogiri.so [.] xmlXPathNodeSetMergeAndClear
0.18% ruby libruby.so.2.1.0 [.] objspace_malloc_increase
0.12% ruby libc-2.12.so [.] _int_malloc
0.10% ruby libc-2.12.so [.] _int_free
```
The sample output from the `perf` report shows that 97% of the CPU time is
spent inside Nokogiri, in `xmlXPathNodeSetMergeAndClear`. With a result this
clear-cut, the next step is to investigate which GitLab job uses Nokogiri and
XPath. Combine this with `TTIN` or `gdb` output to find the corresponding Ruby
code.
## The GNU Project Debugger (`gdb`)
`gdb` can be another effective tool for debugging Sidekiq. It gives you a more
interactive way to look at each thread and see what's causing problems.
Attaching to a process with `gdb` suspends the standard operation
of the process (Sidekiq does not process jobs while `gdb` is attached).
Start by attaching to the Sidekiq PID:
```shell
gdb -p <sidekiq_pid>
```
Then gather information on all the threads:
```plaintext
info threads
# Example output
30 Thread 0x7fe5fbd63700 (LWP 26060) 0x0000003f7cadf113 in poll () from /lib64/libc.so.6
29 Thread 0x7fe5f2b3b700 (LWP 26533) 0x0000003f7ce0b68c in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
28 Thread 0x7fe5f2a3a700 (LWP 26534) 0x0000003f7ce0ba5e in pthread_cond_timedwait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
27 Thread 0x7fe5f2939700 (LWP 26535) 0x0000003f7ce0b68c in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
26 Thread 0x7fe5f2838700 (LWP 26537) 0x0000003f7ce0b68c in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
25 Thread 0x7fe5f2737700 (LWP 26538) 0x0000003f7ce0b68c in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
24 Thread 0x7fe5f2535700 (LWP 26540) 0x0000003f7ce0b68c in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
23 Thread 0x7fe5f2434700 (LWP 26541) 0x0000003f7ce0b68c in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
22 Thread 0x7fe5f2232700 (LWP 26543) 0x0000003f7ce0b68c in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
21 Thread 0x7fe5f2131700 (LWP 26544) 0x00007fe5f7b570f0 in xmlXPathNodeSetMergeAndClear ()
from /opt/gitlab/embedded/service/gem/ruby/2.1.0/gems/nokogiri-1.6.7.2/lib/nokogiri/nokogiri.so
...
```
If you see a suspicious thread, like the Nokogiri one in the example, you may want
to get more information:
```plaintext
thread 21
bt
# Example output
#0 0x00007ff0d6afe111 in xmlXPathNodeSetMergeAndClear () from /opt/gitlab/embedded/service/gem/ruby/2.1.0/gems/nokogiri-1.6.7.2/lib/nokogiri/nokogiri.so
#1 0x00007ff0d6b0b836 in xmlXPathNodeCollectAndTest () from /opt/gitlab/embedded/service/gem/ruby/2.1.0/gems/nokogiri-1.6.7.2/lib/nokogiri/nokogiri.so
#2 0x00007ff0d6b09037 in xmlXPathCompOpEval () from /opt/gitlab/embedded/service/gem/ruby/2.1.0/gems/nokogiri-1.6.7.2/lib/nokogiri/nokogiri.so
#3 0x00007ff0d6b09017 in xmlXPathCompOpEval () from /opt/gitlab/embedded/service/gem/ruby/2.1.0/gems/nokogiri-1.6.7.2/lib/nokogiri/nokogiri.so
#4 0x00007ff0d6b092e0 in xmlXPathCompOpEval () from /opt/gitlab/embedded/service/gem/ruby/2.1.0/gems/nokogiri-1.6.7.2/lib/nokogiri/nokogiri.so
#5 0x00007ff0d6b0bc37 in xmlXPathRunEval () from /opt/gitlab/embedded/service/gem/ruby/2.1.0/gems/nokogiri-1.6.7.2/lib/nokogiri/nokogiri.so
#6 0x00007ff0d6b0be5f in xmlXPathEvalExpression () from /opt/gitlab/embedded/service/gem/ruby/2.1.0/gems/nokogiri-1.6.7.2/lib/nokogiri/nokogiri.so
#7 0x00007ff0d6a97dc3 in evaluate (argc=2, argv=0x1022d058, self=<value optimized out>) at xml_xpath_context.c:221
#8 0x00007ff0daeab0ea in vm_call_cfunc_with_frame (th=0x1022a4f0, reg_cfp=0x1032b810, ci=<value optimized out>) at vm_insnhelper.c:1510
```
To output a backtrace from all threads at once:
```plaintext
set pagination off
thread apply all bt
```
Once you're done debugging with `gdb`, be sure to detach from the process and
exit:
```plaintext
detach
exit
```
## Sidekiq kill signals
`TTIN` was described previously as the signal that prints backtraces to the log;
however, Sidekiq responds to other signals as well. For example, `TSTP` and `TERM`
can be used to shut Sidekiq down gracefully. See
[the Sidekiq signals documentation](https://github.com/mperham/sidekiq/wiki/Signals#ttin).
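The quiet-then-terminate sequence can be sketched with a stand-in child process that traps `TSTP` and `TERM` the way Sidekiq does (in production you would send the same signals to your real Sidekiq PID instead, for example with `kill -TSTP <sidekiq_pid>`):

```ruby
require 'rbconfig'

# Stand-in child that traps TSTP and TERM the way Sidekiq does; in
# production you would signal your real Sidekiq PID instead.
pid = Process.spawn(RbConfig.ruby, '-e',
  'Signal.trap("TSTP") { }; Signal.trap("TERM") { exit 0 }; loop { sleep 1 }')
sleep 1 # give the child time to install its signal handlers

Process.kill('TSTP', pid) # quiet: Sidekiq stops fetching new jobs
Process.kill('TERM', pid) # terminate: Sidekiq shuts down gracefully
_, status = Process.wait2(pid)
puts status.exitstatus # => 0
```

A real Sidekiq process finishes its in-flight jobs between the quiet and terminate steps; this sketch only demonstrates the signal handling itself.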
## Check for blocking queries
Sometimes Sidekiq processes jobs so quickly that it causes database contention.
Check for blocking queries when the backtraces described previously
show that many threads are stuck in the database adapter.
The PostgreSQL wiki has details on the query you can run to see blocking
queries. The query is different based on PostgreSQL version. See
[Lock Monitoring](https://wiki.postgresql.org/wiki/Lock_Monitoring) for
the query details.
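As a quick first check (a minimal sketch using standard `pg_stat_activity` columns; the wiki queries are more thorough and vary by PostgreSQL version), you can list sessions currently waiting on a lock from a Rails console:

```ruby
# Minimal check: list backend sessions currently waiting on a lock.
# The queries on the PostgreSQL wiki also identify the blocking session.
rows = ActiveRecord::Base.connection.execute(<<~SQL)
  SELECT pid, state, wait_event_type, wait_event, left(query, 80) AS query
  FROM pg_stat_activity
  WHERE wait_event_type = 'Lock'
SQL
rows.each { |row| puts row.inspect }
```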
## Managing Sidekiq queues
You can use the [Sidekiq API](https://github.com/mperham/sidekiq/wiki/API)
to perform a number of troubleshooting steps on Sidekiq.
These are administrative commands and should only be used if the current
administration interface is not suitable due to the scale of the installation.
Run all of these commands from a `gitlab-rails console`.
### View the queue size
```ruby
Sidekiq::Queue.new("pipeline_processing:build_queue").size
```
### Enumerate all enqueued jobs
```ruby
queue = Sidekiq::Queue.new("chaos:chaos_sleep")
queue.each do |job|
  # job.klass # => 'MyWorker'
  # job.args # => [1, 2, 3]
  # job.jid # => jid
  # job.queue # => chaos:chaos_sleep
  # job["retry"] # => 3
  # job.item # => {
  #   "class"=>"Chaos::SleepWorker",
  #   "args"=>[1000],
  #   "retry"=>3,
  #   "queue"=>"chaos:chaos_sleep",
  #   "backtrace"=>true,
  #   "queue_namespace"=>"chaos",
  #   "jid"=>"39bc482b823cceaf07213523",
  #   "created_at"=>1566317076.266069,
  #   "correlation_id"=>"c323b832-a857-4858-b695-672de6f0e1af",
  #   "enqueued_at"=>1566317076.26761
  # }
  # job.delete if job.jid == 'abcdef1234567890'
end
```
### Enumerate currently running jobs
```ruby
workers = Sidekiq::Workers.new
workers.each do |process_id, thread_id, work|
  # process_id is a unique identifier per Sidekiq process
  # thread_id is a unique identifier per thread
  # work is a Hash which looks like:
  # {"queue"=>"chaos:chaos_sleep",
  #  "payload"=>
  #   {"class"=>"Chaos::SleepWorker",
  #    "args"=>[1000],
  #    "retry"=>3,
  #    "queue"=>"chaos:chaos_sleep",
  #    "backtrace"=>true,
  #    "queue_namespace"=>"chaos",
  #    "jid"=>"b2a31e3eac7b1a99ff235869",
  #    "created_at"=>1566316974.9215662,
  #    "correlation_id"=>"e484fb26-7576-45f9-bf21-b99389e1c53c",
  #    "enqueued_at"=>1566316974.9229589},
  #  "run_at"=>1566316974}
end
```
### Remove Sidekiq jobs for given parameters (destructive)
The general method to kill jobs conditionally is the following command, which
removes jobs that are queued but not started. Running jobs cannot be killed.
```ruby
queue = Sidekiq::Queue.new('<queue name>')
queue.each { |job| job.delete if <condition> }
```
See the following section for canceling running jobs.
In the method documented previously, `<queue name>` is the name of the queue that contains the jobs you want to delete, and `<condition>` decides which jobs get deleted.
Commonly, `<condition>` references the job arguments, which depend on the type of job in question. To find the arguments for a specific queue, you can have a look at the `perform` function of the related worker file, commonly found at `/app/workers/<queue-name>_worker.rb`.
For example, `repository_import` has `project_id` as the job argument, while `update_merge_requests` has `project_id, user_id, oldrev, newrev, ref`.
Arguments need to be referenced by their sequence ID using `job.args[<id>]` because `job.args` is a list of all arguments provided to the Sidekiq job.
Here are some examples:
```ruby
queue = Sidekiq::Queue.new('update_merge_requests')
# In this example, we want to remove any update_merge_requests jobs
# for the Project with ID 125 and ref `ref/heads/my_branch`
queue.each { |job| job.delete if job.args[0] == 125 and job.args[4] == 'ref/heads/my_branch' }
```
```ruby
# Cancelling jobs like: `RepositoryImportWorker.new.perform_async(100)`
id_list = [100]
queue = Sidekiq::Queue.new('repository_import')
queue.each do |job|
  job.delete if id_list.include?(job.args[0])
end
```
### Remove specific job ID (destructive)
```ruby
queue = Sidekiq::Queue.new('repository_import')
queue.each do |job|
  job.delete if job.jid == 'my-job-id'
end
```
### Remove Sidekiq jobs for a specific worker (destructive)
```ruby
queue = Sidekiq::Queue.new("default")
queue.each do |job|
  if job.klass == "TodosDestroyer::PrivateFeaturesWorker"
    puts "Matched job ID #{job.jid}"
    # Uncomment the following line to actually delete the matched jobs
    # job.delete
  end
end
```
## Canceling running jobs (destructive)
This is a highly risky operation; use it only as a last resort.
It might result in data corruption because the job is interrupted
mid-execution, and it is not guaranteed that a proper rollback of
transactions is implemented.
```ruby
Gitlab::SidekiqDaemon::Monitor.cancel_job('job-id')
```
This requires Sidekiq to be run with the `SIDEKIQ_MONITOR_WORKER=1`
environment variable.
The interrupt is performed with `Thread.raise`, which
has a number of drawbacks, as described in [Why Ruby's Timeout is dangerous (and `Thread.raise` is terrifying)](https://jvns.ca/blog/2015/11/27/why-rubys-timeout-is-dangerous-and-thread-dot-raise-is-terrifying/#timeout-how-it-works-and-why-thread-raise-is-terrifying).
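The hazard is easy to demonstrate in plain Ruby (an illustration of the mechanism, not GitLab's implementation): the exception injected by `Thread.raise` surfaces at an arbitrary point in the target thread, so any work in flight is abandoned:

```ruby
# Illustration only: Thread.raise injects an exception at an arbitrary
# point in the target thread, which is why canceling a running job can
# leave work half-done.
victim = Thread.new do
  begin
    sleep # stands in for a long-running Sidekiq job
    'finished'
  rescue RuntimeError => e
    "interrupted: #{e.message}"
  end
end

sleep 0.2 # let the thread start "working"
victim.raise(RuntimeError, 'canceled mid-execution')
puts victim.value # => "interrupted: canceled mid-execution"
```

In this toy example the exception lands in a `rescue` block; a real job interrupted between two database writes has no such safety net.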
## Manually trigger a cron job
By visiting `/admin/background_jobs`, you can see which jobs are scheduled, running, or pending on your instance.
You can trigger a cron job from the UI by selecting **Enqueue Now**. To trigger a cron job programmatically, first open a [Rails console](../operations/rails_console.md).
To find the cron job you want to test:
```ruby
job = Sidekiq::Cron::Job.find('job-name')
# get status of job:
job.status
# enqueue job right now!
job.enque!
```
For example, to trigger the `update_all_mirrors_worker` cron job that updates the repository mirrors:
```ruby
irb(main):001:0> job = Sidekiq::Cron::Job.find('update_all_mirrors_worker')
=>
#<Sidekiq::Cron::Job:0x00007f147f84a1d0
...
irb(main):002:0> job.status
=> "enabled"
irb(main):003:0> job.enque!
=> 257
```
The list of available jobs can be found in the [workers](https://gitlab.com/gitlab-org/gitlab/-/tree/master/app/workers) directory.
For more information about Sidekiq jobs, see the [Sidekiq-cron](https://github.com/sidekiq-cron/sidekiq-cron#work-with-job) documentation.
## Disabling cron jobs
You can disable any Sidekiq cron jobs by visiting the [Monitoring section in the **Admin** area](../admin_area.md#monitoring-section). You can also perform the same action using the command line and [Rails Runner](../operations/rails_console.md#using-the-rails-runner).
To disable all cron jobs:
```shell
sudo gitlab-rails runner 'Sidekiq::Cron::Job.all.map(&:disable!)'
```
To enable all cron jobs:
```shell
sudo gitlab-rails runner 'Sidekiq::Cron::Job.all.map(&:enable!)'
```
If you wish to act on only a subset of the jobs at a time, you can use name matching. For example, to disable only the jobs with `geo` in the name:
```shell
sudo gitlab-rails runner 'Sidekiq::Cron::Job.all.select{ |j| j.name.match("geo") }.map(&:disable!)'
```
## Clearing a Sidekiq job deduplication idempotency key
Occasionally, jobs that are expected to run (for example, cron jobs) do not run at all. When checking the logs, you might see that jobs were skipped with `"job_status": "deduplicated"`.
This can happen when a job failed and the idempotency key was not cleared properly. For example, [stopping Sidekiq kills any remaining jobs after 25 seconds](https://gitlab.com/gitlab-org/omnibus-gitlab/-/issues/4918).
[By default, the key expires after 6 hours](https://gitlab.com/gitlab-org/gitlab/-/blob/87c92f06eb92716a26679cd339f3787ae7edbdc3/lib/gitlab/sidekiq_middleware/duplicate_jobs/duplicate_job.rb#L23),
but if you want to clear the idempotency key immediately, follow these steps (the example uses `Geo::VerificationBatchWorker`):
1. Find the worker class and `args` of the job in the Sidekiq logs:
```plaintext
{ ... "class":"Geo::VerificationBatchWorker","args":["container_repository"] ... }
```
1. Start a [Rails console session](../operations/rails_console.md#starting-a-rails-console-session).
1. Run the following snippet:
```ruby
worker_class = Geo::VerificationBatchWorker
args = ["container_repository"]
dj = Gitlab::SidekiqMiddleware::DuplicateJobs::DuplicateJob.new({ 'class' => worker_class.name, 'args' => args }, worker_class.queue)
dj.send(:idempotency_key)
dj.delete!
```
## CPU saturation in Redis caused by Sidekiq BRPOP calls
Sidekiq `BRPOP` calls can cause CPU usage to increase on Redis.
Increase the [`SIDEKIQ_SEMI_RELIABLE_FETCH_TIMEOUT` environment variable](../environment_variables.md) to improve CPU usage on Redis.
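For example, on a Linux package installation, environment variables can be set in `/etc/gitlab/gitlab.rb` (the value of `10` seconds here is illustrative; the right timeout depends on your workload, and the exact method depends on your installation type, as described on the linked environment variables page):

```ruby
# /etc/gitlab/gitlab.rb -- illustrative value only
gitlab_rails['env'] = {
  'SIDEKIQ_SEMI_RELIABLE_FETCH_TIMEOUT' => '10'
}
```

Reconfigure GitLab afterward for the change to take effect.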
## Error: `OpenSSL::Cipher::CipherError`
If you receive error messages like:
```plaintext
"OpenSSL::Cipher::CipherError","exception.message":"","exception.backtrace":["encryptor (3.0.0) lib/encryptor.rb:98:in `final'","encryptor (3.0.0) lib/encryptor.rb:98:in `crypt'","encryptor (3.0.0) lib/encryptor.rb:49:in `decrypt'"
```
This error means that the processes are unable to decrypt encrypted data that is stored in the GitLab database. It indicates a problem with your `/etc/gitlab/gitlab-secrets.json` file. Ensure that you copied the file from your main GitLab node to your Sidekiq nodes.
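One way to confirm the file was copied correctly is to compare checksums between nodes. This is a generic sketch (not a GitLab command): digest `/etc/gitlab/gitlab-secrets.json` on each node and compare the output; a temporary file stands in for the secrets file here.

```ruby
require 'digest'
require 'tempfile'

# Stand-in for /etc/gitlab/gitlab-secrets.json; on a real node, digest
# the actual file and compare the value across your GitLab and Sidekiq nodes.
secrets = Tempfile.new('gitlab-secrets.json')
secrets.write('{"db_key_base":"example"}')
secrets.flush

checksum = Digest::SHA256.file(secrets.path).hexdigest
puts checksum # identical output on every node means the files match
```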
## Related topics
- [Elasticsearch workers overload Sidekiq](../../integration/elasticsearch/troubleshooting/migrations.md#elasticsearch-workers-overload-sidekiq).
---
stage: Data access
group: Durability
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Configure an external Sidekiq instance
description: Configure an external Sidekiq instance.
---
{{< details >}}
- Tier: Free, Premium, Ultimate
- Offering: GitLab Self-Managed
{{< /details >}}
You can configure an external Sidekiq instance by using the Sidekiq that's bundled in the GitLab package. Sidekiq requires connections to the Redis,
PostgreSQL, and Gitaly instances.
## Configure TCP access for PostgreSQL, Gitaly, and Redis on the GitLab instance
By default, GitLab uses UNIX sockets and is not set up to communicate via TCP. To change this:
1. [Configure the packaged PostgreSQL server to listen on TCP/IP](https://docs.gitlab.com/omnibus/settings/database.html#configure-packaged-postgresql-server-to-listen-on-tcpip), adding the Sidekiq server IP addresses to `postgresql['md5_auth_cidr_addresses']`.
1. [Make the bundled Redis reachable via TCP](https://docs.gitlab.com/omnibus/settings/redis.html#making-the-bundled-redis-reachable-via-tcp)
1. Edit the `/etc/gitlab/gitlab.rb` file on your GitLab instance, and add the following:
```ruby
## Gitaly
gitaly['configuration'] = {
  # ...
  #
  # Make Gitaly accept connections on all network interfaces
  listen_addr: '0.0.0.0:8075',
  auth: {
    ## Set up the Gitaly token as a form of authentication because you are accessing Gitaly over the network
    ## https://docs.gitlab.com/ee/administration/gitaly/configure_gitaly.html#about-the-gitaly-token
    token: 'abc123secret',
  },
}

gitlab_rails['gitaly_token'] = 'abc123secret'

# Password to Authenticate Redis
gitlab_rails['redis_password'] = 'redis-password-goes-here'
```
1. Run `reconfigure`:
```shell
sudo gitlab-ctl reconfigure
```
1. Restart the `PostgreSQL` server:
```shell
sudo gitlab-ctl restart postgresql
```
## Set up Sidekiq instance
Find [your reference architecture](../reference_architectures/_index.md#available-reference-architectures) and follow the Sidekiq instance setup details.
## Configure multiple Sidekiq nodes with shared storage
If you run multiple Sidekiq nodes with a shared file storage, such as NFS, you must
specify the UIDs and GIDs to ensure they match between servers. Specifying the UIDs
and GIDs prevents permissions issues in the file system. This advice is similar to the
[advice for Geo setups](../geo/replication/multiple_servers.md#step-4-configure-the-frontend-application-nodes-on-the-geo-secondary-site).
To set up multiple Sidekiq nodes:
1. Edit `/etc/gitlab/gitlab.rb`:
```ruby
user['uid'] = 9000
user['gid'] = 9000
web_server['uid'] = 9001
web_server['gid'] = 9001
registry['uid'] = 9002
registry['gid'] = 9002
```
1. Reconfigure GitLab:
```shell
sudo gitlab-ctl reconfigure
```
## Configure the container registry when using an external Sidekiq
If you're using the container registry and it's running on a different
node than Sidekiq, follow the steps below.
1. Edit `/etc/gitlab/gitlab.rb`, and configure the registry URL:
```ruby
gitlab_rails['registry_api_url'] = "https://registry.example.com"
```
1. Reconfigure GitLab:
```shell
sudo gitlab-ctl reconfigure
```
1. On the instance where the container registry is hosted, copy the `registry.key`
   file to the Sidekiq node.
## Configure the Sidekiq metrics server
If you want to collect Sidekiq metrics, enable the Sidekiq metrics server.
To make metrics available from `localhost:8082/metrics`, configure the metrics server:
1. Edit `/etc/gitlab/gitlab.rb`:
```ruby
sidekiq['metrics_enabled'] = true
sidekiq['listen_address'] = "localhost"
sidekiq['listen_port'] = 8082
# Optionally log all the metrics server logs to log/sidekiq_exporter.log
sidekiq['exporter_log_enabled'] = true
```
1. Reconfigure GitLab:
```shell
sudo gitlab-ctl reconfigure
```
### Enable HTTPS
{{< history >}}
- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/364771) in GitLab 15.2.
{{< /history >}}
To serve metrics via HTTPS instead of HTTP, enable TLS in the exporter settings:
1. Edit `/etc/gitlab/gitlab.rb` to add (or find and uncomment) the following lines:
```ruby
sidekiq['exporter_tls_enabled'] = true
sidekiq['exporter_tls_cert_path'] = "/path/to/certificate.pem"
sidekiq['exporter_tls_key_path'] = "/path/to/private-key.pem"
```
1. Save the file and [reconfigure GitLab](../restart_gitlab.md#reconfigure-a-linux-package-installation)
for the changes to take effect.
When TLS is enabled, the same `port` and `address` are used as described previously.
The metrics server cannot serve both HTTP and HTTPS at the same time.
## Configure health checks
If you use health check probes to observe Sidekiq, enable the Sidekiq health check server.
To make health checks available from `localhost:8092`:
1. Edit `/etc/gitlab/gitlab.rb`:
```ruby
sidekiq['health_checks_enabled'] = true
sidekiq['health_checks_listen_address'] = "localhost"
sidekiq['health_checks_listen_port'] = 8092
```
1. Reconfigure GitLab:
```shell
sudo gitlab-ctl reconfigure
```
For more information about health checks, see the [Sidekiq health check page](sidekiq_health_check.md).
## Configure LDAP and user or group synchronization
If you use LDAP for user and group management, you must add the LDAP configuration to your Sidekiq node as well as the LDAP
synchronization worker. If the LDAP configuration and LDAP synchronization worker are not applied to your Sidekiq node,
users and groups are not automatically synchronized.
For more information about configuring LDAP for GitLab, see:
- [GitLab LDAP configuration documentation](../auth/ldap/_index.md#configure-ldap)
- [LDAP synchronization documentation](../auth/ldap/ldap_synchronization.md#adjust-ldap-user-sync-schedule)
To enable LDAP with the synchronization worker for Sidekiq:
1. Edit `/etc/gitlab/gitlab.rb`:
```ruby
gitlab_rails['ldap_enabled'] = true
gitlab_rails['prevent_ldap_sign_in'] = false
gitlab_rails['ldap_servers'] = {
  'main' => {
    'label' => 'LDAP',
    'host' => 'ldap.mydomain.com',
    'port' => 389,
    'uid' => 'sAMAccountName',
    'encryption' => 'simple_tls',
    'verify_certificates' => true,
    'bind_dn' => '_the_full_dn_of_the_user_you_will_bind_with',
    'password' => '_the_password_of_the_bind_user',
    'tls_options' => {
      'ca_file' => '',
      'ssl_version' => '',
      'ciphers' => '',
      'cert' => '',
      'key' => ''
    },
    'timeout' => 10,
    'active_directory' => true,
    'allow_username_or_email_login' => false,
    'block_auto_created_users' => false,
    'base' => 'dc=example,dc=com',
    'user_filter' => '',
    'attributes' => {
      'username' => ['uid', 'userid', 'sAMAccountName'],
      'email' => ['mail', 'email', 'userPrincipalName'],
      'name' => 'cn',
      'first_name' => 'givenName',
      'last_name' => 'sn'
    },
    'lowercase_usernames' => false,
    # Enterprise Edition only
    # https://docs.gitlab.com/ee/administration/auth/ldap/ldap_synchronization.html
    'group_base' => '',
    'admin_group' => '',
    'external_groups' => [],
    'sync_ssh_keys' => false
  }
}
gitlab_rails['ldap_sync_worker_cron'] = "0 */12 * * *"
```
1. Reconfigure GitLab:
```shell
sudo gitlab-ctl reconfigure
```
## Configure SAML Groups for SAML Group Sync
If you use [SAML Group Sync](../../user/group/saml_sso/group_sync.md), you must configure [SAML Groups](../../integration/saml.md#configure-users-based-on-saml-group-membership) on all your Sidekiq nodes.
## Related topics
- [Extra Sidekiq processes](extra_sidekiq_processes.md)
- [Processing specific job classes](processing_specific_job_classes.md)
- [Sidekiq health checks](sidekiq_health_check.md)
- [Using the GitLab-Sidekiq chart](https://docs.gitlab.com/charts/charts/gitlab/sidekiq/)
## Troubleshooting
See our [administrator guide to troubleshooting Sidekiq](sidekiq_troubleshooting.md).
---
stage: none
group: unassigned
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Sidekiq job migration Rake tasks
---
{{< details >}}
- Tier: Free, Premium, Ultimate
- Offering: GitLab Self-Managed
{{< /details >}}
{{< alert type="warning" >}}
This operation should be very uncommon. We do not recommend it for the vast majority of GitLab instances.
{{< /alert >}}
Sidekiq routing rules allow administrators to re-route certain background jobs from their regular queue to an alternative queue. By default, GitLab uses one queue per background job type. GitLab has over 400 background job types, and so it has over 400 corresponding queues.
Most administrators do not need to change this setting. In some cases with particularly large background job processing workloads, Redis performance might suffer due to the number of queues that GitLab listens to.
If the Sidekiq routing rules are changed, administrators should be cautious with the migration to avoid losing jobs entirely. The basic migration steps are:
1. Listen to both the old and new queues.
1. Update the routing rules.
1. [Reconfigure GitLab](../restart_gitlab.md#reconfigure-a-linux-package-installation) for the changes to take effect.
1. Run the [Rake tasks for migrating queued and future jobs](#migrate-queued-and-future-jobs).
1. Stop listening to the old queues.
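For illustration, on a Linux package installation the routing rules in step 2 live in `/etc/gitlab/gitlab.rb`; the rule below, which routes every worker to a single `default` queue, is an example only:

```ruby
# /etc/gitlab/gitlab.rb -- example only
# Each rule is a [worker matching query, destination queue] pair.
sidekiq['routing_rules'] = [
  ['*', 'default']
]
```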
## Migrate queued and future jobs
Step 4 involves rewriting some Sidekiq job data for jobs that are already stored in Redis but due to run in the future. Two sets of jobs are due to run in the future: scheduled jobs and jobs to be retried. We provide a separate Rake task to migrate each set:
- `gitlab:sidekiq:migrate_jobs:retry` for jobs to be retried.
- `gitlab:sidekiq:migrate_jobs:schedule` for scheduled jobs.
Queued jobs that are yet to be run can also be migrated with a Rake task ([available in GitLab 15.6](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/101348) and later):
- `gitlab:sidekiq:migrate_jobs:queued` for queued jobs to be performed asynchronously.
Most of the time, running all three at the same time is the correct choice. Three separate tasks allow for more fine-grained control where needed. To run all three at once ([available in GitLab 15.6](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/101348) and later):
```shell
# omnibus-gitlab
sudo gitlab-rake gitlab:sidekiq:migrate_jobs:retry gitlab:sidekiq:migrate_jobs:schedule gitlab:sidekiq:migrate_jobs:queued
# source installations
bundle exec rake gitlab:sidekiq:migrate_jobs:retry gitlab:sidekiq:migrate_jobs:schedule gitlab:sidekiq:migrate_jobs:queued RAILS_ENV=production
```
|
---
stage: none
group: unassigned
info: To determine the technical writer assigned to the Stage/Group associated with
this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Sidekiq job migration Rake tasks
breadcrumbs:
- doc
- administration
- sidekiq
---
{{< details >}}
- Tier: Free, Premium, Ultimate
- Offering: GitLab Self-Managed
{{< /details >}}
{{< alert type="warning" >}}
This operation should be very uncommon. We do not recommend it for the vast majority of GitLab instances.
{{< /alert >}}
Sidekiq routing rules allow administrators to re-route certain background jobs from their regular queue to an alternative queue. By default, GitLab uses one queue per background job type. GitLab has over 400 background job types, and so correspondingly it has over 400 queues.
Most administrators do not need to change this setting. In some cases with particularly large background job processing workloads, Redis performance might suffer due to the number of queues that GitLab listens to.
If the Sidekiq routing rules are changed, administrators should be cautious with the migration to avoid losing jobs entirely. The basic migration steps are:
1. Listen to both the old and new queues.
1. Update the routing rules.
1. [Reconfigure GitLab](../restart_gitlab.md#reconfigure-a-linux-package-installation) for the changes to take effect.
1. Run the [Rake tasks for migrating queued and future jobs](#migrate-queued-and-future-jobs).
1. Stop listening to the old queues.
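For step 1, a minimal `gitlab.rb` sketch (with hypothetical queue names — `post_receive` standing in for an old per-worker queue being consolidated into `default`) might look like:

```ruby
# Hypothetical example: during the migration, keep a process listening to both
# the old per-worker queue and the new consolidated queue.
sidekiq['queue_groups'] = [
  'post_receive,default,mailers'
]
```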
## Migrate queued and future jobs
Step 4 involves rewriting some Sidekiq job data for jobs that are already stored in Redis but due to run in the future. The two sets of jobs due to run in the future are scheduled jobs and jobs to be retried. We provide a separate Rake task to migrate each set:
- `gitlab:sidekiq:migrate_jobs:retry` for jobs to be retried.
- `gitlab:sidekiq:migrate_jobs:schedule` for scheduled jobs.
Queued jobs that are yet to be run can also be migrated with a Rake task ([available in GitLab 15.6](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/101348) and later):
- `gitlab:sidekiq:migrate_jobs:queued` for queued jobs to be performed asynchronously.
Most of the time, running all three at the same time is the correct choice. Three separate tasks allow for more fine-grained control where needed. To run all three at once ([available in GitLab 15.6](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/101348) and later):
```shell
# omnibus-gitlab
sudo gitlab-rake gitlab:sidekiq:migrate_jobs:retry gitlab:sidekiq:migrate_jobs:schedule gitlab:sidekiq:migrate_jobs:queued
# source installations
bundle exec rake gitlab:sidekiq:migrate_jobs:retry gitlab:sidekiq:migrate_jobs:schedule gitlab:sidekiq:migrate_jobs:queued RAILS_ENV=production
```
---
stage: Data access
group: Durability
info: To determine the technical writer assigned to the Stage/Group associated with
this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Processing specific job classes
breadcrumbs:
- doc
- administration
- sidekiq
---
{{< alert type="warning" >}}
These are advanced settings. While they are used on GitLab.com, most GitLab
instances should only add more processes that listen to all queues. This is the
same approach described in the [Reference Architectures](../reference_architectures/_index.md).
{{< /alert >}}
Most GitLab instances should have [all processes to listen to all queues](extra_sidekiq_processes.md#start-multiple-processes).
Another alternative is to use [routing rules](#routing-rules) which direct specific
job classes inside the application to queue names that you configure. Then, the Sidekiq
processes only need to listen to a handful of the configured queues. Doing so
lowers the load on Redis, which is important on very large-scale deployments.
## Routing rules
{{< history >}}
- [Default routing rule value](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/97908) introduced in GitLab 15.4.
- Queue selectors [replaced by routing rules](https://gitlab.com/gitlab-org/gitlab/-/issues/390787) in GitLab 17.0.
{{< /history >}}
{{< alert type="note" >}}
Mailer jobs cannot be routed by routing rules, and always go to the
`mailers` queue. When using routing rules, ensure that at least one process is
listening to the `mailers` queue. Typically this can be placed alongside the
`default` queue.
{{< /alert >}}
We recommend that most GitLab instances use routing rules to manage their Sidekiq
queues. Routing rules allow administrators to choose a single queue name for a group of
job classes based on their attributes. The syntax is an ordered array of `[query, queue]` pairs:
1. The query is a [worker matching query](#worker-matching-query).
1. The queue name must be a valid Sidekiq queue name. If the queue name
is `nil`, or an empty string, the worker is routed to the queue generated
by the name of the worker instead. (See [list of available job classes](#list-of-available-job-classes)
for more information).
The queue name does not have to match any existing queue name in the
list of available job classes.
1. The first query matching a worker is chosen for that worker; later rules are
ignored.
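For example, with the following illustrative rules (queue names here are hypothetical), a CPU-bound, high-urgency worker is routed to `high-cpu` by the first matching rule; the later `urgency=high` and wildcard rules are ignored for that worker:

```ruby
sidekiq['routing_rules'] = [
  ['resource_boundary=cpu&urgency=high', 'high-cpu'],  # first match wins
  ['urgency=high', 'high-other'],                      # not reached for CPU-bound workers
  ['*', 'default']
]
```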
### Routing rules migration
After the Sidekiq routing rules are changed, you must take care with
the migration to avoid losing jobs entirely, especially in a system with long
queues of jobs. The migration can be done by following the migration steps
mentioned in [Sidekiq job migration](sidekiq_job_migration.md).
### Routing rules in a scaled architecture
Routing rules must be the same across all GitLab nodes (especially GitLab Rails
and Sidekiq nodes) as they are part of the application configuration.
### Detailed example
This is a comprehensive example intended to show different possibilities.
A [Helm chart example is also available](https://docs.gitlab.com/charts/charts/gitlab/sidekiq/#queues).
These are not recommendations.
1. Edit `/etc/gitlab/gitlab.rb`:
```ruby
sidekiq['routing_rules'] = [
# Route all non-CPU-bound workers that are high urgency to `high-urgency` queue
['resource_boundary!=cpu&urgency=high', 'high-urgency'],
# Route all database, gitaly and global search workers that are throttled to `throttled` queue
['feature_category=database,gitaly,global_search&urgency=throttled', 'throttled'],
# Route all workers that contact the outside world to a `network-intensive` queue
['has_external_dependencies=true|feature_category=hooks|tags=network', 'network-intensive'],
# Wildcard matching, route the rest to `default` queue
['*', 'default']
]
```
The `queue_groups` can then be set to match these generated queue names. For
instance:
```ruby
sidekiq['queue_groups'] = [
# Run two high-urgency processes
'high-urgency',
'high-urgency',
# Run one process for throttled, network-intensive
'throttled,network-intensive',
# Run one 'catchall' process on the default and mailers queues
'default,mailers'
]
```
1. Save the file and reconfigure GitLab:
```shell
sudo gitlab-ctl reconfigure
```
## Worker matching query
GitLab provides a query syntax, used by routing rules, to match a worker based on
its attributes. A query includes two components:
- Attributes that can be selected.
- Operators used to construct a query.
### Available attributes
The queue matching query works on the worker attributes described in
the Sidekiq style guide in the GitLab development documentation. We support querying
based on a subset of worker attributes:
- `feature_category` - the
GitLab feature category the
queue belongs to. For example, the `merge` queue belongs to the
`source_code_management` category.
- `has_external_dependencies` - whether or not the queue connects to external
services. For example, all importers have this set to `true`.
- `urgency` - how important it is that this queue's jobs run
quickly. Can be `high`, `low`, or `throttled`. For example, the
`authorized_projects` queue is used to refresh user permissions, and
is `high` urgency.
- `worker_name` - the worker name. Use this attribute to select a specific worker. Find all available names in [the job classes lists](#list-of-available-job-classes) below.
- `name` - the queue name generated from the worker name. Use this attribute to select a specific queue. Because this is generated from
the worker name, it does not change based on the result of other routing
rules.
- `resource_boundary` - if the queue is bound by `cpu`, `memory`, or
`unknown`. For example, the `ProjectExportWorker` is memory bound as it has
to load data in memory before saving it for export.
- `tags` - short-lived annotations for queues. These are expected to frequently
change from release to release, and may be removed entirely.
- `queue_namespace` - Some workers are grouped by a namespace, and
`name` is prefixed with `<queue_namespace>:`. For example, for a queue `name` of `cronjob:admin_email`,
`queue_namespace` is `cronjob`. Use this attribute to select a group of workers.
`has_external_dependencies` is a boolean attribute: only the exact
string `true` is considered true, and everything else is considered
false.
`tags` is a set, which means that `=` checks for intersecting sets, and
`!=` checks for disjoint sets. For example, `tags=a,b` selects queues
that have tags `a`, `b`, or both. `tags!=a,b` selects queues that have
neither of those tags.
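As an illustration only (this is not GitLab's implementation), the intersecting/disjoint behavior can be sketched in a few lines of Ruby:

```ruby
# Illustrative sketch of the set semantics the `tags` attribute uses in
# routing-rule queries: `=` matches intersecting sets, `!=` matches disjoint sets.
def tags_match?(worker_tags, query_tags, negated: false)
  intersects = !(worker_tags & query_tags).empty?
  negated ? !intersects : intersects
end

worker = [:network, :exclude_from_kubernetes]

tags_match?(worker, [:network, :other])          # tags=network,other  -> true
tags_match?(worker, [:unrelated], negated: true) # tags!=unrelated     -> true
tags_match?(worker, [:network], negated: true)   # tags!=network       -> false
```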
### Available operators
Routing rules support the following operators, listed from highest to lowest
precedence:
- `|` - the logical `OR` operator. For example, `query_a|query_b` (where `query_a`
and `query_b` are queries made up of the other operators here) includes
queues that match either query.
- `&` - the logical `AND` operator. For example, `query_a&query_b` (where
  `query_a` and `query_b` are queries made up of the other operators here)
  includes only queues that match both queries.
- `!=` - the `NOT IN` operator. For example, `feature_category!=issue_tracking`
excludes all queues from the `issue_tracking` feature category.
- `=` - the `IN` operator. For example, `resource_boundary=cpu` includes all
queues that are CPU bound.
- `,` - the concatenate set operator. For example,
  `feature_category=continuous_integration,pages` includes all queues from
  either the `continuous_integration` category or the `pages` category. This
  is also possible with the `OR` operator, but `,` is more concise and has
  lower precedence.
The operator precedence for this syntax is fixed: it's not possible to make `AND`
have higher precedence than `OR`.
As with the standard queue group syntax documented previously, a single `*` as the
entire queue group selects all queues.
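As a rough illustration (not GitLab's implementation, and handling only simple `=`/`!=` terms), a matcher honoring this fixed precedence can split a query on `&` into groups and each group on `|` into terms:

```ruby
# Minimal sketch of the fixed precedence: `|` binds tighter than `&`,
# so a query is evaluated as an AND of OR-groups.
def matches?(query, attrs)
  query.split('&').all? do |conjunct|    # `&`: every OR-group must hold
    conjunct.split('|').any? do |term|   # `|`: any term in the group may hold
      if term.include?('!=')
        key, values = term.split('!=')
        !values.split(',').include?(attrs[key])
      else
        key, values = term.split('=')
        values.split(',').include?(attrs[key])
      end
    end
  end
end

worker = { 'resource_boundary' => 'cpu', 'urgency' => 'high' }
matches?('resource_boundary=cpu&urgency=high|urgency=low', worker)  # => true
matches?('resource_boundary!=cpu&urgency=high', worker)             # => false
```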
### List of available job classes
For a list of the existing Sidekiq job classes and queues, check the following
files:
- [Queues for all GitLab editions](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/workers/all_queues.yml)
- [Queues for GitLab Enterprise Editions only](https://gitlab.com/gitlab-org/gitlab/-/blob/master/ee/app/workers/all_queues.yml)
---
stage: Data access
group: Durability
info: To determine the technical writer assigned to the Stage/Group associated with
this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Sidekiq health check
breadcrumbs:
- doc
- administration
- sidekiq
---
{{< details >}}
- Tier: Free, Premium, Ultimate
- Offering: GitLab Self-Managed
{{< /details >}}
GitLab provides liveness and readiness probes to indicate service health and
reachability to the Sidekiq cluster. These endpoints
[can be provided to schedulers like Kubernetes](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/)
to hold traffic until the system is ready or restart the container as needed.
The health check server can be set up when [configuring Sidekiq](_index.md).
## Readiness
The readiness probe checks whether the Sidekiq workers are ready to process jobs.
```plaintext
GET /readiness
```
If the server is bound to `localhost:8092`, the process cluster can be probed for readiness as follows:
```shell
curl "http://localhost:8092/readiness"
```
On success, the endpoint returns a `200` HTTP status code, and a response like the following:
```json
{
"status": "ok"
}
```
## Liveness
The liveness probe checks whether the Sidekiq cluster is running.
```plaintext
GET /liveness
```
If the server is bound to `localhost:8092`, the process cluster can be probed for liveness as follows:
```shell
curl "http://localhost:8092/liveness"
```
On success, the endpoint returns a `200` HTTP status code, and a response like the following:
```json
{
"status": "ok"
}
```
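As an illustration, a scheduler-side check might treat a probe as healthy only when both conditions hold — a `200` status code and an `ok` body. A minimal sketch (`probe_healthy?` is a hypothetical helper, not part of GitLab):

```ruby
require 'json'

# Hypothetical helper: treat a probe response as healthy only when the HTTP
# status is 200 and the JSON body reports "ok".
def probe_healthy?(http_status, body)
  http_status == 200 && JSON.parse(body)['status'] == 'ok'
rescue JSON::ParserError
  false
end

probe_healthy?(200, '{"status": "ok"}')  # => true
probe_healthy?(503, '{"status": "ok"}')  # => false
```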
---
stage: GitLab Dedicated
group: Environment Automation
info: To determine the technical writer assigned to the Stage/Group associated with
this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
description: Get to know the GitLab Dedicated architecture through a series of diagrams.
title: GitLab Dedicated architecture
breadcrumbs:
- doc
- administration
- dedicated
---
{{< details >}}
- Tier: Ultimate
- Offering: GitLab Dedicated
{{< /details >}}
This page provides a set of architectural documents and diagrams for GitLab Dedicated.
## High-level overview
The following diagram shows a high-level overview of the architecture for GitLab Dedicated,
where various AWS accounts managed by GitLab and customers are controlled by the Switchboard application.

When managing GitLab Dedicated tenant instances:
- Switchboard is responsible for managing global configuration shared between the AWS cloud providers, accessible by tenants.
- Amp is responsible for the interaction with the customer tenant accounts, such as configuring expected roles and policies, enabling the required services, and provisioning environments.
GitLab team members with edit access can update the [source](https://lucid.app/lucidchart/e0b6661c-6c10-43d9-8afa-1fe0677e060c/edit?page=0_0#) files for the diagram in Lucidchart.
## Tenant network
The customer tenant account is a single AWS cloud provider account. The single account provides full tenancy isolation, in its own VPC, and with its own resource quotas.
The cloud provider account is where a highly resilient GitLab installation resides, in its own isolated VPC. On provisioning, the customer tenant gets access to a High Availability (HA) GitLab primary site and a GitLab Geo secondary site.

GitLab team members with edit access can update the [source](https://lucid.app/lucidchart/0815dd58-b926-454e-8354-c33fe3e7bff0/edit?invitationId=inv_a6b618ff-6c18-4571-806a-bfb3fe97cb12) files for the diagram in Lucidchart.
### Gitaly setup
GitLab Dedicated deploys Gitaly [in a sharded setup](../gitaly/praefect/_index.md#before-deploying-gitaly-cluster-praefect), not in a Gitaly Cluster (Praefect) configuration.
- Customer repositories are spread across multiple virtual machines.
- GitLab manages [storage weights](../repository_storage_paths.md#configure-where-new-repositories-are-stored) on behalf of the customer.
### Geo setup
GitLab Dedicated leverages GitLab Geo for [disaster recovery](../../subscriptions/gitlab_dedicated/data_residency_and_high_availability.md#disaster-recovery).
Geo does not use an active-active failover configuration. For more information, see [Geo](../geo/_index.md).
### AWS PrivateLink connection
{{< alert type="note" >}}
Required for Geo migrations to GitLab Dedicated; otherwise, optional.
{{< /alert >}}
Optionally, private connectivity is available for your GitLab Dedicated instance, using [AWS PrivateLink](https://aws.amazon.com/privatelink/) as a connection gateway.
Both [inbound](configure_instance/network_security.md#inbound-private-link) and [outbound](configure_instance/network_security.md#outbound-private-link) private links are supported.
#### Inbound

GitLab team members with edit access can update the [source](https://lucid.app/lucidchart/933b958b-bfad-4898-a8ae-182815f159ca/edit?invitationId=inv_38b9a265-dff2-4db6-abdb-369ea1e92f5f) files for the diagram in Lucidchart.
#### Outbound

GitLab team members with edit access can update the [source](https://lucid.app/lucidchart/5aeae97e-a3c4-43e3-8b9d-27900d944147/edit?invitationId=inv_0e4fee9f-cf63-439c-9bf9-71ecbfbd8979&page=F5pcfQybsAYU8#) files for the diagram in Lucidchart.
#### AWS PrivateLink for migration
Additionally, AWS PrivateLink is also used for migration purposes. The customer's Dedicated GitLab instance can use AWS PrivateLink to pull data for a migration to GitLab Dedicated.

GitLab team members with edit access can update the [source](https://lucid.app/lucidchart/1e83e102-37b3-48a9-885d-e72122683bce/edit?view_items=AzvnMfovRJe3p&invitationId=inv_c02140dd-416b-41b5-b14a-7288b54bb9b5) files for the diagram in Lucidchart.
## Hosted runners for GitLab Dedicated
The following diagram illustrates a GitLab-managed AWS account that contains GitLab runners, which are interconnected to a GitLab Dedicated instance, the public internet, and optionally a customer AWS account that uses AWS PrivateLink.

For more information on how runners authenticate and execute the job payload, see [runner execution flow](https://docs.gitlab.com/runner#runner-execution-flow).
GitLab team members with edit access can update the [source](https://lucid.app/lucidchart/0fb12de8-5236-4d80-9a9c-61c08b714e6f/edit?invitationId=inv_4a12e347-49e8-438e-a28f-3930f936defd) files for the diagram in Lucidchart.
---
title: Monitor your GitLab Dedicated instance
description: Access application logs and S3 bucket data to monitor your GitLab Dedicated instance.
stage: GitLab Dedicated
group: Switchboard
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
source: https://docs.gitlab.com/administration/monitor
repository: https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/administration/monitor.md
extracted: 2025-08-13
---
{{< details >}}
- Tier: Ultimate
- Offering: GitLab Dedicated
{{< /details >}}
GitLab delivers [application logs](../logs/_index.md) to an Amazon S3 bucket in the GitLab
tenant account, which can be shared with you.
To access these logs, you must provide AWS Identity and Access Management (IAM) Amazon Resource
Names (ARNs) that uniquely identify your AWS users or roles.
Logs stored in the S3 bucket are retained indefinitely.
GitLab team members can view more information about the proposed retention policy in
this confidential issue: `https://gitlab.com/gitlab-com/gl-infra/gitlab-dedicated/team/-/issues/483`.
## Request access to application logs
To gain read-only access to the S3 bucket with your application logs:
1. Open a [support ticket](https://support.gitlab.com/hc/en-us/requests/new?ticket_form_id=4414917877650)
with the title `Customer Log Access`.
1. In the body of the ticket, include a list of IAM ARNs for the users or roles that require
access to the logs. Specify the full ARN path without wildcards (`*`). For example:
- User: `arn:aws:iam::123456789012:user/username`
- Role: `arn:aws:iam::123456789012:role/rolename`
{{< alert type="note" >}}
Only IAM user and role ARNs are supported.
Security Token Service (STS) ARNs (`arn:aws:sts::...`) cannot be used.
{{< /alert >}}
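As a convenience, the ARN formats above can be checked before filing the ticket. This is a sketch only, not an official GitLab tool; the pattern encodes the rules described in this section: full IAM user or role ARNs, no wildcards, no STS ARNs.

```python
import re

# Accept only full IAM user or role ARNs with a 12-digit account ID and
# no wildcard characters. STS ARNs (arn:aws:sts::...) do not match.
SUPPORTED_ARN = re.compile(r"^arn:aws:iam::\d{12}:(user|role)/[^*\s]+$")

def is_supported_arn(arn: str) -> bool:
    """Return True if the ARN can be included in the support ticket."""
    return bool(SUPPORTED_ARN.match(arn))
```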
GitLab provides the name of the S3 bucket. Your authorized users or roles can then access all objects in the bucket.
To verify access, you can use the [AWS CLI](https://aws.amazon.com/cli/).
GitLab team members can view more information about the proposed feature to add wildcard support in this
confidential issue: `https://gitlab.com/gitlab-com/gl-infra/gitlab-dedicated/team/-/issues/7010`.
## Find your S3 bucket name
To find your S3 bucket name:
1. Sign in to [Switchboard](https://console.gitlab-dedicated.com/).
1. At the top of the page, select **Configuration**.
1. In the **Tenant details** section, locate the **AWS S3 bucket for tenant logs** field.
For information about how to access S3 buckets after you have the name, see the [AWS documentation about accessing S3 buckets](https://docs.aws.amazon.com/AmazonS3/latest/userguide/access-bucket-intro.html).
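With the bucket name from Switchboard, a quick way to verify access is a read-only listing with the AWS CLI. This is a sketch; `<tenant-log-bucket>` is a placeholder for your actual bucket name:

```shell
# Replace <tenant-log-bucket> with the bucket name shown in Switchboard.
aws s3 ls s3://<tenant-log-bucket>/ --summarize
```

If the command succeeds, the configured IAM users or roles have the intended read access.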
## S3 bucket contents and structure
The Amazon S3 bucket contains a combination of infrastructure logs and application logs from the GitLab [log system](../logs/_index.md).
The logs in the bucket are encrypted using an AWS KMS key managed by GitLab. If you choose to enable [BYOK](encryption.md#bring-your-own-key-byok), the application logs are not encrypted with the key you provide.
<!-- vale gitlab_base.Spelling = NO -->
The logs in the S3 bucket are organized by date in `YYYY/MM/DD/HH` format. For example, a directory named `2023/10/12/13` contains logs from October 12, 2023 at 13:00 UTC. The logs are streamed into the bucket with [Amazon Kinesis Data Firehose](https://aws.amazon.com/firehose/).
<!-- vale gitlab_base.Spelling = YES -->
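Because the prefixes follow this fixed `YYYY/MM/DD/HH` layout, the prefix for a given hour can be computed rather than guessed. A small helper (a sketch, not part of GitLab):

```python
from datetime import datetime, timezone

def log_prefix(ts: datetime) -> str:
    """Return the S3 key prefix (UTC, YYYY/MM/DD/HH) for a timestamp."""
    return ts.astimezone(timezone.utc).strftime("%Y/%m/%d/%H")
```

For example, `log_prefix(datetime(2023, 10, 12, 13, tzinfo=timezone.utc))` returns `2023/10/12/13`, matching the directory described above.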
---
title: View GitLab Dedicated instance details
description: View information about your GitLab Dedicated instance with Switchboard.
stage: GitLab Dedicated
group: Switchboard
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
source: https://docs.gitlab.com/administration/tenant_overview
repository: https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/administration/tenant_overview.md
extracted: 2025-08-13
---
{{< details >}}
- Tier: Ultimate
- Offering: GitLab Dedicated
{{< /details >}}
Monitor your GitLab Dedicated instance details, maintenance windows, and configuration status in Switchboard.
## View your instance details
To access your instance details:
1. Sign in to [Switchboard](https://console.gitlab-dedicated.com/).
1. Select your tenant.
The **Overview** page displays:
- Any pending configuration changes
- When the instance was updated
- Instance details
- Maintenance windows
- Hosted runners
- Customer communication
## Tenant overview
The top section shows important information about your tenant, including:
- Tenant name and URL
- [Repository storage](create_instance/storage_types.md#repository-storage)
- Current GitLab version
- Reference architecture
- Maintenance window
- Primary and secondary AWS regions for data storage, with their availability zone IDs
- Backup AWS region
- AWS account IDs for the tenant and hosted runners
## Maintenance windows
The **Maintenance windows** section displays the:
- Next scheduled maintenance window
- Most recent completed maintenance window
- Most recent emergency maintenance window (if applicable)
- Upcoming GitLab version upgrade
{{< alert type="note" >}}
Each Sunday night in UTC, Switchboard updates to display the planned GitLab version upgrades for the upcoming week's maintenance windows. For more information, see [Maintenance windows](maintenance.md#maintenance-windows).
{{< /alert >}}
## Hosted runners
The **Hosted runners** section shows the [hosted runners](hosted_runners.md) associated with your instance.
## NAT IP addresses
NAT gateway IP addresses typically remain consistent during standard operations but might change occasionally, such as when GitLab needs to rebuild your instance during disaster recovery.
You need to know your NAT gateway IP addresses in cases like:
- Configuring webhook receivers to accept incoming requests from your GitLab Dedicated instance.
- Setting up allowlists for external services to accept connections from your GitLab Dedicated instance.
### View your NAT gateway IP addresses
To view the current NAT gateway IP addresses for your GitLab Dedicated instance:
1. Sign in to [Switchboard](https://console.gitlab-dedicated.com/).
1. Select your tenant.
1. Select the **Configuration** tab.
1. Under **Tenant Details**, find your **NAT gateways**.
## Customer communication
The **Customer communication** section shows the **Operational email addresses** configured for your GitLab Dedicated instance. These email addresses receive notifications about your instance, including:
- Emergency maintenance
- Incidents
- Other critical updates
You cannot turn off notifications for operational email addresses.
To update your customer communication information, [submit a support ticket](https://support.gitlab.com/hc/en-us/requests/new?ticket_form_id=4414917877650).
---
title: Hosted runners for GitLab Dedicated
description: Use hosted runners to run your CI/CD jobs on GitLab Dedicated.
stage: Production Engineering
group: Runners Platform
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
source: https://docs.gitlab.com/administration/hosted_runners
repository: https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/administration/hosted_runners.md
extracted: 2025-08-13
---
{{< details >}}
- Tier: Ultimate
- Offering: GitLab Dedicated
- Status: Limited availability
{{< /details >}}
{{< alert type="note" >}}
To use this feature, you must purchase a subscription for Hosted Runners for GitLab Dedicated. To participate in the limited availability of Hosted Runners for Dedicated, reach out to your Customer Success Manager or Account representative.
{{< /alert >}}
You can run your CI/CD jobs on GitLab-hosted [runners](../../ci/runners/_index.md). These runners are managed by GitLab and fully integrated with your GitLab Dedicated instance.
GitLab-hosted runners for Dedicated are autoscaling [instance runners](../../ci/runners/runners_scope.md#instance-runners),
running on AWS EC2 in the same region as the GitLab Dedicated instance.
When you use hosted runners:
- Each job runs in a newly provisioned virtual machine (VM), which is dedicated to the specific job.
- The VM where your job runs has `sudo` access with no password.
- The storage is shared by the operating system, the image with pre-installed software, and a copy of your cloned repository. This means that the available free disk space for your jobs is reduced.
- By default, untagged jobs run on the small Linux x86-64 runner. GitLab administrators can [change the run untagged jobs option in GitLab](#configure-hosted-runners-in-gitlab).
## Hosted runners on Linux
Hosted runners on Linux for GitLab Dedicated use the [Docker Autoscaler](https://docs.gitlab.com/runner/executors/docker_autoscaler.html) executor. Each job gets a Docker environment in a fully isolated, ephemeral virtual machine (VM), and runs on the latest version of Docker Engine.
### Machine types for Linux - x86-64
The following machine types are available for hosted runners on Linux x86-64.
| Size | Runner Tag | vCPUs | Memory | Storage |
|----------|-------------------------------|-------|--------|---------|
| Small | `linux-small-amd64` (default) | 2 | 8 GB | 30 GB |
| Medium | `linux-medium-amd64` | 4 | 16 GB | 50 GB |
| Large | `linux-large-amd64` | 8 | 32 GB | 100 GB |
| X-Large | `linux-xlarge-amd64` | 16 | 64 GB | 200 GB |
| 2X-Large | `linux-2xlarge-amd64` | 32 | 128 GB | 200 GB |
### Machine types for Linux - Arm64
The following machine types are available for hosted runners on Linux Arm64.
| Size | Runner Tag | vCPUs | Memory | Storage |
|----------|-----------------------|-------|--------|---------|
| Small | `linux-small-arm64` | 2 | 8 GB | 30 GB |
| Medium | `linux-medium-arm64` | 4 | 16 GB | 50 GB |
| Large | `linux-large-arm64` | 8 | 32 GB | 100 GB |
| X-Large | `linux-xlarge-arm64` | 16 | 64 GB | 200 GB |
| 2X-Large | `linux-2xlarge-arm64` | 32 | 128 GB | 200 GB |
{{< alert type="note" >}}
The machine type and underlying processor type might change. Jobs optimized for a specific processor design might behave inconsistently.
{{< /alert >}}
Default runner tags are assigned upon creation. Administrators can subsequently [modify the tag settings](#configure-hosted-runners-in-gitlab) for their instance runners.
### Container images
Because runners on Linux use the [Docker Autoscaler](https://docs.gitlab.com/runner/executors/docker_autoscaler.html) executor, you can choose any container image by defining the image in your `.gitlab-ci.yml` file. Make sure that the selected Docker image is compatible with the underlying processor architecture. See the [example `.gitlab-ci.yml` file](../../ci/runners/hosted_runners/linux.md#example-gitlab-ciyml-file).
If no image is set, the default is `ruby:3.1`.
If you use images from the Docker Hub container registry, you might run into [rate limits](../settings/user_and_ip_rate_limits.md). This is because GitLab Dedicated uses a single Network Address Translation (NAT) IP address.
To avoid rate limits, instead use:
- Images stored in the [GitLab container registry](../../user/packages/container_registry/_index.md).
- Images stored in other public registries with no rate limits.
- The [dependency proxy](../../user/packages/dependency_proxy/_index.md), acting as a pull-through cache.
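For example, a job can pull its image through the dependency proxy by prefixing the image path with the predefined `CI_DEPENDENCY_PROXY_GROUP_IMAGE_PREFIX` CI/CD variable (a sketch; `ruby:3.1` is only an illustrative image):

```yaml
test-job:
  # Pull ruby:3.1 through the group dependency proxy instead of Docker Hub.
  image: ${CI_DEPENDENCY_PROXY_GROUP_IMAGE_PREFIX}/ruby:3.1
  script:
    - ruby --version
```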
### Docker in Docker support
The runners are configured to run in `privileged` mode to support [Docker in Docker](../../ci/docker/using_docker_build.md#use-docker-in-docker) to build Docker images natively or run multiple containers within your isolated job.
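A typical Docker-in-Docker job combines the `docker` image with the `docker:dind` service (a sketch; the image tags and runner tag are illustrative):

```yaml
build-image:
  image: docker:27
  services:
    - docker:27-dind
  variables:
    DOCKER_TLS_CERTDIR: "/certs"
  tags:
    - linux-small-amd64
  script:
    - docker build -t my-image .
```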
## Manage hosted runners
### Manage hosted runners in Switchboard
You can create and view hosted runners for your GitLab Dedicated instance using Switchboard.
Prerequisites:
- You must purchase a subscription for Hosted Runners for GitLab Dedicated.
#### Create hosted runners in Switchboard
For each instance, you can create one runner of each type and size combination. Switchboard displays the available runner options.
To create hosted runners:
1. Sign in to [Switchboard](https://console.gitlab-dedicated.com).
1. At the top of the page, select **Hosted runners**.
1. Select **New hosted runner**.
1. Choose a size for the runner, then select **Create hosted runner**.
You will receive an email notification when your hosted runner is ready to use.
[Outbound private links](#outbound-private-link) configured for existing runners don't apply to new runners. A separate request is required for each new runner.
#### View hosted runners in Switchboard
To view hosted runners:
1. Sign in to [Switchboard](https://console.gitlab-dedicated.com).
1. At the top of the page, select **Hosted runners**.
1. Optional. From the list of hosted runners, copy the **Runner ID** of the runner you want to access in GitLab.
### View and configure hosted runners in GitLab
GitLab administrators can manage hosted runners for their GitLab Dedicated instance from the [**Admin** area](../admin_area.md#administering-runners).
#### View hosted runners in GitLab
You can view hosted runners for your GitLab Dedicated instance in the Runners page and in the [Fleet dashboard](../../ci/runners/runner_fleet_dashboard.md).
Prerequisites:
- You must be an administrator.
{{< alert type="note" >}}
Compute usage visualizations are not available, but an [epic](https://gitlab.com/groups/gitlab-com/gl-infra/gitlab-dedicated/-/epics/524) exists to add them for general availability.
{{< /alert >}}
To view hosted runners in GitLab:
1. On the left sidebar, at the bottom, select **Admin**.
1. Select **CI/CD** > **Runners**.
1. Optional. Select **Fleet dashboard**.
#### Configure hosted runners in GitLab
Prerequisites:
- You must be an administrator.
You can configure hosted runners for your GitLab Dedicated instance, including changing the default values for the runner tags.
Available configuration options include:
- [Change the maximum job timeout](../../ci/runners/configure_runners.md#for-an-instance-runner).
- [Set the runner to run tagged or untagged jobs](../../ci/runners/configure_runners.md#for-an-instance-runner-2).
{{< alert type="note" >}}
Any changes to the runner description and the runner tags are not controlled by GitLab.
{{< /alert >}}
### Disable hosted runners for groups or projects in GitLab
By default, hosted runners are available for all projects and groups in your GitLab Dedicated instance.
GitLab maintainers can disable hosted runners for a [project](../../ci/runners/runners_scope.md#disable-instance-runners-for-a-project) or a [group](../../ci/runners/runners_scope.md#disable-instance-runners-for-a-group).
## Security and Network
Hosted runners for GitLab Dedicated have built-in layers that harden the security of the GitLab Runner build environment.
Hosted runners for GitLab Dedicated have the following configurations:
- Firewall rules allow only outbound communication from the ephemeral VM to the public internet.
- Firewall rules do not allow inbound communication from the public internet to the ephemeral VM.
- Firewall rules do not allow communication between VMs.
- Only the runner manager can communicate with the ephemeral VMs.
- Ephemeral runner VMs only serve a single job and are deleted after the job execution.
You can also [enable private connections](#outbound-private-link) from hosted runners to your AWS account.
For more information, see the architecture diagram for [hosted runners for GitLab Dedicated](architecture.md#hosted-runners-for-gitlab-dedicated).
### Outbound private link
Outbound private link creates a secure connection between hosted runners for GitLab Dedicated and services in your AWS VPC.
This connection doesn't expose any traffic to the public internet and allows hosted runners to:
- Access private services, such as custom secrets managers.
- Retrieve artifacts or job images stored in your infrastructure.
- Deploy to your infrastructure.
Two outbound private links exist by default for all runners in the GitLab-managed runner account:
- A link to your GitLab instance
- A link to a GitLab-controlled Prometheus instance
These links are pre-configured and cannot be modified. The tenant's Prometheus instance is managed by GitLab and is not accessible to users.
To use an outbound private link with other VPC services for hosted runners,
[manual configuration is required with a support request](configure_instance/network_security.md#add-an-outbound-private-link-with-a-support-request).
For more information, see [Outbound private link](configure_instance/network_security.md#outbound-private-link).
### IP ranges
IP ranges for hosted runners for GitLab Dedicated are available upon request. IP ranges are maintained on a best-effort basis and might change at any time due to infrastructure changes. For more information, reach out to your Customer Success Manager or Account representative.
## Use hosted runners
After you [create hosted runners in Switchboard](#create-hosted-runners-in-switchboard) and the runners are ready, you can use them.
To use runners, adjust the [tags](../../ci/yaml/_index.md#tags) in your job configuration in the `.gitlab-ci.yml` file to match the hosted
runner you want to use.
For the Linux medium x86-64 runner, configure your job like this:
```yaml
job_name:
tags:
- linux-medium-amd64 # Use the medium-sized Linux runner
```
By default, untagged jobs are picked up by the small Linux x86-64 runner.
GitLab administrators can [configure instance runners in GitLab](#configure-hosted-runners-in-gitlab) to not run untagged jobs.
To migrate jobs without changing job configurations, [modify the hosted runner tags](#configure-hosted-runners-in-gitlab)
to match the tags used in your existing job configurations.
If your job is stuck with the error message `no runners that match all of the job's tags`:
1. Verify that you selected the correct tag.
1. Confirm that [instance runners are enabled for your project or group](../../ci/runners/runners_scope.md#enable-instance-runners-for-a-project).
## Upgrades
Runner version upgrades require a short downtime.
Runners are upgraded during the scheduled maintenance windows of a GitLab Dedicated tenant.
An [issue](https://gitlab.com/gitlab-com/gl-infra/gitlab-dedicated/team/-/issues/4505) exists to implement zero downtime upgrades.
## Pricing
For pricing details, reach out to your account representative.
We offer a 30-day free trial for GitLab Dedicated customers. The trial includes:
- Small, Medium, and Large Linux x86-64 runners
- Small and Medium Linux Arm runners
- Limited autoscaling configuration that supports up to 100 concurrent jobs
|
---
stage: Production Engineering
group: Runners Platform
info: To determine the technical writer assigned to the Stage/Group associated with
this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
description: Use hosted runners to run your CI/CD jobs on GitLab Dedicated.
title: Hosted runners for GitLab Dedicated
breadcrumbs:
- doc
- administration
- dedicated
---
{{< details >}}
- Tier: Ultimate
- Offering: GitLab Dedicated
- Status: Limited availability
{{< /details >}}
{{< alert type="note" >}}
To use this feature, you must purchase a subscription for Hosted Runners for GitLab Dedicated. To participate in the limited availability of Hosted Runners for Dedicated, reach out to your Customer Success Manager or Account representative.
{{< /alert >}}
You can run your CI/CD jobs on GitLab-hosted [runners](../../ci/runners/_index.md). These runners are managed by GitLab and fully integrated with your GitLab Dedicated instance.
GitLab-hosted runners for Dedicated are autoscaling [instance runners](../../ci/runners/runners_scope.md#instance-runners),
running on AWS EC2 in the same region as the GitLab Dedicated instance.
When you use hosted runners:
- Each job runs in a newly provisioned virtual machine (VM), which is dedicated to the specific job.
- The VM where your job runs has `sudo` access with no password.
- The storage is shared by the operating system, the image with pre-installed software, and a copy of your cloned repository. This means that the available free disk space for your jobs is reduced.
- By default, untagged jobs run on the small Linux x86-64 runner. GitLab administrators can [change the run untagged jobs option in GitLab](#configure-hosted-runners-in-gitlab).
## Hosted runners on Linux
Hosted runners on Linux for GitLab Dedicated use the [Docker Autoscaler](https://docs.gitlab.com/runner/executors/docker_autoscaler.html) executor. Each job gets a Docker environment in a fully isolated, ephemeral virtual machine (VM), and runs on the latest version of Docker Engine.
### Machine types for Linux - x86-64
The following machine types are available for hosted runners on Linux x86-64.
| Size | Runner Tag | vCPUs | Memory | Storage |
|----------|-------------------------------|-------|--------|---------|
| Small | `linux-small-amd64` (default) | 2 | 8 GB | 30 GB |
| Medium | `linux-medium-amd64` | 4 | 16 GB | 50 GB |
| Large | `linux-large-amd64` | 8 | 32 GB | 100 GB |
| X-Large | `linux-xlarge-amd64` | 16 | 64 GB | 200 GB |
| 2X-Large | `linux-2xlarge-amd64` | 32 | 128 GB | 200 GB |
### Machine types for Linux - Arm64
The following machine types are available for hosted runners on Linux Arm64.
| Size | Runner Tag | vCPUs | Memory | Storage |
|----------|-----------------------|-------|--------|---------|
| Small | `linux-small-arm64` | 2 | 8 GB | 30 GB |
| Medium | `linux-medium-arm64` | 4 | 16 GB | 50 GB |
| Large | `linux-large-arm64` | 8 | 32 GB | 100 GB |
| X-Large | `linux-xlarge-arm64` | 16 | 64 GB | 200 GB |
| 2X-Large | `linux-2xlarge-arm64` | 32 | 128 GB | 200 GB |
{{< alert type="note" >}}
The machine type and underlying processor type might change. Jobs optimized for a specific processor design might behave inconsistently.
{{< /alert >}}
Default runner tags are assigned upon creation. Administrators can subsequently [modify the tag settings](#configure-hosted-runners-in-gitlab) for their instance runners.
### Container images
As runners on Linux are using the [Docker Autoscaler](https://docs.gitlab.com/runner/executors/docker_autoscaler.html) executor, you can choose any container image by defining the image in your `.gitlab-ci.yml` file. Make sure that the selected Docker image is compatible with the underlying processor architecture. See the [example `.gitlab-ci.yml` file](../../ci/runners/hosted_runners/linux.md#example-gitlab-ciyml-file).
If no image is set, the default is `ruby:3.1`.
If you use images from the Docker Hub container registry, you might run into [rate limits](../settings/user_and_ip_rate_limits.md). This is because GitLab Dedicated uses a single Network Address Translation (NAT) IP address.
To avoid rate limits, instead use:
- Images stored in the [GitLab container registry](../../user/packages/container_registry/_index.md).
- Images stored in other public registries with no rate limits.
- The [dependency proxy](../../user/packages/dependency_proxy/_index.md), acting as a pull-through cache.
### Docker in Docker support
The runners are configured to run in `privileged` mode to support [Docker in Docker](../../ci/docker/using_docker_build.md#use-docker-in-docker) to build Docker images natively or run multiple containers within your isolated job.
## Manage hosted runners
### Manage hosted runners in Switchboard
You can create and view hosted runners for your GitLab Dedicated instance using Switchboard.
Prerequisites:
- You must purchase a subscription for Hosted Runners for GitLab Dedicated.
#### Create hosted runners in Switchboard
For each instance, you can create one runner of each type and size combination. Switchboard displays the available runner options.
To create hosted runners:
1. Sign in to [Switchboard](https://console.gitlab-dedicated.com).
1. At the top of the page, select **Hosted runners**.
1. Select **New hosted runner**.
1. Choose a size for the runner, then select **Create hosted runner**.
You will receive an email notification when your hosted runner is ready to use.
[Outbound private links](#outbound-private-link) configured for existing runners don't apply to new runners. A separate request is required for each new runner.
#### View hosted runners in Switchboard
To view hosted runners:
1. Sign in to [Switchboard](https://console.gitlab-dedicated.com).
1. At the top of the page, select **Hosted runners**.
1. Optional. From the list of hosted runners, copy the **Runner ID** of the runner you want to access in GitLab.
### View and configure hosted runners in GitLab
GitLab administrators can manage hosted runners for their GitLab Dedicated instance from the [**Admin** area](../admin_area.md#administering-runners).
#### View hosted runners in GitLab
You can view hosted runners for your GitLab Dedicated instance in the Runners page and in the [Fleet dashboard](../../ci/runners/runner_fleet_dashboard.md).
Prerequisites:
- You must be an administrator.
{{< alert type="note" >}}
Compute usage visualizations are not available, but an [epic](https://gitlab.com/groups/gitlab-com/gl-infra/gitlab-dedicated/-/epics/524) exists to add them for general availability.
{{< /alert >}}
To view hosted runners in GitLab:
1. On the left sidebar, at the bottom, select **Admin**.
1. Select **CI/CD** > **Runners**.
1. Optional. Select **Fleet dashboard**.
#### Configure hosted runners in GitLab
Prerequisites:
- You must be an administrator.
You can configure hosted runners for your GitLab Dedicated instance, including changing the default values for the runner tags.
Available configuration options include:
- [Change the maximum job timeout](../../ci/runners/configure_runners.md#for-an-instance-runner).
- [Set the runner to run tagged or untagged jobs](../../ci/runners/configure_runners.md#for-an-instance-runner-2).
{{< alert type="note" >}}
Any changes to the runner description and the runner tags are not controlled by GitLab.
{{< /alert >}}
### Disable hosted runners for groups or projects in GitLab
By default, hosted runners are available for all projects and groups in your GitLab Dedicated instance.
GitLab maintainers can disable hosted runners for a [project](../../ci/runners/runners_scope.md#disable-instance-runners-for-a-project) or a [group](../../ci/runners/runners_scope.md#disable-instance-runners-for-a-group).
## Security and network
Hosted runners for GitLab Dedicated have built-in layers that harden the security of the GitLab Runner build environment.
Hosted runners for GitLab Dedicated have the following configurations:
- Firewall rules allow only outbound communication from the ephemeral VM to the public internet.
- Firewall rules do not allow inbound communication from the public internet to the ephemeral VM.
- Firewall rules do not allow communication between VMs.
- Only the runner manager can communicate with the ephemeral VMs.
- Ephemeral runner VMs only serve a single job and are deleted after the job execution.
You can also [enable private connections](#outbound-private-link) from hosted runners to your AWS account.
For more information, see the architecture diagram for [hosted runners for GitLab Dedicated](architecture.md#hosted-runners-for-gitlab-dedicated).
### Outbound private link
Outbound private link creates a secure connection between hosted runners for GitLab Dedicated and services in your AWS VPC.
This connection doesn't expose any traffic to the public internet and allows hosted runners to:
- Access private services, such as custom secrets managers.
- Retrieve artifacts or job images stored in your infrastructure.
- Deploy to your infrastructure.
Two outbound private links exist by default for all runners in the GitLab-managed runner account:
- A link to your GitLab instance
- A link to a GitLab-controlled Prometheus instance
These links are pre-configured and cannot be modified. The tenant's Prometheus instance is managed by GitLab and is not accessible to users.
To use an outbound private link with other VPC services for hosted runners,
[manual configuration is required with a support request](configure_instance/network_security.md#add-an-outbound-private-link-with-a-support-request).
For more information, see [Outbound private link](configure_instance/network_security.md#outbound-private-link).
### IP ranges
IP ranges for hosted runners for GitLab Dedicated are available upon request. IP ranges are maintained on a best-effort basis and may change at any time due to changes in the infrastructure. For more information, reach out to your Customer Success Manager or Account representative.
## Use hosted runners
After you [create hosted runners in Switchboard](#create-hosted-runners-in-switchboard) and the runners are ready, you can use them.
To use runners, adjust the [tags](../../ci/yaml/_index.md#tags) in your job configuration in the `.gitlab-ci.yml` file to match the hosted
runner you want to use.
For the Linux medium x86-64 runner, configure your job like this:
```yaml
job_name:
tags:
- linux-medium-amd64 # Use the medium-sized Linux runner
```
By default, untagged jobs are picked up by the small Linux x86-64 runner.
GitLab administrators can [configure instance runners in GitLab](#configure-hosted-runners-in-gitlab) to not run untagged jobs.
To migrate jobs without changing job configurations, [modify the hosted runner tags](#configure-hosted-runners-in-gitlab)
to match the tags used in your existing job configurations.
If your job is stuck with the error message `no runners that match all of the job's tags`:
1. Verify that you've selected the correct tag.
1. Confirm that [instance runners are enabled for your project or group](../../ci/runners/runners_scope.md#enable-instance-runners-for-a-project).
## Upgrades
Runner version upgrades require a short downtime.
Runners are upgraded during the scheduled maintenance windows of a GitLab Dedicated tenant.
An [issue](https://gitlab.com/gitlab-com/gl-infra/gitlab-dedicated/team/-/issues/4505) exists to implement zero downtime upgrades.
## Pricing
For pricing details, reach out to your account representative.
A 30-day free trial is available for GitLab Dedicated customers. The trial includes:
- Small, Medium, and Large Linux x86-64 runners
- Small and Medium Linux Arm runners
- Limited autoscaling configuration that supports up to 100 concurrent jobs
---
stage: GitLab Dedicated
group: Switchboard
info: To determine the technical writer assigned to the Stage/Group associated with
  this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
description: Get started with GitLab Dedicated.
title: Administer GitLab Dedicated
---
{{< details >}}
- Tier: Ultimate
- Offering: GitLab Dedicated
{{< /details >}}
Use GitLab Dedicated to run GitLab on a fully-managed, single-tenant instance hosted on AWS. You maintain control over your instance configuration through Switchboard, the GitLab Dedicated management portal, while GitLab manages the underlying infrastructure.
For more information about this offering, see the [subscription page](../../subscriptions/gitlab_dedicated/_index.md).
## Architecture overview
GitLab Dedicated runs on a secure infrastructure that provides:
- A fully isolated tenant environment in AWS
- High availability with automated failover
- Geo-based disaster recovery
- Regular updates and maintenance
- Enterprise-grade security controls
To learn more, see [GitLab Dedicated Architecture](architecture.md).
## Configure infrastructure
| Feature | Description | Set up with |
|------------|-------------|---------------------|
| [Instance sizing](../../subscriptions/gitlab_dedicated/data_residency_and_high_availability.md#availability-and-scalability) | You select an instance size based on your user count. GitLab provisions and maintains the infrastructure. | Onboarding |
| [AWS data regions](../../subscriptions/gitlab_dedicated/data_residency_and_high_availability.md#available-aws-regions) | You choose regions for primary operations, disaster recovery, and backup. GitLab replicates your data across these regions. | Onboarding |
| [Maintenance windows](maintenance.md#maintenance-windows) | You select a weekly 4-hour maintenance window. GitLab performs updates, configuration changes, and security patches during this time. | Onboarding |
| [Release management](maintenance.md#release-rollout-schedule) | GitLab updates your instance monthly with new features and security patches. | Available by <br>default |
| [Geo disaster recovery](create_instance/_index.md#step-2-create-your-gitlab-dedicated-instance) | You choose the secondary region during onboarding. GitLab maintains a replicated secondary site in your chosen region using Geo. | Onboarding |
| [Backup and recovery](../../subscriptions/gitlab_dedicated/data_residency_and_high_availability.md#disaster-recovery) | GitLab backs up your data to your chosen AWS region. | Available by <br>default |
## Secure your instance
| Feature | Description | Set up with |
|------------|-------------|-----------------|
| [Data encryption](encryption.md) | GitLab encrypts your data both at rest and in transit through infrastructure provided by AWS. | Available by <br>default |
| [Bring your own key (BYOK)](encryption.md#bring-your-own-key-byok) | You can provide your own AWS KMS keys for encryption instead of using GitLab-managed AWS KMS keys. GitLab integrates these keys with your instance to encrypt data at rest. | Onboarding |
| [SAML SSO](configure_instance/saml.md) | You configure the connection to your SAML identity providers. GitLab handles the authentication flow. | Switchboard |
| [IP allowlists](configure_instance/network_security.md#ip-allowlist) | You specify approved IP addresses. GitLab blocks unauthorized access attempts. | Switchboard |
| [Custom certificates](configure_instance/network_security.md#custom-certificates) | You import your SSL certificates. GitLab maintains secure connections to your private services. | Switchboard |
| [Compliance frameworks](../../subscriptions/gitlab_dedicated/_index.md#monitoring) | GitLab maintains compliance with SOC 2, ISO 27001, and other frameworks. You can access reports through the [Trust Center](https://trust.gitlab.com/?product=gitlab-dedicated). | Available by <br>default |
| [Emergency access protocols](../../subscriptions/gitlab_dedicated/_index.md#access-controls) | GitLab provides controlled break-glass procedures for urgent situations. | Available by <br>default |
## Set up networking
| Feature | Description | Set up with |
|------------|-------------|-----------------|
| [Custom hostname (BYOD)](configure_instance/network_security.md#bring-your-own-domain-byod) | You provide a domain name and configure DNS records. GitLab manages SSL certificates through Let's Encrypt. | Support ticket |
| [Inbound Private Link](configure_instance/network_security.md#inbound-private-link) | You request secure AWS VPC connections. GitLab configures PrivateLink endpoints in your VPC. | Support ticket |
| [Outbound Private Link](configure_instance/network_security.md#outbound-private-link) | You create the endpoint service in your AWS account. GitLab establishes connections using your service endpoints. | Switchboard |
| [Private hosted zones](configure_instance/network_security.md#private-hosted-zones) | You define internal DNS requirements. GitLab configures DNS resolution in your instance network. | Switchboard |
## Use platform tools
| Feature | Description | Set up with |
|------------|-------------|-----------------|
| [GitLab Pages](../../subscriptions/gitlab_dedicated/_index.md#gitlab-pages) | GitLab hosts your static websites on a dedicated domain. You can publish sites from your repositories. | Available by <br>default |
| [Advanced search](../../integration/advanced_search/elasticsearch.md) | GitLab maintains the search infrastructure. You can search across your code, issues, and merge requests. | Available by <br>default |
| [Hosted runners (beta)](hosted_runners.md) | You purchase a subscription and configure your hosted runners. GitLab manages the auto-scaling CI/CD infrastructure. | Switchboard |
| [ClickHouse](../../integration/clickhouse.md) | GitLab maintains the ClickHouse infrastructure and integration. You can access all advanced analytical features such as [AI impact analytics](../../user/analytics/ai_impact_analytics.md) and [CI analytics](../../ci/runners/runner_fleet_dashboard.md#enable-more-ci-analytics-features-with-clickhouse). | Available by <br>default for [eligible customers](../../subscriptions/gitlab_dedicated/_index.md#clickhouse) |
## Manage daily operations
| Feature | Description | Set up with |
|------------|-------------|-----------------|
| [Application logs](monitor.md) | GitLab delivers logs to your AWS S3 bucket. You can request access to monitor instance activity through these logs. | Support ticket |
| [Email service](configure_instance/users_notifications.md#smtp-email-service) | GitLab provides AWS SES by default to send emails from your GitLab Dedicated instance. You can also configure your own SMTP email service. | Support ticket for <br/>custom service |
| [Switchboard access and <br>notifications](configure_instance/users_notifications.md) | You manage Switchboard permissions and notification settings. GitLab maintains the Switchboard infrastructure. | Switchboard |
| [Switchboard SSO](configure_instance/authentication/_index.md#configure-switchboard-sso) | You configure your organization's identity provider and supply GitLab with the necessary details. GitLab configures single sign-on (SSO) for Switchboard. | Support ticket |
## Get started
To get started with GitLab Dedicated:
1. [Create your GitLab Dedicated instance](create_instance/_index.md).
1. [Configure your GitLab Dedicated instance](configure_instance/_index.md).
1. [Create a hosted runner](hosted_runners.md).
---
stage: GitLab Dedicated
group: Environment Automation
info: To determine the technical writer assigned to the Stage/Group associated with
  this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
description: Maintenance windows, release schedules, and emergency maintenance processes
  for GitLab Dedicated instances.
title: GitLab Dedicated maintenance and release schedule
---
{{< details >}}
- Tier: Ultimate
- Offering: GitLab Dedicated
{{< /details >}}
Regular maintenance is performed on GitLab Dedicated instances according to scheduled maintenance windows and release upgrade timelines.
During scheduled maintenance windows, the following tasks might be performed:
- Application and operating system software patches and upgrades.
- Operating system restarts.
- Infrastructure upgrades.
- Activities needed to operate and enhance the availability or security of your tenant.
- Feature enhancements.
## Maintenance windows
Maintenance is performed outside standard working hours:
| Region | Day | Time (UTC) |
|---------------------------------|---------------|------------|
| Asia Pacific | Wednesday | 1:00 PM-5:00 PM |
| Europe, Middle East, and Africa | Tuesday | 1:00 AM-5:00 AM |
| Americas (Option 1) | Tuesday | 7:00 AM-11:00 AM |
| Americas (Option 2) | Sunday-Monday | 9:00 PM-1:00 AM |
View your maintenance window in [Switchboard](tenant_overview.md#maintenance-windows), including upcoming and recent maintenance.
You can postpone scheduled maintenance to another window in the same week by contacting your Customer Success Manager at least one week in advance.
{{< alert type="note" >}}
The scheduled weekly maintenance window is separate from [emergency maintenance](#emergency-maintenance), which cannot be postponed.
{{< /alert >}}
### Access during maintenance
Downtime is not expected for the entire duration of your maintenance window. A brief service interruption (less than one minute) may occur when compute resources restart after upgrades, typically during the first half of the maintenance window.
Long-running connections may be interrupted during this period. To minimize disruption, you can implement strategies like automatic recovery and retry.
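For example, CI/CD jobs can use the [`retry`](../../ci/yaml/_index.md#retry) keyword to automatically retry after infrastructure-related failures. This is a minimal sketch; the job name and script are illustrative:

```yaml
# Automatically retry jobs that fail for infrastructure reasons,
# such as a runner restart during a maintenance window.
resilient-job:
  script:
    - ./run-tests.sh
  retry:
    max: 2
    when:
      - runner_system_failure
      - stuck_or_timeout_failure
```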
Longer service interruptions are rare. If extended downtime is expected, GitLab provides advance notice.
{{< alert type="note" >}}
Performance degradation or downtime during the scheduled maintenance window does not count against the system service level availability (SLA).
{{< /alert >}}
## Release rollout schedule
GitLab Dedicated is [upgraded](../../subscriptions/gitlab_dedicated/maintenance.md#upgrades-and-patches) to the previous minor version (`N-1`) after each GitLab release. For example, when GitLab 16.9 is released, GitLab Dedicated instances are upgraded to 16.8.
Upgrades occur in your selected [maintenance window](#maintenance-windows) according to the following schedule, where `T` is the date of a [minor GitLab release](../../policy/maintenance.md):
| Calendar days after release | Maintenance window region |
|-------------------|---------------------------|
| `T`+5 | Europe, Middle East, and Africa,<br/> Americas (Option 1) |
| `T`+6 | Asia Pacific |
| `T`+10 | Americas (Option 2) |
For example, GitLab 16.9 was released on 2024-02-15. Instances in the EMEA and Americas (Option 1) regions were then upgraded to 16.8 on 2024-02-20, 5 days after the 16.9 release.
{{< alert type="note" >}}
If a production change lock (PCL) is active during a scheduled upgrade, GitLab defers the upgrade to the first maintenance window after the PCL ends.
A PCL for GitLab Dedicated is a complete pause on all production changes during periods of reduced team availability such as major holidays. During a PCL, the following is paused:
- Configuration changes using Switchboard.
- Code deployments or infrastructure changes.
- Automated maintenance.
- New customer onboarding.
When a PCL is in effect, Switchboard displays a notification banner to alert users.
PCLs help ensure system stability when support resources may be limited.
{{< /alert >}}
## Emergency maintenance
Emergency maintenance is initiated when urgent actions are required on a GitLab Dedicated tenant instance. For example, when a critical (S1) security vulnerability requires urgent patching, GitLab performs emergency maintenance to upgrade your tenant instance to a secure version. This maintenance can occur outside scheduled maintenance windows.
GitLab prioritizes stability and security while minimizing customer impact during emergency maintenance. The specific maintenance procedures follow established internal processes, and all changes undergo appropriate review and approval before they are applied.
GitLab provides advance notice when possible and sends complete details
after the issue is resolved. The GitLab Support team:
- Creates a support ticket for tracking.
- Sends email notifications only to addresses listed as **Operational email addresses** in the
**Customer communication** section of Switchboard.
- Copies your Customer Success Manager (CSM) on all communications.
You cannot postpone emergency maintenance, because the same process must be applied to all
GitLab Dedicated instances to ensure their security and availability.
### Verify your operational contacts
To ensure you receive maintenance notifications:
1. Sign in to [Switchboard](https://console.gitlab-dedicated.com/).
1. Select your tenant.
1. In the **Customer communication** section, review the email addresses listed under **Operational email addresses**.
To update these contacts, submit a support ticket.
---
stage: GitLab Dedicated
group: Environment Automation
info: To determine the technical writer assigned to the Stage/Group associated with
  this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Disaster recovery for GitLab Dedicated
breadcrumbs:
- doc
- administration
- dedicated
---
{{< details >}}
- Tier: Ultimate
- Offering: GitLab Dedicated
{{< /details >}}
The disaster recovery process ensures your GitLab Dedicated instance
can be restored if a disaster affects your primary region.
GitLab can deploy your instance in these AWS regions:
- A primary region where your instance runs.
- If selected, a secondary region that serves as a backup if the primary region fails.
- A backup region where your data backups are replicated for additional protection.
If your primary region becomes unavailable due to an outage or critical system failure,
GitLab initiates a failover to your secondary region. If no secondary region is configured,
recovery uses backup restoration from the backup region.
## Prerequisites
To be eligible for the full recovery objectives, you must:
- Specify both a primary and secondary region during [onboarding](create_instance/_index.md). If you don't specify a secondary region, recovery is limited to [backup restoration](#automated-backups).
- Make sure both regions are [supported by GitLab Dedicated](../../subscriptions/gitlab_dedicated/data_residency_and_high_availability.md#available-aws-regions). If you select a secondary region with [limited support](../../subscriptions/gitlab_dedicated/data_residency_and_high_availability.md#secondary-regions-with-limited-support), the recovery time and point objectives do not apply.
## Recovery objectives
GitLab Dedicated provides disaster recovery with these recovery objectives:
- Recovery Time Objective (RTO): Service is restored to your secondary region in eight hours or less.
- Recovery Point Objective (RPO): Data loss is limited to a maximum of four hours of the most recent changes, depending on when the disaster occurs relative to the last backup.
## Components
GitLab Dedicated leverages two key components to meet disaster recovery commitments:
- Geo replication
- Automated backups
### Geo replication
When you onboard to GitLab Dedicated, you select a primary region and a secondary region for
your environment. Geo continuously replicates data between these regions, including:
- Database content
- Repository storage
- Object storage
### Automated backups
GitLab performs automated backups of the database and repositories every four hours
(six times daily) by creating snapshots. Backups are retained for 30 days
and are geographically replicated by AWS for additional protection.
Database backups:
- Use continuous log-based backups in the primary region for point-in-time recovery.
- Stream replication to the secondary region to provide a near-real-time copy.
Object storage backups:
- Use geographical replication and versioning to provide backup protection.
These backups serve as recovery points during disaster recovery operations.
The four-hour backup frequency supports the Recovery Point Objective (RPO), ensuring you lose no more than four hours of data.
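As a rough illustration of how a four-hour snapshot cadence bounds data loss, the sketch below computes the most recent snapshot taken before a disaster timestamp. The schedule and times here are invented for illustration only; actual snapshot timing on GitLab Dedicated is managed by GitLab.

```python
from datetime import datetime, timedelta

def last_snapshot_before(disaster: datetime, first_snapshot: datetime,
                         interval: timedelta = timedelta(hours=4)) -> datetime:
    """Return the most recent snapshot taken at or before `disaster`,
    assuming snapshots start at `first_snapshot` and repeat every `interval`."""
    elapsed = disaster - first_snapshot
    completed = elapsed // interval  # number of full intervals elapsed
    return first_snapshot + completed * interval

# Hypothetical schedule: snapshots start at midnight, disaster strikes at 09:30.
first = datetime(2024, 2, 20, 0, 0)
disaster = datetime(2024, 2, 20, 9, 30)
snap = last_snapshot_before(disaster, first)
print(snap)             # 2024-02-20 08:00:00
print(disaster - snap)  # 1:30:00 -- always under the four-hour interval
```

By construction, the gap between the disaster and the last snapshot can never exceed the snapshot interval, which is what the RPO guarantee relies on.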
## Disaster coverage
Disaster recovery covers these scenarios with guaranteed recovery objectives:
- Partial region outage (for example, availability zone failure)
- Complete outage of your primary region
Disaster recovery covers these scenarios on a best-effort basis without guaranteed recovery objectives:
- Loss of both primary and secondary regions
- Global internet outages
- Data corruption issues
Disaster recovery has these service limitations:
- Advanced search indexes are not continuously replicated. After failover, these indexes are rebuilt when the secondary region is promoted. Basic search remains available during rebuilding.
- ClickHouse Cloud is provisioned only in the primary region. Features that require this service might be unavailable if the primary region is completely down.
- Production preview environments do not have secondary instances.
- Hosted runners are supported only in the primary region and cannot be rebuilt in the secondary instance.
- Some secondary regions have limited support and are not covered by the RPO and RTO targets. These regions have limited email functionality and resilience in your failover instance because of AWS limitations. For more information, see [secondary regions with limited support](../../subscriptions/gitlab_dedicated/data_residency_and_high_availability.md#secondary-regions-with-limited-support).
GitLab does not provide:
- Programmatic monitoring of failover events
- Customer-initiated disaster recovery testing
## Disaster recovery workflow
Disaster recovery is initiated when your instance becomes unavailable to most users due to:
- Complete region failure (for example, an AWS region outage).
- Critical component failure in the GitLab service or infrastructure that cannot be quickly recovered.
### Failover process
When your instance becomes unavailable, the GitLab Dedicated team:
1. Gets alerted by monitoring systems.
1. Investigates if failover is required.
1. If failover is required:
1. Notifies you that failover is in progress.
1. Promotes the secondary region to primary.
1. Updates DNS records for `<customer>.gitlab-dedicated.com` to point to the newly promoted
region.
1. Notifies you when failover completes.
If you use PrivateLink, you must update your internal networking configuration
to target the PrivateLink endpoint for the secondary region. To minimize downtime,
configure equivalent PrivateLink endpoints in your secondary region before a disaster occurs.
The failover process typically completes in 90 minutes or less.
### Communication during a disaster
During a disaster event, GitLab communicates with you through one or more of:
- Your operational contact information in Switchboard
- Slack
- Support tickets
GitLab may establish a temporary Slack channel and Zoom bridge to coordinate with
your team throughout the recovery process.
## Related topics
- [Data residency and high availability](../../subscriptions/gitlab_dedicated/data_residency_and_high_availability.md)
- [GitLab Dedicated architecture](architecture.md)
---
stage: GitLab Dedicated
group: Switchboard
info: To determine the technical writer assigned to the Stage/Group associated with
  this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
description: GitLab Dedicated encryption protects data at rest and in transit using
  AWS technologies, with support to bring your own encryption keys (BYOK).
title: GitLab Dedicated encryption
breadcrumbs:
- doc
- administration
- dedicated
---
{{< details >}}
- Tier: Ultimate
- Offering: GitLab Dedicated
{{< /details >}}
GitLab Dedicated provides secure encryption capabilities to protect your data through robust security
infrastructure provided by AWS. Data is encrypted both at rest and in transit.
## Encrypted data at rest
GitLab Dedicated encrypts all stored data using AWS AES-256 (Advanced Encryption Standard with 256-bit
keys). This encryption applies to all AWS storage services used by GitLab Dedicated.
| Service | How it's encrypted |
|-------------|-------------------|
| Amazon S3 (SSE-S3) | Uses per-object encryption where each object is encrypted with its own unique key, which is then encrypted by an AWS-managed root key. |
| Amazon EBS | Uses volume-level encryption using Data Encryption Keys (DEKs) generated by AWS Key Management Service (KMS). |
| Amazon RDS (PostgreSQL) | Uses storage-level encryption using DEKs generated by AWS KMS. |
| AWS KMS | Manages encryption keys in an AWS-managed key hierarchy, protecting them using Hardware Security Modules (HSMs). |
All services use the AES-256 encryption standard. In this envelope encryption system:
1. Your data is encrypted using Data Encryption Keys (DEKs).
1. The DEKs themselves are encrypted using AWS KMS keys.
1. Encrypted DEKs are stored alongside your encrypted data.
1. AWS KMS keys remain in the AWS Key Management Service and are never exposed in unencrypted form.
1. All encryption keys are protected by Hardware Security Modules (HSMs).
This envelope encryption process works by having AWS KMS derive the DEKs specifically for each
encryption operation. The DEK directly encrypts your data, while the DEK itself is encrypted by the
AWS KMS key, creating a secure envelope around your data.
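The DEK/KEK layering described above can be sketched in a few lines. This is a deliberately simplified toy: the cipher below is a SHA-256 counter keystream XOR standing in for AES-256, and the "KMS" is just a local variable, so it illustrates only the envelope structure, not the actual AWS implementation.

```python
import os
import hashlib

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy stream cipher (SHA-256 counter keystream XOR) standing in for
    AES-256. Illustrative only -- NOT the cipher AWS actually uses."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

# The KEK plays the role of the AWS KMS key; it never leaves "KMS" here.
kek = os.urandom(32)

# 1. Data is encrypted with a fresh Data Encryption Key (DEK).
dek = os.urandom(32)
plaintext = b"repository contents"
ciphertext = keystream_xor(dek, plaintext)

# 2. The DEK itself is encrypted under the KEK (the "envelope").
wrapped_dek = keystream_xor(kek, dek)

# 3. The wrapped DEK is stored alongside the ciphertext. To decrypt,
#    first unwrap the DEK with the KEK, then decrypt the data with it.
recovered_dek = keystream_xor(kek, wrapped_dek)
recovered = keystream_xor(recovered_dek, ciphertext)
print(recovered)  # b'repository contents'
```

The point of the structure is that only the small wrapped DEK ever passes through the key service, while the bulk data is encrypted locally with the DEK.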
### Encryption key sources
Your AWS KMS encryption key can come from one of the following sources:
- [AWS-managed keys](#aws-managed-keys) (default): GitLab and AWS handle all aspects of key generation and
management.
- [Bring your own key (BYOK)](#bring-your-own-key-byok): You provide and control your own AWS KMS
keys.
All key generation takes place in AWS KMS using dedicated hardware, ensuring high security
standards for encryption across all storage services.
The following table summarizes the functional differences between these options:
| Encryption key source | AWS-managed keys | Bring your own key (BYOK) |
|-----------------------|---------------------------------------------------------------------------------------------|---------------------------|
| **Key generation** | Generated automatically if BYOK not provided. | You create your own AWS KMS keys. |
| **Ownership** | AWS manages on your behalf. | You own and manage your keys. |
| **Access control** | Only AWS services using the keys can decrypt and access them. You don't have direct access. | You control access through IAM policies in your AWS account. |
| **Setup** | No setup required. | Must be set up before onboarding. Cannot be enabled later. |
### AWS-managed keys
When you don't bring your own key, AWS uses AWS-managed KMS keys for encryption by
default. These keys are automatically created and maintained by AWS for each service.
AWS KMS manages access to AWS-managed keys using AWS Identity and Access
Management (IAM). This architecture ensures that even AWS personnel cannot access your encryption
keys or decrypt your data directly, as all key operations are managed through the HSM-based security
controls.
You do not have direct access to AWS-managed KMS keys. Only the specific AWS services that need them (S3, EBS, RDS) can request encryption or decryption operations for resources they manage on your behalf.
To learn more, see the Amazon documentation on [AWS managed keys](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#aws-managed-cmk).
### Bring your own key (BYOK)
With BYOK, you can encrypt your GitLab Dedicated data at rest using your own AWS KMS keys. This way,
you retain control over your own AWS KMS encryption keys. You manage access policies through your AWS
account.
{{< alert type="note" >}}
BYOK must be enabled during instance onboarding. Once enabled, it cannot be disabled.
If you did not enable BYOK during onboarding, your data is still encrypted at rest with AWS-managed
keys, but you cannot use your own keys.
{{< /alert >}}
Due to key rotation requirements, GitLab Dedicated only supports keys with AWS-managed key material
(the [AWS_KMS](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#key-origin)
origin type).
In GitLab Dedicated, you can use KMS keys in several ways:
- One KMS key for all services across all regions: Use a single multi-region key with replicas in each region where you have Geo instances.
- One KMS key for all services within each region: Use separate keys for each region where you have Geo instances.
- Per-service KMS keys per region: Use different keys for different services (backup, EBS, RDS, S3, advanced search) within each region.

Keys do not need to be unique to each service, and selective enablement of BYOK is not supported.
#### Create AWS KMS keys for BYOK
Create your KMS keys using the AWS Console.
Prerequisites:
- You must have received your GitLab AWS account ID from the GitLab Dedicated account team.
- Your GitLab Dedicated tenant instance must not yet be created.
To create AWS KMS keys for BYOK:
1. Sign in to the AWS Console and go to the KMS service.
1. Select the region of the Geo instance you want to create a key for.
1. Select **Create key**.
1. In the **Configure key** section:
1. For **Key type**, select **Symmetric**.
1. For **Key usage**, select **Encrypt and decrypt**.
1. Under **Advanced options**:
1. For **Key material origin**, select **KMS**.
1. For **Regionality**, select **Multi-Region key**.
1. Enter your values for key alias, description, and tags.
1. Select key administrators.
1. Optional. Allow or prevent key administrators from deleting the key.
1. On the **Define key usage permissions** page, under **Other AWS accounts**, add the GitLab AWS
account.
1. Review the KMS key policy. It should look similar to the example below, populated with your
account IDs and usernames.
```json
{
"Version": "2012-10-17",
"Id": "byok-key-policy",
"Statement": [
{
"Sid": "Enable IAM User Permissions",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::<CUSTOMER-ACCOUNT-ID>:root"
},
"Action": "kms:*",
"Resource": "*"
},
{
"Sid": "Allow access for Key Administrators",
"Effect": "Allow",
"Principal": {
"AWS": [
"arn:aws:iam::<CUSTOMER-ACCOUNT-ID>:user/<CUSTOMER-USER>"
]
},
"Action": [
"kms:Create*",
"kms:Describe*",
"kms:Enable*",
"kms:List*",
"kms:Put*",
"kms:Update*",
"kms:Revoke*",
"kms:Disable*",
"kms:Get*",
"kms:Delete*",
"kms:TagResource",
"kms:UntagResource",
"kms:ScheduleKeyDeletion",
"kms:CancelKeyDeletion",
"kms:ReplicateKey",
"kms:UpdatePrimaryRegion"
],
"Resource": "*"
},
{
"Sid": "Allow use of the key",
"Effect": "Allow",
"Principal": {
"AWS": [
"arn:aws:iam::<GITLAB-ACCOUNT-ID>:root"
]
},
"Action": [
"kms:Encrypt",
"kms:Decrypt",
"kms:ReEncrypt*",
"kms:GenerateDataKey*",
"kms:DescribeKey"
],
"Resource": "*"
},
{
"Sid": "Allow attachment of persistent resources",
"Effect": "Allow",
"Principal": {
"AWS": [
"arn:aws:iam::<GITLAB-ACCOUNT-ID>:root"
]
},
"Action": [
"kms:CreateGrant",
"kms:ListGrants",
"kms:RevokeGrant"
],
"Resource": "*"
}
]
}
```
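Before sharing the key, you might sanity-check a policy like the one above to confirm the GitLab account is granted the usage actions listed in the "Allow use of the key" statement. The snippet below is a hypothetical helper, not a GitLab-provided tool; the account IDs are placeholders, and it matches action strings literally rather than applying full IAM wildcard semantics.

```python
import json

# The usage actions the "Allow use of the key" statement must grant.
REQUIRED_USE_ACTIONS = {
    "kms:Encrypt", "kms:Decrypt", "kms:ReEncrypt*",
    "kms:GenerateDataKey*", "kms:DescribeKey",
}

def account_has_use_actions(policy: dict, account_arn: str) -> bool:
    """Check that `account_arn` is allowed every required usage action
    across the union of the policy's Allow statements. Literal string
    matching only -- real IAM wildcard evaluation is richer than this."""
    granted = set()
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        principals = stmt.get("Principal", {}).get("AWS", [])
        if isinstance(principals, str):
            principals = [principals]
        if account_arn not in principals:
            continue
        actions = stmt.get("Action", [])
        granted |= set([actions] if isinstance(actions, str) else actions)
    return "kms:*" in granted or REQUIRED_USE_ACTIONS <= granted

policy = json.loads("""
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Allow use of the key",
      "Effect": "Allow",
      "Principal": {"AWS": ["arn:aws:iam::111122223333:root"]},
      "Action": ["kms:Encrypt", "kms:Decrypt", "kms:ReEncrypt*",
                 "kms:GenerateDataKey*", "kms:DescribeKey"],
      "Resource": "*"
    }
  ]
}
""")
print(account_has_use_actions(policy, "arn:aws:iam::111122223333:root"))  # True
```

A check like this catches the common mistake of adding the GitLab account as a key administrator but forgetting the separate usage statement.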
#### Create replica keys for additional Geo instances
Create [replica keys](https://docs.aws.amazon.com/kms/latest/developerguide/multi-region-keys-replicate.html)
when you want to use the same KMS key across multiple Geo instances in different regions.
To create replica keys:
1. In the AWS Key Management Service (AWS KMS) console, go to the key you previously created.
1. Select the **Regionality** tab.
1. In the **Related multi-Region keys** section, select **Create new replica keys**.
1. Choose one or more AWS Regions where you have additional Geo instances.
1. Keep the original alias or enter a different alias for the replica key.
1. Optional. Enter a description and add tags.
1. Select the IAM users and roles that can administer the replica key.
1. Optional. Select or clear the **Allow key administrators to delete this key** checkbox.
1. Select **Next**.
1. On the **Define key usage permissions** page, verify that the GitLab AWS account
is listed under **Other AWS accounts**.
1. Select **Next** and review the policy.
1. Select **Next**, review your settings, and select **Finish**.
For more information on creating and managing KMS keys, see the [AWS KMS documentation](https://docs.aws.amazon.com/kms/latest/developerguide/getting-started.html).
#### Enable BYOK for your instance
To enable BYOK:
1. Collect the ARNs for all keys you created, including any replica keys in their respective regions.
1. Before your GitLab Dedicated tenant is provisioned, ensure these ARNs have been entered in Switchboard during [onboarding](create_instance/_index.md).
1. Make sure the AWS KMS keys are replicated to your desired primary, secondary, and backup regions
specified in Switchboard during [onboarding](create_instance/_index.md).
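Before entering the ARNs in Switchboard, it can help to confirm that your key and replica ARNs together cover every required region. KMS key ARNs have the form `arn:aws:kms:<region>:<account-id>:key/<key-id>`; the helper below is an illustrative check with placeholder account IDs and regions, not part of Switchboard.

```python
def arn_region(arn: str) -> str:
    """Extract the region field from a KMS key ARN of the form
    arn:aws:kms:<region>:<account-id>:key/<key-id>."""
    parts = arn.split(":")
    if len(parts) < 6 or parts[0] != "arn" or parts[2] != "kms":
        raise ValueError(f"not a KMS ARN: {arn}")
    return parts[3]

def missing_regions(arns: list, required: set) -> set:
    """Return required regions with no matching key or replica ARN."""
    return required - {arn_region(a) for a in arns}

# Hypothetical multi-region key plus one replica, and the regions
# chosen as primary, secondary, and backup during onboarding.
arns = [
    "arn:aws:kms:us-east-1:111122223333:key/mrk-1234abcd",
    "arn:aws:kms:us-west-2:111122223333:key/mrk-1234abcd",
]
required = {"us-east-1", "us-west-2", "eu-west-1"}
print(missing_regions(arns, required))  # {'eu-west-1'}
```

Here the check reveals that no replica exists yet in `eu-west-1`, so one more replica key would be needed before onboarding.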
## Encrypted data in transit
GitLab Dedicated encrypts all data moving over networks using TLS (Transport Layer Security) with
strong cipher suites. This encryption applies to all network communications used by GitLab Dedicated
services.
| Service | How it's encrypted |
|---------|-------------------|
| Web application | Uses TLS 1.2/1.3 to encrypt client-server communication. |
| Amazon S3 | Uses TLS 1.2/1.3 to encrypt HTTPS access. |
| Amazon EBS | Uses TLS to encrypt data replication between AWS data centers. |
| Amazon RDS (PostgreSQL) | Uses SSL/TLS (minimum TLS 1.2) to encrypt database connections. |
| AWS KMS | Uses TLS to encrypt API requests. |
{{< alert type="note" >}}
Encryption for data in transit is performed with TLS using keys generated and managed by GitLab
Dedicated components, and is not covered by BYOK.
{{< /alert >}}
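Client-side, you can enforce the same TLS floor when connecting to your instance. A minimal sketch with Python's standard `ssl` module (no connection is made here; this only configures the client context):

```python
import ssl

# Build a client context that refuses anything older than TLS 1.2,
# matching the minimum version GitLab Dedicated services accept.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2

print(context.minimum_version == ssl.TLSVersion.TLSv1_2)  # True
```

A context configured this way fails the handshake against any endpoint that only offers TLS 1.1 or older, rather than silently downgrading.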
### Custom TLS certificates
You can configure custom TLS certificates to use your organization's certificates for encrypted
communications.
For more information on configuring custom certificates, see
[custom certificates](configure_instance/network_security.md#custom-certificates).
|
---
stage: GitLab Dedicated
group: Switchboard
info: To determine the technical writer assigned to the Stage/Group associated with
this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
description: GitLab Dedicated encryption protects data at rest and in transit using
AWS technologies, with support to bring your own encryption keys (BYOK).
title: GitLab Dedicated encryption
breadcrumbs:
- doc
- administration
- dedicated
---
{{< details >}}
- Tier: Ultimate
- Offering: GitLab Dedicated
{{< /details >}}
GitLab Dedicated provides secure encryption capabilities to protect your data through robust security
infrastructure provided by AWS. Data is encrypted both at rest and in transit.
## Encrypted data at rest
GitLab Dedicated encrypts all stored data using AWS AES-256 (Advanced Encryption Standard with 256-bit
keys). This encryption applies to all AWS storage services used by GitLab Dedicated.
| Service | How it's encrypted |
|-------------|-------------------|
| Amazon S3 (SSE-S3) | Uses per-object encryption where each object is encrypted with its own unique key, which is then encrypted by an AWS-managed root key. |
| Amazon EBS | Uses volume-level encryption using Data Encryption Keys (DEKs) generated by AWS Key Management Service (KMS). |
| Amazon RDS (PostgreSQL) | Uses storage-level encryption using DEKs generated by AWS KMS. |
| AWS KMS | Manages encryption keys in an AWS-managed key hierarchy, protecting them using Hardware Security Modules (HSMs). |
All services use AES-256 encryption standard. In this envelope encryption system:
1. Your data is encrypted using Data Encryption Keys (DEKs).
1. The DEKs themselves are encrypted using AWS KMS keys.
1. Encrypted DEKs are stored alongside your encrypted data.
1. AWS KMS keys remain in the AWS Key Management Service and are never exposed in unencrypted form.
1. All encryption keys are protected by Hardware Security Modules (HSMs).
This envelope encryption process works by having AWS KMS derive the DEKs specifically for each
encryption operation. The DEK directly encrypts your data, while the DEK itself is encrypted by the
AWS KMS key, creating a secure envelope around your data.
### Encryption key sources
Your AWS KMS encryption key can come from one of the following sources:
- [AWS-managed keys](#aws-managed-keys) (default): GitLab and AWS handle all aspects of key generation and
management.
- [Bring your own key (BYOK)](#bring-your-own-key-byok): You provide and control your own AWS KMS
keys.
All key generation takes place in AWS KMS using dedicated hardware, ensuring high security
standards for encryption across all storage services.
The following table summarizes the functional differences between these options:
| Encryption key source | AWS-managed keys | Bring your own key (BYOK) |
|-----------------------|---------------------------------------------------------------------------------------------|---------------------------|
| **Key generation** | Generated automatically if BYOK not provided. | You create your own AWS KMS keys. |
| **Ownership** | AWS manages on your behalf. | You own and manage your keys. |
| **Access control** | Only AWS services using the keys can decrypt and access them. You don't have direct access. | You control access through IAM policies in your AWS account. |
| **Setup** | No setup required. | Must be set up before onboarding. Cannot be enabled later. |
### AWS-managed keys
When you don't bring your own key, AWS uses AWS-managed KMS keys for encryption by
default. These keys are automatically created and maintained by AWS for each service.
AWS KMS manages access to AWS-managed keys using AWS Identity and Access
Management (IAM). This architecture ensures that even AWS personnel cannot access your encryption
keys or decrypt your data directly, as all key operations are managed through the HSM-based security
controls.
You do not have direct access to AWS-managed KMS keys. Only the specific AWS services you use with
your instance can request encryption or decryption operations for resources they manage on your
behalf.
Only AWS services that need access to the key (S3, EBS, RDS) can use them. AWS personnel do not have
direct access to key material, as AWS KMS keys are protected by an internal HSM-based mechanism.
To learn more, see the Amazon documentation on [AWS managed keys](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#aws-managed-cmk).
### Bring your own key (BYOK)
With BYOK, you can encrypt your GitLab Dedicated data at rest using your own AWS KMS keys. This way,
you retain control over your own AWS KMS encryption keys. You manage access policies through your AWS
account.
{{< alert type="note" >}}
BYOK must be enabled during instance onboarding. Once enabled, it cannot be disabled.
If you did not enable BYOK during onboarding, your data is still encrypted at rest with AWS-managed
keys, but you cannot use your own keys.
{{< /alert >}}
Due to key rotation requirements, GitLab Dedicated only supports keys with AWS-managed key material
(the [AWS_KMS](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#key-origin)
origin type).
In GitLab Dedicated, you can use KMS keys in several ways:
- One KMS key for all services across all regions: Use a single multi-region key with replicas in each region where you have Geo instances.
- One KMS key for all services within each region: Use separate keys for each region where you have Geo instances.
- Per-service KMS keys per region: Use different keys for different services (backup, EBS, RDS, S3, advanced search) within each region.
- Keys do not need to be unique to each service.
- Selective enablement is not supported.
#### Create AWS KMS keys for BYOK
Create your KMS keys using the AWS Console.
Prerequisites:
- You must have received your GitLab AWS account ID from the GitLab Dedicated account team.
- Your GitLab Dedicated tenant instance must not yet be created.
To create AWS KMS keys for BYOK:
1. Sign in to the AWS Console and go to the KMS service.
1. Select the region of the Geo instance you want to create a key for.
1. Select **Create key**.
1. In the **Configure key** section:
1. For **Key type**, select **Symmetrical**.
1. For **Key usage**, select **Encrypt and decrypt**.
1. Under **Advanced options**:
1. For **Key material origin**, select **KMS**.
1. For **Regionality**, select **Multi-Region key**.
1. Enter your values for key alias, description, and tags.
1. Select key administrators.
1. Optional. Allow or prevent key administrators from deleting the key.
1. On the **Define key usage permissions** page, under **Other AWS accounts**, add the GitLab AWS
account.
1. Review the KMS key policy. It should look similar to the example below, populated with your
account IDs and usernames.
```json
{
"Version": "2012-10-17",
"Id": "byok-key-policy",
"Statement": [
{
"Sid": "Enable IAM User Permissions",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::<CUSTOMER-ACCOUNT-ID>:root"
},
"Action": "kms:*",
"Resource": "*"
},
{
"Sid": "Allow access for Key Administrators",
"Effect": "Allow",
"Principal": {
"AWS": [
"arn:aws:iam::<CUSTOMER-ACCOUNT-ID>:user/<CUSTOMER-USER>"
]
},
"Action": [
"kms:Create*",
"kms:Describe*",
"kms:Enable*",
"kms:List*",
"kms:Put*",
"kms:Update*",
"kms:Revoke*",
"kms:Disable*",
"kms:Get*",
"kms:Delete*",
"kms:TagResource",
"kms:UntagResource",
"kms:ScheduleKeyDeletion",
"kms:CancelKeyDeletion",
"kms:ReplicateKey",
"kms:UpdatePrimaryRegion"
],
"Resource": "*"
},
{
"Sid": "Allow use of the key",
"Effect": "Allow",
"Principal": {
"AWS": [
"arn:aws:iam::<GITLAB-ACCOUNT-ID>:root"
]
},
"Action": [
"kms:Encrypt",
"kms:Decrypt",
"kms:ReEncrypt*",
"kms:GenerateDataKey*",
"kms:DescribeKey"
],
"Resource": "*"
},
{
"Sid": "Allow attachment of persistent resources",
"Effect": "Allow",
"Principal": {
"AWS": [
"arn:aws:iam::<GITLAB-ACCOUNT-ID>:root"
]
},
"Action": [
"kms:CreateGrant",
"kms:ListGrants",
"kms:RevokeGrant"
],
"Resource": "*"
}
]
}
```
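Before you submit the policy, you can sanity-check that it grants the GitLab AWS account the usage permissions it needs. The following is a minimal, illustrative Python sketch (not an official tool); the account IDs are placeholders, and the required actions mirror the `Allow use of the key` statement in the example above:

```python
import json

# Required KMS actions for the GitLab AWS account, per the example policy.
REQUIRED_USE_ACTIONS = {
    "kms:Encrypt", "kms:Decrypt", "kms:ReEncrypt*",
    "kms:GenerateDataKey*", "kms:DescribeKey",
}

def gitlab_can_use_key(policy: dict, gitlab_account_id: str) -> bool:
    """Return True if an Allow statement grants the GitLab account
    every action in REQUIRED_USE_ACTIONS."""
    gitlab_arn = f"arn:aws:iam::{gitlab_account_id}:root"
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        principals = stmt.get("Principal", {}).get("AWS", [])
        if isinstance(principals, str):
            principals = [principals]
        if gitlab_arn not in principals:
            continue
        if REQUIRED_USE_ACTIONS.issubset(set(stmt.get("Action", []))):
            return True
    return False

# Placeholder policy with a placeholder account ID.
policy = json.loads("""
{ "Version": "2012-10-17",
  "Statement": [
    { "Sid": "Allow use of the key",
      "Effect": "Allow",
      "Principal": {"AWS": ["arn:aws:iam::111122223333:root"]},
      "Action": ["kms:Encrypt", "kms:Decrypt", "kms:ReEncrypt*",
                 "kms:GenerateDataKey*", "kms:DescribeKey"],
      "Resource": "*" } ] }
""")
print(gitlab_can_use_key(policy, "111122223333"))  # True
```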
#### Create replica keys for additional Geo instances
Create [replica keys](https://docs.aws.amazon.com/kms/latest/developerguide/multi-region-keys-replicate.html)
when you want to use the same KMS key across multiple Geo instances in different regions.
To create replica keys:
1. In the AWS Key Management Service (AWS KMS) console, go to the key you previously created.
1. Select the **Regionality** tab.
1. In the **Related multi-Region keys** section, select **Create new replica keys**.
1. Choose one or more AWS Regions where you have additional Geo instances.
1. Keep the original alias or enter a different alias for the replica key.
1. Optional. Enter a description and add tags.
1. Select the IAM users and roles that can administer the replica key.
1. Optional. Select or clear the **Allow key administrators to delete this key** checkbox.
1. Select **Next**.
1. On the **Define key usage permissions** page, verify that the GitLab AWS account
is listed under **Other AWS accounts**.
1. Select **Next** and review the policy.
1. Select **Next**, review your settings, and select **Finish**.
For more information on creating and managing KMS keys, see the [AWS KMS documentation](https://docs.aws.amazon.com/kms/latest/developerguide/getting-started.html).
#### Enable BYOK for your instance
To enable BYOK:
1. Collect the ARNs for all keys you created, including any replica keys in their respective regions.
1. Before your GitLab Dedicated tenant is provisioned, ensure these ARNs are entered in Switchboard during [onboarding](create_instance/_index.md).
1. Make sure the AWS KMS keys are replicated to your desired primary, secondary, and backup regions
specified in Switchboard during [onboarding](create_instance/_index.md).
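Each KMS key ARN encodes its region, so you can quickly confirm that the ARNs you collected cover every region you specified. A small sketch under assumed placeholder ARNs and regions:

```python
def arn_region(arn: str) -> str:
    # KMS ARN format: arn:aws:kms:<region>:<account-id>:key/<key-id>
    return arn.split(":")[3]

# Placeholder ARNs for a multi-Region key and its replicas.
key_arns = [
    "arn:aws:kms:us-east-1:111122223333:key/mrk-1234abcd",
    "arn:aws:kms:us-west-2:111122223333:key/mrk-1234abcd",
    "arn:aws:kms:eu-west-1:111122223333:key/mrk-1234abcd",
]
# Primary, secondary, and backup regions chosen in Switchboard (placeholders).
required_regions = {"us-east-1", "us-west-2", "eu-west-1"}

covered = {arn_region(arn) for arn in key_arns}
missing = required_regions - covered
print(missing or "all regions covered")  # all regions covered
```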
## Encrypted data in transit
GitLab Dedicated encrypts all data moving over networks using TLS (Transport Layer Security) with
strong cipher suites. This encryption applies to all network communications used by GitLab Dedicated
services.
| Service | How it's encrypted |
|---------|-------------------|
| Web application | Uses TLS 1.2/1.3 to encrypt client-server communication. |
| Amazon S3 | Uses TLS 1.2/1.3 to encrypt HTTPS access. |
| Amazon EBS | Uses TLS to encrypt data replication between AWS data centers. |
| Amazon RDS (PostgreSQL) | Uses SSL/TLS (minimum TLS 1.2) to encrypt database connections. |
| AWS KMS | Uses TLS to encrypt API requests. |
{{< alert type="note" >}}
Encryption for data in transit is performed with TLS using keys generated and managed by GitLab
Dedicated components, and is not covered by BYOK.
{{< /alert >}}
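Clients that connect to your instance can enforce the same floor on their side. This generic Python example pins a TLS client context to TLS 1.2 or later; it is not specific to GitLab Dedicated:

```python
import ssl

# Build a client context that refuses anything older than TLS 1.2,
# matching the minimum versions listed in the table above.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2

print(context.minimum_version >= ssl.TLSVersion.TLSv1_2)  # True
```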
### Custom TLS certificates
You can configure custom TLS certificates to use your organization's certificates for encrypted
communications.
For more information on configuring custom certificates, see
[custom certificates](configure_instance/network_security.md#custom-certificates).
---
stage: GitLab Dedicated
group: Switchboard
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
description: Learn how storage is allocated and managed in GitLab Dedicated, including repository storage and object storage.
title: GitLab Dedicated storage types
---
{{< details >}}
- Tier: Ultimate
- Offering: GitLab Dedicated
{{< /details >}}
GitLab Dedicated provides a single-tenant, fully managed GitLab instance deployed in your preferred AWS
cloud region. Your account team works with you to determine your storage needs during the procurement
process.
Understanding how storage works in GitLab Dedicated helps you make informed decisions
about instance configuration and resource management.
## Storage components
GitLab Dedicated uses different types of storage for different purposes. The total storage allocation is
divided between these components based on usage patterns.
### Total storage size
Total storage size is the combined storage allocated to a GitLab Dedicated instance, including both
your repository storage and object storage. This allocation represents the total storage capacity purchased with a
GitLab Dedicated subscription and configured during instance provisioning.
When determining storage needs, this is the primary metric used for planning and pricing. The total
storage is then distributed between repository storage and object storage based on expected usage patterns.
### Repository storage
Repository storage refers to the space allocated for Git repositories across your Gitaly nodes. This storage
is distributed among the Gitaly nodes in your instance based on your reference architecture.
#### Repository storage per Gitaly node
Each Gitaly node in your instance has a specific storage capacity. This capacity affects how large individual
repositories can be, because no single repository can exceed the capacity of a single Gitaly node.
For example, if each Gitaly node has 100 GB of storage capacity and there are 3 Gitaly nodes, your instance
can store a total of 300 GB of repository data, but no single repository can exceed 100 GB.
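This constraint can be expressed directly: total repository capacity is the sum of the node capacities, while the largest possible repository is bounded by a single node. A minimal sketch of the worked example above:

```python
def repository_limits(node_capacity_gb: int, node_count: int) -> tuple[int, int]:
    """Return (total repository capacity, maximum size of any single repository)."""
    return node_capacity_gb * node_count, node_capacity_gb

total, max_repo = repository_limits(node_capacity_gb=100, node_count=3)
print(total, max_repo)  # 300 100
```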
### Object storage
Object storage is a storage architecture that manages data as objects rather than as a file hierarchy.
In GitLab, object storage handles everything that is not part of Git repositories, including:
- Job artifacts and job logs from CI/CD pipelines
- Images stored in the container registry
- Packages stored in the package registry
- Websites deployed with GitLab Pages
- State files for Terraform projects
Object storage in GitLab Dedicated is implemented using Amazon S3 with appropriate replication for data protection.
### Blended storage
Blended storage is the overall storage used by a GitLab Dedicated instance, including object
storage, repository storage, and data transfer.
<!-- vale gitlab_base.Spelling = NO -->
### Unblended storage
Unblended storage is the storage capacity at the infrastructure level for each storage type.
You primarily work with the total storage size and repository storage numbers.
<!-- vale gitlab_base.Spelling = YES -->
## Storage planning and configuration
Storage planning for a GitLab Dedicated instance involves understanding how object and repository storage is
allocated across the infrastructure.
### Determining initial storage allocation
The GitLab Dedicated account team helps determine the appropriate storage amount based on:
- Number of users
- Number and size of repositories
- CI/CD usage patterns
- Anticipated growth
### Repository capacity and reference architectures
Your repository storage is distributed across Gitaly nodes. This affects how large
individual repositories can be, as no single repository can exceed the capacity of a single Gitaly
node.
The number of Gitaly nodes for an instance depends on the reference architecture determined during
onboarding, based primarily on user count. Reference architectures for instances with more than
2,000 users typically use three Gitaly nodes. For more information, see
[reference architectures](../../reference_architectures/_index.md).
#### View reference architecture
To view your reference architecture:
1. Sign in to [Switchboard](https://console.gitlab-dedicated.com/).
1. At the top of the page, select **Configuration**.
1. From the tenant overview page, locate the **Reference architecture** field.
{{< alert type="note" >}}
To confirm the number of Gitaly nodes in your tenant architecture, [submit a support ticket](https://support.gitlab.com/hc/en-us/requests/new?ticket_form_id=4414917877650).
{{< /alert >}}
### Example storage calculations
These examples demonstrate how storage allocation affects repository size limitations:
#### Standard workload with 2,000 users
- Reference architecture: Up to 2,000 users (1 Gitaly node)
- Total storage size: 1 TB (1,024 GB)
- Allocation: 600 GB repository storage, 424 GB object storage
- Repository storage per Gitaly node: 600 GB
```mermaid
%%{init: { "fontFamily": "GitLab Sans" }}%%
graph TD
accTitle: Storage allocation for 2,000 users
accDescr: Diagram showing 1 TB total storage with repository storage on a single Gitaly node and object storage
subgraph A[Total storage size: 1 TB]
B[Repository storage: 600 GB]
C[Object storage: 424 GB]
B --> D[Gitaly node: 600 GB]
end
```
#### CI/CD-intensive workload with 10,000 users
- Reference architecture: Up to 10,000 users (3 Gitaly nodes)
- Total storage size: 5 TB (5,120 GB)
- Allocation: 2,048 GB repository storage, 3,072 GB object storage
- Repository storage per Gitaly node: 682 GB (2,048 GB ÷ 3 Gitaly nodes)
```mermaid
%%{init: { "fontFamily": "GitLab Sans" }}%%
graph TD
accTitle: Storage allocation for 10,000 users
accDescr: Diagram showing 5 TB total storage with repository storage across 3 Gitaly nodes and object storage
subgraph A[Total storage size: 5 TB]
B[Repository storage: 2,048 GB]
C[Object storage: 3,072 GB]
D[Gitaly node 1: 682 GB]
E[Gitaly node 2: 682 GB]
F[Gitaly node 3: 682 GB]
B --- D
B --- E
B --- F
end
```
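The per-node figures in both examples follow from dividing repository storage evenly across Gitaly nodes, rounded down to whole gigabytes:

```python
def per_node_storage_gb(repository_storage_gb: int, gitaly_nodes: int) -> int:
    # Even split across nodes, rounded down to whole gigabytes.
    return repository_storage_gb // gitaly_nodes

print(per_node_storage_gb(600, 1))   # 600 (2,000-user example)
print(per_node_storage_gb(2048, 3))  # 682 (10,000-user example)
```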
## Manage storage growth
To manage storage growth effectively:
- Set cleanup policies for the [package registry](../../../user/packages/package_registry/reduce_package_registry_storage.md#cleanup-policy) to automatically remove old package assets.
- Set cleanup policies for the [container registry](../../../user/packages/container_registry/reduce_container_registry_storage.md#cleanup-policy) to remove unused container tags.
- Set an expiration period for [job artifacts](../../../ci/jobs/job_artifacts.md#with-an-expiry).
- Review and archive or remove [unused projects](../../../user/project/working_with_projects.md).
## Frequently asked questions
### Can I change my storage allocation after my instance is provisioned?
Yes, you can request additional storage by contacting your account team or opening a support ticket.
Changes to storage affect billing.
### How does storage affect performance?
Proper storage allocation ensures optimal performance. Undersized storage can lead to performance
issues, particularly for repository operations and CI/CD pipelines.
### How is storage handled for Geo replication?
GitLab Dedicated includes a secondary Geo site for disaster recovery, with storage allocation
based on your primary site configuration.
### Can I bring my own S3 bucket for object storage?
No, GitLab Dedicated uses AWS S3 buckets managed by GitLab in your tenant account.
## Related topics
- [Data residency and high availability](../../../subscriptions/gitlab_dedicated/data_residency_and_high_availability.md)
- [Reference architectures](../../reference_architectures/_index.md)
- [Repository storage](../../repository_storage_paths.md)
- [Object storage](../../object_storage.md)
---
stage: GitLab Dedicated
group: Switchboard
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
description: Create your GitLab Dedicated instance with Switchboard.
title: Create your GitLab Dedicated instance
---
{{< details >}}
- Tier: Ultimate
- Offering: GitLab Dedicated
{{< /details >}}
The instructions on this page guide you through the onboarding and initial setup of your GitLab Dedicated instance using [Switchboard](https://about.gitlab.com/direction/platforms/switchboard/), the GitLab Dedicated portal.
## Step 1: Get access to Switchboard
Your GitLab Dedicated instance will be set up using Switchboard. To gain access to Switchboard,
provide the following information to your account team:
- Expected number of users.
- Initial storage size for your repositories in GB.
- Email addresses of any users that need to complete the onboarding and create your GitLab Dedicated instance.
- Whether you want to [bring your own encryption keys (BYOK)](../encryption.md#bring-your-own-key-byok). If so, GitLab provides an AWS account ID, which is necessary to enable BYOK.
- Whether you want to use Geo migration for inbound migration of your Dedicated instance.
After you're granted access to Switchboard, you receive an email invitation with temporary
credentials to sign in.
The credentials for Switchboard are separate from any other GitLab credentials you may already have
to sign in to a GitLab Self-Managed instance or GitLab.com.
After you first sign in to Switchboard, you must update your password and set up MFA before you can
complete your onboarding to create a new instance.
## Step 2: Create your GitLab Dedicated instance
After you sign in to Switchboard, follow these steps to create your instance:
1. On the **Account details** page, review and confirm your subscription settings. These settings are based on the information you provided to your account team:
- **Reference architecture**: The maximum number of users allowed in your instance. For more information, see [availability and scalability](../../../subscriptions/gitlab_dedicated/data_residency_and_high_availability.md#availability-and-scalability). For example, up to 3,000 users.
- **Total repository capacity**: The total storage space available for all repositories in your instance. For example, 16 GB. This setting cannot be reduced after you create your instance. You can increase storage capacity later if needed. For more information about how storage is calculated for GitLab Dedicated, see [GitLab Dedicated storage types](storage_types.md).
If you need to change either of these values, [submit a support ticket](https://support.gitlab.com/hc/en-us/requests/new?ticket_form_id=4414917877650).
1. On the **Configuration** page, choose your environment access, location, and maintenance window settings:
- **Tenant name**: Enter a name for your tenant. This name is permanent unless you [bring your own domain](../configure_instance/network_security.md#bring-your-own-domain-byod).
- **Tenant URL**: Your instance URL is automatically generated as `<tenant_name>.gitlab-dedicated.com`.
- **Primary region**: Select the primary AWS region to use for data storage. Note the
[available AWS regions](../../../subscriptions/gitlab_dedicated/data_residency_and_high_availability.md#available-aws-regions).
- **Secondary region**: Select a secondary AWS region to use for data storage and [disaster recovery](../disaster_recovery.md). This field does not appear for Geo migrations from an existing GitLab Self-Managed instance. Some regions have [limited support](../../../subscriptions/gitlab_dedicated/data_residency_and_high_availability.md#secondary-regions-with-limited-support).
- **Backup region**: Select a region to replicate and store your primary data backups.
You can use the same option as your primary or secondary regions, or choose a different region for [increased redundancy](../../../subscriptions/gitlab_dedicated/data_residency_and_high_availability.md#disaster-recovery).
- **Time zone**: Select a weekly four-hour time slot when GitLab performs routine
maintenance and upgrades. For more information, see [maintenance windows](../maintenance.md#maintenance-windows).
1. Optional. On the **Security** page, add your [AWS KMS keys](https://docs.aws.amazon.com/kms/latest/developerguide/overview.html) for encrypted AWS services. If you do not add keys, GitLab generates encryption keys for your instance. For more information, see [encrypting your data at rest](../encryption.md#encrypted-data-at-rest).
1. On the **Tenant summary** page, review the tenant configuration details. After you confirm that the information you've provided in the previous steps is accurate, select **Create tenant**.
{{< alert type="note" >}}
Confirm these settings carefully before you create your instance,
as you cannot change them later:
- Security keys and AWS KMS keys (BYOK) configuration
- AWS regions (primary, secondary, backup)
- Total repository capacity (you can increase storage but cannot reduce it)
- Tenant name and URL (unless you [bring your own domain](../configure_instance/network_security.md#bring-your-own-domain-byod))
{{< /alert >}}
Your GitLab Dedicated instance can take up to three hours to create. GitLab sends a confirmation email when the setup is complete.
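The generated tenant URL follows the `<tenant_name>.gitlab-dedicated.com` pattern described in the configuration step. A hypothetical sketch of composing it, with a basic DNS-label check (the validation rules here are an assumption for illustration, not documented Switchboard behavior):

```python
import re

# DNS label: lowercase letters, digits, hyphens; no leading/trailing hyphen.
# These constraints are an assumption, not documented Switchboard rules.
LABEL_RE = re.compile(r"^[a-z0-9]([a-z0-9-]{0,61}[a-z0-9])?$")

def tenant_url(tenant_name: str) -> str:
    if not LABEL_RE.match(tenant_name):
        raise ValueError(f"invalid tenant name: {tenant_name!r}")
    return f"{tenant_name}.gitlab-dedicated.com"

print(tenant_url("example"))  # example.gitlab-dedicated.com
```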
## Step 3: Access and configure your GitLab Dedicated instance
To access and configure your GitLab Dedicated instance:
1. Sign in to [Switchboard](https://console.gitlab-dedicated.com/).
1. In the **Access your GitLab Dedicated instance** banner, select **View credentials**.
1. Copy the tenant URL and temporary root credentials for your instance.
{{< alert type="note" >}}
For security, you can retrieve the temporary root credentials from Switchboard only once. Be sure to store these credentials securely (for example, in a password manager) before leaving Switchboard.
{{< /alert >}}
1. Go to the tenant URL for your GitLab Dedicated instance and sign in with your temporary root credentials.
1. [Change your temporary root password](../../../user/profile/user_passwords.md#change-your-password) to a new secure password.
1. Go to the Admin area and [add the license key](../../license_file.md#add-license-in-the-admin-area) for your GitLab Dedicated subscription.
1. Return to Switchboard and [add users](../configure_instance/users_notifications.md#add-switchboard-users), if needed.
1. Review the [release rollout schedule](../maintenance.md#release-rollout-schedule) for upgrades and maintenance.
Also plan ahead if you need the following GitLab Dedicated features:
- [Inbound Private Link](../configure_instance/network_security.md#inbound-private-link)
- [Outbound Private Link](../configure_instance/network_security.md#outbound-private-link)
- [SAML SSO](../configure_instance/saml.md)
- [Bring your own domain](../configure_instance/network_security.md#bring-your-own-domain-byod)
To view all available infrastructure configuration options, see [Configure your GitLab Dedicated instance](../configure_instance/_index.md).
{{< alert type="note" >}}
New GitLab Dedicated instances use the same default settings as GitLab Self-Managed. A GitLab administrator can change these settings from the [Admin Area](../../admin_area.md).
For instances created in GitLab 18.0 and later, [Duo Core](../../../subscriptions/subscription-add-ons.md#gitlab-duo-core) features are turned on by default for all users.
If your organization requires data to remain within your specified regions or has restrictions on AI feature usage,
you can [turn off Duo Core](../../../user/gitlab_duo/turn_on_off.md#for-an-instance).
{{< /alert >}}
|
---
stage: GitLab Dedicated
group: Switchboard
info: To determine the technical writer assigned to the Stage/Group associated with
this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
description: Create your GitLab Dedicated instance with Switchboard.
title: Create your GitLab Dedicated instance
breadcrumbs:
- doc
- administration
- dedicated
- create_instance
---
{{< details >}}
- Tier: Ultimate
- Offering: GitLab Dedicated
{{< /details >}}
The instructions on this page guide you through the onboarding and initial setup of your GitLab Dedicated instance using [Switchboard](https://about.gitlab.com/direction/platforms/switchboard/), the GitLab Dedicated portal.
## Step 1: Get access to Switchboard
Your GitLab Dedicated instance will be set up using Switchboard. To gain access to Switchboard,
provide the following information to your account team:
- Expected number of users.
- Initial storage size for your repositories in GB.
- Email addresses of any users that need to complete the onboarding and create your GitLab Dedicated instance.
- Whether you want to [bring your own encryption keys (BYOK)](../encryption.md#bring-your-own-key-byok). If so, GitLab provides an AWS account ID, which is necessary to enable BYOK.
- Whether you want to use Geo migration for inbound migration of your Dedicated instance.
If you've been granted access to Switchboard, you will receive an email invitation with temporary
credentials to sign in.
The credentials for Switchboard are separate from any other GitLab credentials you may already have
to sign in to a GitLab Self-Managed instance or GitLab.com.
After you first sign in to Switchboard, you must update your password and set up MFA before you can
complete your onboarding to create a new instance.
## Step 2: Create your GitLab Dedicated instance
After you sign in to Switchboard, follow these steps to create your instance:
1. On the **Account details** page, review and confirm your subscription settings. These settings are based on the information you provided to your account team:
- **Reference architecture**: The maximum number of users allowed in your instance. For more information, see [availability and scalability](../../../subscriptions/gitlab_dedicated/data_residency_and_high_availability.md#availability-and-scalability). For example, up to 3,000 users.
- **Total repository capacity**: The total storage space available for all repositories in your instance. For example, 16 GB. This setting cannot be reduced after you create your instance. You can increase storage capacity later if needed. For more information about how storage is calculated for GitLab Dedicated, see [GitLab Dedicated storage types](storage_types.md).
If you need to change either of these values, [submit a support ticket](https://support.gitlab.com/hc/en-us/requests/new?ticket_form_id=4414917877650).
1. On the **Configuration** page, choose your environment access, location, and maintenance window settings:
- **Tenant name**: Enter a name for your tenant. This name is permanent unless you [bring your own domain](../configure_instance/network_security.md#bring-your-own-domain-byod).
- **Tenant URL**: Your instance URL is automatically generated as `<tenant_name>.gitlab-dedicated.com`.
- **Primary region**: Select the primary AWS region to use for data storage. Note the
[available AWS regions](../../../subscriptions/gitlab_dedicated/data_residency_and_high_availability.md#available-aws-regions).
- **Secondary region**: Select a secondary AWS region to use for data storage and [disaster recovery](../disaster_recovery.md). This field does not appear for Geo migrations from an existing GitLab Self-Managed instance. Some regions have [limited support](../../../subscriptions/gitlab_dedicated/data_residency_and_high_availability.md#secondary-regions-with-limited-support).
- **Backup region**: Select a region to replicate and store your primary data backups.
You can use the same option as your primary or secondary regions, or choose a different region for [increased redundancy](../../../subscriptions/gitlab_dedicated/data_residency_and_high_availability.md#disaster-recovery).
- **Time zone**: Select a weekly four-hour time slot when GitLab performs routine
maintenance and upgrades. For more information, see [maintenance windows](../maintenance.md#maintenance-windows).
1. Optional. On the **Security** page, add your [AWS KMS keys](https://docs.aws.amazon.com/kms/latest/developerguide/overview.html) for encrypted AWS services. If you do not add keys, GitLab generates encryption keys for your instance. For more information, see [encrypting your data at rest](../encryption.md#encrypted-data-at-rest).
1. On the **Tenant summary** page, review the tenant configuration details. After you confirm that the information you've provided in the previous steps is accurate, select **Create tenant**.
{{< alert type="note" >}}
Confirm these settings carefully before you create your instance,
as you cannot change them later:
- Security keys and AWS KMS keys (BYOK) configuration
- AWS regions (primary, secondary, backup)
- Total repository capacity (you can increase storage but cannot reduce it)
- Tenant name and URL (unless you [bring your own domain](../configure_instance/network_security.md#bring-your-own-domain-byod))
{{< /alert >}}
Your GitLab Dedicated instance can take up to three hours to create. GitLab sends a confirmation email when the setup is complete.
## Step 3: Access and configure your GitLab Dedicated instance
To access and configure your GitLab Dedicated instance:
1. Sign in to [Switchboard](https://console.gitlab-dedicated.com/).
1. In the **Access your GitLab Dedicated instance** banner, select **View credentials**.
1. Copy the tenant URL and temporary root credentials for your instance.
{{< alert type="note" >}}
For security, you can retrieve the temporary root credentials from Switchboard only once. Be sure to store these credentials securely (for example, in a password manager) before leaving Switchboard.
{{< /alert >}}
1. Go to the tenant URL for your GitLab Dedicated instance and sign in with your temporary root credentials.
1. [Change your temporary root password](../../../user/profile/user_passwords.md#change-your-password) to a new secure password.
1. Go to the Admin area and [add the license key](../../license_file.md#add-license-in-the-admin-area) for your GitLab Dedicated subscription.
1. Return to Switchboard and [add users](../configure_instance/users_notifications.md#add-switchboard-users), if needed.
1. Review the [release rollout schedule](../maintenance.md#release-rollout-schedule) for upgrades and maintenance.
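After you sign in and add your license, one way to confirm that the instance is reachable and that a token works is to call the GitLab REST API. This is a sketch only: the tenant hostname is a placeholder, and `<your-access-token>` is a [personal access token](../../../user/profile/personal_access_tokens.md) you create for the root user.

```shell
# Hypothetical tenant URL: replace my-tenant with your instance name, and
# supply a personal access token created for the root user.
curl --silent --header "PRIVATE-TOKEN: <your-access-token>" \
  "https://my-tenant.gitlab-dedicated.com/api/v4/version"
# A JSON response containing the GitLab version confirms the instance is
# reachable and the token is valid.
```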
Also plan ahead if you need the following GitLab Dedicated features:
- [Inbound Private Link](../configure_instance/network_security.md#inbound-private-link)
- [Outbound Private Link](../configure_instance/network_security.md#outbound-private-link)
- [SAML SSO](../configure_instance/saml.md)
- [Bring your own domain](../configure_instance/network_security.md#bring-your-own-domain-byod)
To view all available infrastructure configuration options, see [Configure your GitLab Dedicated instance](../configure_instance/_index.md).
{{< alert type="note" >}}
New GitLab Dedicated instances use the same default settings as GitLab Self-Managed. A GitLab administrator can change these settings from the [Admin Area](../../admin_area.md).
For instances created in GitLab 18.0 and later, [Duo Core](../../../subscriptions/subscription-add-ons.md#gitlab-duo-core) features are turned on by default for all users.
If your organization requires data to remain within your specified regions or has restrictions on AI feature usage,
you can [turn off Duo Core](../../../user/gitlab_duo/turn_on_off.md#for-an-instance).
{{< /alert >}}
---
redirect_to: authentication/saml.md
remove_date: '2025-11-01'
---
<!-- markdownlint-disable -->
This document was moved to [another location](authentication/saml.md).
<!-- This redirect file can be deleted after 2025-11-01. -->
<!-- Redirects that point to other docs in the same project expire in three months. -->
<!-- Redirects that point to docs in the same project or site (for example, link is not relative and starts with `https:`) expire in one year. -->
<!-- Before deletion, see: https://docs.gitlab.com/development/documentation/redirects -->
---
stage: GitLab Dedicated
group: Switchboard
info: To determine the technical writer assigned to the Stage/Group associated with
this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
description: Configure your GitLab Dedicated instance with Switchboard.
title: Configure GitLab Dedicated
---
{{< details >}}
- Tier: Ultimate
- Offering: GitLab Dedicated
{{< /details >}}
The instructions on this page guide you through configuring your GitLab Dedicated instance, including enabling and updating the settings for [available functionality](../../../subscriptions/gitlab_dedicated/_index.md#available-features).
Administrators can configure additional settings in their GitLab application by using the [**Admin** area](../../admin_area.md).
Because GitLab Dedicated is a GitLab-managed solution, you cannot change any GitLab functionality controlled by SaaS environment settings. Examples of such SaaS environment settings include `gitlab.rb` configurations and access to shell, Rails console, and PostgreSQL console.
GitLab Dedicated engineers do not have direct access to your environment, except for [break glass situations](../../../subscriptions/gitlab_dedicated/_index.md#access-controls).
{{< alert type="note" >}}
An instance refers to a GitLab Dedicated deployment, whereas a tenant refers to a customer.
{{< /alert >}}
## Configure your instance using Switchboard
You can use Switchboard to make limited configuration changes to your GitLab Dedicated instance.
The following configuration settings are available in Switchboard:
- [IP allowlist](network_security.md#ip-allowlist)
- [SAML settings](saml.md)
- [Custom certificates](network_security.md#custom-certificates)
- [Outbound private links](network_security.md#outbound-private-link)
- [Private hosted zones](network_security.md#private-hosted-zones)
Prerequisites:
- You must have the [Admin](users_notifications.md#add-switchboard-users) role.
To make a configuration change:
1. Sign in to [Switchboard](https://console.gitlab-dedicated.com/).
1. At the top of the page, select **Configuration**.
1. Follow the instructions in the relevant sections below.
For all other instance configurations, submit a support ticket according to the
[configuration change request policy](_index.md#request-configuration-changes-with-a-support-ticket).
### Apply configuration changes in Switchboard
You can apply configuration changes made in Switchboard immediately or defer them until your next scheduled weekly [maintenance window](../maintenance.md#maintenance-windows).
When you apply changes immediately:
- Deployment can take up to 90 minutes.
- Changes are applied in the order they're saved.
- You can save multiple changes and apply them in one batch.
- Your GitLab Dedicated instance remains available during the deployment.
- Changes to private hosted zones can disrupt services that use these records for up to 5 minutes.
After the deployment job is complete, you receive an email notification. Check your spam folder if you do not see a notification in your main inbox.
All users with access to view or edit your tenant in Switchboard receive a notification for each change. For more information, see [Manage Switchboard notification preferences](users_notifications.md#manage-notification-preferences).
{{< alert type="note" >}}
You only receive email notifications for changes made by a Switchboard tenant administrator. Changes made by a GitLab Operator (for example, a GitLab version update completed during a maintenance window) do not trigger email notifications.
{{< /alert >}}
## Configuration change log
The **Configuration change log** page in Switchboard tracks changes made to your GitLab Dedicated instance.
Each change log entry includes the following details:
| Field | Description |
|----------------------|-----------------------------------------------------------------------------------------------------------------------------------------------|
| Configuration change | Name of the configuration setting that changed. |
| User | Email address of the user who made the configuration change. For changes made by a GitLab Operator, this value appears as `GitLab Operator`. |
| IP | IP address of the user who made the configuration change. For changes made by a GitLab Operator, this value appears as `Unavailable`. |
| Status | Whether the configuration change is initiated, in progress, complete, or delayed. |
| Start time | Start date and time when the configuration change is initiated, in UTC. |
| End time | End date and time when the configuration change is deployed, in UTC. |
Each configuration change has a status:
| Status | Description |
|-------------|-------------|
| Initiated | Configuration change is made in Switchboard, but not yet deployed to the instance. |
| In progress | Configuration change is actively being deployed to the instance. |
| Complete | Configuration change has been deployed to the instance. |
| Delayed | Initial job to deploy a change has failed and the change has not yet been assigned to a new job. |
### View the configuration change log
To view the configuration change log:
1. Sign in to [Switchboard](https://console.gitlab-dedicated.com/).
1. Select your tenant.
1. At the top of the page, select **Configuration change log**.
Each configuration change appears as an entry in the table. Select **View details** to see more information about each change.
## Request configuration changes with a support ticket
Certain configuration changes require that you submit a support ticket to request the changes. For more information on how to create a support ticket, see [creating a ticket](https://about.gitlab.com/support/portal/#creating-a-ticket).
Configuration changes requested with a [support ticket](https://support.gitlab.com/hc/en-us/requests/new?ticket_form_id=4414917877650) adhere to the following policies:
- Are applied during your environment's weekly four-hour maintenance window.
- Can be requested for options specified during onboarding or for optional features listed on this page.
- May be postponed to the following week if GitLab needs to perform high-priority maintenance tasks.
- Can't be applied outside the weekly maintenance window unless they qualify for [emergency support](https://about.gitlab.com/support/#how-to-engage-emergency-support).
{{< alert type="note" >}}
Even if a change request meets the minimum lead time, it might not be applied during the upcoming maintenance window.
{{< /alert >}}
---
stage: GitLab Dedicated
group: Switchboard
info: To determine the technical writer assigned to the Stage/Group associated with
this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
description: Manage Switchboard users and configure notification preferences, including
SMTP email service settings.
title: GitLab Dedicated users and notifications
---
{{< details >}}
- Tier: Ultimate
- Offering: GitLab Dedicated
{{< /details >}}
Manage users who can access Switchboard and configure email notifications for your GitLab Dedicated instance.
## Switchboard user management
Switchboard is the administrative interface for managing your GitLab Dedicated instance.
Switchboard users are administrators who can configure and monitor the instance.
{{< alert type="note" >}}
Switchboard users are separate from the users on your GitLab Dedicated instance.
For information about configuring authentication for both Switchboard and your GitLab Dedicated instance,
see [authentication for GitLab Dedicated](authentication/_index.md).
{{< /alert >}}
### Add Switchboard users
Administrators can add two types of Switchboard users to manage and view their GitLab Dedicated instance:
- **Read only**: Users can only view instance data.
- **Admin**: Users can edit the instance configuration and manage users.
To add a new user to Switchboard for your GitLab Dedicated instance:
1. Sign in to [Switchboard](https://console.gitlab-dedicated.com/).
1. From the top of the page, select **Users**.
1. Select **New user**.
1. Enter the **Email** and select a **Role** for the user.
1. Select **Create**.
An invitation to use Switchboard is sent to the user.
### Reset a Switchboard user password
To reset your Switchboard password, [submit a support ticket](https://support.gitlab.com/hc/en-us/requests/new?ticket_form_id=4414917877650).
The support team will help you regain access to your account.
## Email notifications
Switchboard sends email notifications about instance incidents, maintenance, performance issues, and security updates.
Notifications are sent to:
- Switchboard users: Receive notifications based on their notification settings.
- Operational email addresses: Receive notifications for important instance events and service updates,
regardless of their notification settings.
Operational email addresses receive customer notifications, even if recipients:
- Are not Switchboard users.
- Have not signed in to Switchboard.
- Turn off email notifications.
To stop receiving operational email notifications, [submit a support ticket](https://support.gitlab.com/hc/en-us/requests/new?ticket_form_id=4414917877650).
### Manage notification preferences
To receive email notifications, you must first:
- Receive an email invitation and sign in to Switchboard.
- Set up a password and two-factor authentication (2FA).
To turn your personal notifications on or off:
1. Select the dropdown list next to your user name.
1. Select **Toggle email notifications off** or **Toggle email notifications on**.
An alert confirms that your notification preferences have been updated.
## SMTP email service
You can configure an [SMTP](../../../subscriptions/gitlab_dedicated/_index.md#email-service) email service for your GitLab Dedicated instance.
To configure an SMTP email service, [submit a support ticket](https://support.gitlab.com/hc/en-us/requests/new?ticket_form_id=4414917877650)
with the credentials and settings for your SMTP server.
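The exact details GitLab requests can vary, but as a sketch, a ticket for SMTP setup typically includes information such as the following. All values here are hypothetical examples; never put the actual password in the ticket body.

```plaintext
SMTP host:       smtp.example.com
SMTP port:       587
Encryption:      STARTTLS
Username:        gitlab-mailer@example.com
Password:        (share through a secure channel, not in the ticket body)
Sender address:  gitlab@example.com
```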
---
stage: GitLab Dedicated
group: Switchboard
info: To determine the technical writer assigned to the Stage/Group associated with
  this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
description: Configure network access and security settings for GitLab Dedicated.
title: GitLab Dedicated network access and security
---
{{< details >}}
- Tier: Ultimate
- Offering: GitLab Dedicated
{{< /details >}}
## Bring your own domain (BYOD)
You can use a [custom hostname](../../../subscriptions/gitlab_dedicated/_index.md#bring-your-own-domain) to access your GitLab Dedicated instance. You can also provide a custom hostname for the bundled container registry and Kubernetes Agent Server (KAS) services.
### Let's Encrypt certificates
GitLab Dedicated integrates with [Let's Encrypt](https://letsencrypt.org/), a free, automated, and open source certificate authority. When you use a custom hostname, Let's Encrypt automatically issues and renews SSL/TLS certificates for your domain.
This integration uses the [`http-01` challenge](https://letsencrypt.org/docs/challenge-types/#http-01-challenge) to obtain certificates through Let's Encrypt.
### Set up DNS records
To use a custom hostname with GitLab Dedicated, you must update your domain's DNS records.
Prerequisites:
- Access to your domain host's DNS settings.
To set up DNS records for a custom hostname with GitLab Dedicated:
1. Sign in to your domain host's website.
1. Go to the DNS settings.
1. Add a `CNAME` record that points your custom hostname to your GitLab Dedicated tenant. For example:
```plaintext
gitlab.my-company.com. CNAME my-tenant.gitlab-dedicated.com
```
1. Optional. If your domain has an existing `CAA` record, update it to include [Let's Encrypt](https://letsencrypt.org/docs/caa/) as a valid certificate authority. If your domain does not have any `CAA` records, you can skip this step. For example:
```plaintext
example.com. IN CAA 0 issue "pki.goog"
example.com. IN CAA 0 issue "letsencrypt.org"
```
In this example, the `CAA` record defines Google Trust Services (`pki.goog`) and Let's Encrypt (`letsencrypt.org`) as certificate authorities that are allowed to issue certificates for your domain.
1. Save your changes and wait for the DNS changes to propagate.
{{< alert type="note" >}}
DNS records must stay in place as long as you use the BYOD feature.
{{< /alert >}}
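To check that the records have propagated, you can query public DNS from any machine with internet access. This sketch reuses the hypothetical hostnames from the examples above; substitute your own domain and tenant name.

```shell
# Should print the tenant hostname the CNAME points to,
# for example my-tenant.gitlab-dedicated.com.
dig +short gitlab.my-company.com CNAME

# If you added a CAA record, confirm that letsencrypt.org is listed.
dig +short my-company.com CAA
```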
### DNS requirements for Let's Encrypt certificates
When using custom hostnames with GitLab Dedicated, your domain must be publicly resolvable
through DNS, even if you plan to access your instance through private networks only.
This public DNS requirement exists because:
- Let's Encrypt uses the HTTP-01 challenge, which requires public internet access to verify
domain ownership.
- The validation process must reach your custom hostname from the public internet through
the CNAME record that points to your GitLab Dedicated tenant.
- Certificate renewal happens automatically every 90 days and uses the same public
validation process as the initial issuance.
For instances configured with private networking (such as AWS PrivateLink), maintaining public
DNS resolution ensures certificate renewal works properly, even when all other access is
restricted to private networks.
### Add your custom hostname
To add a custom hostname to your existing GitLab Dedicated instance, submit a [support ticket](https://support.gitlab.com/hc/en-us/requests/new?ticket_form_id=4414917877650).
## Custom certificates
Custom certificates establish trust between your GitLab Dedicated instance and certificates signed by non-public Certificate Authorities (CA). If you want to connect to a service that uses a certificate signed by a private or internal CA, you must first add that certificate to your GitLab Dedicated instance.
### Add a custom certificate with Switchboard
1. Sign in to [Switchboard](https://console.gitlab-dedicated.com/).
1. At the top of the page, select **Configuration**.
1. Expand **Custom Certificate Authorities**.
1. Select **+ Add Certificate**.
1. Paste the certificate into the text box.
1. Select **Save**.
1. Scroll up to the top of the page and select whether to apply the changes immediately or during the next maintenance window.
### Add a custom certificate with a Support Request
If you are unable to use Switchboard to add a custom certificate, you can open a [support ticket](https://support.gitlab.com/hc/en-us/requests/new?ticket_form_id=4414917877650) and attach your custom public certificate files to request this change.
## AWS Private Link connectivity
### Inbound Private Link
[AWS Private Link](https://docs.aws.amazon.com/vpc/latest/privatelink/what-is-privatelink.html) allows users and applications in your VPC on AWS to securely connect to the GitLab Dedicated endpoint without network traffic going over the public internet.
To enable the Inbound Private Link:
1. Open a [support ticket](https://support.gitlab.com/hc/en-us/requests/new?ticket_form_id=4414917877650). In the body of your support ticket, include the IAM principals for the AWS users or roles in your AWS organization that are establishing the VPC endpoints in your AWS account. The IAM principals must be [IAM role principals](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_principal.html#principal-roles) or [IAM user principals](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_principal.html#principal-users). GitLab Dedicated uses these IAM principals for access control; they are the only principals that can set up an endpoint to the service.
1. After your IAM principals are allowlisted, GitLab [creates the endpoint service](https://docs.aws.amazon.com/vpc/latest/privatelink/create-endpoint-service.html) and shares the `Service Endpoint Name` in the support ticket. AWS generates the service name when the service endpoint is created.
- GitLab handles the domain verification for the Private DNS name, so that DNS resolution of the tenant instance domain name in your VPC resolves to the PrivateLink endpoint.
- The endpoint service is available in two Availability Zones. These Availability Zones are either the zones you chose during onboarding, or if you did not specify any, two randomly selected zones.
1. In your own AWS account, create an [Endpoint Interface](https://docs.aws.amazon.com/vpc/latest/privatelink/create-interface-endpoint.html) in your VPC, with the following settings:
- Service Endpoint Name: use the name provided by GitLab on the support ticket.
- Private DNS names enabled: yes.
- Subnets: choose all matching subnets.
1. After you create the endpoint, use the instance URL provided to you during onboarding to securely connect to your GitLab Dedicated instance from your VPC, without the traffic going over the public internet.
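The endpoint interface can also be created with the AWS CLI instead of the console. This is a sketch under stated assumptions: the VPC, subnet, and security group IDs are hypothetical, and the service name is the value GitLab provides in the support ticket.

```shell
# Hypothetical IDs: replace the VPC, subnets, and security group with your own,
# and use the Service Endpoint Name from the support ticket.
aws ec2 create-vpc-endpoint \
  --vpc-id vpc-0123456789abcdef0 \
  --vpc-endpoint-type Interface \
  --service-name com.amazonaws.vpce.us-east-1.vpce-svc-0123456789abcdef0 \
  --subnet-ids subnet-0123456789abcdef0 subnet-0123456789abcdef1 \
  --security-group-ids sg-0123456789abcdef0 \
  --private-dns-enabled
```

The security group must allow outbound HTTPS (port 443) traffic from your clients to the endpoint.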
#### Enable KAS and registry for Inbound Private Link
When you use Inbound Private Link to connect to your GitLab Dedicated instance,
only the main instance URL has automatic DNS resolution through the private network.
To access KAS (GitLab agent for Kubernetes) and registry services through your private network,
you must create an additional DNS configuration in your VPC.
Prerequisites:
- You have configured Inbound Private Link for your GitLab Dedicated instance.
- You have permissions to create Route 53 private hosted zones in your AWS account.
To enable KAS and registry through your private network:
1. In your AWS console, create a private hosted zone for `gitlab-dedicated.com`
and associate it with the VPC that contains your private link connection.
1. After you create the private hosted zone, add the following DNS records (replace `example` with your instance name):
1. Create an `A` record for your GitLab Dedicated instance:
- Configure your full instance domain (for example, `example.gitlab-dedicated.com`) to resolve to your VPC endpoint as an Alias.
- Select the VPC endpoint that does not contain an Availability Zone reference.

1. Create `CNAME` records for both KAS and the registry to resolve to your GitLab Dedicated instance domain (`example.gitlab-dedicated.com`):
- `kas.example.gitlab-dedicated.com`
- `registry.example.gitlab-dedicated.com`
1. To verify connectivity, from a resource in your VPC, run these commands:
```shell
nslookup kas.example.gitlab-dedicated.com
nslookup registry.example.gitlab-dedicated.com
nslookup example.gitlab-dedicated.com
```
All commands should resolve to private IP addresses within your VPC.
This configuration is robust to IP address changes because it uses the VPC endpoint interface rather than specific IP addresses.
### Outbound Private Link
Outbound private links allow your GitLab Dedicated instance and the hosted runners for GitLab Dedicated to securely communicate with services running in your VPC on AWS without exposing any traffic to the public internet.
This type of connection allows GitLab functionality to access private services:
- For the GitLab Dedicated instance:
- [webhooks](../../../user/project/integrations/webhooks.md)
- import or mirror projects and repositories
- For hosted runners:
- custom secrets managers
- artifacts or job images stored in your infrastructure
- deployments into your infrastructure
Consider the following:
- You can only establish private links between VPCs in the same region. Therefore, you can only establish a connection in the regions specified for your Dedicated instance.
- The connection requires the [Availability Zone IDs (AZ IDs)](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html#az-ids) for the two Availability Zones (AZs) in the regions that you selected during onboarding.
- If you did not specify any AZs during onboarding to Dedicated, GitLab randomly selects both AZ IDs. AZ IDs are displayed in Switchboard on the Overview page for both the Primary and Secondary regions.
- GitLab Dedicated limits the number of outbound private link connections to 10.
#### Add an outbound private link with Switchboard
Prerequisites:
- [Create the endpoint service](https://docs.aws.amazon.com/vpc/latest/privatelink/create-endpoint-service.html) for your internal service to be available to GitLab Dedicated.
- Configure a Network Load Balancer (NLB) for the endpoint service in the Availability Zones (AZs) where your Dedicated instance is deployed. Either:
- Use the configured AZs. AZ IDs are displayed on the Overview page in Switchboard.
- Enable the NLB in every AZ in the region.
- Add the ARN of the role that GitLab Dedicated uses to connect to your endpoint service to the Allowed Principals list on the Endpoint Service. You can find this ARN in Switchboard under Outbound private link IAM principal. For more information, see [Manage permissions](https://docs.aws.amazon.com/vpc/latest/privatelink/configure-endpoint-service.html#add-remove-permissions).
- Recommended. Set **Acceptance required** to **No** to enable GitLab Dedicated to connect in a single operation. If set to **Yes**, you must manually accept the connection after it's initiated.
{{< alert type="note" >}}
If you set **Acceptance required** to **Yes**, Switchboard cannot accurately determine when the link is accepted. After you manually accept the link, the status shows as **Pending** instead of **Active** until next scheduled maintenance. After maintenance, the link status refreshes and shows as connected.
{{< /alert >}}
- Once the endpoint service is created, note the Service Name and if you have enabled Private DNS or not.
1. Sign in to [Switchboard](https://console.gitlab-dedicated.com/).
1. At the top of the page, select **Configuration**.
1. Expand **Outbound private link**.
1. Complete the fields.
1. To add endpoint services, select **Add endpoint service**. You can add up to ten endpoint services for each region. At least one endpoint service is required to save the region.
1. Select **Save**.
1. Optional. To add an outbound private link for a second region, select **Add outbound connection**, then repeat the previous steps.
#### Delete an outbound private link with Switchboard
1. Sign in to [Switchboard](https://console.gitlab-dedicated.com/).
1. At the top of the page, select **Configuration**.
1. Expand **Outbound private link**.
1. Go to the outbound private link you want to delete, then select **Delete** ({{< icon name="remove" >}}).
1. Select **Delete**.
1. Optional. To delete all the links in a region, from the region header, select **Delete** ({{< icon name="remove" >}}). This also deletes the region configuration.
#### Add an outbound private link with a support request
1. [Create the Endpoint service](https://docs.aws.amazon.com/vpc/latest/privatelink/create-endpoint-service.html) through which your internal service
will be available to GitLab Dedicated. Provide the associated `Service Endpoint Name` on a new
[support ticket](https://support.gitlab.com/hc/en-us/requests/new?ticket_form_id=4414917877650).
1. Configure a Network Load Balancer (NLB) for the endpoint service in the Availability Zones (AZs) where your Dedicated instance is deployed. Either:
- Use the configured AZs. AZ IDs are displayed on the Overview page in Switchboard.
- Enable the NLB in every AZ in the region.
1. In your [support ticket](https://support.gitlab.com/hc/en-us/requests/new?ticket_form_id=4414917877650), GitLab will provide you with the ARN of an
IAM role that will be initiating the connection to your endpoint service. You must ensure this ARN is included, or otherwise covered by other
entries, in the list of "Allowed Principals" on the Endpoint Service, as described by the [AWS documentation](https://docs.aws.amazon.com/vpc/latest/privatelink/configure-endpoint-service.html#add-remove-permissions).
Though it's optional, you should you add it explicitly, allowing you to set `Acceptance required` to No so that Dedicated can connect in a single operation.
If you leave `Acceptance required` as Yes, then you must manually accept the connection after Dedicated has initiated it.
1. To connect to services using the Endpoint, the Dedicated services require a DNS name. Private Link automatically creates an internal name, but
it is machine-generated and not generally directly useful. Two options are available:
- In your Endpoint Service, enable [Private DNS name](https://docs.aws.amazon.com/vpc/latest/privatelink/manage-dns-names.html), perform the
required validation, and let GitLab know in the support ticket that you are using this option. If `Acceptance Required` is set to Yes on your
Endpoint Service, also note this on the support ticket because Dedicated will have to initiate the connection without Private DNS, wait for you
to confirm it has been accepted, and then update the connection to enable the use of Private DNS.
- Dedicated can manage a Private Hosted Zone (PHZ) within the Dedicated AWS Account and alias any arbitrary DNS names to the endpoint, directing
requests for those names to your endpoint service. These aliases are known as PHZ entries. For more information, see [Private hosted zones](#private-hosted-zones).
GitLab then configures the tenant instance to create the necessary Endpoint Interfaces based on the service names you provided. Any matching outbound
connections made from the tenant instance are directed through the PrivateLink into your VPC.
#### Troubleshooting
If you have trouble establishing a connection after the Outbound Private Link has been set up, a few things in your AWS infrastructure could be the cause of the problem. The specific things to check vary based on the unexpected behavior you're seeking to fix. Things to check include:
- Ensure that cross-zone load balancing is turned on in your Network Load Balancer (NLB).
- Ensure that the Inbound Rules section of the appropriate Security Groups permits traffic from the correct IP ranges.
- Ensure that the inbound traffic is mapped to the correct port on the Endpoint Service.
- In Switchboard, expand **Outbound private link** and confirm that the details appear as you expect.
- Ensure that you have [allowed requests to the local network from webhooks and integrations](../../../security/webhooks.md#allow-requests-to-the-local-network-from-webhooks-and-integrations).
## Private hosted zones
A private hosted zone (PHZ) creates custom DNS aliases (CNAMEs) that resolve in your GitLab Dedicated instance's network.
Use a PHZ when you want to:
- Create multiple DNS names or aliases that use a single endpoint, such as when running a reverse proxy to connect to multiple services.
- Use a private domain that cannot be validated by public DNS.
PHZs are commonly used with reverse PrivateLink to create readable domain names instead of using AWS-generated endpoint names. For example, you can use `alpha.beta.tenant.gitlab-dedicated.com` instead of `vpce-0987654321fedcba0-k99y1abc.vpce-svc-0a123bcd4e5f678gh.eu-west-1.vpce.amazonaws.com`.
In some cases, you can also use PHZs to create aliases that resolve to publicly accessible DNS names. For example, you can create an internal DNS name that resolves to a public endpoint when you need internal systems to access a service through its private name.
{{< alert type="note" >}}
Changes to private hosted zones can disrupt services that use these records for up to five minutes.
{{< /alert >}}
### PHZ domain structure
When using your GitLab Dedicated instance's domain as part of an alias, you must include two subdomains before the main domain, where:
- The first subdomain becomes the name of the PHZ.
- The second subdomain becomes the record entry for the alias.
For example:
- Valid PHZ entry: `subdomain2.subdomain1.<your-tenant-id>.gitlab-dedicated.com`.
- Invalid PHZ entry: `subdomain1.<your-tenant-id>.gitlab-dedicated.com`.
When not using your GitLab Dedicated instance domain, you must still provide:
- A Private Hosted Zone (PHZ) name
- A PHZ entry in the format `phz-entry.phz-name.com`
To prevent shadowing of public DNS domains when the domain is created inside the Dedicated tenant, use at least two additional subdomain levels below any public domain for your PHZ entries. For example, if your tenant is hosted at `tenant.gitlab-dedicated.com`, your PHZ entry should be at least `subdomain1.subdomain2.tenant.gitlab-dedicated.com`, or if you own `customer.com` then at least `subdomain1.subdomain2.customer.com`, where `subdomain2` is not a public domain.
### Add a private hosted zone with Switchboard
To add a private hosted zone:
1. Sign in to [Switchboard](https://console.gitlab-dedicated.com/).
1. At the top of the page, select **Configuration**.
1. Expand **Private hosted zones**.
1. Select **Add private hosted zone entry**.
1. Complete the fields.
- In the **Hostname** field, enter your Private Hosted Zone (PHZ) entry.
- For **Link type**, choose one of the following:
- For an outbound private link PHZ entry, select the endpoint service from the dropdown list.
Only links with the `Available` or `Pending Acceptance` status are shown.
- For other PHZ entries, provide a list of DNS aliases.
1. Select **Save**.
Your PHZ entry and any aliases should appear in the list.
1. Scroll to the top of the page, and select whether to apply the changes immediately or during the next maintenance window.
### Add a private hosted zone with a support request
If you are unable to use Switchboard to add a private hosted zone, you can open a [support ticket](https://support.gitlab.com/hc/en-us/requests/new?ticket_form_id=4414917877650) and provide a list of DNS names that should resolve to the endpoint service for the outbound private link. The list can be updated as needed.
## IP allowlist
GitLab Dedicated allows you to control which IP addresses can access your instance through an IP allowlist. Once the IP allowlist has been enabled, when an IP not on the allowlist tries to access your instance an `HTTP 403 Forbidden` response is returned.
IP addresses that have been added to your IP allowlist can be viewed on the Configuration page in Switchboard. You can add or remove IP addresses from your allowlist with Switchboard.
### Add an IP to the allowlist with Switchboard
1. Sign in to [Switchboard](https://console.gitlab-dedicated.com/).
1. At the top of the page, select **Configuration**.
1. Expand **Allowed Source List Config / IP allowlist**.
1. Turn on the **Enable** toggle.
1. To add an IP address:
1. Select **Add Item**.
1. In the **Address** text box, enter either:
- A single IPv4 address (for example, `192.168.1.1`).
- An IPv4 address range in CIDR notation (for example, `192.168.1.0/24`).
1. In the **Description** text box, enter a description.
To add another address or range, repeat this step. IPv6 addresses are not supported.
1. Select **Save**.
1. Scroll up to the top of the page and select whether to apply the changes immediately or during the next maintenance window. After the changes are applied, the IP addresses are added to the IP allowlist for your instance.
### Add an IP to the allowlist with a Support Request
If you are unable to use Switchboard to update your IP allowlist, you can open a [support ticket](https://support.gitlab.com/hc/en-us/requests/new?ticket_form_id=4414917877650) and specify a comma separated list of IP addresses that can access your GitLab Dedicated instance.
### Enable OpenID Connect for your IP allowlist
Using [GitLab as an OpenID Connect identity provider](../../../integration/openid_connect_provider.md) requires internet access to the OpenID Connect verification endpoint.
To enable access to the OpenID Connect endpoint while maintaining your IP allowlist:
- In a [support ticket](https://support.gitlab.com/hc/en-us/requests/new?ticket_form_id=4414917877650), request to allow access to the OpenID Connect endpoint.
The configuration is applied during the next maintenance window.
### Enable SCIM provisioning for your IP allowlist
You can use SCIM with external identity providers to automatically provision and manage users. To use SCIM, your identity provider must be able to access the instance SCIM API endpoints. By default, IP allowlisting blocks communication to these endpoints.
To enable SCIM while maintaining your IP allowlist:
- In a [support ticket](https://support.gitlab.com/hc/en-us/requests/new?ticket_form_id=4414917877650), request to enable SCIM endpoints to the internet.
The configuration is applied during the next maintenance window.
|
---
stage: GitLab Dedicated
group: Switchboard
info: To determine the technical writer assigned to the Stage/Group associated with
this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
description: Configure network access and security settings for GitLab Dedicated.
title: GitLab Dedicated network access and security
---
{{< details >}}
- Tier: Ultimate
- Offering: GitLab Dedicated
{{< /details >}}
## Bring your own domain (BYOD)
You can use a [custom hostname](../../../subscriptions/gitlab_dedicated/_index.md#bring-your-own-domain) to access your GitLab Dedicated instance. You can also provide a custom hostname for the bundled container registry and Kubernetes Agent Server (KAS) services.
### Let's Encrypt certificates
GitLab Dedicated integrates with [Let's Encrypt](https://letsencrypt.org/), a free, automated, and open source certificate authority. When you use a custom hostname, Let's Encrypt automatically issues and renews SSL/TLS certificates for your domain.
This integration uses the [`http-01` challenge](https://letsencrypt.org/docs/challenge-types/#http-01-challenge) to obtain certificates through Let's Encrypt.
### Set up DNS records
To use a custom hostname with GitLab Dedicated, you must update your domain's DNS records.
Prerequisites:
- Access to your domain host's DNS settings.
To set up DNS records for a custom hostname with GitLab Dedicated:
1. Sign in to your domain host's website.
1. Go to the DNS settings.
1. Add a `CNAME` record that points your custom hostname to your GitLab Dedicated tenant. For example:
```plaintext
gitlab.my-company.com. CNAME my-tenant.gitlab-dedicated.com
```
1. Optional. If your domain has an existing `CAA` record, update it to include [Let's Encrypt](https://letsencrypt.org/docs/caa/) as a valid certificate authority. If your domain does not have any `CAA` records, you can skip this step. For example:
```plaintext
example.com. IN CAA 0 issue "pki.goog"
example.com. IN CAA 0 issue "letsencrypt.org"
```
In this example, the `CAA` record defines Google Trust Services (`pki.goog`) and Let's Encrypt (`letsencrypt.org`) as certificate authorities that are allowed to issue certificates for your domain.
1. Save your changes and wait for the DNS changes to propagate.
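To verify the records from any machine with public DNS access, you can query them directly. The hostnames below are the examples from this page; substitute your own, and expect the output to vary by resolver and propagation state:

```shell
# Confirm the CNAME points at your GitLab Dedicated tenant
dig +short CNAME gitlab.my-company.com

# List CAA records for the apex domain; letsencrypt.org should appear if you added it
dig +short CAA example.com
```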
{{< alert type="note" >}}
DNS records must stay in place as long as you use the BYOD feature.
{{< /alert >}}
### DNS requirements for Let's Encrypt certificates
When using custom hostnames with GitLab Dedicated, your domain must be publicly resolvable
through DNS, even if you plan to access your instance through private networks only.
This public DNS requirement exists because:
- Let's Encrypt uses the HTTP-01 challenge, which requires public internet access to verify
domain ownership.
- The validation process must reach your custom hostname from the public internet through
the CNAME record that points to your GitLab Dedicated tenant.
- Certificate renewal happens automatically every 90 days and uses the same public
validation process as the initial issuance.
For instances configured with private networking (such as AWS PrivateLink), maintaining public
DNS resolution ensures certificate renewal works properly, even when all other access is
restricted to private networks.
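One way to confirm that public resolution and automatic certificate issuance are both working is to query a public resolver and then inspect the certificate your instance serves. The hostname below is a placeholder, and both commands require outbound internet access:

```shell
# Resolve the custom hostname against a public resolver
dig +short gitlab.my-company.com @1.1.1.1

# Show the issuer and validity window of the certificate currently served
echo | openssl s_client -connect gitlab.my-company.com:443 \
  -servername gitlab.my-company.com 2>/dev/null | openssl x509 -noout -issuer -dates
```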
### Add your custom hostname
To add a custom hostname to your existing GitLab Dedicated instance, submit a [support ticket](https://support.gitlab.com/hc/en-us/requests/new?ticket_form_id=4414917877650).
## Custom certificates
Custom certificates establish trust between your GitLab Dedicated instance and certificates signed by non-public Certificate Authorities (CA). If you want to connect to a service that uses a certificate signed by a private or internal CA, you must first add that certificate to your GitLab Dedicated instance.
### Add a custom certificate with Switchboard
1. Sign in to [Switchboard](https://console.gitlab-dedicated.com/).
1. At the top of the page, select **Configuration**.
1. Expand **Custom Certificate Authorities**.
1. Select **+ Add Certificate**.
1. Paste the certificate into the text box.
1. Select **Save**.
1. Scroll up to the top of the page and select whether to apply the changes immediately or during the next maintenance window.
### Add a custom certificate with a support request
If you are unable to use Switchboard to add a custom certificate, you can open a [support ticket](https://support.gitlab.com/hc/en-us/requests/new?ticket_form_id=4414917877650) and attach your custom public certificate files to request this change.
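Before you paste a certificate into Switchboard or attach it to a ticket, you can confirm it is a valid PEM-encoded X.509 certificate with `openssl`. The first command below only generates a throwaway self-signed CA so the example is self-contained; in practice, `custom-ca.pem` is a placeholder for the certificate from your internal CA:

```shell
# Generate a throwaway self-signed CA for illustration only; in practice,
# custom-ca.pem is the certificate issued by your internal CA
openssl req -x509 -newkey rsa:2048 -nodes -keyout /dev/null \
  -out custom-ca.pem -days 365 -subj "/CN=Example Internal CA" 2>/dev/null

# Inspect the subject, issuer, and expiry before uploading
openssl x509 -in custom-ca.pem -noout -subject -issuer -enddate
```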
## AWS Private Link connectivity
### Inbound Private Link
[AWS Private Link](https://docs.aws.amazon.com/vpc/latest/privatelink/what-is-privatelink.html) allows users and applications in your VPC on AWS to securely connect to the GitLab Dedicated endpoint without network traffic going over the public internet.
To enable the Inbound Private Link:
1. Open a [support ticket](https://support.gitlab.com/hc/en-us/requests/new?ticket_form_id=4414917877650). In the body of your support ticket, include the IAM principals for the AWS users or roles in your AWS organization that are establishing the VPC endpoints in your AWS account. The IAM principals must be [IAM role principals](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_principal.html#principal-roles) or [IAM user principals](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_principal.html#principal-users). GitLab Dedicated uses these IAM principals for access control. These IAM principals are the only ones able to set up an endpoint to the service.
1. After your IAM Principals have been allowlisted, GitLab [creates the Endpoint Service](https://docs.aws.amazon.com/vpc/latest/privatelink/create-endpoint-service.html) and communicates the `Service Endpoint Name` on the support ticket. The service name is generated by AWS upon creation of the service endpoint.
- GitLab handles the domain verification for the Private DNS name, so that DNS resolution of the tenant instance domain name in your VPC resolves to the PrivateLink endpoint.
- The endpoint service is available in two Availability Zones. These Availability Zones are either the zones you chose during onboarding, or if you did not specify any, two randomly selected zones.
1. In your own AWS account, create an [Endpoint Interface](https://docs.aws.amazon.com/vpc/latest/privatelink/create-interface-endpoint.html) in your VPC, with the following settings:
- Service Endpoint Name: use the name provided by GitLab on the support ticket.
- Private DNS names enabled: yes.
- Subnets: choose all matching subnets.
1. After you create the endpoint, use the instance URL provided to you during onboarding to securely connect to your GitLab Dedicated instance from your VPC, without the traffic going over the public internet.
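For reference, the Endpoint Interface from the last step can also be created with the AWS CLI. Every ID below is a placeholder for your own resources, and the service name is the one GitLab provides on the support ticket:

```shell
aws ec2 create-vpc-endpoint \
  --vpc-id vpc-0123456789abcdef0 \
  --vpc-endpoint-type Interface \
  --service-name com.amazonaws.vpce.us-east-1.vpce-svc-0example1234567890 \
  --subnet-ids subnet-0aaa1111 subnet-0bbb2222 \
  --security-group-ids sg-0example \
  --private-dns-enabled
```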
#### Enable KAS and registry for Inbound Private Link
When you use Inbound Private Link to connect to your GitLab Dedicated instance,
only the main instance URL has automatic DNS resolution through the private network.
To access KAS (GitLab agent for Kubernetes) and registry services through your private network,
you must create an additional DNS configuration in your VPC.
Prerequisites:
- You have configured Inbound Private Link for your GitLab Dedicated instance.
- You have permissions to create Route 53 private hosted zones in your AWS account.
To enable KAS and registry through your private network:
1. In your AWS console, create a private hosted zone for `gitlab-dedicated.com`
and associate it with the VPC that contains your private link connection.
1. After you create the private hosted zone, add the following DNS records (replace `example` with your instance name):
1. Create an `A` record for your GitLab Dedicated instance:
- Configure your full instance domain (for example, `example.gitlab-dedicated.com`) to resolve to your VPC endpoint as an Alias.
- Select the VPC endpoint that does not contain an Availability Zone reference.

1. Create `CNAME` records for both KAS and the registry to resolve to your GitLab Dedicated instance domain (`example.gitlab-dedicated.com`):
- `kas.example.gitlab-dedicated.com`
- `registry.example.gitlab-dedicated.com`
1. To verify connectivity, from a resource in your VPC, run these commands:
```shell
nslookup kas.example.gitlab-dedicated.com
nslookup registry.example.gitlab-dedicated.com
nslookup example.gitlab-dedicated.com
```
All commands should resolve to private IP addresses within your VPC.
This configuration is robust to IP address changes because it uses the VPC endpoint interface rather than specific IP addresses.
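If you manage DNS with the AWS CLI instead of the console, the private hosted zone and the `CNAME` records can be created roughly as follows. The zone ID, VPC ID, region, and instance name are all placeholders; the `A` alias record for the instance itself is easiest to create in the console, because it must reference the VPC endpoint as an alias target:

```shell
# Create the private hosted zone and associate it with your VPC
aws route53 create-hosted-zone \
  --name gitlab-dedicated.com \
  --caller-reference "phz-$(date +%s)" \
  --hosted-zone-config PrivateZone=true \
  --vpc VPCRegion=us-east-1,VPCId=vpc-0123456789abcdef0

# Add CNAME records for KAS and the registry (replace the zone ID and instance name)
aws route53 change-resource-record-sets --hosted-zone-id Z0EXAMPLE --change-batch '{
  "Changes": [
    {"Action": "UPSERT", "ResourceRecordSet": {
      "Name": "kas.example.gitlab-dedicated.com", "Type": "CNAME", "TTL": 300,
      "ResourceRecords": [{"Value": "example.gitlab-dedicated.com"}]}},
    {"Action": "UPSERT", "ResourceRecordSet": {
      "Name": "registry.example.gitlab-dedicated.com", "Type": "CNAME", "TTL": 300,
      "ResourceRecords": [{"Value": "example.gitlab-dedicated.com"}]}}
  ]
}'
```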
### Outbound Private Link
Outbound private links allow your GitLab Dedicated instance and the hosted runners for GitLab Dedicated to securely communicate with services running in your VPC on AWS without exposing any traffic to the public internet.
This type of connection allows GitLab functionality to access private services:
- For the GitLab Dedicated instance:
- [webhooks](../../../user/project/integrations/webhooks.md)
- import or mirror projects and repositories
- For hosted runners:
- custom secrets managers
- artifacts or job images stored in your infrastructure
- deployments into your infrastructure
Consider the following:
- You can only establish private links between VPCs in the same region. Therefore, you can only establish a connection in the regions specified for your Dedicated instance.
- The connection requires the [Availability Zone IDs (AZ IDs)](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html#az-ids) for the two Availability Zones (AZs) in the regions that you selected during onboarding.
- If you did not specify any AZs during onboarding to Dedicated, GitLab randomly selects both AZ IDs. AZ IDs are displayed in Switchboard on the Overview page for both the Primary and Secondary regions.
- GitLab Dedicated limits the number of outbound private link connections to 10.
#### Add an outbound private link with Switchboard
Prerequisites:
- [Create the endpoint service](https://docs.aws.amazon.com/vpc/latest/privatelink/create-endpoint-service.html) for your internal service to be available to GitLab Dedicated.
- Configure a Network Load Balancer (NLB) for the endpoint service in the Availability Zones (AZs) where your Dedicated instance is deployed. Either:
- Use the configured AZs. AZ IDs are displayed on the Overview page in Switchboard.
- Enable the NLB in every AZ in the region.
- Add the ARN of the role that GitLab Dedicated uses to connect to your endpoint service to the Allowed Principals list on the Endpoint Service. You can find this ARN in Switchboard under Outbound private link IAM principal. For more information, see [Manage permissions](https://docs.aws.amazon.com/vpc/latest/privatelink/configure-endpoint-service.html#add-remove-permissions).
- Recommended. Set **Acceptance required** to **No** to enable GitLab Dedicated to connect in a single operation. If set to **Yes**, you must manually accept the connection after it's initiated.
{{< alert type="note" >}}
If you set **Acceptance required** to **Yes**, Switchboard cannot accurately determine when the link is accepted. After you manually accept the link, the status shows as **Pending** instead of **Active** until the next scheduled maintenance. After maintenance, the link status refreshes and shows as connected.
{{< /alert >}}
- After the endpoint service is created, note the Service Name and whether you have enabled Private DNS.
To add an outbound private link:
1. Sign in to [Switchboard](https://console.gitlab-dedicated.com/).
1. At the top of the page, select **Configuration**.
1. Expand **Outbound private link**.
1. Complete the fields.
1. To add endpoint services, select **Add endpoint service**. You can add up to ten endpoint services for each region. At least one endpoint service is required to save the region.
1. Select **Save**.
1. Optional. To add an outbound private link for a second region, select **Add outbound connection**, then repeat the previous steps.
#### Delete an outbound private link with Switchboard
1. Sign in to [Switchboard](https://console.gitlab-dedicated.com/).
1. At the top of the page, select **Configuration**.
1. Expand **Outbound private link**.
1. Go to the outbound private link you want to delete, then select **Delete** ({{< icon name="remove" >}}).
1. Select **Delete**.
1. Optional. To delete all the links in a region, from the region header, select **Delete** ({{< icon name="remove" >}}). This also deletes the region configuration.
#### Add an outbound private link with a support request
1. [Create the Endpoint service](https://docs.aws.amazon.com/vpc/latest/privatelink/create-endpoint-service.html) through which your internal service
will be available to GitLab Dedicated. Provide the associated `Service Endpoint Name` on a new
[support ticket](https://support.gitlab.com/hc/en-us/requests/new?ticket_form_id=4414917877650).
1. Configure a Network Load Balancer (NLB) for the endpoint service in the Availability Zones (AZs) where your Dedicated instance is deployed. Either:
- Use the configured AZs. AZ IDs are displayed on the Overview page in Switchboard.
- Enable the NLB in every AZ in the region.
1. In your [support ticket](https://support.gitlab.com/hc/en-us/requests/new?ticket_form_id=4414917877650), GitLab provides you with the ARN of the
   IAM role that initiates the connection to your endpoint service. Ensure this ARN is included, or otherwise covered by other
   entries, in the list of "Allowed Principals" on the Endpoint Service, as described in the [AWS documentation](https://docs.aws.amazon.com/vpc/latest/privatelink/configure-endpoint-service.html#add-remove-permissions).
   Though it's optional, you should add it explicitly, which allows you to set `Acceptance required` to No so that Dedicated can connect in a single operation.
   If you leave `Acceptance required` set to Yes, you must manually accept the connection after Dedicated initiates it.
1. To connect to services through the endpoint, the Dedicated services require a DNS name. Private Link automatically creates an internal name, but
   it is machine-generated and rarely useful directly. Two options are available:
- In your Endpoint Service, enable [Private DNS name](https://docs.aws.amazon.com/vpc/latest/privatelink/manage-dns-names.html), perform the
required validation, and let GitLab know in the support ticket that you are using this option. If `Acceptance Required` is set to Yes on your
Endpoint Service, also note this on the support ticket because Dedicated will have to initiate the connection without Private DNS, wait for you
to confirm it has been accepted, and then update the connection to enable the use of Private DNS.
- Dedicated can manage a Private Hosted Zone (PHZ) within the Dedicated AWS Account and alias any arbitrary DNS names to the endpoint, directing
requests for those names to your endpoint service. These aliases are known as PHZ entries. For more information, see [Private hosted zones](#private-hosted-zones).
GitLab then configures the tenant instance to create the necessary Endpoint Interfaces based on the service names you provided. Any matching outbound
connections made from the tenant instance are directed through the PrivateLink into your VPC.
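The endpoint service and its allowlist from steps 1 and 3 can also be configured with the AWS CLI. The ARNs and IDs below are placeholders; use the role ARN that GitLab provides on the support ticket:

```shell
# Create the endpoint service fronted by your NLB, with acceptance disabled
aws ec2 create-vpc-endpoint-service-configuration \
  --network-load-balancer-arns arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/net/my-nlb/0123456789abcdef \
  --no-acceptance-required

# Allow the IAM role that initiates the connection from GitLab Dedicated
aws ec2 modify-vpc-endpoint-service-permissions \
  --service-id vpce-svc-0example1234567890 \
  --add-allowed-principals arn:aws:iam::123456789012:role/gitlab-dedicated-link
```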
#### Troubleshooting
If you have trouble establishing a connection after the Outbound Private Link has been set up, the cause is usually in your AWS infrastructure. Depending on the behavior you observe, check the following:
- Ensure that cross-zone load balancing is turned on in your Network Load Balancer (NLB).
- Ensure that the Inbound Rules section of the appropriate Security Groups permits traffic from the correct IP ranges.
- Ensure that the inbound traffic is mapped to the correct port on the Endpoint Service.
- In Switchboard, expand **Outbound private link** and confirm that the details appear as you expect.
- Ensure that you have [allowed requests to the local network from webhooks and integrations](../../../security/webhooks.md#allow-requests-to-the-local-network-from-webhooks-and-integrations).
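For example, you can confirm the cross-zone setting on your NLB from the AWS CLI; the load balancer ARN is a placeholder. A result of `true` means cross-zone load balancing is on:

```shell
aws elbv2 describe-load-balancer-attributes \
  --load-balancer-arn arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/net/my-nlb/0123456789abcdef \
  --query "Attributes[?Key=='load_balancing.cross_zone.enabled'].Value" \
  --output text
```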
## Private hosted zones
A private hosted zone (PHZ) creates custom DNS aliases (CNAMEs) that resolve in your GitLab Dedicated instance's network.
Use a PHZ when you want to:
- Create multiple DNS names or aliases that use a single endpoint, such as when running a reverse proxy to connect to multiple services.
- Use a private domain that cannot be validated by public DNS.
PHZs are commonly used with reverse PrivateLink to create readable domain names instead of using AWS-generated endpoint names. For example, you can use `alpha.beta.tenant.gitlab-dedicated.com` instead of `vpce-0987654321fedcba0-k99y1abc.vpce-svc-0a123bcd4e5f678gh.eu-west-1.vpce.amazonaws.com`.
In some cases, you can also use PHZs to create aliases that resolve to publicly accessible DNS names. For example, you can create an internal DNS name that resolves to a public endpoint when you need internal systems to access a service through its private name.
{{< alert type="note" >}}
Changes to private hosted zones can disrupt services that use these records for up to five minutes.
{{< /alert >}}
### PHZ domain structure
When using your GitLab Dedicated instance's domain as part of an alias, you must include two subdomains before the main domain, where:
- The first subdomain becomes the name of the PHZ.
- The second subdomain becomes the record entry for the alias.
For example:
- Valid PHZ entry: `subdomain2.subdomain1.<your-tenant-id>.gitlab-dedicated.com`.
- Invalid PHZ entry: `subdomain1.<your-tenant-id>.gitlab-dedicated.com`.
When not using your GitLab Dedicated instance domain, you must still provide:
- A Private Hosted Zone (PHZ) name
- A PHZ entry in the format `phz-entry.phz-name.com`
To prevent shadowing of public DNS domains when the domain is created inside the Dedicated tenant, use at least two additional subdomain levels below any public domain for your PHZ entries. For example, if your tenant is hosted at `tenant.gitlab-dedicated.com`, your PHZ entry should be at least `subdomain1.subdomain2.tenant.gitlab-dedicated.com`. If you own `customer.com`, the entry should be at least `subdomain1.subdomain2.customer.com`, where `subdomain2` is not a public domain.
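The two-subdomain rule above can be checked mechanically. This is an illustrative sketch (the tenant domain is hypothetical), not a Switchboard validation routine:

```python
def valid_phz_entry(entry: str, instance_domain: str) -> bool:
    """Check that a PHZ entry has at least two subdomain levels below the
    instance domain: one label becomes the record entry and the other the
    hosted-zone name."""
    suffix = "." + instance_domain
    if not entry.endswith(suffix):
        return False
    extra_labels = [label for label in entry[: -len(suffix)].split(".") if label]
    return len(extra_labels) >= 2
```

For example, `subdomain2.subdomain1.tenant.gitlab-dedicated.com` passes, while `subdomain1.tenant.gitlab-dedicated.com` does not.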
### Add a private hosted zone with Switchboard
To add a private hosted zone:
1. Sign in to [Switchboard](https://console.gitlab-dedicated.com/).
1. At the top of the page, select **Configuration**.
1. Expand **Private hosted zones**.
1. Select **Add private hosted zone entry**.
1. Complete the fields.
- In the **Hostname** field, enter your Private Hosted Zone (PHZ) entry.
- For **Link type**, choose one of the following:
- For an outbound private link PHZ entry, select the endpoint service from the dropdown list.
Only links with the `Available` or `Pending Acceptance` status are shown.
- For other PHZ entries, provide a list of DNS aliases.
1. Select **Save**.
Your PHZ entry and any aliases should appear in the list.
1. Scroll to the top of the page, and select whether to apply the changes immediately or during the next maintenance window.
### Add a private hosted zone with a support request
If you are unable to use Switchboard to add a private hosted zone, you can open a [support ticket](https://support.gitlab.com/hc/en-us/requests/new?ticket_form_id=4414917877650) and provide a list of DNS names that should resolve to the endpoint service for the outbound private link. The list can be updated as needed.
## IP allowlist
GitLab Dedicated allows you to control which IP addresses can access your instance through an IP allowlist. After the IP allowlist is enabled, requests from IP addresses that are not on the allowlist receive an `HTTP 403 Forbidden` response.
You can view, add, and remove allowlisted IP addresses on the Configuration page in Switchboard.
### Add an IP to the allowlist with Switchboard
1. Sign in to [Switchboard](https://console.gitlab-dedicated.com/).
1. At the top of the page, select **Configuration**.
1. Expand **Allowed Source List Config / IP allowlist**.
1. Turn on the **Enable** toggle.
1. To add an IP address:
1. Select **Add Item**.
1. In the **Address** text box, enter either:
- A single IPv4 address (for example, `192.168.1.1`).
- An IPv4 address range in CIDR notation (for example, `192.168.1.0/24`).
1. In the **Description** text box, enter a description.
To add another address or range, repeat this step. IPv6 addresses are not supported.
1. Select **Save**.
1. Scroll up to the top of the page and select whether to apply the changes immediately or during the next maintenance window. After the changes are applied, the IP addresses are added to the IP allowlist for your instance.
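The allowlist semantics described above can be sketched with Python's standard `ipaddress` module; the entries shown are the examples from the steps, not a real configuration:

```python
import ipaddress

# Example allowlist entries from the steps above (illustrative only).
ALLOWLIST = ["192.168.1.1", "192.168.1.0/24"]

def is_allowed(client_ip: str, allowlist=ALLOWLIST) -> bool:
    """Return True if client_ip falls inside any allowlist entry.
    Single addresses are treated as /32 networks."""
    ip = ipaddress.ip_address(client_ip)
    return any(ip in ipaddress.ip_network(entry, strict=False)
               for entry in allowlist)
```

A request from `192.168.1.77` would match the `192.168.1.0/24` range and be admitted; a request from `10.0.0.5` would receive `HTTP 403 Forbidden`.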
### Add an IP to the allowlist with a Support Request
If you are unable to use Switchboard to update your IP allowlist, you can open a [support ticket](https://support.gitlab.com/hc/en-us/requests/new?ticket_form_id=4414917877650) and specify a comma separated list of IP addresses that can access your GitLab Dedicated instance.
### Enable OpenID Connect for your IP allowlist
Using [GitLab as an OpenID Connect identity provider](../../../integration/openid_connect_provider.md) requires internet access to the OpenID Connect verification endpoint.
To enable access to the OpenID Connect endpoint while maintaining your IP allowlist:
- In a [support ticket](https://support.gitlab.com/hc/en-us/requests/new?ticket_form_id=4414917877650), request to allow access to the OpenID Connect endpoint.
The configuration is applied during the next maintenance window.
### Enable SCIM provisioning for your IP allowlist
You can use SCIM with external identity providers to automatically provision and manage users. To use SCIM, your identity provider must be able to access the instance SCIM API endpoints. By default, IP allowlisting blocks communication to these endpoints.
To enable SCIM while maintaining your IP allowlist:
- In a [support ticket](https://support.gitlab.com/hc/en-us/requests/new?ticket_form_id=4414917877650), request to enable SCIM endpoints to the internet.
The configuration is applied during the next maintenance window.

---

Source: https://docs.gitlab.com/administration/dedicated/openid_connect
(https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/administration/dedicated/openid_connect.md, extracted 2025-08-13)
<!-- markdownlint-disable -->
This document was moved to [another location](authentication/openid_connect.md).
<!-- This redirect file can be deleted after 2025-11-01. -->
<!-- Redirects that point to other docs in the same project expire in three months. -->
<!-- Redirects that point to docs in the same project or site (for example, link is not relative and starts with `https:`) expire in one year. -->
<!-- Before deletion, see: https://docs.gitlab.com/development/documentation/redirects -->

---

Source: https://docs.gitlab.com/administration/dedicated/configure_instance/saml
(https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/administration/dedicated/configure_instance/saml.md, extracted 2025-08-13)
Stage: GitLab Dedicated, Group: Switchboard
To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments

# SAML SSO for GitLab Dedicated

Configure SAML single sign-on (SSO) authentication for GitLab Dedicated.
{{< details >}}
- Tier: Ultimate
- Offering: GitLab Dedicated
{{< /details >}}
You can configure SAML single sign-on (SSO) for your GitLab Dedicated instance for up to ten identity providers (IdPs).
The following SAML SSO options are available:
- [Request signing](#request-signing)
- [SAML SSO for groups](#saml-groups)
- [Group sync](#group-sync)
{{< alert type="note" >}}
This configures SAML SSO for end users of your GitLab Dedicated instance.
To configure SSO for Switchboard administrators, see [configure Switchboard SSO](_index.md#configure-switchboard-sso).
{{< /alert >}}
## Prerequisites
- You must [set up the identity provider](../../../../integration/saml.md#set-up-identity-providers) before you can configure SAML for GitLab Dedicated.
- To configure GitLab to sign SAML authentication requests, you must create a private key and public certificate pair for your GitLab Dedicated instance.
## Add a SAML provider with Switchboard
To add a SAML provider for your GitLab Dedicated instance:
1. Sign in to [Switchboard](https://console.gitlab-dedicated.com/).
1. At the top of the page, select **Configuration**.
1. Expand **SAML providers**.
1. Select **Add SAML provider**.
1. In the **SAML label** text box, enter a name to identify this provider in Switchboard.
1. Optional. To configure users based on SAML group membership or use group sync, complete these fields:
- **SAML group attribute**
- **Admin groups**
- **Auditor groups**
- **External groups**
- **Required groups**
1. In the **IdP cert fingerprint** text box, enter your IdP certificate fingerprint. This value is the SHA1 fingerprint of your IdP's `X.509` certificate.
1. In the **IdP SSO target URL** text box, enter the URL endpoint on your IdP where GitLab Dedicated redirects users to authenticate with this provider.
1. From the **Name identifier format** dropdown list, select the format of the NameID that this provider sends to GitLab.
1. Optional. To configure request signing, complete these fields:
- **Issuer**
- **Attribute statements**
- **Security**
1. To start using this provider, select the **Enable this provider** checkbox.
1. Select **Save**.
1. To add another SAML provider, select **Add SAML provider** again and follow the previous steps. You can add up to ten providers.
1. Scroll up to the top of the page. The **Initiated changes** banner explains that your SAML configuration changes are applied during the next maintenance window. To apply the changes immediately, select **Apply changes now**.
After the changes are applied, you can sign in to your GitLab Dedicated instance using this SAML provider.
To use group sync, [configure the SAML group links](../../../../user/group/saml_sso/group_sync.md#configure-saml-group-links).
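The **IdP cert fingerprint** entered in the steps above is conventionally the SHA1 digest of the certificate's DER encoding, written as colon-separated hex. A minimal sketch of computing it from PEM text (the sample string in the comment is not a real certificate):

```python
import base64
import hashlib
import textwrap

def cert_fingerprint(pem: str) -> str:
    """SHA1 fingerprint (colon-separated hex) of a PEM-encoded certificate."""
    # Strip the BEGIN/END markers and decode the base64 body back to DER bytes.
    body = "".join(line for line in pem.splitlines()
                   if line and "-----" not in line)
    der = base64.b64decode(body)
    digest = hashlib.sha1(der).hexdigest()
    return ":".join(textwrap.wrap(digest, 2))

# cert_fingerprint(open("idp-cert.pem").read())  # hypothetical file name
```

For a real certificate, read the PEM file and compare the result against the fingerprint your IdP reports.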
## Verify your SAML configuration
To verify that your SAML configuration is successful:
1. Sign out and go to your GitLab Dedicated instance's sign-in page.
1. Check that the SSO button for your SAML provider appears on the sign-in page.
1. Go to the metadata URL of your instance (`https://INSTANCE-URL/users/auth/saml/metadata`).
The metadata URL shows information that can simplify configuration of your identity provider
and helps validate your SAML settings.
1. Try signing in through the SAML provider to ensure the authentication flow works correctly.
For troubleshooting information, see [troubleshooting SAML](../../../../user/group/saml_sso/troubleshooting.md).
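The document served at the metadata URL is standard SAML 2.0 metadata XML. As an illustration, you could extract the `entityID` to confirm what your instance advertises (the sample document here is minimal and hypothetical):

```python
import xml.etree.ElementTree as ET

def entity_id(metadata_xml: str) -> str:
    """Extract the entityID attribute from a SAML metadata document."""
    return ET.fromstring(metadata_xml).attrib["entityID"]

# Minimal, hypothetical metadata document:
SAMPLE = (
    '<EntityDescriptor xmlns="urn:oasis:names:tc:SAML:2.0:metadata" '
    'entityID="https://gitlab.example.com"/>'
)
```

In practice you would fetch `https://INSTANCE-URL/users/auth/saml/metadata` and pass the response body to this function.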
## Add a SAML provider with a Support Request
If you are unable to use Switchboard to add or update SAML for your GitLab Dedicated instance, you can open a [support ticket](https://support.gitlab.com/hc/en-us/requests/new?ticket_form_id=4414917877650):
1. To make the necessary changes, include the desired [SAML configuration block](../../../../integration/saml.md#configure-saml-support-in-gitlab) for your GitLab application in your support ticket. At a minimum, GitLab needs the following information to enable SAML for your instance:
- IDP SSO Target URL
- Certificate fingerprint or certificate
- NameID format
- SSO login button description
```json
"saml": {
  "attribute_statements": {
    // optional
  },
  "enabled": true,
  "groups_attribute": "",
  "admin_groups": [
    // optional
  ],
  "idp_cert_fingerprint": "43:51:43:a1:b5:fc:8b:b7:0a:3a:a9:b1:0f:66:73:a8",
  "idp_sso_target_url": "https://login.example.com/idp",
  "label": "IDP Name",
  "name_identifier_format": "urn:oasis:names:tc:SAML:2.0:nameid-format:persistent",
  "security": {
    // optional
  },
  "auditor_groups": [
    // optional
  ],
  "external_groups": [
    // optional
  ],
  "required_groups": [
    // optional
  ]
}
```
1. After GitLab deploys the SAML configuration to your instance, you are notified on your support ticket.
1. To verify the SAML configuration is successful:
- Check that the SSO login button description is displayed on your instance's login page.
- Go to the metadata URL of your instance, which GitLab provides in the support ticket. You can use this page to simplify much of the identity provider configuration and to manually validate the settings.
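As a sanity check before filing the ticket, you could verify that your configuration block contains the minimum fields listed above. A sketch, using the field names from the example block:

```python
# Minimum fields GitLab needs, per the list above.
REQUIRED = {"idp_sso_target_url", "idp_cert_fingerprint",
            "name_identifier_format", "label"}

def missing_fields(saml_config: dict) -> set:
    """Return the required fields absent from a SAML configuration block."""
    return REQUIRED - saml_config.keys()

example = {
    "enabled": True,
    "idp_cert_fingerprint": "43:51:43:a1:b5:fc:8b:b7:0a:3a:a9:b1:0f:66:73:a8",
    "idp_sso_target_url": "https://login.example.com/idp",
    "label": "IDP Name",
    "name_identifier_format": "urn:oasis:names:tc:SAML:2.0:nameid-format:persistent",
}
```

An empty result means every required field is present; any names returned are the ones still missing from the block.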
## Request signing
To use [SAML request signing](../../../../integration/saml.md#sign-saml-authentication-requests-optional), you must obtain a certificate. This certificate can be self-signed, which has the advantage of not having to prove ownership of an arbitrary Common Name (CN) to a public Certificate Authority (CA).
{{< alert type="note" >}}
Because SAML request signing requires certificate signing, you must complete these steps to use SAML with this feature enabled.
{{< /alert >}}
To enable SAML request signing:
1. Open a [support ticket](https://support.gitlab.com/hc/en-us/requests/new?ticket_form_id=4414917877650) and indicate that you want request signing enabled.
1. GitLab works with you to produce the Certificate Signing Request (CSR) and sends it to you for signing. Alternatively, the CSR can be signed by a public CA.
1. After the certificate is signed, you can then use the certificate and its associated private key to complete the `security` section of the [SAML configuration](#add-a-saml-provider-with-switchboard) in Switchboard.
Authentication requests from GitLab to your identity provider can now be signed.
## SAML groups
With SAML groups you can configure GitLab users based on SAML group membership.
To enable SAML groups, add the [required elements](../../../../integration/saml.md#configure-users-based-on-saml-group-membership) to your SAML configuration in [Switchboard](#add-a-saml-provider-with-switchboard) or to the SAML block you provide in a [support ticket](https://support.gitlab.com/hc/en-us/requests/new?ticket_form_id=4414917877650).
## Group sync
With [group sync](../../../../user/group/saml_sso/group_sync.md), you can sync users across identity provider groups to mapped groups in GitLab.
To enable group sync:
1. Add the [required elements](../../../../user/group/saml_sso/group_sync.md#configure-saml-group-sync) to your SAML configuration in [Switchboard](#add-a-saml-provider-with-switchboard) or to the SAML configuration block you provide in a [support ticket](https://support.gitlab.com/hc/en-us/requests/new?ticket_form_id=4414917877650).
1. Configure the [Group Links](../../../../user/group/saml_sso/group_sync.md#configure-saml-group-links).

---

Source: https://docs.gitlab.com/administration/dedicated/configure_instance/authentication
(https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/administration/dedicated/configure_instance/_index.md, extracted 2025-08-13)
Stage: GitLab Dedicated, Group: Switchboard

# Authentication for GitLab Dedicated

Configure authentication methods for GitLab Dedicated.
{{< details >}}
- Tier: Ultimate
- Offering: GitLab Dedicated
{{< /details >}}
GitLab Dedicated has two separate authentication contexts:
- Switchboard authentication: How administrators sign in to manage GitLab Dedicated instances.
- Instance authentication: How end users sign in to your GitLab Dedicated instance.
## Switchboard authentication
Administrators use GitLab Dedicated Switchboard to manage instances, users, and configuration.
Switchboard supports these authentication methods:
- Single sign-on (SSO) with SAML or OIDC
- Standard GitLab.com accounts
For information about Switchboard user management, see [manage users and notifications](../users_notifications.md).
### Configure Switchboard SSO
Enable single sign-on (SSO) for Switchboard to integrate with your organization's identity provider.
Switchboard supports both SAML and OIDC protocols.
{{< alert type="note" >}}
This configures SSO for Switchboard administrators who manage your GitLab Dedicated instance.
{{< /alert >}}
To configure SSO for Switchboard:
1. Gather the required information for your chosen protocol:
- [SAML parameters](#saml-parameters-for-switchboard)
- [OIDC parameters](#oidc-parameters-for-switchboard)
1. [Submit a support ticket](https://support.gitlab.com/hc/en-us/requests/new?ticket_form_id=4414917877650) with the information.
1. Configure your identity provider with the information GitLab provides.
#### SAML parameters for Switchboard
When requesting SAML configuration, you must provide:
| Parameter | Description |
| ------------------------- | ----------- |
| Metadata URL | The URL that points to your identity provider's SAML metadata document. This typically ends with `/saml/metadata.xml` or is available in your identity provider's SSO configuration section. |
| Email attribute mapping | The format your identity provider uses to represent email addresses. For example, in Auth0 this might be `http://schemas.auth0.com/email`. |
| Attributes request method | The HTTP method (GET or POST) that should be used when requesting attributes from your identity provider. Check your identity provider's documentation for the recommended method. |
| User email domain | The domain portion of your users' email addresses (for example, `gitlab.com`). |
GitLab provides the following information for you to configure in your identity provider:
| Parameter | Description |
| ------------------- | ----------- |
| Callback/ACS URL | The URL where your identity provider should send SAML responses after authentication. |
| Required attributes | Attributes that must be included in the SAML response. At minimum, an attribute mapped to `email` is required. |
If you require encrypted responses, GitLab can provide the necessary certificates upon request.
{{< alert type="note" >}}
GitLab Dedicated does not support IdP-initiated SAML.
{{< /alert >}}
#### OIDC parameters for Switchboard
When requesting OIDC configuration, you must provide:
| Parameter | Description |
| --------------- | ----------- |
| Issuer URL | The base URL that uniquely identifies your OIDC provider. This URL typically points to your provider's discovery document located at `https://[your-idp-domain]/.well-known/openid-configuration`. |
| Token endpoints | The specific URLs from your identity provider used for obtaining and validating authentication tokens. These endpoints are usually listed in your provider's OpenID Connect configuration documentation. |
| Scopes | The permission levels requested during authentication that determine what user information is shared. Standard scopes include `openid`, `email`, and `profile`. |
| Client ID | The unique identifier assigned to Switchboard when you register it as an application in your identity provider. You must create this registration in your identity provider's dashboard first. |
| Client secret | The confidential security key generated when you register Switchboard in your identity provider. This secret authenticates Switchboard to your IdP and should be kept secure. |
GitLab provides the following information for you to configure in your identity provider:
| Parameter | Description |
| ---------------------- | ----------- |
| Redirect/callback URLs | The URLs where your identity provider should redirect users after successful authentication. These must be added to your identity provider's allowed redirect URLs list. |
| Required claims | The specific user information that must be included in the authentication token payload. At minimum, a claim mapped to the user's email address is required. |
Additional configuration details might be required depending on your OIDC provider.
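For OIDC, the issuer URL and the discovery document location in the table above are related by the fixed `.well-known` convention, which can be handy when assembling the parameters for the support ticket:

```python
def discovery_url(issuer: str) -> str:
    """Location of the OpenID Connect discovery document for an issuer."""
    # Normalize a trailing slash so the path segment is not duplicated.
    return issuer.rstrip("/") + "/.well-known/openid-configuration"
```

For example, an issuer of `https://idp.example.com` (a hypothetical provider) yields `https://idp.example.com/.well-known/openid-configuration`, which is where the token endpoints and supported scopes are published.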
## Instance authentication
Configure how your organization's users authenticate to your GitLab Dedicated instance.
Your GitLab Dedicated instance supports these authentication methods:
- [Configure SAML SSO](saml.md)
- [Configure OIDC](openid_connect.md)
---
stage: GitLab Dedicated
group: Environment Automation
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
description: Configure OpenID Connect single sign-on (SSO) authentication for GitLab Dedicated.
title: OpenID Connect SSO for GitLab Dedicated
---
{{< details >}}
- Tier: Ultimate
- Offering: GitLab Dedicated
{{< /details >}}
Configure OpenID Connect (OIDC) single sign-on (SSO) for your GitLab Dedicated instance
to authenticate users with your identity provider.
Use OIDC SSO when you want to:
- Centralize user authentication through your existing identity provider.
- Reduce password management overhead for users.
- Implement consistent access controls across your organization's applications.
- Use a modern authentication protocol with broad industry support.
{{< alert type="note" >}}
These steps configure OIDC for end users of your GitLab Dedicated instance.
To configure SSO for Switchboard administrators, see [configure Switchboard SSO](_index.md#configure-switchboard-sso).
{{< /alert >}}
## Configure OpenID Connect
Prerequisites:
- Set up your identity provider. You can use a temporary callback URL, as GitLab provides the callback URL after configuration.
- Make sure your identity provider supports the OpenID Connect specification.
To configure OIDC for your GitLab Dedicated instance:
1. [Create a support ticket](https://support.gitlab.com/hc/en-us/requests/new?ticket_form_id=4414917877650).
1. In your support ticket, provide the following configuration:
```json
{
"label": "Login with OIDC",
"issuer": "https://accounts.example.com",
"discovery": true
}
```
1. Provide your Client ID and Client Secret securely using a temporary link to a secrets manager that the support team can access.
1. If your identity provider does not support auto discovery, include the client endpoint options. For example:
```json
{
"label": "Login with OIDC",
"issuer": "https://example.com/accounts",
"discovery": false,
"client_options": {
"end_session_endpoint": "https://example.com/logout",
"authorization_endpoint": "https://example.com/authorize",
"token_endpoint": "https://example.com/token",
"userinfo_endpoint": "https://example.com/userinfo",
"jwks_uri": "https://example.com/jwks"
}
}
```
After GitLab configures OIDC for your instance:
1. You receive the callback URL in your support ticket.
1. Update your identity provider with this callback URL.
1. Verify the configuration by checking for the SSO login button on your instance's sign-in page.
## Configure users based on OIDC group membership
You can configure GitLab to assign user roles and access based on OIDC group membership.
Prerequisites:
- Your identity provider must include group information in the ID token or expose it through the `userinfo` endpoint.
- You must have already configured basic OIDC authentication.
To configure users based on OIDC group membership:
1. Add the `groups_attribute` parameter to specify where GitLab should look for group information.
1. Configure the appropriate group arrays as needed.
1. In your support ticket, include the group configuration in your OIDC block. For example:
```json
{
"label": "Login with OIDC",
"issuer": "https://accounts.example.com",
"discovery": true,
"groups_attribute": "groups",
"required_groups": [
"gitlab-users"
],
"external_groups": [
"external-contractors"
],
"auditor_groups": [
"auditors"
],
"admin_groups": [
"gitlab-admins"
]
}
```
## Configuration parameters
The following parameters are available to configure OIDC for GitLab Dedicated instances.
For more information, see [use OpenID Connect as an authentication provider](../../../../administration/auth/oidc.md).
### Required parameters
| Parameter | Description |
|-----------|-------------|
| `issuer` | The OpenID Connect issuer URL of your identity provider. |
| `label` | Display name for the login button. |
| `discovery` | Whether to use OpenID Connect discovery (recommended: `true`). |
### Optional parameters
| Parameter | Description | Default |
|-----------|-------------|---------|
| `admin_groups` | Groups with administrator access. | `[]` |
| `auditor_groups` | Groups with auditor access. | `[]` |
| `client_auth_method` | Client authentication method. | `"basic"` |
| `external_groups` | Groups marked as external users. | `[]` |
| `groups_attribute` | Where to look for groups in the OIDC response. | None |
| `pkce` | Enable PKCE (Proof Key for Code Exchange). | `false` |
| `required_groups` | Groups required for access. | `[]` |
| `response_mode` | How the authorization response is delivered. | None |
| `response_type` | OAuth 2.0 response type. | `"code"` |
| `scope` | OpenID Connect scopes to request. | `["openid"]` |
| `send_scope_to_token_endpoint` | Include scope parameter in token endpoint requests. | `true` |
| `uid_field` | Field to use as the unique identifier. | `"sub"` |
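Before opening the ticket, you can sanity-check a drafted configuration file. This sketch uses a hypothetical file name and plain `grep`; it only confirms that the three required keys are present, not that their values are valid:

```shell
# Write the minimal configuration to a file, then confirm the
# required keys exist before attaching it to the support ticket.
cat > oidc.json <<'EOF'
{
  "label": "Login with OIDC",
  "issuer": "https://accounts.example.com",
  "discovery": true
}
EOF

# Report each required key that appears in the file.
for key in label issuer discovery; do
  grep -q "\"$key\"" oidc.json && echo "$key: present"
done
```

For stricter validation, a JSON-aware tool such as `jq` can also check the value types.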
### Provider-specific examples
#### Google
```json
{
"label": "Google",
"scope": ["openid", "profile", "email"],
"response_type": "code",
"issuer": "https://accounts.google.com",
"client_auth_method": "query",
"discovery": true,
"uid_field": "preferred_username",
"pkce": true
}
```
#### Microsoft Azure AD
```json
{
"label": "Azure AD",
"scope": ["openid", "profile", "email"],
"response_type": "code",
"issuer": "https://login.microsoftonline.com/your-tenant-id/v2.0",
"client_auth_method": "query",
"discovery": true,
"uid_field": "preferred_username",
"pkce": true
}
```
#### Okta
```json
{
"label": "Okta",
"scope": ["openid", "profile", "email", "groups"],
"response_type": "code",
"issuer": "https://your-domain.okta.com/oauth2/default",
"client_auth_method": "query",
"discovery": true,
"uid_field": "preferred_username",
"pkce": true
}
```
## Troubleshooting
If you encounter issues with your OpenID Connect configuration:
- Verify that your identity provider is correctly configured and accessible.
- Check that the client ID and secret provided to support are correct.
- Ensure the redirect URI in your identity provider matches the one provided in your support ticket.
- Verify that the issuer URL is correct and accessible.
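When `discovery` is enabled, the issuer must serve a discovery document at a well-known path. One quick check, shown here with a hypothetical issuer URL:

```shell
# Derive the discovery document URL from the issuer URL.
issuer="https://accounts.example.com"
discovery_url="${issuer%/}/.well-known/openid-configuration"
echo "$discovery_url"

# When network access allows, fetch it to confirm the issuer responds:
# curl --silent "$discovery_url"
```

The fetched document should list the `issuer`, `authorization_endpoint`, and `token_endpoint` values that GitLab uses during discovery.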
---
stage: Verify
group: Pipeline Execution
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
description: Calculations, quotas, purchase information.
title: Compute minutes administration
---
{{< details >}}
- Tier: Premium, Ultimate
- Offering: GitLab Self-Managed, GitLab Dedicated
{{< /details >}}
{{< history >}}
- [Renamed](https://gitlab.com/groups/gitlab-com/-/epics/2150) from "CI/CD minutes" to "compute quota" or "compute minutes" in GitLab 16.1.
{{< /history >}}
Administrators can limit the amount of time that projects can use to run jobs on
[instance runners](../../ci/runners/runners_scope.md) each month. This limit
is tracked with a [compute minutes quota](../../ci/pipelines/compute_minutes.md).
Group and project runners are not subject to the compute quota.
On GitLab Self-Managed:
- Compute quotas are disabled by default.
- Administrators can [assign more compute minutes](#set-the-compute-quota-for-a-group)
if a namespace uses all its monthly quota.
- The [cost factor](../../ci/pipelines/compute_minutes.md#compute-usage-calculation) is `1` for all projects.
On GitLab.com:
- To learn about the quotas and cost factors applied, see [compute minutes](../../ci/pipelines/compute_minutes.md).
- To manage compute minutes as a GitLab team member, see [compute minutes administration for GitLab.com](dot_com_compute_minutes.md).
[Trigger jobs](../../ci/yaml/_index.md#trigger) do not execute on runners, so they do not
consume compute minutes, even when using [`strategy:depend`](../../ci/yaml/_index.md#triggerstrategy)
to wait for the [downstream pipeline](../../ci/pipelines/downstream_pipelines.md) status.
The triggered downstream pipeline consumes compute minutes the same as other pipelines.
## Set the compute quota for all namespaces
By default, GitLab instances do not have a compute quota: the quota value is `0`,
which means unlimited compute minutes.
Prerequisites:
- You must be a GitLab administrator.
To change the default quota that applies to all namespaces:
1. On the left sidebar, at the bottom, select **Admin**.
1. Select **Settings > CI/CD**.
1. Expand **Continuous Integration and Deployment**.
1. In the **Compute quota** box, enter a limit.
1. Select **Save changes**.
If a quota is already defined for a specific namespace, this value does not change that quota.
## Set the compute quota for a group
You can override the global value and set a compute quota for a group.
Prerequisites:
- You must be a GitLab administrator.
- The group must be a top-level group, not a subgroup.
To set a compute quota for a group or namespace:
1. On the left sidebar, at the bottom, select **Admin**.
1. Select **Overview > Groups**.
1. For the group you want to update, select **Edit**.
1. In the **Compute quota** box, enter the maximum number of compute minutes.
1. Select **Save changes**.
You can also use the [update group API](../../api/groups.md#update-group-attributes) or the
[update user API](../../api/users.md#modify-a-user) instead.
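For example, the group update can be sketched as a single API call. The host, group ID, and quota below are hypothetical; `shared_runners_minutes_limit` is the group attribute that holds the compute quota:

```shell
# Assemble the request (hypothetical host, group ID, and quota).
gitlab_host="https://gitlab.example.com"
group_id=42
quota=10000
request="PUT ${gitlab_host}/api/v4/groups/${group_id}?shared_runners_minutes_limit=${quota}"
echo "$request"

# To send it for real, supply an administrator access token:
# curl --request PUT --header "PRIVATE-TOKEN: <admin_token>" \
#   "${gitlab_host}/api/v4/groups/${group_id}?shared_runners_minutes_limit=${quota}"
```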
## Reset compute usage
An administrator can reset the compute usage for a namespace for the current month.
### Reset usage for a personal namespace
1. Find the [user in the **Admin** area](../admin_area.md#administering-users).
1. Select **Edit**.
1. In **Limits**, select **Reset compute usage**.
### Reset usage for a group namespace
1. Find the [group in the **Admin** area](../admin_area.md#administering-groups).
1. Select **Edit**.
1. In **Permissions and group features**, select **Reset compute usage**.
---
stage: Verify
group: Runner
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Job logs
---
{{< details >}}
- Tier: Free, Premium, Ultimate
- Offering: GitLab Self-Managed
{{< /details >}}
Job logs are sent by a runner while it's processing a job. You can see
logs in places like job pages, pipelines, and email notifications.
## Data flow
In general, there are two states for job logs: `log` and `archived log`.
In the following table you can see the phases a log goes through:
| Phase | State | Condition | Data flow | Stored path |
| -------------- | ------------ | ----------------------- | -----------------------------------------| ----------- |
| 1: patching | log | When a job is running | Runner => Puma => file storage | `#{ROOT_PATH}/gitlab-ci/builds/#{YYYY_mm}/#{project_id}/#{job_id}.log` |
| 2: archiving | archived log | After a job is finished | Sidekiq moves log to artifacts folder | `#{ROOT_PATH}/gitlab-rails/shared/artifacts/#{disk_hash}/#{YYYY_mm_dd}/#{job_id}/#{job_artifact_id}/job.log` |
| 3: uploading | archived log | After a log is archived | Sidekiq moves archived log to [object storage](#uploading-logs-to-object-storage) (if configured) | `#{bucket_name}/#{disk_hash}/#{YYYY_mm_dd}/#{job_id}/#{job_artifact_id}/job.log` |
The `ROOT_PATH` varies per environment:
- For the Linux package it's `/var/opt/gitlab`.
- For self-compiled installations it's `/home/git/gitlab`.
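Putting the phase 1 template together, a live log path can be assembled like this (Linux package `ROOT_PATH`, with made-up project and job IDs):

```shell
# Reconstruct a phase 1 (patching) log path from its components.
root_path="/var/opt/gitlab"
month="2025_08"      # #{YYYY_mm}; at runtime: date +%Y_%m
project_id=123
job_id=456
log_path="${root_path}/gitlab-ci/builds/${month}/${project_id}/${job_id}.log"
echo "$log_path"
```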
## Changing the job logs local location
{{< alert type="note" >}}
For Docker installations, you can change the path where your data is mounted.
For the Helm chart, use object storage.
{{< /alert >}}
To change the location where the job logs are stored:
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
1. Optional. If you have existing job logs, pause continuous integration data
processing by temporarily stopping Sidekiq:
```shell
sudo gitlab-ctl stop sidekiq
```
1. Set the new storage location in `/etc/gitlab/gitlab.rb`:
```ruby
gitlab_ci['builds_directory'] = '/mnt/gitlab-ci/builds'
```
1. Save the file and reconfigure GitLab:
```shell
sudo gitlab-ctl reconfigure
```
1. Use `rsync` to move job logs from the current location to the new location:
```shell
sudo rsync -avzh --remove-source-files --ignore-existing --progress /var/opt/gitlab/gitlab-ci/builds/ /mnt/gitlab-ci/builds/
```
   Use `--ignore-existing` so you don't overwrite new job logs with older versions of the same log.
1. If you opted to pause the continuous integration data processing, you can
start Sidekiq again:
```shell
sudo gitlab-ctl start sidekiq
```
1. Remove the old job logs storage location:
```shell
sudo rm -rf /var/opt/gitlab/gitlab-ci/builds
```
{{< /tab >}}
{{< tab title="Self-compiled (source)" >}}
1. Optional. If you have existing job logs, pause continuous integration data
processing by temporarily stopping Sidekiq:
```shell
# For systems running systemd
sudo systemctl stop gitlab-sidekiq
# For systems running SysV init
sudo service gitlab stop
```
1. Edit `/home/git/gitlab/config/gitlab.yml` to set the new storage location:
```yaml
production: &base
gitlab_ci:
builds_path: /mnt/gitlab-ci/builds
```
1. Save the file and restart GitLab:
```shell
# For systems running systemd
sudo systemctl restart gitlab.target
# For systems running SysV init
sudo service gitlab restart
```
1. Use `rsync` to move job logs from the current location to the new location:
```shell
sudo rsync -avzh --remove-source-files --ignore-existing --progress /home/git/gitlab/builds/ /mnt/gitlab-ci/builds/
```
   Use `--ignore-existing` so you don't overwrite new job logs with older versions of the same log.
1. If you opted to pause the continuous integration data processing, you can
start Sidekiq again:
```shell
# For systems running systemd
sudo systemctl start gitlab-sidekiq
# For systems running SysV init
sudo service gitlab start
```
1. Remove the old job logs storage location:
```shell
sudo rm -rf /home/git/gitlab/builds
```
{{< /tab >}}
{{< /tabs >}}
## Uploading logs to object storage
Archived logs are considered as [job artifacts](job_artifacts.md).
Therefore, when you [set up the object storage integration](job_artifacts.md#using-object-storage),
job logs are automatically migrated to it along with the other job artifacts.
See "Phase 3: uploading" in [Data flow](#data-flow) to learn about the process.
## Maximum log file size
The job log file size limit in GitLab is 100 megabytes by default.
Any job that exceeds the limit is marked as failed, and dropped by the runner.
For more details, see [Maximum file size for job logs](../instance_limits.md#maximum-file-size-for-job-logs).
## Prevent local disk usage
If you want to avoid any local disk usage for job logs,
you can do so using one of the following options:
- Turn on [incremental logging](#configure-incremental-logging).
- Set the [job logs location](#changing-the-job-logs-local-location)
to an NFS drive.
## How to remove job logs
There isn't a way to automatically expire old job logs. However, it's safe to remove
them if they're taking up too much space. If you remove the logs manually, the
job output in the UI is empty.
For details on how to delete job logs by using GitLab CLI,
see [Delete job logs](../../user/storage_management_automation.md#delete-job-logs).
Alternatively, you can delete job logs with shell commands. For example, to delete all job logs older than 60 days, run the following
command from a shell in your GitLab instance.
{{< alert type="note" >}}
For the Helm chart, use the storage management tools provided with your object
storage.
{{< /alert >}}
{{< alert type="warning" >}}
The following command permanently deletes the log files and is irreversible.
{{< /alert >}}
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
```shell
find /var/opt/gitlab/gitlab-rails/shared/artifacts -name "job.log" -mtime +60 -delete
```
{{< /tab >}}
{{< tab title="Docker" >}}
Assuming you mounted `/var/opt/gitlab` to `/srv/gitlab`:
```shell
find /srv/gitlab/gitlab-rails/shared/artifacts -name "job.log" -mtime +60 -delete
```
{{< /tab >}}
{{< tab title="Self-compiled (source)" >}}
```shell
find /home/git/gitlab/shared/artifacts -name "job.log" -mtime +60 -delete
```
{{< /tab >}}
{{< /tabs >}}
After the logs are deleted, you can find any broken file references by running
the Rake task that checks the
[integrity of the uploaded files](../raketasks/check.md#uploaded-files-integrity).
For more information, see how to
[delete references to missing artifacts](../raketasks/check.md#delete-references-to-missing-artifacts).
## Incremental logging
Incremental logging changes how job logs are processed and stored, improving performance in scaled-out deployments.
By default, job logs are sent from GitLab Runner in chunks and cached temporarily on disk. After the job completes, a background job archives the log to the artifacts directory or to object storage if configured.
With incremental logging, logs are stored in Redis and a persistent store instead of file storage. This approach:
- Prevents local disk usage for job logs.
- Eliminates the need for NFS sharing between Rails and Sidekiq servers.
- Improves performance in multi-node installations.
The incremental logging process uses Redis as temporary storage and follows this flow:
1. The runner picks a job from GitLab.
1. The runner sends a piece of log to GitLab.
1. GitLab appends the data to Redis in the `Gitlab::Redis::TraceChunks` namespace.
1. After the data in Redis reaches 128 KB, the data is flushed to a persistent store.
1. The previous steps repeat until the job is finished.
1. After the job is finished, GitLab schedules a Sidekiq worker to archive the log.
1. The Sidekiq worker archives the log to object storage and cleans up temporary data.
Redis Cluster is not supported with incremental logging.
For more information, see [issue 224171](https://gitlab.com/gitlab-org/gitlab/-/issues/224171).
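The buffering behavior in the flow above can be illustrated with a toy sketch. The fragment sizes are invented and this is not GitLab's implementation; it only models the 128 KB flush threshold:

```shell
# Toy model: buffer incoming log fragments and flush once 128 KB accumulates.
chunk_limit_kb=128
buffered_kb=0
flushes=0
for fragment_kb in 64 64 64; do   # three 64 KB fragments from the runner
  buffered_kb=$((buffered_kb + fragment_kb))
  if [ "$buffered_kb" -ge "$chunk_limit_kb" ]; then
    echo "flush ${buffered_kb} KB to the persistent store"
    flushes=$((flushes + 1))
    buffered_kb=0
  fi
done
echo "job finished: ${buffered_kb} KB left for the archival worker"
```

Any data still buffered when the job finishes is picked up by the Sidekiq archival worker, as in step 6 of the flow.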
### Configure incremental logging
Before you turn on incremental logging, you must [configure object storage](job_artifacts.md#using-object-storage) for CI/CD artifacts, logs, and builds. After incremental logging is turned on, files cannot be written to disk, and there is no protection against misconfiguration.
When you turn on incremental logging, running jobs' logs continue to be written to disk, but new jobs use incremental logging.
When you turn off incremental logging, running jobs continue to use incremental logging, but new jobs write to the disk.
To configure incremental logging:
- Use the setting in the [Admin area](../settings/continuous_integration.md#access-job-log-settings) or the [Settings API](../../api/settings.md).
|
---
stage: Verify
group: Runner
info: To determine the technical writer assigned to the Stage/Group associated with
this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Job logs
breadcrumbs:
- doc
- administration
- cicd
---
{{< details >}}
- Tier: Free, Premium, Ultimate
- Offering: GitLab Self-Managed
{{< /details >}}
Job logs are sent by a runner while it's processing a job. You can see
logs in places like job pages, pipelines, and email notifications.
## Data flow
In general, there are two states for job logs: `log` and `archived log`.
In the following table you can see the phases a log goes through:
| Phase | State | Condition | Data flow | Stored path |
| -------------- | ------------ | ----------------------- | -----------------------------------------| ----------- |
| 1: patching | log | When a job is running | Runner => Puma => file storage | `#{ROOT_PATH}/gitlab-ci/builds/#{YYYY_mm}/#{project_id}/#{job_id}.log` |
| 2: archiving | archived log | After a job is finished | Sidekiq moves log to artifacts folder | `#{ROOT_PATH}/gitlab-rails/shared/artifacts/#{disk_hash}/#{YYYY_mm_dd}/#{job_id}/#{job_artifact_id}/job.log` |
| 3: uploading | archived log | After a log is archived | Sidekiq moves archived log to [object storage](#uploading-logs-to-object-storage) (if configured) | `#{bucket_name}/#{disk_hash}/#{YYYY_mm_dd}/#{job_id}/#{job_artifact_id}/job.log` |
The `ROOT_PATH` varies per environment:
- For the Linux package it's `/var/opt/gitlab`.
- For self-compiled installations it's `/home/git/gitlab`.
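As an illustration, the phase 1 (live) path from the data flow table can be assembled from the root path, project ID, and job ID. The `live_log_path` helper below is a hypothetical sketch for illustration only, not something shipped with GitLab:

```shell
#!/bin/sh
# Hypothetical helper (not part of GitLab): print the phase 1 (live) log
# path for a job, following the pattern from the data flow table above.
live_log_path() {
  root_path=$1; project_id=$2; job_id=$3
  # Phase 1 logs are grouped into a year_month (YYYY_mm) directory
  printf '%s/gitlab-ci/builds/%s/%s/%s.log\n' \
    "$root_path" "$(date +%Y_%m)" "$project_id" "$job_id"
}

# For a Linux package installation, project 42, job 1337:
live_log_path /var/opt/gitlab 42 1337
```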
## Changing the job logs local location
{{< alert type="note" >}}
For Docker installations, you can change the path where your data is mounted.
For the Helm chart, use object storage.
{{< /alert >}}
To change the location where the job logs are stored:
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
1. Optional. If you have existing job logs, pause continuous integration data
processing by temporarily stopping Sidekiq:
```shell
sudo gitlab-ctl stop sidekiq
```
1. Set the new storage location in `/etc/gitlab/gitlab.rb`:
```ruby
gitlab_ci['builds_directory'] = '/mnt/gitlab-ci/builds'
```
1. Save the file and reconfigure GitLab:
```shell
sudo gitlab-ctl reconfigure
```
1. Use `rsync` to move job logs from the current location to the new location:
```shell
sudo rsync -avzh --remove-source-files --ignore-existing --progress /var/opt/gitlab/gitlab-ci/builds/ /mnt/gitlab-ci/builds/
```
Use `--ignore-existing` so you don't overwrite new job logs with older versions of the same log.
1. If you opted to pause the continuous integration data processing, you can
start Sidekiq again:
```shell
sudo gitlab-ctl start sidekiq
```
1. Remove the old job logs storage location:
```shell
sudo rm -rf /var/opt/gitlab/gitlab-ci/builds
```
{{< /tab >}}
{{< tab title="Self-compiled (source)" >}}
1. Optional. If you have existing job logs, pause continuous integration data
processing by temporarily stopping Sidekiq:
```shell
# For systems running systemd
sudo systemctl stop gitlab-sidekiq
# For systems running SysV init
sudo service gitlab stop
```
1. Edit `/home/git/gitlab/config/gitlab.yml` to set the new storage location:
```yaml
production: &base
  gitlab_ci:
    builds_path: /mnt/gitlab-ci/builds
```
1. Save the file and restart GitLab:
```shell
# For systems running systemd
sudo systemctl restart gitlab.target
# For systems running SysV init
sudo service gitlab restart
```
1. Use `rsync` to move job logs from the current location to the new location:
```shell
sudo rsync -avzh --remove-source-files --ignore-existing --progress /home/git/gitlab/builds/ /mnt/gitlab-ci/builds/
```
Use `--ignore-existing` so you don't overwrite new job logs with older versions of the same log.
1. If you opted to pause the continuous integration data processing, you can
start Sidekiq again:
```shell
# For systems running systemd
sudo systemctl start gitlab-sidekiq
# For systems running SysV init
sudo service gitlab start
```
1. Remove the old job logs storage location:
```shell
sudo rm -rf /home/git/gitlab/builds
```
{{< /tab >}}
{{< /tabs >}}
## Uploading logs to object storage
Archived logs are considered [job artifacts](job_artifacts.md).
Therefore, when you [set up the object storage integration](job_artifacts.md#using-object-storage),
job logs are automatically migrated to it along with the other job artifacts.
See "Phase 3: uploading" in [Data flow](#data-flow) to learn about the process.
## Maximum log file size
The job log file size limit in GitLab is 100 megabytes by default.
Any job whose log exceeds the limit is marked as failed and dropped by the runner.
For more details, see [Maximum file size for job logs](../instance_limits.md#maximum-file-size-for-job-logs).
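To see which archived logs already exceed the default limit, you could wrap `find` in a small helper. The `large_job_logs` name and the example path are assumptions for illustration, not GitLab tooling:

```shell
#!/bin/sh
# large_job_logs DIR: list archived job logs under DIR that exceed the
# 100 MB default limit (find's "+100M" means strictly larger than 100 MiB).
large_job_logs() {
  find "$1" -name "job.log" -size +100M
}

# Example for a Linux package installation:
# large_job_logs /var/opt/gitlab/gitlab-rails/shared/artifacts
```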
## Prevent local disk usage
To avoid any local disk usage for job logs, use one of the following options:
- Turn on [incremental logging](#configure-incremental-logging).
- Set the [job logs location](#changing-the-job-logs-local-location)
to an NFS drive.
## How to remove job logs
There isn't a way to automatically expire old job logs. However, it's safe to remove
them if they're taking up too much space. If you remove the logs manually, the
job output in the UI is empty.
For details on how to delete job logs by using GitLab CLI,
see [Delete job logs](../../user/storage_management_automation.md#delete-job-logs).
Alternatively, you can delete job logs with shell commands. For example, to delete all job logs older than 60 days, run the following
command from a shell in your GitLab instance.
{{< alert type="note" >}}
For the Helm chart, use the storage management tools provided with your object
storage.
{{< /alert >}}
{{< alert type="warning" >}}
The following command permanently deletes the log files and is irreversible.
{{< /alert >}}
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
```shell
find /var/opt/gitlab/gitlab-rails/shared/artifacts -name "job.log" -mtime +60 -delete
```
{{< /tab >}}
{{< tab title="Docker" >}}
Assuming you mounted `/var/opt/gitlab` to `/srv/gitlab`:
```shell
find /srv/gitlab/gitlab-rails/shared/artifacts -name "job.log" -mtime +60 -delete
```
{{< /tab >}}
{{< tab title="Self-compiled (source)" >}}
```shell
find /home/git/gitlab/shared/artifacts -name "job.log" -mtime +60 -delete
```
{{< /tab >}}
{{< /tabs >}}
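Because the deletion is irreversible, you might preview the matching files first by dropping the `-delete` flag. A small wrapper could look like the following sketch (the `stale_job_logs` name is an assumption, not a GitLab tool):

```shell
#!/bin/sh
# stale_job_logs DIR DAYS: list job logs under DIR last modified more than
# DAYS days ago, without deleting anything. Append -delete to the find
# command only after reviewing the output.
stale_job_logs() {
  find "$1" -name "job.log" -mtime +"$2"
}

# Example for a Linux package installation:
# stale_job_logs /var/opt/gitlab/gitlab-rails/shared/artifacts 60
```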
After the logs are deleted, you can find any broken file references by running
the Rake task that checks the
[integrity of the uploaded files](../raketasks/check.md#uploaded-files-integrity).
For more information, see how to
[delete references to missing artifacts](../raketasks/check.md#delete-references-to-missing-artifacts).
## Incremental logging
Incremental logging changes how job logs are processed and stored, improving performance in scaled-out deployments.
By default, job logs are sent from GitLab Runner in chunks and cached temporarily on disk. After the job completes, a background job archives the log to the artifacts directory or to object storage if configured.
With incremental logging, logs are stored in Redis and a persistent store instead of file storage. This approach:
- Prevents local disk usage for job logs.
- Eliminates the need for NFS sharing between Rails and Sidekiq servers.
- Improves performance in multi-node installations.
The incremental logging process uses Redis as temporary storage and follows this flow:
1. The runner picks a job from GitLab.
1. The runner sends a piece of log to GitLab.
1. GitLab appends the data to Redis in the `Gitlab::Redis::TraceChunks` namespace.
1. After the data in Redis reaches 128 KB, the data is flushed to a persistent store.
1. The previous steps repeat until the job is finished.
1. After the job is finished, GitLab schedules a Sidekiq worker to archive the log.
1. The Sidekiq worker archives the log to object storage and cleans up temporary data.
Redis Cluster is not supported with incremental logging.
For more information, see [issue 224171](https://gitlab.com/gitlab-org/gitlab/-/issues/224171).
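The 128 KB flush rule in the flow above can be illustrated with a toy sketch. This is not GitLab code; it only mimics the handoff from temporary storage (Redis) to the persistent store using plain files:

```shell
#!/bin/sh
# Toy illustration of the incremental logging flush rule: buffer incoming
# log chunks, and flush to the persistent store once 128 KB accumulates.
CHUNK_LIMIT=131072  # 128 KB

append_chunk() {  # append_chunk BUFFER_FILE STORE_FILE CHUNK_FILE
  cat "$3" >> "$1"
  size=$(wc -c < "$1" | tr -d ' ')
  if [ "$size" -ge "$CHUNK_LIMIT" ]; then
    cat "$1" >> "$2"  # flush the buffered data to the persistent store
    : > "$1"          # reset the temporary buffer
  fi
}
```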
### Configure incremental logging
Before you turn on incremental logging, you must [configure object storage](job_artifacts.md#using-object-storage) for CI/CD artifacts, logs, and builds. After incremental logging is turned on, files cannot be written to disk, and there is no protection against misconfiguration.
When you turn on incremental logging, running jobs' logs continue to be written to disk, but new jobs use incremental logging.
When you turn off incremental logging, running jobs continue to use incremental logging, but new jobs write to the disk.
To configure incremental logging:
- Use the setting in the [Admin area](../settings/continuous_integration.md#access-job-log-settings) or the [Settings API](../../api/settings.md).
---
stage: Verify
group: Pipeline Execution
info: To determine the technical writer assigned to the Stage/Group associated with
  this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: GitLab CI/CD instance configuration
description: Manage GitLab CI/CD configuration.
breadcrumbs:
- doc
- administration
- cicd
---
{{< details >}}
- Tier: Free, Premium, Ultimate
- Offering: GitLab Self-Managed
{{< /details >}}
GitLab administrators can manage the GitLab CI/CD configuration for their instance.
## Disable GitLab CI/CD in new projects
GitLab CI/CD is enabled by default in all new projects on an instance. You can set
CI/CD to be disabled by default in new projects by modifying the settings in:
- `gitlab.yml` for self-compiled installations.
- `gitlab.rb` for Linux package installations.
Existing projects that already had CI/CD enabled are unchanged. Also, this setting only changes
the project default, so project owners [can still enable CI/CD in the project settings](../../ci/pipelines/settings.md#disable-gitlab-cicd-pipelines).
For self-compiled installations:
1. Open `gitlab.yml` with your editor and set `builds` to `false`:
```yaml
## Default project features settings
default_projects_features:
  issues: true
  merge_requests: true
  wiki: true
  snippets: false
  builds: false
```
1. Save the `gitlab.yml` file.
1. Restart GitLab:
```shell
sudo service gitlab restart
```
For Linux package installations:
1. Edit `/etc/gitlab/gitlab.rb` and add this line:
```ruby
gitlab_rails['gitlab_default_projects_features_builds'] = false
```
1. Save the `/etc/gitlab/gitlab.rb` file.
1. Reconfigure GitLab:
```shell
sudo gitlab-ctl reconfigure
```
## Set the `needs` job limit
{{< details >}}
- Tier: Free, Premium, Ultimate
- Offering: GitLab Self-Managed
{{< /details >}}
The maximum number of jobs that can be defined in `needs` defaults to 50.
A GitLab administrator with [access to the GitLab Rails console](../operations/rails_console.md#starting-a-rails-console-session)
can choose a custom limit. For example, to set the limit to `100`:
```ruby
Plan.default.actual_limits.update!(ci_needs_size_limit: 100)
```
To disable `needs` dependencies, set the limit to `0`. Pipelines with jobs
configured to use `needs` then return the error `job can only need 0 others`.
## Change maximum scheduled pipeline frequency
[Scheduled pipelines](../../ci/pipelines/schedules.md) can be configured with any [cron value](../../topics/cron/_index.md),
but they do not always run exactly when scheduled. An internal process, called the
_pipeline schedule worker_, queues all the scheduled pipelines, but does not
run continuously. The worker runs on its own schedule, and scheduled pipelines that
are ready to start are only queued the next time the worker runs. Scheduled pipelines
can't run more frequently than the worker.
The default frequency of the pipeline schedule worker is `3-59/10 * * * *` (every ten minutes,
starting with `0:03`, `0:13`, `0:23`, and so on). The default frequency for GitLab.com
is listed in the [GitLab.com settings](../../user/gitlab_com/_index.md#cicd).
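The minute field `3-59/10` expands to every tenth minute starting at minute 3, which you can confirm with a one-liner:

```shell
# Minutes matched by the default worker schedule 3-59/10
seq 3 10 59
```

This prints 3, 13, 23, 33, 43, and 53, matching the `0:03`, `0:13`, `0:23` sequence described above.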
To change the frequency of the pipeline schedule worker:
1. Edit the `gitlab_rails['pipeline_schedule_worker_cron']` value in your instance's `gitlab.rb` file.
1. [Reconfigure GitLab](../restart_gitlab.md#reconfigure-a-linux-package-installation) for the changes to take effect.
For example, to set the maximum frequency of pipelines to twice a day, set `pipeline_schedule_worker_cron`
to a cron value of `0 */12 * * *` (`00:00` and `12:00` every day).
When many pipeline schedules run at the same time, additional delays can occur.
The pipeline schedule worker processes pipelines in [batches](https://gitlab.com/gitlab-org/gitlab/-/blob/3426be1b93852c5358240c5df40970c0ddfbdb2a/app/workers/pipeline_schedule_worker.rb#L13-14)
with a small delay between each batch to distribute system load. This can cause pipeline
schedules to start several minutes after their scheduled time.
## Disaster recovery
You can disable some important but computationally expensive parts of the application
to relieve stress on the database during ongoing downtime.
### Disable fair scheduling on instance runners
When clearing a large backlog of jobs, you can temporarily enable the `ci_queueing_disaster_recovery_disable_fair_scheduling`
[feature flag](../feature_flags/_index.md). This flag disables fair scheduling
on instance runners, which reduces system resource usage on the `jobs/request` endpoint.
When enabled, jobs are processed in the order they were put in the system, instead of
being balanced across many projects.
### Disable compute quota enforcement
To disable the enforcement of [compute minutes quotas](compute_minutes.md) on instance runners, you can temporarily
enable the `ci_queueing_disaster_recovery_disable_quota` [feature flag](../feature_flags/_index.md).
This flag reduces system resource usage on the `jobs/request` endpoint.
When enabled, jobs created in the last hour can run in projects which are out of quota.
Earlier jobs are already canceled by a periodic background worker (`StuckCiJobsWorker`).
---
stage: Verify
group: Pipeline Execution
info: To determine the technical writer assigned to the Stage/Group associated with
  this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: CI/CD maintenance console commands
breadcrumbs:
- doc
- administration
- cicd
---
{{< details >}}
- Tier: Free, Premium, Ultimate
- Offering: GitLab Self-Managed
{{< /details >}}
The following commands are run in the [Rails console](../operations/rails_console.md#starting-a-rails-console-session).
{{< alert type="warning" >}}
Any command that changes data directly could be damaging if not run correctly, or under the right conditions.
We highly recommend running them in a test environment with a backup of the instance ready to be restored, just in case.
{{< /alert >}}
## Cancel all running pipelines and their jobs
```ruby
admin = User.find(user_id) # replace user_id with the id of the administrator canceling the pipelines
# Iterate over each cancelable pipeline
Ci::Pipeline.cancelable.find_each do |pipeline|
  Ci::CancelPipelineService.new(
    pipeline: pipeline,
    current_user: admin,
    cascade_to_children: false # the children are included in the outer loop
  ).execute
end
```
## Cancel stuck pending pipelines
```ruby
project = Project.find_by_full_path('<project_path>')
Ci::Pipeline.where(project_id: project.id).where(status: 'pending').count
Ci::Pipeline.where(project_id: project.id).where(status: 'pending').each {|p| p.cancel if p.stuck?}
Ci::Pipeline.where(project_id: project.id).where(status: 'pending').count
```
## Try merge request integration
```ruby
project = Project.find_by_full_path('<project_path>')
mr = project.merge_requests.find_by(iid: <merge_request_iid>)
mr.project.try(:ci_integration)
```
## Validate the `.gitlab-ci.yml` file
```ruby
project = Project.find_by_full_path('<project_path>')
content = project.ci_config_for(project.repository.root_ref_sha)
Gitlab::Ci::Lint.new(project: project, current_user: User.first).validate(content)
```
## Disable AutoDevOps on Existing Projects
```ruby
Project.all.each do |p|
p.auto_devops_attributes={"enabled"=>"0"}
p.save
end
```
## Run pipeline schedules manually
You can run pipeline schedules manually through the Rails console to reveal any errors that are usually not visible.
```ruby
# schedule_id can be obtained from Edit Pipeline Schedule page
schedule = Ci::PipelineSchedule.find_by(id: <schedule_id>)
# Select the user that you want to run the schedule for
user = User.find_by_username('<username>')
# Run the schedule
ps = Ci::CreatePipelineService.new(schedule.project, user, ref: schedule.ref).execute!(:schedule, ignore_skip_ci: true, save_on_errors: false, schedule: schedule)
```
<!--- start_remove The following content will be removed on remove_date: '2025-08-15' -->
## Obtain runners registration token (deprecated)
{{< alert type="warning" >}}
The option to pass runner registration tokens and support for certain configuration arguments are
[deprecated](https://gitlab.com/gitlab-org/gitlab/-/issues/380872) in GitLab 15.6 and is planned for removal in GitLab 20.0.
Use the [runner creation workflow](https://docs.gitlab.com/runner/register/#register-with-a-runner-authentication-token)
to generate an authentication token to register runners. This process provides full
traceability of runner ownership and enhances your runner fleet's security.
For more information, see
[Migrating to the new runner registration workflow](../../ci/runners/new_creation_workflow.md).
{{< /alert >}}
Prerequisites:
- Runner registration tokens must be [enabled](../settings/continuous_integration.md#control-runner-registration) in the **Admin** area.
```ruby
Gitlab::CurrentSettings.current_application_settings.runners_registration_token
```
## Seed runners registration token (deprecated)
{{< alert type="warning" >}}
The option to pass runner registration tokens and support for certain configuration arguments are
[deprecated](https://gitlab.com/gitlab-org/gitlab/-/issues/380872) in GitLab 15.6 and is planned for removal in GitLab 20.0.
Use the [runner creation workflow](https://docs.gitlab.com/runner/register/#register-with-a-runner-authentication-token)
to generate an authentication token to register runners. This process provides full
traceability of runner ownership and enhances your runner fleet's security.
For more information, see
[Migrating to the new runner registration workflow](../../ci/runners/new_creation_workflow.md).
{{< /alert >}}
```ruby
appSetting = Gitlab::CurrentSettings.current_application_settings
appSetting.set_runners_registration_token('<new-runners-registration-token>')
appSetting.save!
```
<!--- end_remove -->
---
stage: Verify
group: Mobile DevOps
info: To determine the technical writer assigned to the Stage/Group associated with
  this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Secure Files administration
breadcrumbs:
- doc
- administration
- cicd
---
{{< details >}}
- Tier: Free, Premium, Ultimate
- Offering: GitLab Self-Managed
{{< /details >}}
{{< history >}}
- [Generally available](https://gitlab.com/gitlab-org/gitlab/-/issues/350748) and feature flag `ci_secure_files` removed in GitLab 15.7.
{{< /history >}}
You can securely store up to 100 files for use in CI/CD pipelines as secure files.
These files are stored securely outside of your project's repository and are not version controlled.
It is safe to store sensitive information in these files. Secure files support both plain text
and binary file types, and must be 5 MB or less.
The storage location of these files can be configured using the options described below,
but the default locations are:
- `/var/opt/gitlab/gitlab-rails/shared/ci_secure_files` for installations using the Linux package.
- `/home/git/gitlab/shared/ci_secure_files` for self-compiled installations.
Use [external object storage](https://docs.gitlab.com/charts/advanced/external-object-storage/#lfs-artifacts-uploads-packages-external-diffs-terraform-state-dependency-proxy)
configuration for [GitLab Helm chart](https://docs.gitlab.com/charts/) installations.
## Disabling Secure Files
You can disable Secure Files across the entire GitLab instance. You might want to disable
Secure Files to reduce disk space, or to remove access to the feature.
To disable Secure Files, follow the steps below according to your installation.
Prerequisites:
- You must be an administrator.
**For Linux package installations**
1. Edit `/etc/gitlab/gitlab.rb` and add the following line:
```ruby
gitlab_rails['ci_secure_files_enabled'] = false
```
1. Save the file and [reconfigure GitLab](../restart_gitlab.md#reconfigure-a-linux-package-installation).
**For self-compiled installations**
1. Edit `/home/git/gitlab/config/gitlab.yml` and add or amend the following lines:
```yaml
ci_secure_files:
  enabled: false
```
1. Save the file and [restart GitLab](../restart_gitlab.md#self-compiled-installations) for the changes to take effect.
## Using local storage
The default configuration uses local storage. To change the location where Secure Files
are stored locally, follow the steps below.
**For Linux package installations**
1. To change the storage path for example to `/mnt/storage/ci_secure_files`, edit
`/etc/gitlab/gitlab.rb` and add the following line:
```ruby
gitlab_rails['ci_secure_files_storage_path'] = "/mnt/storage/ci_secure_files"
```
1. Save the file and [reconfigure GitLab](../restart_gitlab.md#reconfigure-a-linux-package-installation).
**For self-compiled installations**
1. To change the storage path for example to `/mnt/storage/ci_secure_files`, edit
`/home/git/gitlab/config/gitlab.yml` and add or amend the following lines:
```yaml
ci_secure_files:
  enabled: true
  storage_path: /mnt/storage/ci_secure_files
```
1. Save the file and [restart GitLab](../restart_gitlab.md#self-compiled-installations)
for the changes to take effect.
## Using object storage
{{< details >}}
- Tier: Free, Premium, Ultimate
- Offering: GitLab Self-Managed
{{< /details >}}
Instead of storing Secure Files on disk, you should use [one of the supported object storage options](../object_storage.md#supported-object-storage-providers).
This configuration relies on valid object storage credentials already being configured.
### Consolidated object storage
{{< history >}}
- Support for consolidated object storage was [introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/149873) in GitLab 17.0.
{{< /history >}}
Using the [consolidated form](../object_storage.md#configure-a-single-storage-connection-for-all-object-types-consolidated-form)
of the object storage is recommended.
### Storage-specific object storage
The following settings are:
- Nested under `ci_secure_files:` and then `object_store:` on self-compiled installations.
- Prefixed by `ci_secure_files_object_store_` on Linux package installations.
| Setting | Description | Default |
|---------|-------------|---------|
| `enabled` | Enable/disable object storage | `false` |
| `remote_directory` | The bucket name where Secure Files are stored | |
| `connection` | Various connection options described below | |
### S3-compatible connection settings
See [the available connection settings for different providers](../object_storage.md#configure-the-connection-settings).
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
1. Edit `/etc/gitlab/gitlab.rb` and add the following lines, but using
the values you want:
```ruby
gitlab_rails['ci_secure_files_object_store_enabled'] = true
gitlab_rails['ci_secure_files_object_store_remote_directory'] = "ci_secure_files"
gitlab_rails['ci_secure_files_object_store_connection'] = {
'provider' => 'AWS',
'region' => 'eu-central-1',
'aws_access_key_id' => 'AWS_ACCESS_KEY_ID',
'aws_secret_access_key' => 'AWS_SECRET_ACCESS_KEY'
}
```
{{< alert type="note" >}}
If you are using AWS IAM profiles, be sure to omit the AWS access key and secret access key/value pairs:
{{< /alert >}}
```ruby
gitlab_rails['ci_secure_files_object_store_connection'] = {
'provider' => 'AWS',
'region' => 'eu-central-1',
'use_iam_profile' => true
}
```
1. Save the file and reconfigure GitLab:
```shell
sudo gitlab-ctl reconfigure
```
1. [Migrate any existing local states to the object storage](#migrate-to-object-storage).
{{< /tab >}}
{{< tab title="Self-compiled (source)" >}}
1. Edit `/home/git/gitlab/config/gitlab.yml` and add or amend the following lines:
```yaml
ci_secure_files:
enabled: true
object_store:
enabled: true
remote_directory: "ci_secure_files" # The bucket name
connection:
provider: AWS # Only AWS supported at the moment
aws_access_key_id: AWS_ACCESS_KEY_ID
aws_secret_access_key: AWS_SECRET_ACCESS_KEY
region: eu-central-1
```
1. Save the file and restart GitLab:
```shell
# For systems running systemd
sudo systemctl restart gitlab.target
# For systems running SysV init
sudo service gitlab restart
```
1. [Migrate any existing local states to the object storage](#migrate-to-object-storage).
{{< /tab >}}
{{< /tabs >}}
### Migrate to object storage
{{< history >}}
- [Introduced](https://gitlab.com/gitlab-org/incubation-engineering/mobile-devops/readme/-/issues/125) in GitLab 16.1.
{{< /history >}}
{{< alert type="warning" >}}
It's not possible to migrate Secure Files from object storage back to local storage,
so proceed with caution.
{{< /alert >}}
To migrate Secure Files to object storage, follow the instructions below.
- For Linux package installations:
```shell
sudo gitlab-rake gitlab:ci_secure_files:migrate
```
- For self-compiled installations:
```shell
sudo -u git -H bundle exec rake gitlab:ci_secure_files:migrate RAILS_ENV=production
```
|
---
stage: Verify
group: Mobile DevOps
info: To determine the technical writer assigned to the Stage/Group associated with
this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Secure Files administration
---
{{< details >}}
- Tier: Free, Premium, Ultimate
- Offering: GitLab Self-Managed
{{< /details >}}
{{< history >}}
- [Generally available](https://gitlab.com/gitlab-org/gitlab/-/issues/350748) and feature flag `ci_secure_files` removed in GitLab 15.7.
{{< /history >}}
You can securely store up to 100 files for use in CI/CD pipelines as secure files.
These files are stored securely outside of your project's repository and are not version controlled.
It is safe to store sensitive information in these files. Secure files support both plain text
and binary file types, and must be 5 MB or less.
The storage location of these files can be configured using the options described below,
but the default locations are:
- `/var/opt/gitlab/gitlab-rails/shared/ci_secure_files` for installations using the Linux package.
- `/home/git/gitlab/shared/ci_secure_files` for self-compiled installations.
Use [external object storage](https://docs.gitlab.com/charts/advanced/external-object-storage/#lfs-artifacts-uploads-packages-external-diffs-terraform-state-dependency-proxy)
configuration for [GitLab Helm chart](https://docs.gitlab.com/charts/) installations.
## Disabling Secure Files
You can disable Secure Files across the entire GitLab instance. You might want to disable
Secure Files to reduce disk space, or to remove access to the feature.
To disable Secure Files, follow the steps below according to your installation.
Prerequisites:
- You must be an administrator.
**For Linux package installations**
1. Edit `/etc/gitlab/gitlab.rb` and add the following line:
```ruby
gitlab_rails['ci_secure_files_enabled'] = false
```
1. Save the file and [reconfigure GitLab](../restart_gitlab.md#reconfigure-a-linux-package-installation).
**For self-compiled installations**
1. Edit `/home/git/gitlab/config/gitlab.yml` and add or amend the following lines:
```yaml
ci_secure_files:
enabled: false
```
1. Save the file and [restart GitLab](../restart_gitlab.md#self-compiled-installations) for the changes to take effect.
## Using local storage
The default configuration uses local storage. To change the location where Secure Files
are stored locally, follow the steps below.
**For Linux package installations**
1. To change the storage path for example to `/mnt/storage/ci_secure_files`, edit
`/etc/gitlab/gitlab.rb` and add the following line:
```ruby
gitlab_rails['ci_secure_files_storage_path'] = "/mnt/storage/ci_secure_files"
```
1. Save the file and [reconfigure GitLab](../restart_gitlab.md#reconfigure-a-linux-package-installation).
**For self-compiled installations**
1. To change the storage path for example to `/mnt/storage/ci_secure_files`, edit
`/home/git/gitlab/config/gitlab.yml` and add or amend the following lines:
```yaml
ci_secure_files:
enabled: true
storage_path: /mnt/storage/ci_secure_files
```
1. Save the file and [restart GitLab](../restart_gitlab.md#self-compiled-installations)
for the changes to take effect.
## Using object storage
{{< details >}}
- Tier: Free, Premium, Ultimate
- Offering: GitLab Self-Managed
{{< /details >}}
Instead of storing Secure Files on disk, you should use [one of the supported object storage options](../object_storage.md#supported-object-storage-providers).
This configuration requires that valid object storage credentials are already configured.
### Consolidated object storage
{{< history >}}
- Support for consolidated object storage was [introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/149873) in GitLab 17.0.
{{< /history >}}
Using the [consolidated form](../object_storage.md#configure-a-single-storage-connection-for-all-object-types-consolidated-form)
of the object storage is recommended.
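As an illustrative sketch for a Linux package installation (the region and bucket name are placeholder values you must replace), the relevant consolidated settings in `/etc/gitlab/gitlab.rb` might look like:

```ruby
# Sketch of consolidated object storage settings. The bucket and region are
# placeholders; other object types would declare their own buckets alongside
# ci_secure_files.
gitlab_rails['object_store']['enabled'] = true
gitlab_rails['object_store']['connection'] = {
  'provider' => 'AWS',
  'region' => 'eu-central-1',
  'use_iam_profile' => true
}
gitlab_rails['object_store']['objects']['ci_secure_files']['bucket'] = 'gitlab-ci-secure-files'
```

With the consolidated form, Secure Files does not need the separate storage-specific `ci_secure_files_object_store_*` settings described in the next section.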
### Storage-specific object storage
The following settings are:
- Nested under `ci_secure_files:` and then `object_store:` on self-compiled installations.
- Prefixed by `ci_secure_files_object_store_` on Linux package installations.
| Setting | Description | Default |
|---------|-------------|---------|
| `enabled` | Enable/disable object storage | `false` |
| `remote_directory` | The bucket name where Secure Files are stored | |
| `connection` | Various connection options described below | |
### S3-compatible connection settings
See [the available connection settings for different providers](../object_storage.md#configure-the-connection-settings).
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
1. Edit `/etc/gitlab/gitlab.rb` and add the following lines, replacing the example
   values with your own:
```ruby
gitlab_rails['ci_secure_files_object_store_enabled'] = true
gitlab_rails['ci_secure_files_object_store_remote_directory'] = "ci_secure_files"
gitlab_rails['ci_secure_files_object_store_connection'] = {
'provider' => 'AWS',
'region' => 'eu-central-1',
'aws_access_key_id' => 'AWS_ACCESS_KEY_ID',
'aws_secret_access_key' => 'AWS_SECRET_ACCESS_KEY'
}
```
{{< alert type="note" >}}
If you use AWS IAM profiles, omit `aws_access_key_id` and `aws_secret_access_key` and set `use_iam_profile` to `true`:
{{< /alert >}}
```ruby
gitlab_rails['ci_secure_files_object_store_connection'] = {
'provider' => 'AWS',
'region' => 'eu-central-1',
'use_iam_profile' => true
}
```
1. Save the file and reconfigure GitLab:
```shell
sudo gitlab-ctl reconfigure
```
1. [Migrate any existing local states to the object storage](#migrate-to-object-storage).
{{< /tab >}}
{{< tab title="Self-compiled (source)" >}}
1. Edit `/home/git/gitlab/config/gitlab.yml` and add or amend the following lines:
```yaml
ci_secure_files:
enabled: true
object_store:
enabled: true
remote_directory: "ci_secure_files" # The bucket name
connection:
provider: AWS # Only AWS supported at the moment
aws_access_key_id: AWS_ACCESS_KEY_ID
aws_secret_access_key: AWS_SECRET_ACCESS_KEY
region: eu-central-1
```
1. Save the file and restart GitLab:
```shell
# For systems running systemd
sudo systemctl restart gitlab.target
# For systems running SysV init
sudo service gitlab restart
```
1. [Migrate any existing local states to the object storage](#migrate-to-object-storage).
{{< /tab >}}
{{< /tabs >}}
### Migrate to object storage
{{< history >}}
- [Introduced](https://gitlab.com/gitlab-org/incubation-engineering/mobile-devops/readme/-/issues/125) in GitLab 16.1.
{{< /history >}}
{{< alert type="warning" >}}
It's not possible to migrate Secure Files from object storage back to local storage,
so proceed with caution.
{{< /alert >}}
To migrate Secure Files to object storage, follow the instructions below.
- For Linux package installations:
```shell
sudo gitlab-rake gitlab:ci_secure_files:migrate
```
- For self-compiled installations:
```shell
sudo -u git -H bundle exec rake gitlab:ci_secure_files:migrate RAILS_ENV=production
```
---
stage: Verify
group: Pipeline Execution
info: To determine the technical writer assigned to the Stage/Group associated with
  this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Job artifact troubleshooting for administrators
---
{{< details >}}
- Tier: Free, Premium, Ultimate
- Offering: GitLab Self-Managed
{{< /details >}}
When administering job artifacts, you might encounter the following issues.
## Job artifacts using too much disk space
Job artifacts can fill up your disk space quicker than expected. Some possible
reasons are:
- Users have configured job artifacts expiration to be longer than necessary.
- The number of jobs run, and hence artifacts generated, is higher than expected.
- Job logs are larger than expected, and have accumulated over time.
- The file system might run out of inodes because
[empty directories are left behind by artifact housekeeping](https://gitlab.com/gitlab-org/gitlab/-/issues/17465).
[The Rake task for orphaned artifact files](../raketasks/cleanup.md#remove-orphan-artifact-files)
removes these.
- Artifact files might be left on disk and not deleted by housekeeping. Run the
[Rake task for orphaned artifact files](../raketasks/cleanup.md#remove-orphan-artifact-files)
to remove these. This script should always find work to do because it also removes empty directories (see the previous reason).
- [Artifact housekeeping was changed significantly](#housekeeping-disabled-in-gitlab-150-to-152), and you might need to enable a feature flag to use the updated system.
- The [keep latest artifacts from most recent success jobs](../../ci/jobs/job_artifacts.md#keep-artifacts-from-most-recent-successful-jobs)
feature is enabled.
In these and other cases, identify the projects most responsible
for disk space usage, figure out what types of artifacts are using the most
space, and in some cases, manually delete job artifacts to reclaim disk space.
### Artifacts housekeeping
Artifacts housekeeping is the process that identifies which artifacts are expired
and can be deleted.
#### Housekeeping disabled in GitLab 15.0 to 15.2
Artifact housekeeping was significantly improved in GitLab 15.0. The improvements were introduced behind [feature flags](../feature_flags/_index.md) that were disabled by default, then enabled by default [in GitLab 15.3](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/92931).
If artifact housekeeping does not seem to be working in GitLab 15.0 to GitLab 15.2, check whether the feature flags are enabled.
To check if the feature flags are enabled:
1. Start a [Rails console](../operations/rails_console.md#starting-a-rails-console-session).
1. Check if the feature flags are enabled.
```ruby
Feature.enabled?(:ci_detect_wrongly_expired_artifacts)
Feature.enabled?(:ci_update_unlocked_job_artifacts)
Feature.enabled?(:ci_job_artifacts_backlog_work)
```
1. If any of the feature flags are disabled, enable them:
```ruby
Feature.enable(:ci_detect_wrongly_expired_artifacts)
Feature.enable(:ci_update_unlocked_job_artifacts)
Feature.enable(:ci_job_artifacts_backlog_work)
```
These changes include switching artifacts from `unlocked` to `locked` if
they [should be retained](../../ci/jobs/job_artifacts.md#keep-artifacts-from-most-recent-successful-jobs).
#### Artifacts with `unknown` status
Artifacts created before housekeeping was updated have a status of `unknown`. After they expire,
these artifacts are not processed by the new housekeeping.
You can check the database to confirm if your instance has artifacts with the `unknown` status:
1. Start a database console:
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
```shell
sudo gitlab-psql
```
{{< /tab >}}
{{< tab title="Helm chart (Kubernetes)" >}}
```shell
# Find the toolbox pod
kubectl --namespace <namespace> get pods -lapp=toolbox
# Connect to the PostgreSQL console
kubectl exec -it <toolbox-pod-name> -- /srv/gitlab/bin/rails dbconsole --include-password --database main
```
{{< /tab >}}
{{< tab title="Docker" >}}
```shell
sudo docker exec -it <container_name> /bin/bash
gitlab-psql
```
{{< /tab >}}
{{< tab title="Self-compiled (source)" >}}
```shell
sudo -u git -H psql -d gitlabhq_production
```
{{< /tab >}}
{{< /tabs >}}
1. Run the following query:
```sql
select expire_at, file_type, locked, count(*) from ci_job_artifacts
where expire_at is not null and
file_type != 3
group by expire_at, file_type, locked having count(*) > 1;
```
If records are returned, then there are artifacts which the housekeeping job
is unable to process. For example:
```plaintext
expire_at | file_type | locked | count
-------------------------------+-----------+--------+--------
2021-06-21 22:00:00+00 | 1 | 2 | 73614
2021-06-21 22:00:00+00 | 2 | 2 | 73614
2021-06-21 22:00:00+00 | 4 | 2 | 3522
2021-06-21 22:00:00+00 | 9 | 2 | 32
2021-06-21 22:00:00+00 | 12 | 2 | 163
```
Artifacts with locked status `2` are `unknown`. Check
[issue #346261](https://gitlab.com/gitlab-org/gitlab/-/issues/346261#note_1028871458)
for more details.
#### Clean up `unknown` artifacts
The Sidekiq worker that processes all `unknown` artifacts is enabled by default in
GitLab 15.3 and later. It analyzes the artifacts returned by the previous database query and
determines which should be `locked` or `unlocked`. Artifacts are then deleted
by that worker if needed.
The worker can be enabled on GitLab Self-Managed:
1. Start a [Rails console](../operations/rails_console.md#starting-a-rails-console-session).
1. Check if the feature is enabled.
```ruby
Feature.enabled?(:ci_job_artifacts_backlog_work)
```
1. Enable the feature, if needed:
```ruby
Feature.enable(:ci_job_artifacts_backlog_work)
```
The worker processes 10,000 `unknown` artifacts every seven minutes, or roughly two million
in 24 hours.
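At that rate you can estimate how long a given backlog takes to clear. A quick sketch (the method name is mine; the rate constants come from the paragraph above):

```ruby
# Estimate hours to process a backlog of `unknown` artifacts,
# at 10,000 artifacts per 7-minute worker cycle.
def backlog_clear_hours(unknown_count, per_cycle: 10_000, cycle_minutes: 7)
  cycles = (unknown_count.to_f / per_cycle).ceil
  (cycles * cycle_minutes) / 60.0
end

# Roughly two million artifacts clear in about a day:
puts backlog_clear_hours(2_000_000) # => about 23.3 hours
```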
#### `@final` artifacts not deleted from object store
In GitLab 16.1 and later, artifacts are uploaded directly to their final storage location in the `@final` directory, rather than using a temporary location first.
An issue in GitLab 16.1 and 16.2 causes [artifacts to not be deleted from object storage](https://gitlab.com/gitlab-org/gitlab/-/issues/419920) when they expire.
The cleanup process for expired artifacts does not remove artifacts from the `@final` directory. This issue is fixed in GitLab 16.3 and later.
Administrators of GitLab instances that ran GitLab 16.1 or 16.2 for some time could see an increase
in object storage used by artifacts. Follow this procedure to check for and remove these artifacts.
Removing the files is a two-stage process:
1. [Identify which files have been orphaned](#list-orphaned-job-artifacts).
1. [Delete the identified files from object storage](#delete-orphaned-job-artifacts).
##### List orphaned job artifacts
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
```shell
sudo gitlab-rake gitlab:cleanup:list_orphan_job_artifact_final_objects
```
{{< /tab >}}
{{< tab title="Docker" >}}
```shell
docker exec -it <container-id> bash
gitlab-rake gitlab:cleanup:list_orphan_job_artifact_final_objects
```
Either write the output to a persistent volume mounted in the container, or copy the output file out of the session when the command completes.
{{< /tab >}}
{{< tab title="Self-compiled (source)" >}}
```shell
sudo -u git -H bundle exec rake gitlab:cleanup:list_orphan_job_artifact_final_objects RAILS_ENV=production
```
{{< /tab >}}
{{< tab title="Helm chart (Kubernetes)" >}}
```shell
# find the pod
kubectl get pods --namespace <namespace> -lapp=toolbox
# open a shell in the toolbox pod
kubectl exec -it -c toolbox <toolbox-pod-name> -- bash
gitlab-rake gitlab:cleanup:list_orphan_job_artifact_final_objects
```
When the command completes, copy the file out of the session onto persistent storage.
{{< /tab >}}
{{< /tabs >}}
The Rake task has some additional features that apply to all types of GitLab deployment:
- Scanning object storage can be interrupted. Progress is recorded in Redis, which is used to resume
  scanning from that point.
- By default, the Rake task generates a CSV file:
`/opt/gitlab/embedded/service/gitlab-rails/tmp/orphan_job_artifact_final_objects.csv`
- Set an environment variable to specify a different filename:
```shell
# Packaged GitLab
sudo su -
FILENAME='custom_filename.csv' gitlab-rake gitlab:cleanup:list_orphan_job_artifact_final_objects
```
- If the output file (the default or a specified file) already exists, the task appends entries to it.
- Each row contains the comma-separated fields `object_path,object_size`, with no header row. For example:
```plaintext
35/13/35135aaa6cc23891b40cb3f378c53a17a1127210ce60e125ccf03efcfdaec458/@final/1a/1a/5abfa4ec66f1cc3b681a4d430b8b04596cbd636f13cdff44277211778f26,201
```
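Because each row is `object_path,object_size` with no header, totaling the potentially reclaimable space is straightforward. A sketch (the helper name is mine, not part of GitLab):

```ruby
# Sum the size column (bytes) of the orphan-objects CSV produced by the Rake task.
# Rows look like: <object_path>,<object_size> with no header row.
def orphan_total_bytes(csv_path)
  File.foreach(csv_path).sum { |line| line.split(",")[1].to_i }
end
```

For example, `orphan_total_bytes("/opt/gitlab/embedded/service/gitlab-rails/tmp/orphan_job_artifact_final_objects.csv")` returns the total bytes held by orphaned objects.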
##### Delete orphaned job artifacts
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
```shell
sudo gitlab-rake gitlab:cleanup:delete_orphan_job_artifact_final_objects
```
{{< /tab >}}
{{< tab title="Docker" >}}
```shell
docker exec -it <container-id> bash
gitlab-rake gitlab:cleanup:delete_orphan_job_artifact_final_objects
```
- Copy the output file out of the session when the command completes, or write it to a volume that has been mounted by the container.
{{< /tab >}}
{{< tab title="Self-compiled (source)" >}}
```shell
sudo -u git -H bundle exec rake gitlab:cleanup:delete_orphan_job_artifact_final_objects RAILS_ENV=production
```
{{< /tab >}}
{{< tab title="Helm chart (Kubernetes)" >}}
```shell
# find the pod
kubectl get pods --namespace <namespace> -lapp=toolbox
# open a shell in the toolbox pod
kubectl exec -it -c toolbox <toolbox-pod-name> -- bash
gitlab-rake gitlab:cleanup:delete_orphan_job_artifact_final_objects
```
- When the command completes, copy the file out of the session onto persistent storage.
{{< /tab >}}
{{< /tabs >}}
The following applies to all types of GitLab deployment:
- Specify the input filename using the `FILENAME` variable. By default the script looks for:
`/opt/gitlab/embedded/service/gitlab-rails/tmp/orphan_job_artifact_final_objects.csv`
- As the script deletes files, it writes out a CSV file with the deleted files:
  - The file is in the same directory as the input file.
  - The filename is prefixed with `deleted_from--`. For example: `deleted_from--orphan_job_artifact_final_objects.csv`.
- The rows in the file are: `object_path,object_size,object_generation/version`, for example:
```plaintext
35/13/35135aaa6cc23891b40cb3f378c53a17a1127210ce60e125ccf03efcfdaec458/@final/1a/1a/5abfa4ec66f1cc3b681a4d430b8b04596cbd636f13cdff44277211778f26,201,1711616743796587
```
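As a small sketch, such a row can be split into labeled fields (the helper and field names here are my own labels, not a GitLab API):

```ruby
# Split one row of the deleted_from-- CSV into its three fields:
# object path, size in bytes, and object generation/version.
def parse_deleted_row(line)
  path, size, generation = line.chomp.split(",", 3)
  { path: path, size: size.to_i, generation: generation }
end
```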
### List projects and builds with artifacts with a specific expiration (or no expiration)
Using a [Rails console](../operations/rails_console.md), you can find projects that have job artifacts with either:
- No expiration date.
- An expiration date more than 7 days in the future.
Similar to [deleting artifacts](#delete-old-builds-and-artifacts), use the following example time frames
and alter them as needed:
- `7.days.from_now`
- `10.days.from_now`
- `2.weeks.from_now`
- `3.months.from_now`
- `1.year.from_now`
Each of the following scripts limits the search to 50 results with `.limit(50)`; change this number as needed:
```ruby
# Find builds & projects with artifacts that never expire
builds_with_artifacts_that_never_expire = Ci::Build.with_downloadable_artifacts.where(artifacts_expire_at: nil).limit(50)
builds_with_artifacts_that_never_expire.find_each do |build|
puts "Build with id #{build.id} has artifacts that don't expire and belongs to project #{build.project.full_path}"
end
# Find builds & projects with artifacts that expire after 7 days from today
builds_with_artifacts_that_expire_in_a_week = Ci::Build.with_downloadable_artifacts.where('artifacts_expire_at > ?', 7.days.from_now).limit(50)
builds_with_artifacts_that_expire_in_a_week.find_each do |build|
puts "Build with id #{build.id} has artifacts that expire at #{build.artifacts_expire_at} and belongs to project #{build.project.full_path}"
end
```
### List projects by total size of job artifacts stored
List the top 20 projects, sorted by the total size of job artifacts stored, by
running the following code in the [Rails console](../operations/rails_console.md):
```ruby
include ActionView::Helpers::NumberHelper
ProjectStatistics.order(build_artifacts_size: :desc).limit(20).each do |s|
puts "#{number_to_human_size(s.build_artifacts_size)} \t #{s.project.full_path}"
end
```
You can change the number of projects listed by modifying `.limit(20)` to the
number you want.
### List largest artifacts in a single project
List the 50 largest job artifacts in a single project by running the following
code in the [Rails console](../operations/rails_console.md):
```ruby
include ActionView::Helpers::NumberHelper
project = Project.find_by_full_path('path/to/project')
Ci::JobArtifact.where(project: project).order(size: :desc).limit(50).map { |a| puts "ID: #{a.id} - #{a.file_type}: #{number_to_human_size(a.size)}" }
```
You can change the number of job artifacts listed by modifying `.limit(50)` to
the number you want.
### List artifacts in a single project
List the artifacts for a single project, sorted by artifact size. The output includes the:
- ID of the job that created the artifact
- artifact size
- artifact file type
- artifact creation date
- on-disk location of the artifact
```ruby
p = Project.find_by_id(<project_id>)
arts = Ci::JobArtifact.where(project: p)
list = arts.order(size: :desc).limit(50).each do |art|
puts "Job ID: #{art.job_id} - Size: #{art.size}b - Type: #{art.file_type} - Created: #{art.created_at} - File loc: #{art.file}"
end
```
To change the number of job artifacts listed, change the number in `limit(50)`.
### Delete old builds and artifacts
{{< alert type="warning" >}}
These commands remove data permanently. Before running them in a production environment,
you should try them in a test environment first and make a backup of the instance
that can be restored if needed.
{{< /alert >}}
#### Delete old artifacts for a project
This step also erases artifacts that users have [chosen to keep](../../ci/jobs/job_artifacts.md#with-an-expiry):
```ruby
project = Project.find_by_full_path('path/to/project')
builds_with_artifacts = project.builds.with_downloadable_artifacts
builds_with_artifacts.where("finished_at < ?", 1.year.ago).each_batch do |batch|
batch.each do |build|
Ci::JobArtifacts::DeleteService.new(build).execute
end
batch.update_all(artifacts_expire_at: Time.current)
end
```
#### Delete old artifacts instance wide
This step also erases artifacts that users have [chosen to keep](../../ci/jobs/job_artifacts.md#with-an-expiry):
```ruby
builds_with_artifacts = Ci::Build.with_downloadable_artifacts
builds_with_artifacts.where("finished_at < ?", 1.year.ago).each_batch do |batch|
batch.each do |build|
Ci::JobArtifacts::DeleteService.new(build).execute
end
batch.update_all(artifacts_expire_at: Time.current)
end
```
#### Delete old job logs and artifacts for a project
```ruby
project = Project.find_by_full_path('path/to/project')
builds = project.builds
admin_user = User.find_by(username: 'username')
builds.where("finished_at < ?", 1.year.ago).each_batch do |batch|
batch.each do |build|
print "Ci::Build ID #{build.id}... "
if build.erasable?
Ci::BuildEraseService.new(build, admin_user).execute
puts "Erased"
else
puts "Skipped (Nothing to erase or not erasable)"
end
end
end
```
#### Delete old job logs and artifacts instance wide
```ruby
builds = Ci::Build.all
admin_user = User.find_by(username: 'username')
builds.where("finished_at < ?", 1.year.ago).each_batch do |batch|
batch.each do |build|
print "Ci::Build ID #{build.id}... "
if build.erasable?
Ci::BuildEraseService.new(build, admin_user).execute
puts "Erased"
else
puts "Skipped (Nothing to erase or not erasable)"
end
end
end
```
`1.year.ago` is a Rails [`ActiveSupport::Duration`](https://api.rubyonrails.org/classes/ActiveSupport/Duration.html) method.
Start with a long duration to reduce the risk of accidentally deleting artifacts that are still in use.
Rerun the deletion with shorter durations as needed, for example `3.months.ago`, `2.weeks.ago`, or `7.days.ago`.
The method `erase_erasable_artifacts!` is synchronous, and upon execution the artifacts are immediately removed;
they are not scheduled by a background queue.
### Deleting artifacts does not immediately reclaim disk space
When artifacts are deleted, the process occurs in two phases:
1. **Marked as ready for deletion**: `Ci::JobArtifact` records are removed from the database and
converted to `Ci::DeletedObject` records with a future `pick_up_at` timestamp.
1. **Remove from storage**: The artifact files remain on disk until the `Ci::ScheduleDeleteObjectsCronWorker` worker
processes the `Ci::DeletedObject` records and physically removes the files.
The removal is deliberately limited to prevent overwhelming system resources:
- The worker runs once per hour, at the 16-minute mark.
- It processes objects in batches with a maximum of 20 concurrent jobs.
- Each deleted object has a `pick_up_at` timestamp that determines when it becomes
  eligible for physical deletion.
For large-scale deletions, the physical cleanup can take a significant amount of time
before disk space is fully reclaimed; for very large deletions, it could take several days.
If you need to reclaim disk space quickly, you can expedite artifact deletion.
#### Expedite artifact removal
If you need to reclaim disk space quickly after deleting a large number of artifacts,
you can bypass the standard scheduling limitations and expedite the deletion process.
{{< alert type="warning" >}}
These commands put significant load on your system if you are deleting a large number of artifacts.
{{< /alert >}}
```ruby
# Set the pick_up_date to the current time on all artifacts
# This will mark them for immediate deletion
Ci::DeletedObject.update_all(pick_up_at: Time.current)
# Get the count of artifacts marked for deletion
Ci::DeletedObject.where("pick_up_at < ?", Time.current).count
# Delete the artifacts from disk
while Ci::DeletedObject.where("pick_up_at < ?", Time.current).count > 0
Ci::DeleteObjectsService.new.execute
sleep(10)
end
# Get the count of artifacts marked for deletion (should now be zero)
Ci::DeletedObject.count
```
### Delete old pipelines
{{< alert type="warning" >}}
These commands remove data permanently. Before running them in a production environment,
consider seeking guidance from a Support Engineer. You should also try them in a test environment first
and make a backup of the instance that can be restored if needed.
{{< /alert >}}
Deleting a pipeline also removes that pipeline's:
- Job artifacts
- Job logs
- Job metadata
- Pipeline metadata
Removing job and pipeline metadata can help reduce the size of the CI tables in the database.
The CI tables are usually the largest tables in an instance's database.
#### Delete old pipelines for a project
```ruby
project = Project.find_by_full_path('path/to/project')
user = User.find(1)
project.ci_pipelines.where("finished_at < ?", 1.year.ago).each_batch do |batch|
batch.each do |pipeline|
puts "Erasing pipeline #{pipeline.id}"
Ci::DestroyPipelineService.new(pipeline.project, user).execute(pipeline)
end
end
```
#### Delete old pipelines instance-wide
```ruby
user = User.find(1)
Ci::Pipeline.where("finished_at < ?", 1.year.ago).each_batch do |batch|
batch.each do |pipeline|
puts "Erasing pipeline #{pipeline.id} for project #{pipeline.project_id}"
Ci::DestroyPipelineService.new(pipeline.project, user).execute(pipeline)
end
end
```
## Job artifact upload fails with error 500
If you are using object storage for artifacts and a job artifact fails to upload,
review:
- The job log for an error message similar to:
```plaintext
WARNING: Uploading artifacts as "archive" to coordinator... failed id=12345 responseStatus=500 Internal Server Error status=500 token=abcd1234
```
- The [workhorse log](../logs/_index.md#workhorse-logs) for an error message similar to:
```json
{"error":"MissingRegion: could not find region configuration","level":"error","msg":"error uploading S3 session","time":"2021-03-16T22:10:55-04:00"}
```
In both cases, you might need to add `region` to the job artifact [object storage configuration](../object_storage.md).
## Job artifact upload fails with `500 Internal Server Error (Missing file)`
Bucket names that include folder paths are not supported with [consolidated object storage](../object_storage.md#configure-a-single-storage-connection-for-all-object-types-consolidated-form).
For example, `bucket/path`. If a bucket name has a path in it, you might receive an error similar to:
```plaintext
WARNING: Uploading artifacts as "archive" to coordinator... POST https://gitlab.example.com/api/v4/jobs/job_id/artifacts?artifact_format=zip&artifact_type=archive&expire_in=1+day: 500 Internal Server Error (Missing file)
FATAL: invalid argument
```
If a job artifact fails to upload due to the previous error when using consolidated object storage, make sure you are [using separate buckets](../object_storage.md#use-separate-buckets) for each data type.
## Job artifacts fail to upload with `FATAL: invalid argument` when using Windows mount
If you are using a Windows mount with CIFS for job artifacts, you may see an
`invalid argument` error when the runner attempts to upload artifacts:
```plaintext
WARNING: Uploading artifacts as "dotenv" to coordinator... POST https://<your-gitlab-instance>/api/v4/jobs/<JOB_ID>/artifacts: 500 Internal Server Error id=1296 responseStatus=500 Internal Server Error status=500 token=*****
FATAL: invalid argument
```
To work around this issue, you can try:
- Switching to an ext4 mount instead of CIFS.
- Upgrading to at least Linux kernel 5.15, which contains a number of important bug fixes
  relating to CIFS file leases.
- For older kernels, using the `nolease` mount option to disable file leasing.
For more information, [see the investigation details](https://gitlab.com/gitlab-org/gitlab/-/issues/389995).
## Usage quota shows incorrect artifact storage usage
Sometimes the [artifacts storage usage](../../user/storage_usage_quotas.md) displays an incorrect
value for the total storage space used by artifacts. To recalculate the artifact
usage statistics for all projects in the instance, you can run this background script:
```shell
gitlab-rake gitlab:refresh_project_statistics_build_artifacts_size[https://example.com/path/file.csv]
```
The `https://example.com/path/file.csv` file must list the project IDs for
all projects for which you want to recalculate artifact storage usage. Use this format for the file:
```plaintext
PROJECT_ID
1
2
```
The artifact usage value can fluctuate to `0` while the script is running. After
recalculation, usage should display as expected again.
## Artifact download flow diagrams
The following flow diagrams illustrate how job artifacts work. These
diagrams assume object storage is configured for job artifacts.
### Proxy download disabled
With [`proxy_download` set to `false`](../object_storage.md), GitLab
redirects the runner to download artifacts from object storage with a
pre-signed URL. It is usually faster for runners to fetch from the
source directly so this configuration is generally recommended. It
should also reduce bandwidth usage because the data does not have to be
fetched by GitLab and sent to the runner. However, it does require
giving runners direct access to object storage.
The request flow looks like:
```mermaid
sequenceDiagram
autonumber
participant C as Runner
participant O as Object Storage
participant W as Workhorse
participant R as Rails
participant P as PostgreSQL
C->>+W: GET /api/v4/jobs/:id/artifacts?direct_download=true
Note over C,W: gitlab-ci-token@<CI_JOB_TOKEN>
W-->+R: GET /api/v4/jobs/:id/artifacts?direct_download=true
Note over W,R: gitlab-ci-token@<CI_JOB_TOKEN>
R->>P: Look up job for CI_JOB_TOKEN
R->>P: Find user who triggered job
R->>R: Does user have :read_build access?
alt Yes
R->>W: Send 302 redirect to object storage presigned URL
R->>C: 302 redirect
C->>O: GET <presigned URL>
else No
R->>W: 401 Unauthorized
W->>C: 401 Unauthorized
end
```
In this diagram:
1. First, the runner attempts to fetch a job artifact by using the
`GET /api/v4/jobs/:id/artifacts` endpoint. The runner attaches the
`direct_download=true` query parameter on the first attempt to indicate
that it is capable of downloading from object storage directly. Direct
downloads can be disabled in the runner configuration via the
[`FF_USE_DIRECT_DOWNLOAD` feature flag](https://docs.gitlab.com/runner/configuration/feature-flags.html).
This flag is set to `true` by default.
1. The runner sends the GET request using HTTP Basic Authentication
with the `gitlab-ci-token` username and an auto-generated
CI/CD job token as the password. This token is generated by GitLab and
given to the runner at the start of a job.
1. The GET request gets passed to the GitLab API, which looks
up the token in the database and finds the user who triggered the job.
1. In steps 5-8:
- If the user has access to the build, then GitLab generates
a presigned URL and sends a 302 Redirect with the `Location` set to that
URL. The runner follows the 302 Redirect and downloads the artifacts.
- If the job cannot be found or the user does not have access to the job,
then the API returns 401 Unauthorized.
The runner does not retry if it receives the following HTTP status codes:
- 200 OK
- 401 Unauthorized
- 403 Forbidden
- 404 Not Found
However, if the runner receives any other status code, such as a 500 error,
it re-attempts to download the artifacts two more times, sleeping 1 second
between each attempt. The subsequent attempts omit `direct_download=true`.
### Proxy download enabled
If `proxy_download` is `true`, GitLab always fetches the
artifacts from object storage and send the data to the runner, even if
the runner sends the `direct_download=true` query parameter. Proxy
downloads might be desirable if runners have restricted network access.
The following diagram is similar to the disabled proxy download example,
except at steps 6-9, GitLab does not send a 302 Redirect to the
runner. Instead, GitLab instructs Workhorse to fetch the data and stream
it back to the runner. From the runner perspective, the original GET
request to `/api/v4/jobs/:id/artifacts` returns the binary data
directly.
```mermaid
sequenceDiagram
autonumber
participant C as Runner
participant O as Object Storage
participant W as Workhorse
participant R as Rails
participant P as PostgreSQL
C->>+W: GET /api/v4/jobs/:id/artifacts?direct_download=true
Note over C,W: gitlab-ci-token@<CI_JOB_TOKEN>
W-->+R: GET /api/v4/jobs/:id/artifacts?direct_download=true
Note over W,R: gitlab-ci-token@<CI_JOB_TOKEN>
R->>P: Look up job for CI_JOB_TOKEN
R->>P: Find user who triggered job
R->>R: Does user have :read_build access?
alt Yes
R->>W: SendURL with object storage presigned URL
W->>O: GET <presigned URL>
O->>W: <artifacts data>
W->>C: <artifacts data>
else No
R->>W: 401 Unauthorized
W->>C: 401 Unauthorized
end
```
## `413 Request Entity Too Large` error
If the artifacts are too large, the job might fail with the following error:
```plaintext
Uploading artifacts as "archive" to coordinator... too large archive <job-id> responseStatus=413 Request Entity Too Large status=413" at end of a build job on pipeline when trying to store artifacts to <object-storage>.
```
You might need to:
- Increase the [maximum artifacts size](../settings/continuous_integration.md#set-maximum-artifacts-size).
- If you are using NGINX as a proxy server, increase the file upload size limit which is limited to 1 MB by default.
Set a higher value for `client-max-body-size` in the NGINX configuration file.
|
---
stage: Verify
group: Pipeline Execution
info: To determine the technical writer assigned to the Stage/Group associated with
this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Job artifact troubleshooting for administrators
breadcrumbs:
- doc
- administration
- cicd
---
{{< details >}}
- Tier: Free, Premium, Ultimate
- Offering: GitLab Self-Managed
{{< /details >}}
When administering job artifacts, you might encounter the following issues.
## Job artifacts using too much disk space
Job artifacts can fill up your disk space quicker than expected. Some possible
reasons are:
- Users have configured job artifacts expiration to be longer than necessary.
- The number of jobs run, and hence artifacts generated, is higher than expected.
- Job logs are larger than expected, and have accumulated over time.
- The file system might run out of inodes because
[empty directories are left behind by artifact housekeeping](https://gitlab.com/gitlab-org/gitlab/-/issues/17465).
[The Rake task for orphaned artifact files](../raketasks/cleanup.md#remove-orphan-artifact-files)
removes these.
- Artifact files might be left on disk and not deleted by housekeeping. Run the
[Rake task for orphaned artifact files](../raketasks/cleanup.md#remove-orphan-artifact-files)
to remove these. This script should always find work to do because it also removes empty directories (see the previous reason).
- [Artifact housekeeping was changed significantly](#housekeeping-disabled-in-gitlab-150-to-152), and you might need to enable a feature flag to use the updated system.
- The [keep latest artifacts from most recent success jobs](../../ci/jobs/job_artifacts.md#keep-artifacts-from-most-recent-successful-jobs)
feature is enabled.
In these and other cases, identify the projects most responsible
for disk space usage, figure out what types of artifacts are using the most
space, and in some cases, manually delete job artifacts to reclaim disk space.
### Artifacts housekeeping
Artifacts housekeeping is the process that identifies which artifacts are expired
and can be deleted.
#### Housekeeping disabled in GitLab 15.0 to 15.2
Artifact housekeeping was significantly improved in GitLab 15.0, introduced behind [feature flags](../feature_flags/_index.md) disabled by default. The flags were enabled by default [in GitLab 15.3](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/92931).
If artifacts housekeeping does not seem to be working in GitLab 15.0 to GitLab 15.2, you should check if the feature flags are enabled.
To check if the feature flags are enabled:
1. Start a [Rails console](../operations/rails_console.md#starting-a-rails-console-session).
1. Check if the feature flags are enabled.
```ruby
Feature.enabled?(:ci_detect_wrongly_expired_artifacts)
Feature.enabled?(:ci_update_unlocked_job_artifacts)
Feature.enabled?(:ci_job_artifacts_backlog_work)
```
1. If any of the feature flags are disabled, enable them:
```ruby
Feature.enable(:ci_detect_wrongly_expired_artifacts)
Feature.enable(:ci_update_unlocked_job_artifacts)
Feature.enable(:ci_job_artifacts_backlog_work)
```
These changes include switching artifacts from `unlocked` to `locked` if
they [should be retained](../../ci/jobs/job_artifacts.md#keep-artifacts-from-most-recent-successful-jobs).
#### Artifacts with `unknown` status
Artifacts created before housekeeping was updated have a status of `unknown`. After they expire,
these artifacts are not processed by the new housekeeping.
You can check the database to confirm if your instance has artifacts with the `unknown` status:
1. Start a database console:
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
```shell
sudo gitlab-psql
```
{{< /tab >}}
{{< tab title="Helm chart (Kubernetes)" >}}
```shell
# Find the toolbox pod
kubectl --namespace <namespace> get pods -lapp=toolbox
# Connect to the PostgreSQL console
kubectl exec -it <toolbox-pod-name> -- /srv/gitlab/bin/rails dbconsole --include-password --database main
```
{{< /tab >}}
{{< tab title="Docker" >}}
```shell
sudo docker exec -it <container_name> /bin/bash
gitlab-psql
```
{{< /tab >}}
{{< tab title="Self-compiled (source)" >}}
```shell
sudo -u git -H psql -d gitlabhq_production
```
{{< /tab >}}
{{< /tabs >}}
1. Run the following query:
```sql
select expire_at, file_type, locked, count(*) from ci_job_artifacts
where expire_at is not null and
file_type != 3
group by expire_at, file_type, locked having count(*) > 1;
```
If records are returned, then there are artifacts which the housekeeping job
is unable to process. For example:
```plaintext
expire_at | file_type | locked | count
-------------------------------+-----------+--------+--------
2021-06-21 22:00:00+00 | 1 | 2 | 73614
2021-06-21 22:00:00+00 | 2 | 2 | 73614
2021-06-21 22:00:00+00 | 4 | 2 | 3522
2021-06-21 22:00:00+00 | 9 | 2 | 32
2021-06-21 22:00:00+00 | 12 | 2 | 163
```
Artifacts with locked status `2` are `unknown`. Check
[issue #346261](https://gitlab.com/gitlab-org/gitlab/-/issues/346261#note_1028871458)
for more details.
#### Clean up `unknown` artifacts
The Sidekiq worker that processes all `unknown` artifacts is enabled by default in
GitLab 15.3 and later. It analyzes the artifacts returned by the previous database query and
determines which should be `locked` or `unlocked`. Artifacts are then deleted
by that worker if needed.
The worker can be enabled on GitLab Self-Managed:
1. Start a [Rails console](../operations/rails_console.md#starting-a-rails-console-session).
1. Check if the feature is enabled.
```ruby
Feature.enabled?(:ci_job_artifacts_backlog_work)
```
1. Enable the feature, if needed:
```ruby
Feature.enable(:ci_job_artifacts_backlog_work)
```
The worker processes 10,000 `unknown` artifacts every seven minutes, or roughly two million
in 24 hours.
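As a back-of-envelope check of that throughput, under the stated rate of 10,000 artifacts every seven minutes:

```ruby
# Back-of-envelope throughput for the backlog worker:
# 10,000 unknown artifacts per run, one run every 7 minutes.
batch_size   = 10_000
interval_min = 7

runs_per_day = (24 * 60) / interval_min.to_f
per_day      = (runs_per_day * batch_size).round
puts "~#{per_day} unknown artifacts processed per day"

# Estimate how long a hypothetical backlog takes to clear:
backlog = 50_000_000
puts "a backlog of #{backlog} clears in ~#{(backlog / per_day.to_f).ceil} days"
```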
#### `@final` artifacts not deleted from object store
In GitLab 16.1 and later, artifacts are uploaded directly to their final storage location in the `@final` directory, rather than using a temporary location first.
An issue in GitLab 16.1 and 16.2 causes [artifacts to not be deleted from object storage](https://gitlab.com/gitlab-org/gitlab/-/issues/419920) when they expire.
The cleanup process for expired artifacts does not remove artifacts from the `@final` directory. This issue is fixed in GitLab 16.3 and later.
Administrators of GitLab instances that ran GitLab 16.1 or 16.2 for some time could see an increase
in object storage used by artifacts. Follow this procedure to check for and remove these artifacts.
Removing the files is a two stage process:
1. [Identify which files have been orphaned](#list-orphaned-job-artifacts).
1. [Delete the identified files from object storage](#delete-orphaned-job-artifacts).
##### List orphaned job artifacts
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
```shell
sudo gitlab-rake gitlab:cleanup:list_orphan_job_artifact_final_objects
```
{{< /tab >}}
{{< tab title="Docker" >}}
```shell
docker exec -it <container-id> bash
gitlab-rake gitlab:cleanup:list_orphan_job_artifact_final_objects
```
Either write the output to a persistent volume mounted in the container, or copy the output file out of the session when the command completes.
{{< /tab >}}
{{< tab title="Self-compiled (source)" >}}
```shell
sudo -u git -H bundle exec rake gitlab:cleanup:list_orphan_job_artifact_final_objects RAILS_ENV=production
```
{{< /tab >}}
{{< tab title="Helm chart (Kubernetes)" >}}
```shell
# find the pod
kubectl get pods --namespace <namespace> -lapp=toolbox
# open a shell in the toolbox pod
kubectl exec -it -c toolbox <toolbox-pod-name> -- bash
gitlab-rake gitlab:cleanup:list_orphan_job_artifact_final_objects
```

When the command completes, copy the output file out of the session onto persistent storage.
{{< /tab >}}
{{< /tabs >}}
The Rake task has some additional features that apply to all types of GitLab deployment:
- Scanning object storage can be interrupted. Progress is recorded in Redis and is used to resume
  scanning artifacts from the point where it stopped.
- By default, the Rake task generates a CSV file:
`/opt/gitlab/embedded/service/gitlab-rails/tmp/orphan_job_artifact_final_objects.csv`
- Set an environment variable to specify a different filename:
```shell
# Packaged GitLab
sudo su -
FILENAME='custom_filename.csv' gitlab-rake gitlab:cleanup:list_orphan_job_artifact_final_objects
```
- If the output file already exists (either the default or the specified file), the task appends entries to it.
- Each row contains the comma-separated fields `object_path,object_size`, with no header row. For example:
```plaintext
35/13/35135aaa6cc23891b40cb3f378c53a17a1127210ce60e125ccf03efcfdaec458/@final/1a/1a/5abfa4ec66f1cc3b681a4d430b8b04596cbd636f13cdff44277211778f26,201
```
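To estimate how much storage the orphaned objects occupy before deleting anything, you can total the second column. A minimal sketch using Ruby's standard `csv` library (the sample rows below are hypothetical):

```ruby
require "csv"

# Hypothetical rows in the format the list task produces:
# object_path,object_size with no header row.
rows = <<~CSV
  35/13/35135aaa/@final/1a/1a/5abf,201
  42/99/deadbeef/@final/2c/2c/9def,1048576
CSV

objects = CSV.parse(rows)
total_bytes = objects.sum { |_path, size| Integer(size) }
puts "#{objects.size} orphaned objects, #{total_bytes} bytes reclaimable"
```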
##### Delete orphaned job artifacts
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
```shell
sudo gitlab-rake gitlab:cleanup:delete_orphan_job_artifact_final_objects
```
{{< /tab >}}
{{< tab title="Docker" >}}
```shell
docker exec -it <container-id> bash
gitlab-rake gitlab:cleanup:delete_orphan_job_artifact_final_objects
```
- Copy the output file out of the session when the command completes, or write it to a volume that has been mounted by the container.
{{< /tab >}}
{{< tab title="Self-compiled (source)" >}}
```shell
sudo -u git -H bundle exec rake gitlab:cleanup:delete_orphan_job_artifact_final_objects RAILS_ENV=production
```
{{< /tab >}}
{{< tab title="Helm chart (Kubernetes)" >}}
```shell
# find the pod
kubectl get pods --namespace <namespace> -lapp=toolbox
# open a shell in the toolbox pod
kubectl exec -it -c toolbox <toolbox-pod-name> -- bash
gitlab-rake gitlab:cleanup:delete_orphan_job_artifact_final_objects
```

When the command completes, copy the output file out of the session onto persistent storage.
{{< /tab >}}
{{< /tabs >}}
The following applies to all types of GitLab deployment:
- Specify the input filename using the `FILENAME` variable. By default the script looks for:
`/opt/gitlab/embedded/service/gitlab-rails/tmp/orphan_job_artifact_final_objects.csv`
- As the script deletes files, it writes a CSV file listing the deleted files:
  - The file is in the same directory as the input file.
  - The filename is prefixed with `deleted_from--`. For example: `deleted_from--orphan_job_artifact_final_objects.csv`.
  - Each row contains the fields `object_path,object_size,object_generation/version`. For example:
```plaintext
35/13/35135aaa6cc23891b40cb3f378c53a17a1127210ce60e125ccf03efcfdaec458/@final/1a/1a/5abfa4ec66f1cc3b681a4d430b8b04596cbd636f13cdff44277211778f26,201,1711616743796587
```
### List projects and builds with artifacts with a specific expiration (or no expiration)
Using a [Rails console](../operations/rails_console.md), you can find projects that have job artifacts with either:
- No expiration date.
- An expiration date more than 7 days in the future.
Similar to [deleting artifacts](#delete-old-builds-and-artifacts), use the following example time frames
and alter them as needed:
- `7.days.from_now`
- `10.days.from_now`
- `2.weeks.from_now`
- `3.months.from_now`
- `1.year.from_now`
Each of the following scripts also limits the search to 50 results with `.limit(50)`, but this number can also be changed as needed:
```ruby
# Find builds & projects with artifacts that never expire
builds_with_artifacts_that_never_expire = Ci::Build.with_downloadable_artifacts.where(artifacts_expire_at: nil).limit(50)
builds_with_artifacts_that_never_expire.find_each do |build|
puts "Build with id #{build.id} has artifacts that don't expire and belongs to project #{build.project.full_path}"
end
# Find builds & projects with artifacts that expire after 7 days from today
builds_with_artifacts_that_expire_in_a_week = Ci::Build.with_downloadable_artifacts.where('artifacts_expire_at > ?', 7.days.from_now).limit(50)
builds_with_artifacts_that_expire_in_a_week.find_each do |build|
puts "Build with id #{build.id} has artifacts that expire at #{build.artifacts_expire_at} and belongs to project #{build.project.full_path}"
end
```
### List projects by total size of job artifacts stored
List the top 20 projects, sorted by the total size of job artifacts stored, by
running the following code in the [Rails console](../operations/rails_console.md):
```ruby
include ActionView::Helpers::NumberHelper
ProjectStatistics.order(build_artifacts_size: :desc).limit(20).each do |s|
puts "#{number_to_human_size(s.build_artifacts_size)} \t #{s.project.full_path}"
end
```
You can change the number of projects listed by modifying `.limit(20)` to the
number you want.
### List largest artifacts in a single project
List the 50 largest job artifacts in a single project by running the following
code in the [Rails console](../operations/rails_console.md):
```ruby
include ActionView::Helpers::NumberHelper
project = Project.find_by_full_path('path/to/project')
Ci::JobArtifact.where(project: project).order(size: :desc).limit(50).map { |a| puts "ID: #{a.id} - #{a.file_type}: #{number_to_human_size(a.size)}" }
```
You can change the number of job artifacts listed by modifying `.limit(50)` to
the number you want.
### List artifacts in a single project
List the artifacts for a single project, sorted by artifact size. The output includes the:
- ID of the job that created the artifact
- artifact size
- artifact file type
- artifact creation date
- on-disk location of the artifact
```ruby
p = Project.find_by_id(<project_id>)
arts = Ci::JobArtifact.where(project: p)
list = arts.order(size: :desc).limit(50).each do |art|
puts "Job ID: #{art.job_id} - Size: #{art.size}b - Type: #{art.file_type} - Created: #{art.created_at} - File loc: #{art.file}"
end
```
To change the number of job artifacts listed, change the number in `limit(50)`.
### Delete old builds and artifacts
{{< alert type="warning" >}}
These commands remove data permanently. Before running them in a production environment,
you should try them in a test environment first and make a backup of the instance
that can be restored if needed.
{{< /alert >}}
#### Delete old artifacts for a project
This step also erases artifacts that users have [chosen to keep](../../ci/jobs/job_artifacts.md#with-an-expiry):
```ruby
project = Project.find_by_full_path('path/to/project')
builds_with_artifacts = project.builds.with_downloadable_artifacts
builds_with_artifacts.where("finished_at < ?", 1.year.ago).each_batch do |batch|
batch.each do |build|
Ci::JobArtifacts::DeleteService.new(build).execute
end
batch.update_all(artifacts_expire_at: Time.current)
end
```
#### Delete old artifacts instance wide
This step also erases artifacts that users have [chosen to keep](../../ci/jobs/job_artifacts.md#with-an-expiry):
```ruby
builds_with_artifacts = Ci::Build.with_downloadable_artifacts
builds_with_artifacts.where("finished_at < ?", 1.year.ago).each_batch do |batch|
batch.each do |build|
Ci::JobArtifacts::DeleteService.new(build).execute
end
batch.update_all(artifacts_expire_at: Time.current)
end
```
#### Delete old job logs and artifacts for a project
```ruby
project = Project.find_by_full_path('path/to/project')
builds = project.builds
admin_user = User.find_by(username: 'username')
builds.where("finished_at < ?", 1.year.ago).each_batch do |batch|
batch.each do |build|
print "Ci::Build ID #{build.id}... "
if build.erasable?
Ci::BuildEraseService.new(build, admin_user).execute
puts "Erased"
else
puts "Skipped (Nothing to erase or not erasable)"
end
end
end
```
#### Delete old job logs and artifacts instance wide
```ruby
builds = Ci::Build.all
admin_user = User.find_by(username: 'username')
builds.where("finished_at < ?", 1.year.ago).each_batch do |batch|
batch.each do |build|
print "Ci::Build ID #{build.id}... "
if build.erasable?
Ci::BuildEraseService.new(build, admin_user).execute
puts "Erased"
else
puts "Skipped (Nothing to erase or not erasable)"
end
end
end
```
`1.year.ago` is a Rails [`ActiveSupport::Duration`](https://api.rubyonrails.org/classes/ActiveSupport/Duration.html) method.
Start with a long duration to reduce the risk of accidentally deleting artifacts that are still in use.
Rerun the deletion with shorter durations as needed, for example `3.months.ago`, `2.weeks.ago`, or `7.days.ago`.
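Outside a Rails console, you can approximate what those cutoffs evaluate to with plain `Time` arithmetic (ActiveSupport durations are calendar-aware, so the real values differ slightly):

```ruby
DAY = 24 * 60 * 60

# Approximate equivalents of the ActiveSupport durations used above.
cutoffs = {
  "1.year.ago"   => Time.now - 365 * DAY,
  "3.months.ago" => Time.now - 90 * DAY,
  "2.weeks.ago"  => Time.now - 14 * DAY,
  "7.days.ago"   => Time.now - 7 * DAY,
}

# Start with the oldest cutoff and move forward in later runs.
cutoffs.each { |name, t| puts "#{name} ~> #{t}" }
```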
The method `erase_erasable_artifacts!` is synchronous, and upon execution the artifacts are immediately removed;
they are not scheduled by a background queue.
### Deleting artifacts does not immediately reclaim disk space
When artifacts are deleted, the process occurs in two phases:
1. **Marked as ready for deletion**: `Ci::JobArtifact` records are removed from the database and
converted to `Ci::DeletedObject` records with a future `pick_up_at` timestamp.
1. **Remove from storage**: The artifact files remain on disk until the `Ci::ScheduleDeleteObjectsCronWorker` worker
processes the `Ci::DeletedObject` records and physically removes the files.
The removal is deliberately limited to prevent overwhelming system resources:
- The worker runs once per hour, at the 16-minute mark.
- It processes objects in batches with a maximum of 20 concurrent jobs.
- Each deleted object has a `pick_up_at` timestamp that determines when it becomes
  eligible for physical deletion.
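The eligibility check can be pictured with a toy model (an illustration only, not GitLab's actual `Ci::DeletedObject` implementation):

```ruby
# Toy model of phase two: only objects whose pick_up_at has passed
# are eligible for physical deletion on the next worker run.
DeletedObject = Struct.new(:path, :pick_up_at)

objects = [
  DeletedObject.new("aa/@final/x", Time.now - 60),    # eligible now
  DeletedObject.new("bb/@final/y", Time.now + 3600),  # eligible in an hour
]

eligible = objects.select { |o| o.pick_up_at < Time.now }
puts "#{eligible.size} of #{objects.size} objects eligible for deletion"
```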
For large-scale deletions, the physical cleanup can take a significant amount of time
before disk space is fully reclaimed. Cleanup could take several days for very large deletions.
If you need to reclaim disk space quickly, you can expedite artifact deletion.
#### Expedite artifact removal
If you need to reclaim disk space quickly after deleting a large number of artifacts,
you can bypass the standard scheduling limitations and expedite the deletion process.
{{< alert type="warning" >}}
These commands put significant load on your system if you are deleting a large number of artifacts.
{{< /alert >}}
```ruby
# Set pick_up_at to the current time on all deleted objects
# to mark them for immediate deletion
Ci::DeletedObject.update_all(pick_up_at: Time.current)
# Count the objects now marked for deletion
Ci::DeletedObject.where("pick_up_at < ?", Time.current).count
# Delete the artifacts from disk
while Ci::DeletedObject.where("pick_up_at < ?", Time.current).count > 0
Ci::DeleteObjectsService.new.execute
sleep(10)
end
# Get the count of artifacts marked for deletion (should now be zero)
Ci::DeletedObject.count
```
### Delete old pipelines
{{< alert type="warning" >}}
These commands remove data permanently. Before running them in a production environment,
consider seeking guidance from a Support Engineer. You should also try them in a test environment first
and make a backup of the instance that can be restored if needed.
{{< /alert >}}
Deleting a pipeline also removes that pipeline's:
- Job artifacts
- Job logs
- Job metadata
- Pipeline metadata
Removing job and pipeline metadata can help reduce the size of the CI tables in the database.
The CI tables are usually the largest tables in an instance's database.
#### Delete old pipelines for a project
```ruby
project = Project.find_by_full_path('path/to/project')
user = User.find(1)
project.ci_pipelines.where("finished_at < ?", 1.year.ago).each_batch do |batch|
batch.each do |pipeline|
puts "Erasing pipeline #{pipeline.id}"
Ci::DestroyPipelineService.new(pipeline.project, user).execute(pipeline)
end
end
```
#### Delete old pipelines instance-wide
```ruby
user = User.find(1)
Ci::Pipeline.where("finished_at < ?", 1.year.ago).each_batch do |batch|
batch.each do |pipeline|
puts "Erasing pipeline #{pipeline.id} for project #{pipeline.project_id}"
Ci::DestroyPipelineService.new(pipeline.project, user).execute(pipeline)
end
end
```
## Job artifact upload fails with error 500
If you are using object storage for artifacts and a job artifact fails to upload,
review:
- The job log for an error message similar to:
```plaintext
WARNING: Uploading artifacts as "archive" to coordinator... failed id=12345 responseStatus=500 Internal Server Error status=500 token=abcd1234
```
- The [workhorse log](../logs/_index.md#workhorse-logs) for an error message similar to:
```json
{"error":"MissingRegion: could not find region configuration","level":"error","msg":"error uploading S3 session","time":"2021-03-16T22:10:55-04:00"}
```
In both cases, you might need to add `region` to the job artifact [object storage configuration](../object_storage.md).
## Job artifact upload fails with `500 Internal Server Error (Missing file)`
Bucket names that include folder paths are not supported with [consolidated object storage](../object_storage.md#configure-a-single-storage-connection-for-all-object-types-consolidated-form).
For example, `bucket/path`. If a bucket name has a path in it, you might receive an error similar to:
```plaintext
WARNING: Uploading artifacts as "archive" to coordinator... POST https://gitlab.example.com/api/v4/jobs/job_id/artifacts?artifact_format=zip&artifact_type=archive&expire_in=1+day: 500 Internal Server Error (Missing file)
FATAL: invalid argument
```
If a job artifact fails to upload due to the previous error when using consolidated object storage, make sure you are [using separate buckets](../object_storage.md#use-separate-buckets) for each data type.
## Job artifacts fail to upload with `FATAL: invalid argument` when using Windows mount
If you are using a Windows mount with CIFS for job artifacts, you may see an
`invalid argument` error when the runner attempts to upload artifacts:
```plaintext
WARNING: Uploading artifacts as "dotenv" to coordinator... POST https://<your-gitlab-instance>/api/v4/jobs/<JOB_ID>/artifacts: 500 Internal Server Error id=1296 responseStatus=500 Internal Server Error status=500 token=*****
FATAL: invalid argument
```
To work around this issue, you can try:
- Switching to an ext4 mount instead of CIFS.
- Upgrading to at least Linux kernel 5.15, which contains a number of important bug fixes
  for CIFS file leases.
- For older kernels, using the `nolease` mount option to disable file leasing.
For more information, [see the investigation details](https://gitlab.com/gitlab-org/gitlab/-/issues/389995).
## Usage quota shows incorrect artifact storage usage
Sometimes the [artifacts storage usage](../../user/storage_usage_quotas.md) displays an incorrect
value for the total storage space used by artifacts. To recalculate the artifact
usage statistics for all projects in the instance, run this Rake task:
```shell
gitlab-rake gitlab:refresh_project_statistics_build_artifacts_size[https://example.com/path/file.csv]
```
The `https://example.com/path/file.csv` file must list the project IDs for
all projects for which you want to recalculate artifact storage usage. Use this format for the file:
```plaintext
PROJECT_ID
1
2
```
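A small sketch of generating a file in that format (the project IDs here are placeholders; substitute the IDs you want to recalculate):

```ruby
require "tempfile"

project_ids = [1, 2, 42] # placeholder IDs

# Write the header row followed by one project ID per line.
file = Tempfile.new(["project_ids", ".csv"])
file.puts "PROJECT_ID"
project_ids.each { |id| file.puts id }
file.flush

puts File.read(file.path)
```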
The artifact usage value can fluctuate to `0` while the script is running. After
recalculation, usage should display as expected again.
## Artifact download flow diagrams
The following flow diagrams illustrate how job artifacts work. These
diagrams assume object storage is configured for job artifacts.
### Proxy download disabled
With [`proxy_download` set to `false`](../object_storage.md), GitLab
redirects the runner to download artifacts from object storage with a
pre-signed URL. Runners usually fetch faster from the source directly,
so this configuration is generally recommended. It also reduces
bandwidth usage because GitLab does not have to fetch the data and send
it to the runner. However, it requires giving runners direct access to
object storage.
The request flow looks like:
```mermaid
sequenceDiagram
autonumber
participant C as Runner
participant O as Object Storage
participant W as Workhorse
participant R as Rails
participant P as PostgreSQL
C->>+W: GET /api/v4/jobs/:id/artifacts?direct_download=true
Note over C,W: gitlab-ci-token@<CI_JOB_TOKEN>
W-->+R: GET /api/v4/jobs/:id/artifacts?direct_download=true
Note over W,R: gitlab-ci-token@<CI_JOB_TOKEN>
R->>P: Look up job for CI_JOB_TOKEN
R->>P: Find user who triggered job
R->>R: Does user have :read_build access?
alt Yes
R->>W: Send 302 redirect to object storage presigned URL
R->>C: 302 redirect
C->>O: GET <presigned URL>
else No
R->>W: 401 Unauthorized
W->>C: 401 Unauthorized
end
```
In this diagram:
1. First, the runner attempts to fetch a job artifact by using the
`GET /api/v4/jobs/:id/artifacts` endpoint. The runner attaches the
`direct_download=true` query parameter on the first attempt to indicate
that it is capable of downloading from object storage directly. Direct
downloads can be disabled in the runner configuration via the
[`FF_USE_DIRECT_DOWNLOAD` feature flag](https://docs.gitlab.com/runner/configuration/feature-flags.html).
This flag is set to `true` by default.
1. The runner sends the GET request using HTTP Basic Authentication
with the `gitlab-ci-token` username and an auto-generated
CI/CD job token as the password. This token is generated by GitLab and
given to the runner at the start of a job.
1. The GET request is passed to the GitLab API, which looks
   up the token in the database and finds the user who triggered the job.
1. In steps 5-8:
- If the user has access to the build, then GitLab generates
a presigned URL and sends a 302 Redirect with the `Location` set to that
URL. The runner follows the 302 Redirect and downloads the artifacts.
- If the job cannot be found or the user does not have access to the job,
then the API returns 401 Unauthorized.
The runner does not retry if it receives the following HTTP status codes:
- 200 OK
- 401 Unauthorized
- 403 Forbidden
- 404 Not Found
However, if the runner receives any other status code, such as a 500 error,
it re-attempts to download the artifacts two more times, sleeping 1 second
between each attempt. The subsequent attempts omit `direct_download=true`.
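That retry behavior can be sketched like this (an illustration of the documented behavior, not GitLab Runner's actual Go implementation):

```ruby
# Status codes the runner treats as final (no retry).
NO_RETRY = [200, 401, 403, 404].freeze

def download_artifacts(max_attempts: 3)
  attempt = 0
  loop do
    attempt += 1
    direct = (attempt == 1) # direct_download=true only on the first attempt
    status = yield(direct)
    return status if NO_RETRY.include?(status) || attempt >= max_attempts
    sleep 1 # the runner sleeps 1 second between attempts
  end
end

# Example: the first attempt hits a 500, the second (proxied) succeeds.
responses = [500, 200]
result = download_artifacts { |direct| responses.shift }
puts "final status: #{result}"
```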
### Proxy download enabled
If `proxy_download` is `true`, GitLab always fetches the
artifacts from object storage and sends the data to the runner, even if
the runner sends the `direct_download=true` query parameter. Proxy
downloads might be desirable if runners have restricted network access.
The following diagram is similar to the disabled proxy download example,
except at steps 6-9, GitLab does not send a 302 Redirect to the
runner. Instead, GitLab instructs Workhorse to fetch the data and stream
it back to the runner. From the runner perspective, the original GET
request to `/api/v4/jobs/:id/artifacts` returns the binary data
directly.
```mermaid
sequenceDiagram
autonumber
participant C as Runner
participant O as Object Storage
participant W as Workhorse
participant R as Rails
participant P as PostgreSQL
C->>+W: GET /api/v4/jobs/:id/artifacts?direct_download=true
Note over C,W: gitlab-ci-token@<CI_JOB_TOKEN>
W-->+R: GET /api/v4/jobs/:id/artifacts?direct_download=true
Note over W,R: gitlab-ci-token@<CI_JOB_TOKEN>
R->>P: Look up job for CI_JOB_TOKEN
R->>P: Find user who triggered job
R->>R: Does user have :read_build access?
alt Yes
R->>W: SendURL with object storage presigned URL
W->>O: GET <presigned URL>
O->>W: <artifacts data>
W->>C: <artifacts data>
else No
R->>W: 401 Unauthorized
W->>C: 401 Unauthorized
end
```
## `413 Request Entity Too Large` error
If the artifacts are too large, the job might fail with the following error:
```plaintext
Uploading artifacts as "archive" to coordinator... too large archive <job-id> responseStatus=413 Request Entity Too Large status=413" at end of a build job on pipeline when trying to store artifacts to <object-storage>.
```
You might need to:
- Increase the [maximum artifacts size](../settings/continuous_integration.md#set-maximum-artifacts-size).
- If you are using NGINX as a proxy server, increase the file upload size limit, which defaults to 1 MB.
  Set a higher value for `client_max_body_size` in the NGINX configuration file.
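For example, to allow artifact uploads up to 1 GB, a snippet like the following could be added to the relevant `http`, `server`, or `location` block of your NGINX configuration (the exact file location depends on your NGINX setup):

```nginx
# Allow request bodies (such as artifact uploads) up to 1 GB.
# A value of 0 disables the size check entirely.
client_max_body_size 1g;
```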
---
stage: Verify
group: Pipeline Execution
info: To determine the technical writer assigned to the Stage/Group associated with
  this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Jobs artifacts administration
source: https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/administration/job_artifacts.md
published: https://docs.gitlab.com/administration/job_artifacts
date_extracted: 2025-08-13
breadcrumbs:
- doc
- administration
- cicd
---
{{< details >}}
- Tier: Free, Premium, Ultimate
- Offering: GitLab Self-Managed
{{< /details >}}
This is the administration documentation. To learn how to use job artifacts in your GitLab CI/CD pipeline,
see the [job artifacts configuration documentation](../../ci/jobs/job_artifacts.md).
An artifact is a list of files and directories attached to a job after it
finishes. This feature is enabled by default in all GitLab installations.
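For context, a minimal job that attaches artifacts might look like this in `.gitlab-ci.yml` (the job name and paths are illustrative):

```yaml
build-job:
  script:
    - make build            # produces files under bin/
  artifacts:
    paths:
      - bin/                # files and directories to attach to the job
    expire_in: 1 week       # optional expiry for the attached artifacts
```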
## Disabling job artifacts
To disable artifacts site-wide:
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
1. Edit `/etc/gitlab/gitlab.rb`:
```ruby
gitlab_rails['artifacts_enabled'] = false
```
1. Save the file and reconfigure GitLab:
```shell
sudo gitlab-ctl reconfigure
```
{{< /tab >}}
{{< tab title="Helm chart (Kubernetes)" >}}
1. Export the Helm values:
```shell
helm get values gitlab > gitlab_values.yaml
```
1. Edit `gitlab_values.yaml`:
```yaml
global:
appConfig:
artifacts:
enabled: false
```
1. Save the file and apply the new values:
```shell
helm upgrade -f gitlab_values.yaml gitlab gitlab/gitlab
```
{{< /tab >}}
{{< tab title="Docker" >}}
1. Edit `docker-compose.yml`:
```yaml
version: "3.6"
services:
gitlab:
environment:
GITLAB_OMNIBUS_CONFIG: |
gitlab_rails['artifacts_enabled'] = false
```
1. Save the file and restart GitLab:
```shell
docker compose up -d
```
{{< /tab >}}
{{< tab title="Self-compiled (source)" >}}
1. Edit `/home/git/gitlab/config/gitlab.yml`:
```yaml
production: &base
artifacts:
enabled: false
```
1. Save the file and restart GitLab:
```shell
# For systems running systemd
sudo systemctl restart gitlab.target
# For systems running SysV init
sudo service gitlab restart
```
{{< /tab >}}
{{< /tabs >}}
## Storing job artifacts
GitLab Runner can upload an archive containing the job artifacts to GitLab. By default,
this is done when the job succeeds, but can also be done on failure, or always, with the
[`artifacts:when`](../../ci/yaml/_index.md#artifactswhen) parameter.
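For example, to upload artifacts even when the job fails (useful for capturing logs), a job can set `artifacts:when` (the job name and paths are illustrative):

```yaml
test-job:
  script:
    - make test
  artifacts:
    when: always            # upload on success and on failure; the default is on_success
    paths:
      - test-logs/
```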
Most artifacts are compressed by GitLab Runner before being sent to the coordinator. The exception to this is
[reports artifacts](../../ci/yaml/_index.md#artifactsreports), which are compressed after uploading.
### Using local storage
If you're using the Linux package or have a self-compiled installation, you
can change the location where the artifacts are stored locally.
{{< alert type="note" >}}
For Docker installations, you can change the path where your data is mounted.
For the Helm chart, use
[object storage](https://docs.gitlab.com/charts/advanced/external-object-storage/).
{{< /alert >}}
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
The artifacts are stored by default in `/var/opt/gitlab/gitlab-rails/shared/artifacts`.
1. To change the storage path, for example to `/mnt/storage/artifacts`, edit
`/etc/gitlab/gitlab.rb` and add the following line:
```ruby
gitlab_rails['artifacts_path'] = "/mnt/storage/artifacts"
```
1. Save the file and reconfigure GitLab:
```shell
sudo gitlab-ctl reconfigure
```
{{< /tab >}}
{{< tab title="Self-compiled (source)" >}}
The artifacts are stored by default in `/home/git/gitlab/shared/artifacts`.
1. To change the storage path, for example to `/mnt/storage/artifacts`, edit
`/home/git/gitlab/config/gitlab.yml` and add or amend the following lines:
```yaml
production: &base
artifacts:
enabled: true
path: /mnt/storage/artifacts
```
1. Save the file and restart GitLab:
```shell
# For systems running systemd
sudo systemctl restart gitlab.target
# For systems running SysV init
sudo service gitlab restart
```
{{< /tab >}}
{{< /tabs >}}
### Using object storage
If you don't want to use the local disk where GitLab is installed to store the
artifacts, you can use object storage such as AWS S3 instead.
If you configure GitLab to store artifacts on object storage, you may also want to
[eliminate local disk usage for job logs](job_logs.md#prevent-local-disk-usage).
In both cases, job logs are archived and moved to object storage when the job completes.
{{< alert type="warning" >}}
In a multi-server setup you must use one of the options to
[eliminate local disk usage for job logs](job_logs.md#prevent-local-disk-usage), or job logs could be lost.
{{< /alert >}}
You should use the [consolidated object storage settings](../object_storage.md#configure-a-single-storage-connection-for-all-object-types-consolidated-form).
### Migrating to object storage
You can migrate the job artifacts from local storage to object storage. The
processing is done in a background worker and requires **no downtime**.
1. [Configure the object storage](#using-object-storage).
1. Migrate the artifacts:
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
```shell
sudo gitlab-rake gitlab:artifacts:migrate
```
{{< /tab >}}
{{< tab title="Docker" >}}
```shell
sudo docker exec -t <container name> gitlab-rake gitlab:artifacts:migrate
```
{{< /tab >}}
{{< tab title="Self-compiled (source)" >}}
```shell
sudo -u git -H bundle exec rake gitlab:artifacts:migrate RAILS_ENV=production
```
{{< /tab >}}
{{< /tabs >}}
1. Optional. Track the progress and verify that all job artifacts migrated
successfully using the PostgreSQL console.
1. Open a PostgreSQL console:
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
```shell
sudo gitlab-psql
```
{{< /tab >}}
{{< tab title="Docker" >}}
```shell
sudo docker exec -it <container_name> /bin/bash
gitlab-psql
```
{{< /tab >}}
{{< tab title="Self-compiled (source)" >}}
```shell
sudo -u git -H psql -d gitlabhq_production
```
{{< /tab >}}
{{< /tabs >}}
1. Verify that all artifacts migrated to object storage with the following
SQL query. The number of `objectstg` should be the same as `total`:
```shell
gitlabhq_production=# SELECT count(*) AS total, sum(case when file_store = '1' then 1 else 0 end) AS filesystem, sum(case when file_store = '2' then 1 else 0 end) AS objectstg FROM ci_job_artifacts;
 total | filesystem | objectstg
-------+------------+-----------
    19 |          0 |        19
```
1. Verify that there are no files on disk in the `artifacts` directory:
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
```shell
sudo find /var/opt/gitlab/gitlab-rails/shared/artifacts -type f | grep -v tmp | wc -l
```
{{< /tab >}}
{{< tab title="Docker" >}}
Assuming you mounted `/var/opt/gitlab` to `/srv/gitlab`:
```shell
sudo find /srv/gitlab/gitlab-rails/shared/artifacts -type f | grep -v tmp | wc -l
```
{{< /tab >}}
{{< tab title="Self-compiled (source)" >}}
```shell
sudo find /home/git/gitlab/shared/artifacts -type f | grep -v tmp | wc -l
```
{{< /tab >}}
{{< /tabs >}}
1. If [Geo](../geo/_index.md) is enabled, [reverify all job artifacts](../geo/replication/troubleshooting/synchronization_verification.md#reverify-one-component-on-all-sites).
In some cases, you need to run the [orphan artifact file cleanup Rake task](../raketasks/cleanup.md#remove-orphan-artifact-files)
to clean up orphaned artifacts.
### Migrating from object storage to local storage
To migrate artifacts back to local storage:
1. Run `gitlab-rake gitlab:artifacts:migrate_to_local`.
1. [Selectively disable the artifacts' storage](../object_storage.md#disable-object-storage-for-specific-features) in `gitlab.rb`.
1. [Reconfigure GitLab](../restart_gitlab.md#reconfigure-a-linux-package-installation).
## Expiring artifacts
If [`artifacts:expire_in`](../../ci/yaml/_index.md#artifactsexpire_in) is used to set
an expiry for the artifacts, they are marked for deletion right after that date passes.
Otherwise, they expire per the [default artifacts expiration setting](../settings/continuous_integration.md#set-default-artifacts-expiration).
Artifacts are deleted by the `expire_build_artifacts_worker` cron job which Sidekiq
runs every 7 minutes (`*/7 * * * *` in [Cron](../../topics/cron/_index.md) syntax).
To change the default schedule on which expired artifacts are deleted:
{{< tabs >}}
{{< tab title="Linux package (Omnibus)" >}}
1. Edit `/etc/gitlab/gitlab.rb` and add the following line (or uncomment it if
it already exists and is commented out), substituting your schedule in cron
syntax:
```ruby
gitlab_rails['expire_build_artifacts_worker_cron'] = "*/7 * * * *"
```
1. Save the file and reconfigure GitLab:
```shell
sudo gitlab-ctl reconfigure
```
{{< /tab >}}
{{< tab title="Helm chart (Kubernetes)" >}}
1. Export the Helm values:
```shell
helm get values gitlab > gitlab_values.yaml
```
1. Edit `gitlab_values.yaml`:
```yaml
global:
appConfig:
cron_jobs:
expire_build_artifacts_worker:
cron: "*/7 * * * *"
```
1. Save the file and apply the new values:
```shell
helm upgrade -f gitlab_values.yaml gitlab gitlab/gitlab
```
{{< /tab >}}
{{< tab title="Docker" >}}
1. Edit `docker-compose.yml`:
```yaml
version: "3.6"
services:
gitlab:
environment:
GITLAB_OMNIBUS_CONFIG: |
gitlab_rails['expire_build_artifacts_worker_cron'] = "*/7 * * * *"
```
1. Save the file and restart GitLab:
```shell
docker compose up -d
```
{{< /tab >}}
{{< tab title="Self-compiled (source)" >}}
1. Edit `/home/git/gitlab/config/gitlab.yml`:
```yaml
production: &base
cron_jobs:
expire_build_artifacts_worker:
cron: "*/7 * * * *"
```
1. Save the file and restart GitLab:
```shell
# For systems running systemd
sudo systemctl restart gitlab.target
# For systems running SysV init
sudo service gitlab restart
```
{{< /tab >}}
{{< /tabs >}}
## Set the maximum file size of the artifacts
If artifacts are enabled, you can change the maximum file size of the
artifacts through the [**Admin** area settings](../settings/continuous_integration.md#set-maximum-artifacts-size).
## Storage statistics
You can see the total storage used for job artifacts for groups and projects in:
- The **Admin** area
- The [groups](../../api/groups.md) and [projects](../../api/projects.md) APIs
## Implementation details
When GitLab receives an artifacts archive, an archive metadata file is also
generated by [GitLab Workhorse](https://gitlab.com/gitlab-org/gitlab-workhorse). This metadata file describes all the entries
that are located in the artifacts archive itself.
The metadata file is in a binary format, with additional Gzip compression.
GitLab doesn't extract the artifacts archive, to save space, memory, and disk
I/O. Instead, it inspects the metadata file, which contains all the relevant
information. This is especially important when there are many artifacts, or
when an archive is a very large file.
When a specific file is selected, [GitLab Workhorse](https://gitlab.com/gitlab-org/gitlab-workhorse) extracts it
from the archive and the download begins. This implementation saves space,
memory, and disk I/O.
---
stage: Verify
group: Pipeline Execution
info: To determine the technical writer assigned to the Stage/Group associated with
  this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: External pipeline validation
source: https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/administration/external_pipeline_validation.md
published: https://docs.gitlab.com/administration/external_pipeline_validation
date_extracted: 2025-08-13
breadcrumbs:
- doc
- administration
- cicd
---
{{< details >}}
- Tier: Free, Premium, Ultimate
- Offering: GitLab Self-Managed
{{< /details >}}
You can use an external service to validate a pipeline before it's created.
GitLab sends a POST request to the external service URL with the pipeline
data as payload. The response code from the external service determines if GitLab
should accept or reject the pipeline. If the response is:
- `200`, the pipeline is accepted.
- `406`, the pipeline is rejected.
- Any other code, the pipeline is accepted and the response is logged.
If there's an error or the request times out, the pipeline is accepted.
Pipelines rejected by the external validation service aren't created, and don't
appear in pipeline lists in the GitLab UI or API. If you create a pipeline in the
UI that is rejected, `Pipeline cannot be run. External validation failed` is displayed.
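As a sketch, a minimal validation service only needs to answer the POST request with `200` or `406`. The example below uses Python's standard library; the rejection rule (blocking pipelines from users with no sign-ins) is purely illustrative, not a recommended policy:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def validation_status(payload):
    """Return the HTTP status to send back: 200 accepts the
    pipeline, 406 rejects it. The rule here is an illustrative
    placeholder only."""
    if payload.get("user", {}).get("sign_in_count", 0) == 0:
        return 406
    return 200

class ValidationHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # GitLab POSTs the pipeline payload as the request body.
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        self.send_response(validation_status(payload))
        self.end_headers()

def serve(port=8000):
    # Point EXTERNAL_VALIDATION_SERVICE_URL at this endpoint.
    HTTPServer(("", port), ValidationHandler).serve_forever()
```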
## Configure external pipeline validation
To configure external pipeline validation, add the
[`EXTERNAL_VALIDATION_SERVICE_URL` environment variable](../environment_variables.md)
and set it to the external service URL.
By default, requests to the external service time out after five seconds. To override
the default, set the `EXTERNAL_VALIDATION_SERVICE_TIMEOUT` environment variable to the
required number of seconds.
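On a Linux package installation, one way to set these variables is through `gitlab_rails['env']` in `/etc/gitlab/gitlab.rb`, followed by `sudo gitlab-ctl reconfigure`. The URL and timeout values below are placeholders:

```ruby
gitlab_rails['env'] = {
  'EXTERNAL_VALIDATION_SERVICE_URL' => 'https://validation.example.com/validate',
  'EXTERNAL_VALIDATION_SERVICE_TIMEOUT' => '10'
}
```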
## Payload schema
{{< history >}}
- `tag_list` [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/335904) in GitLab 16.11.
{{< /history >}}
```json
{
"type": "object",
"required" : [
"project",
"user",
"credit_card",
"pipeline",
"builds",
"total_builds_count",
"namespace"
],
"properties" : {
"project": {
"type": "object",
"required": [
"id",
"path",
"created_at",
"shared_runners_enabled",
"group_runners_enabled"
],
"properties": {
"id": { "type": "integer" },
"path": { "type": "string" },
"created_at": { "type": ["string", "null"], "format": "date-time" },
"shared_runners_enabled": { "type": "boolean" },
"group_runners_enabled": { "type": "boolean" }
}
},
"user": {
"type": "object",
"required": [
"id",
"username",
"email",
"created_at"
],
"properties": {
"id": { "type": "integer" },
"username": { "type": "string" },
"email": { "type": "string" },
"created_at": { "type": ["string", "null"], "format": "date-time" },
"current_sign_in_ip": { "type": ["string", "null"] },
"last_sign_in_ip": { "type": ["string", "null"] },
"sign_in_count": { "type": "integer" }
}
},
"credit_card": {
"type": "object",
"required": [
"similar_cards_count",
"similar_holder_names_count"
],
"properties": {
"similar_cards_count": { "type": "integer" },
"similar_holder_names_count": { "type": "integer" }
}
},
"pipeline": {
"type": "object",
"required": [
"sha",
"ref",
"type"
],
"properties": {
"sha": { "type": "string" },
"ref": { "type": "string" },
"type": { "type": "string" }
}
},
"builds": {
"type": "array",
"items": {
"type": "object",
"required": [
"name",
"stage",
"image",
"tag_list",
"services",
"script"
],
"properties": {
"name": { "type": "string" },
"stage": { "type": "string" },
"image": { "type": ["string", "null"] },
"tag_list": { "type": ["array", "null"] },
"services": {
"type": ["array", "null"],
"items": { "type": "string" }
},
"script": {
"type": "array",
"items": { "type": "string" }
}
}
}
},
"total_builds_count": { "type": "integer" },
"namespace": {
"type": "object",
"required": [
"plan",
"trial"
],
"properties": {
"plan": { "type": "string" },
"trial": { "type": "boolean" }
}
},
"provisioning_group": {
"type": "object",
"required": [
"plan",
"trial"
],
"properties": {
"plan": { "type": "string" },
"trial": { "type": "boolean" }
}
}
}
}
```
The `namespace` field is only available in [GitLab Premium and Ultimate](https://about.gitlab.com/pricing/).
---
stage: Verify
group: Pipeline Execution
info: To determine the technical writer assigned to the Stage/Group associated with
  this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
description: Configure cost factor settings for compute minutes on GitLab.com.
title: Compute minutes administration for GitLab.com
source: https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/administration/dot_com_compute_minutes.md
published: https://docs.gitlab.com/administration/dot_com_compute_minutes
date_extracted: 2025-08-13
breadcrumbs:
- doc
- administration
- cicd
---
{{< details >}}
- Tier: Premium, Ultimate
- Offering: GitLab.com
{{< /details >}}
GitLab.com administrators have additional controls over compute minutes beyond what is
available for [GitLab Self-Managed](compute_minutes.md).
## Set cost factors
Prerequisites:
- You must be an administrator for GitLab.com.
To set cost factors for a runner:
1. On the left sidebar, at the bottom, select **Admin**.
1. Select **CI/CD > Runners**.
1. For the runner you want to update, select **Edit** ({{< icon name="pencil" >}}).
1. In the **Public projects compute cost factor** text box, enter the public cost factor.
1. In the **Private projects compute cost factor** text box, enter the private cost factor.
1. Select **Save changes**.
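A cost factor multiplies the duration a job contributes to a namespace's compute usage. As a rough illustration (a sketch, not GitLab source; `compute_minutes_used` is a hypothetical helper):

```ruby
# Hypothetical helper: a cost factor scales how many compute minutes
# a job of a given duration consumes.
def compute_minutes_used(duration_minutes, cost_factor)
  duration_minutes * cost_factor
end

# A 10-minute job on a runner with a public-project cost factor of 0.5
# consumes 5 compute minutes; with a private cost factor of 1.0, it
# consumes the full 10.
puts compute_minutes_used(10, 0.5) # prints 5.0
puts compute_minutes_used(10, 1.0) # prints 10.0
```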
## Reduce cost factors for community contributions
When the `ci_minimal_cost_factor_for_gitlab_namespaces` feature flag is enabled for a namespace,
merge request pipelines from forks that target projects in the enabled namespace use a reduced cost factor.
This ensures community contributions don't consume excessive compute minutes.
Prerequisites:
- You must be able to control feature flags.
- You must have the namespace ID for which you want to enable reduced cost factors.
To enable a namespace to use a reduced cost factor:
1. [Enable the feature flag](../feature_flags/_index.md#how-to-enable-and-disable-features-behind-flags) `ci_minimal_cost_factor_for_gitlab_namespaces` for the namespace ID you want to include.
This feature is recommended for use on GitLab.com only. Community contributors should use
community forks for contributions to avoid accumulating minutes when running pipelines
that are not in a merge request targeting a GitLab project.
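As an illustration, the flag can be enabled for a single namespace from a Rails console on the instance (a sketch; `42` is a placeholder namespace ID):

```ruby
# Run in a Rails console (`sudo gitlab-rails console`) on the instance.
# Enables the reduced cost factor for the namespace with ID 42 (placeholder).
Feature.enable(:ci_minimal_cost_factor_for_gitlab_namespaces, Namespace.find(42))
```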
---
title: Install the GitLab agent server for Kubernetes (KAS)
description: Manage the GitLab agent for Kubernetes.
stage: Deploy
group: Environments
info: To determine the technical writer assigned to the Stage/Group associated with
  this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
url: https://docs.gitlab.com/administration/kas
source: https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/administration/kas.md
date_extracted: 2025-08-13
filename: kas.md
breadcrumbs:
- doc
- administration
- clusters
---
{{< details >}}
- Tier: Free, Premium, Ultimate
- Offering: GitLab Self-Managed
{{< /details >}}
The agent server is a component installed together with GitLab. It is required to
manage the [GitLab agent for Kubernetes](https://gitlab.com/gitlab-org/cluster-integration/gitlab-agent).
The KAS acronym refers to the former name, `Kubernetes agent server`.
The agent server for Kubernetes is installed and available on GitLab.com at `wss://kas.gitlab.com`.
If you use GitLab Self-Managed, by default the agent server is installed and available.
## Installation options
As a GitLab administrator, you can control the agent server installation:
- For [Linux package installations](#for-linux-package-installations).
- For [GitLab Helm chart installations](#for-gitlab-helm-chart).
### For Linux package installations
The agent server for Linux package installations can be enabled on a single node, or on multiple nodes at once.
By default, the agent server is enabled and available at `ws://gitlab.example.com/-/kubernetes-agent/`.
#### Disable on a single node
To disable the agent server on a single node:
1. Edit `/etc/gitlab/gitlab.rb`:
```ruby
gitlab_kas['enable'] = false
```
1. [Reconfigure GitLab](../restart_gitlab.md#reconfigure-a-linux-package-installation).
#### Turn on KAS on multiple nodes
KAS instances communicate with each other by registering their private addresses in Redis at a well-known location.
Each KAS must be configured to present its private address details so that other instances can reach it.
To turn on KAS on multiple nodes:
1. Add the [common configuration](#common-configuration).
1. Add the configuration from one of the following options:
- [Option 1 - explicit manual configuration](#option-1---explicit-manual-configuration)
- [Option 2 - automatic CIDR-based configuration](#option-2---automatic-cidr-based-configuration)
- [Option 3 - automatic configuration based on listener configuration](#option-3---automatic-configuration-based-on-listener-configuration)
1. [Reconfigure GitLab](../restart_gitlab.md#reconfigure-a-linux-package-installation).
1. (Optional) If you use a multi-server environment with separate GitLab Rails and Sidekiq nodes, enable KAS on the Sidekiq nodes.
##### Common configuration
For each KAS node, edit the file at `/etc/gitlab/gitlab.rb` and add the following configuration:
```ruby
gitlab_kas_external_url 'wss://kas.gitlab.example.com/'
gitlab_kas['api_secret_key'] = '<32_bytes_long_base64_encoded_value>'
gitlab_kas['private_api_secret_key'] = '<32_bytes_long_base64_encoded_value>'
# private_api_listen_address examples, pick one:
gitlab_kas['private_api_listen_address'] = 'A.B.C.D:8155' # Listen on a particular IPv4. Each node must use its own unique IP.
# gitlab_kas['private_api_listen_address'] = '[A:B:C::D]:8155' # Listen on a particular IPv6. Each node must use its own unique IP.
# gitlab_kas['private_api_listen_address'] = 'kas-N.gitlab.example.com:8155' # Listen on all IPv4 and IPv6 interfaces that the DNS name resolves to. Each node must use its own unique domain.
# gitlab_kas['private_api_listen_address'] = ':8155' # Listen on all IPv4 and IPv6 interfaces.
# gitlab_kas['private_api_listen_address'] = '0.0.0.0:8155' # Listen on all IPv4 interfaces.
# gitlab_kas['private_api_listen_address'] = '[::]:8155' # Listen on all IPv6 interfaces.
gitlab_kas['env'] = {
# 'OWN_PRIVATE_API_HOST' => '<server-name-from-cert>' # Add if you want to use TLS for KAS->KAS communication. This name is used to verify the TLS certificate host name instead of the host in the URL of the destination KAS.
'SSL_CERT_DIR' => "/opt/gitlab/embedded/ssl/certs/",
}
gitlab_rails['gitlab_kas_external_url'] = 'wss://gitlab.example.com/-/kubernetes-agent/'
gitlab_rails['gitlab_kas_internal_url'] = 'grpc://kas.internal.gitlab.example.com'
gitlab_rails['gitlab_kas_external_k8s_proxy_url'] = 'https://gitlab.example.com/-/kubernetes-agent/k8s-proxy/'
```
**Do not** set `private_api_listen_address` to listen on an internal address, such as:
- `localhost`
- Loopback IP addresses, like `127.0.0.1` or `::1`
- A UNIX socket
Other KAS nodes cannot reach these addresses.
For single-node configurations, you can set `private_api_listen_address` to listen on an internal address.
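The `api_secret_key` and `private_api_secret_key` values above must each be 32 random bytes, Base64-encoded. One way to generate such a value (a sketch using Ruby's standard library; `openssl rand -base64 32` is an equivalent shell alternative):

```ruby
require 'securerandom'
require 'base64'

# Generate a Base64-encoded 32-byte secret, suitable for
# gitlab_kas['api_secret_key'] and gitlab_kas['private_api_secret_key'].
secret = SecureRandom.base64(32)
puts secret

# The decoded value is exactly 32 bytes long:
puts Base64.decode64(secret).bytesize # prints 32
```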
##### Option 1 - explicit manual configuration
For each KAS node, edit the file at `/etc/gitlab/gitlab.rb` and set the `OWN_PRIVATE_API_URL` environment variable explicitly:
```ruby
gitlab_kas['env'] = {
# OWN_PRIVATE_API_URL examples, pick one. Each node must use its own unique IP or DNS name.
# Use grpcs:// when using TLS on the private API endpoint.
'OWN_PRIVATE_API_URL' => 'grpc://A.B.C.D:8155' # IPv4
# 'OWN_PRIVATE_API_URL' => 'grpcs://A.B.C.D:8155' # IPv4 + TLS
# 'OWN_PRIVATE_API_URL' => 'grpc://[A:B:C::D]:8155' # IPv6
# 'OWN_PRIVATE_API_URL' => 'grpc://kas-N-private-api.gitlab.example.com:8155' # DNS name
}
```
##### Option 2 - automatic CIDR-based configuration
{{< history >}}
- [Introduced](https://gitlab.com/gitlab-org/cluster-integration/gitlab-agent/-/issues/464) in GitLab 16.5.0.
- [Added](https://gitlab.com/gitlab-org/cluster-integration/gitlab-agent/-/merge_requests/2183) multiple CIDR support to `OWN_PRIVATE_API_CIDR` in GitLab 17.8.1.
{{< /history >}}
You might not be able to set an exact IP address or hostname in the `OWN_PRIVATE_API_URL` variable if, for example,
the KAS host is assigned an IP address and a hostname dynamically.
If you cannot set an exact IP address or hostname, you can configure `OWN_PRIVATE_API_CIDR` to set up KAS to dynamically construct
`OWN_PRIVATE_API_URL` based on one or more [CIDRs](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing).
This approach allows each KAS node to use a static configuration that works as long as
the CIDR doesn't change.
For each KAS node, edit the file at `/etc/gitlab/gitlab.rb` to dynamically construct the
`OWN_PRIVATE_API_URL` URL:
1. Comment out `OWN_PRIVATE_API_URL` in your common configuration to turn off this variable.
1. Configure `OWN_PRIVATE_API_CIDR` to specify what networks the KAS nodes listen on.
When you start KAS, it determines which private IP address to use by selecting the host address that matches the specified CIDR.
1. Configure `OWN_PRIVATE_API_PORT` to use a different port. By default, KAS uses the port from the `private_api_listen_address` parameter.
1. If you use TLS on the private API endpoint, configure `OWN_PRIVATE_API_SCHEME=grpcs`. By default, KAS uses the `grpc` scheme.
```ruby
gitlab_kas['env'] = {
# 'OWN_PRIVATE_API_CIDR' => '10.0.0.0/8', # IPv4 example
# 'OWN_PRIVATE_API_CIDR' => '2001:db8:8a2e:370::7334/64', # IPv6 example
  # 'OWN_PRIVATE_API_CIDR' => '10.0.0.0/8,2001:db8:8a2e:370::7334/64', # multiple CIDRs example
# 'OWN_PRIVATE_API_PORT' => '8155',
# 'OWN_PRIVATE_API_SCHEME' => 'grpc',
}
```
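The selection step above can be sketched with Ruby's `IPAddr` (a conceptual illustration, not KAS source; the host addresses are hypothetical):

```ruby
require 'ipaddr'

# The node's available addresses (hypothetical) and the configured CIDR.
host_ips = ['127.0.0.1', '192.168.1.7', '10.1.2.3']
cidr     = IPAddr.new('10.0.0.0/8')

# KAS-style selection: pick the host address contained in the CIDR;
# that address is then used to construct OWN_PRIVATE_API_URL.
selected = host_ips.find { |ip| cidr.include?(IPAddr.new(ip)) }
puts "grpc://#{selected}:8155" # prints grpc://10.1.2.3:8155
```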
##### Option 3 - automatic configuration based on listener configuration
{{< history >}}
- [Introduced](https://gitlab.com/gitlab-org/cluster-integration/gitlab-agent/-/issues/464) in GitLab 16.5.0.
- [Updated](https://gitlab.com/gitlab-org/cluster-integration/gitlab-agent/-/issues/510) KAS to listen on and publish all non-loopback IP addresses and filter out IPv4 and IPv6 addresses based on the value of `private_api_listen_network`.
{{< /history >}}
A KAS node can determine what IP addresses are available based on the `private_api_listen_network` and
`private_api_listen_address` settings:
- If `private_api_listen_address` is set to a fixed IP address and port number (for example, `ip:port`), it uses this IP address.
- If `private_api_listen_address` has no IP address (for example, `:8155`), or has an unspecified IP address
(for example, `[::]:8155` or `0.0.0.0:8155`), KAS assigns all non-loopback and non-link-local IP addresses to the node.
IPv4 and IPv6 addresses are filtered based on the value of `private_api_listen_network`.
- If `private_api_listen_address` is a `hostname:PORT` (for example, `kas-N-private-api.gitlab.example.com:8155`), KAS
resolves the DNS name and assigns all IP addresses to the node.
In this mode, KAS listens only on the first IP address (this behavior is defined by the [Go standard library](https://pkg.go.dev/net#Listen)).
IPv4 and IPv6 addresses are filtered based on the value of `private_api_listen_network`.
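The rules above can be sketched as follows (a conceptual illustration, not KAS source):

```ruby
# Classify a private_api_listen_address value (sketch, not KAS source).
def address_mode(addr)
  host = addr.rpartition(':').first.delete('[]')
  if host.empty? || host == '0.0.0.0' || host == '::'
    :all_interfaces # publish all non-loopback, non-link-local addresses
  elsif host.match?(/\A[\d.]+\z/) || host.include?(':')
    :fixed_ip       # use exactly this IP address
  else
    :dns_name       # resolve the name and publish all resulting addresses
  end
end

puts address_mode(':8155')                              # prints all_interfaces
puts address_mode('0.0.0.0:8155')                       # prints all_interfaces
puts address_mode('10.0.0.5:8155')                      # prints fixed_ip
puts address_mode('kas-1-private-api.example.com:8155') # prints dns_name
```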
Before exposing the private API address of a KAS on all IP addresses, make sure this action does not conflict with your organization's security policy.
The private API endpoint requires a valid authentication token for all requests.
For each KAS node, edit the file at `/etc/gitlab/gitlab.rb`:
Example 1. Listen on all IPv4 and IPv6 interfaces:
```ruby
# gitlab_kas['private_api_listen_network'] = 'tcp' # this is the default value, no need to set it.
gitlab_kas['private_api_listen_address'] = ':8155' # Listen on all IPv4 and IPv6 interfaces
```
Example 2. Listen on all IPv4 interfaces:
```ruby
gitlab_kas['private_api_listen_network'] = 'tcp4'
gitlab_kas['private_api_listen_address'] = ':8155'
```
Example 3. Listen on all IPv6 interfaces:
```ruby
gitlab_kas['private_api_listen_network'] = 'tcp6'
gitlab_kas['private_api_listen_address'] = ':8155'
```
You can use environment variables to override the scheme and port that
construct the `OWN_PRIVATE_API_URL`:
```ruby
gitlab_kas['env'] = {
# 'OWN_PRIVATE_API_PORT' => '8155',
# 'OWN_PRIVATE_API_SCHEME' => 'grpc',
}
```
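Combining the pieces, KAS effectively assembles the URL from the scheme, the selected address, and the port. A sketch (the IP address is hypothetical; defaults match the documented behavior):

```ruby
# Sketch of how OWN_PRIVATE_API_URL is assembled from the resolved parts.
scheme = ENV.fetch('OWN_PRIVATE_API_SCHEME', 'grpc') # default scheme is grpc
ip     = '10.0.0.5' # hypothetical address selected from the listener config
port   = ENV.fetch('OWN_PRIVATE_API_PORT', '8155')   # default: listener port
puts "#{scheme}://#{ip}:#{port}"
```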
##### Agent server node settings
| Setting | Description |
|-----------------------------------------------------|-------------|
| `gitlab_kas['private_api_listen_network']` | The network family KAS listens on. Defaults to `tcp` for both IPv4 and IPv6 networks. Set to `tcp4` for IPv4 or `tcp6` for IPv6. |
| `gitlab_kas['private_api_listen_address']` | The address the KAS listens on. Set to `0.0.0.0:8155` or to an IP and port reachable by other nodes in the cluster. |
| `gitlab_kas['api_secret_key']` | The shared secret used for authentication between KAS and GitLab. The value must be Base64-encoded and exactly 32 bytes long. |
| `gitlab_kas['private_api_secret_key']` | The shared secret used for authentication between different KAS instances. The value must be Base64-encoded and exactly 32 bytes long. |
| `OWN_PRIVATE_API_SCHEME` | Optional value used to specify what scheme to use when constructing `OWN_PRIVATE_API_URL`. Can be `grpc` or `grpcs`. |
| `OWN_PRIVATE_API_URL` | The environment variable used by KAS for service discovery. Set to the hostname or IP address of the node you're configuring. The node must be reachable by other nodes in the cluster. |
| `OWN_PRIVATE_API_HOST` | Optional value used to verify the TLS certificate hostname. <sup>1</sup> A client compares this value to the hostname in the server's TLS certificate file. |
| `OWN_PRIVATE_API_PORT` | Optional value used to specify what port to use when constructing `OWN_PRIVATE_API_URL`. |
| `OWN_PRIVATE_API_CIDR` | Optional value used to specify which IP addresses from the available networks to use when constructing `OWN_PRIVATE_API_URL`. |
| `gitlab_kas['client_timeout_seconds']` | The timeout for the client to connect to the KAS. |
| `gitlab_kas_external_url` | The user-facing URL for the in-cluster `agentk`. Can be a fully qualified domain or subdomain, <sup>2</sup> or a GitLab external URL. <sup>3</sup> If blank, defaults to a GitLab external URL. |
| `gitlab_rails['gitlab_kas_external_url']` | The user-facing URL for the in-cluster `agentk`. If blank, defaults to the `gitlab_kas_external_url`. |
| `gitlab_rails['gitlab_kas_external_k8s_proxy_url']` | The user-facing URL for Kubernetes API proxying. If blank, defaults to a URL based on `gitlab_kas_external_url`. |
| `gitlab_rails['gitlab_kas_internal_url']` | The internal URL the GitLab backend uses to communicate with KAS. |
**Footnotes**:
1. TLS for outbound connections is enabled when `OWN_PRIVATE_API_URL` or `OWN_PRIVATE_API_SCHEME` starts with `grpcs`.
1. For example, `wss://kas.gitlab.example.com/`.
1. For example, `wss://gitlab.example.com/-/kubernetes-agent/`.
### For GitLab Helm chart
See [how to use the GitLab-KAS chart](https://docs.gitlab.com/charts/charts/gitlab/kas/).
## Kubernetes API proxy cookie
{{< history >}}
- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/104504) in GitLab 15.10 [with feature flags](../feature_flags/_index.md) named `kas_user_access` and `kas_user_access_project`. Disabled by default.
- Feature flags `kas_user_access` and `kas_user_access_project` [enabled](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/123479) in GitLab 16.1.
- Feature flags `kas_user_access` and `kas_user_access_project` [removed](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/125835) in GitLab 16.2.
{{< /history >}}
KAS proxies Kubernetes API requests to the GitLab agent for Kubernetes with either:
- A [CI/CD job](https://gitlab.com/gitlab-org/cluster-integration/gitlab-agent/-/blob/master/doc/kubernetes_ci_access.md).
- [GitLab user credentials](https://gitlab.com/gitlab-org/cluster-integration/gitlab-agent/-/blob/master/doc/kubernetes_user_access.md).
To authenticate with user credentials, Rails sets a cookie for the GitLab frontend.
This cookie is called `_gitlab_kas` and it contains an encrypted
session ID, like the [`_gitlab_session` cookie](../../user/profile/_index.md#cookies-used-for-sign-in).
The `_gitlab_kas` cookie must be sent to the KAS proxy endpoint with every request
to authenticate and authorize the user.
## Enable receptive agents
{{< details >}}
- Tier: Ultimate
- Offering: GitLab Self-Managed
{{< /details >}}
{{< history >}}
- [Introduced](https://gitlab.com/groups/gitlab-org/-/epics/12180) in GitLab 17.4.
{{< /history >}}
[Receptive agents](../../user/clusters/agent/_index.md#receptive-agents) allow GitLab to integrate with Kubernetes clusters
that cannot establish a network connection to the GitLab instance, but can be connected to by GitLab.
To enable receptive agents:
1. On the left sidebar, at the bottom, select **Admin**.
1. Select **Settings > General**.
1. Expand **GitLab Agent for Kubernetes**.
1. Turn on the **Enable receptive mode** toggle.
## Configure Kubernetes API proxy response header allowlist
{{< history >}}
- [Introduced](https://gitlab.com/gitlab-org/cluster-integration/gitlab-agent/-/issues/642) in GitLab 18.3 [with a flag](../../administration/feature_flags/_index.md) named `kas_k8s_api_proxy_response_header_allowlist`. Disabled by default.
{{< /history >}}
{{< alert type="flag" >}}
The availability of this feature is controlled by a feature flag.
For more information, see the history.
This feature is available for testing, but not ready for production use.
{{< /alert >}}
The Kubernetes API proxy in KAS uses an allowlist for the response headers.
Secure and well-known Kubernetes and HTTP headers are allowed by default.
For a list of allowed response headers, see the [response header allowlist](https://gitlab.com/gitlab-org/cluster-integration/gitlab-agent/-/blob/master/internal/module/kubernetes_api/server/proxy_headers.go).
If you require response headers that are not in the default allowlist, you can add them in the KAS configuration.
To add extra allowed response headers:
```yaml
agent:
kubernetes_api:
extra_allowed_response_headers:
- 'X-My-Custom-Header-To-Allow'
```
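Conceptually, the proxy forwards only response headers whose names appear in the combined allowlist. A sketch (not KAS source; the header names other than the custom one are hypothetical examples):

```ruby
# Sketch of allowlist filtering for proxied response headers.
default_allowlist = ['content-type', 'content-length'] # hypothetical defaults
extra_allowed     = ['x-my-custom-header-to-allow']    # from the KAS config
allowed           = default_allowlist + extra_allowed

response_headers = {
  'Content-Type'                => 'application/json',
  'X-My-Custom-Header-To-Allow' => 'ok',
  'X-Internal-Detail'           => 'blocked',
}

# Header-name matching is case-insensitive.
forwarded = response_headers.select { |name, _| allowed.include?(name.downcase) }
puts forwarded.keys.join(', ') # prints Content-Type, X-My-Custom-Header-To-Allow
```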
Support for the addition of more response headers is tracked in
[issue 550614](https://gitlab.com/gitlab-org/gitlab/-/issues/550614).
## Troubleshooting
If you have issues while using the agent server for Kubernetes, view the
service logs by running the following command:
```shell
kubectl logs -f -l=app=kas -n <YOUR-GITLAB-NAMESPACE>
```
In Linux package installations, find the logs in `/var/log/gitlab/gitlab-kas/`.
You can also [troubleshoot issues with individual agents](../../user/clusters/agent/troubleshooting.md).
### Configuration file not found
If you get the following error message:
```plaintext
time="2020-10-29T04:44:14Z" level=warning msg="Config: failed to fetch" agent_id=2 error="configuration file not found: \".gitlab/agents/test-agent/config.yaml\
```
The path is incorrect for either:
- The repository where the agent was registered.
- The agent configuration file.
To fix this issue, ensure that the paths are correct.
### Error: `dial tcp <GITLAB_INTERNAL_IP>:443: connect: connection refused`
If you are running GitLab Self-Managed and:
- The instance isn't running behind an SSL-terminating proxy.
- The instance doesn't have HTTPS configured on the GitLab instance itself.
- The instance's hostname resolves locally to its internal IP address.
When the agent server tries to connect to the GitLab API, the following error might occur:
```json
{"level":"error","time":"2021-08-16T14:56:47.289Z","msg":"GetAgentInfo()","correlation_id":"01FD7QE35RXXXX8R47WZFBAXTN","grpc_service":"gitlab.agent.reverse_tunnel.rpc.ReverseTunnel","grpc_method":"Connect","error":"Get \"https://gitlab.example.com/api/v4/internal/kubernetes/agent_info\": dial tcp 172.17.0.4:443: connect: connection refused"}
```
To fix this issue for Linux package installations,
set the following parameter in `/etc/gitlab/gitlab.rb`. Replace `gitlab.example.com` with your GitLab instance's hostname:
```ruby
gitlab_kas['gitlab_address'] = 'http://gitlab.example.com'
```
### Error: `x509: certificate signed by unknown authority`
If you encounter this error when trying to reach the GitLab URL, it means KAS doesn't trust the GitLab certificate.
You might see a similar error in the KAS logs of your GitLab application server:
```json
{"level":"error","time":"2023-03-07T20:19:48.151Z","msg":"AgentInfo()","grpc_service":"gitlab.agent.agent_configuration.rpc.AgentConfiguration","grpc_method":"GetConfiguration","error":"Get \"https://gitlab.example.com/api/v4/internal/kubernetes/agent_info\": x509: certificate signed by unknown authority"}
```
To fix this error, install the public certificate of your internal CA in the `/etc/gitlab/trusted-certs` directory.
Alternatively, you can configure KAS to read the certificate from a custom directory. To do this, add the following configuration to the file at `/etc/gitlab/gitlab.rb`:
```ruby
gitlab_kas['env'] = {
'SSL_CERT_DIR' => "/opt/gitlab/embedded/ssl/certs/"
}
```
To apply the changes:
1. Reconfigure GitLab:
```shell
sudo gitlab-ctl reconfigure
```
1. Restart the agent server:
```shell
gitlab-ctl restart gitlab-kas
```
### Error: `GRPC::DeadlineExceeded in Clusters::Agents::NotifyGitPushWorker`
This error likely occurs when the client does not receive a response within the default timeout period (5 seconds). To resolve the issue, you can increase the client timeout by modifying the `/etc/gitlab/gitlab.rb` configuration file.
#### Steps to resolve
1. Add or update the following configuration to increase the timeout value:
```ruby
gitlab_kas['client_timeout_seconds'] = "10"
```
1. Apply the changes by reconfiguring GitLab:
```shell
gitlab-ctl reconfigure
```
#### Note
You can adjust the timeout value to suit your specific needs. Testing is recommended to ensure the issue is resolved without impacting system performance.
### Error: `Blocked Kubernetes API proxy response header`
If HTTP response headers are lost when sent from the Kubernetes cluster to the user through the Kubernetes API proxy, check the KAS logs or Sentry instance for the following error:
```plaintext
Blocked Kubernetes API proxy response header. Please configure extra allowed headers for your instance in the KAS config with `extra_allowed_response_headers` and have a look at the troubleshooting guide at https://docs.gitlab.com/administration/clusters/kas/#troubleshooting.
```
This error means that the Kubernetes API proxy blocked response headers because
they are not defined in the response header allowlist.
For more information on adding response headers,
see [configure the response header allowlist](#configure-kubernetes-api-proxy-response-header-allowlist).
Support for the addition of more response headers is tracked in
[issue 550614](https://gitlab.com/gitlab-org/gitlab/-/issues/550614).
|
---
stage: Deploy
group: Environments
info: To determine the technical writer assigned to the Stage/Group associated with
this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Install the GitLab agent server for Kubernetes (KAS)
description: Manage the GitLab agent for Kubernetes.
breadcrumbs:
- doc
- administration
- clusters
---
{{< details >}}
- Tier: Free, Premium, Ultimate
- Offering: GitLab Self-Managed
{{< /details >}}
The agent server is a component installed together with GitLab. It is required to
manage the [GitLab agent for Kubernetes](https://gitlab.com/gitlab-org/cluster-integration/gitlab-agent).
The KAS acronym refers to the former name, `Kubernetes agent server`.
The agent server for Kubernetes is installed and available on GitLab.com at `wss://kas.gitlab.com`.
If you use GitLab Self-Managed, by default the agent server is installed and available.
## Installation options
As a GitLab administrator, you can control the agent server installation:
- For [Linux package installations](#for-linux-package-installations).
- For [GitLab Helm chart installations](#for-gitlab-helm-chart).
### For Linux package installations
The agent server for Linux package installations can be enabled on a single node, or on multiple nodes at once.
By default, the agent server is enabled and available at `ws://gitlab.example.com/-/kubernetes-agent/`.
#### Disable on a single node
To disable the agent server on a single node:
1. Edit `/etc/gitlab/gitlab.rb`:
```ruby
gitlab_kas['enable'] = false
```
1. [Reconfigure GitLab](../restart_gitlab.md#reconfigure-a-linux-package-installation).
#### Turn on KAS on multiple nodes
KAS instances communicate with each other by registering their private addresses in Redis at a well-known location.
Each KAS must be configured to present its private address details so that other instances can reach it.
To turn on KAS on multiple nodes:
1. Add the [common configuration](#common-configuration).
1. Add the configuration from one of the following options:
- [Option 1 - explicit manual configuration](#option-1---explicit-manual-configuration)
- [Option 2 - automatic CIDR-based configuration](#option-2---automatic-cidr-based-configuration)
- [Option 3 - automatic configuration based on listener configuration](#option-3---automatic-configuration-based-on-listener-configuration)
1. [Reconfigure GitLab](../restart_gitlab.md#reconfigure-a-linux-package-installation).
1. (Optional) If you use a multi-server environment with separate GitLab Rails and Sidekiq nodes, enable KAS on the Sidekiq nodes.
##### Common configuration
For each KAS node, edit the file at `/etc/gitlab/gitlab.rb` and add the following configuration:
```ruby
gitlab_kas_external_url 'wss://kas.gitlab.example.com/'
gitlab_kas['api_secret_key'] = '<32_bytes_long_base64_encoded_value>'
gitlab_kas['private_api_secret_key'] = '<32_bytes_long_base64_encoded_value>'
# private_api_listen_address examples, pick one:
gitlab_kas['private_api_listen_address'] = 'A.B.C.D:8155' # Listen on a particular IPv4. Each node must use its own unique IP.
# gitlab_kas['private_api_listen_address'] = '[A:B:C::D]:8155' # Listen on a particular IPv6. Each node must use its own unique IP.
# gitlab_kas['private_api_listen_address'] = 'kas-N.gitlab.example.com:8155' # Listen on all IPv4 and IPv6 interfaces that the DNS name resolves to. Each node must use its own unique domain.
# gitlab_kas['private_api_listen_address'] = ':8155' # Listen on all IPv4 and IPv6 interfaces.
# gitlab_kas['private_api_listen_address'] = '0.0.0.0:8155' # Listen on all IPv4 interfaces.
# gitlab_kas['private_api_listen_address'] = '[::]:8155' # Listen on all IPv6 interfaces.
gitlab_kas['env'] = {
# 'OWN_PRIVATE_API_HOST' => '<server-name-from-cert>' # Add if you want to use TLS for KAS->KAS communication. This name is used to verify the TLS certificate host name instead of the host in the URL of the destination KAS.
'SSL_CERT_DIR' => "/opt/gitlab/embedded/ssl/certs/",
}
gitlab_rails['gitlab_kas_external_url'] = 'wss://gitlab.example.com/-/kubernetes-agent/'
gitlab_rails['gitlab_kas_internal_url'] = 'grpc://kas.internal.gitlab.example.com'
gitlab_rails['gitlab_kas_external_k8s_proxy_url'] = 'https://gitlab.example.com/-/kubernetes-agent/k8s-proxy/'
```
**Do not** set `private_api_listen_address` to listen on an internal address, such as:
- `localhost`
- Loopback IP addresses, like `127.0.0.1` or `::1`
- A UNIX socket
Other KAS nodes cannot reach these addresses.
For single-node configurations, you can set `private_api_listen_address` to listen on an internal address.
##### Option 1 - explicit manual configuration
For each KAS node, edit the file at `/etc/gitlab/gitlab.rb` and set the `OWN_PRIVATE_API_URL` environment variable explicitly:
```ruby
gitlab_kas['env'] = {
# OWN_PRIVATE_API_URL examples, pick one. Each node must use its own unique IP or DNS name.
# Use grpcs:// when using TLS on the private API endpoint.
'OWN_PRIVATE_API_URL' => 'grpc://A.B.C.D:8155' # IPv4
# 'OWN_PRIVATE_API_URL' => 'grpcs://A.B.C.D:8155' # IPv4 + TLS
# 'OWN_PRIVATE_API_URL' => 'grpc://[A:B:C::D]:8155' # IPv6
# 'OWN_PRIVATE_API_URL' => 'grpc://kas-N-private-api.gitlab.example.com:8155' # DNS name
}
```
##### Option 2 - automatic CIDR-based configuration
{{< history >}}
- [Introduced](https://gitlab.com/gitlab-org/cluster-integration/gitlab-agent/-/issues/464) in GitLab 16.5.0.
- [Added](https://gitlab.com/gitlab-org/cluster-integration/gitlab-agent/-/merge_requests/2183) multiple CIDR support to `OWN_PRIVATE_API_CIDR` in GitLab 17.8.1.
{{< /history >}}
You might not be able to set an exact IP address or hostname in the `OWN_PRIVATE_API_URL` variable if, for example,
the KAS host is assigned an IP address and a hostname dynamically.
If you cannot set an exact IP address or hostname, you can configure `OWN_PRIVATE_API_CIDR` to set up KAS to dynamically construct
`OWN_PRIVATE_API_URL` based on one or more [CIDRs](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing):
This approach allows each KAS node to use a static configuration that works as long as
the CIDR doesn't change.
For each KAS node, edit the file at `/etc/gitlab/gitlab.rb` to dynamically construct the
`OWN_PRIVATE_API_URL` URL:
1. Comment out `OWN_PRIVATE_API_URL` in your common configuration to turn off this variable.
1. Configure `OWN_PRIVATE_API_CIDR` to specify what networks the KAS nodes listen on.
When you start KAS, it determines which private IP address to use by selecting the host address that matches the specified CIDR.
1. Configure `OWN_PRIVATE_API_PORT` to use a different port. By default, KAS uses the port from the `private_api_listen_address` parameter.
1. If you use TLS on the private API endpoint, configure `OWN_PRIVATE_API_SCHEME=grpcs`. By default, KAS uses the `grpc` scheme.
```ruby
gitlab_kas['env'] = {
# 'OWN_PRIVATE_API_CIDR' => '10.0.0.0/8', # IPv4 example
# 'OWN_PRIVATE_API_CIDR' => '2001:db8:8a2e:370::7334/64', # IPv6 example
# 'OWN_PRIVATE_API_CIDR' => '10.0.0.0/8,2001:db8:8a2e:370::7334/64', # multiple CIRDs example
# 'OWN_PRIVATE_API_PORT' => '8155',
# 'OWN_PRIVATE_API_SCHEME' => 'grpc',
}
```
##### Option 3 - automatic configuration based on listener configuration
{{< history >}}
- [Introduced](https://gitlab.com/gitlab-org/cluster-integration/gitlab-agent/-/issues/464) in GitLab 16.5.0.
- [Updated](https://gitlab.com/gitlab-org/cluster-integration/gitlab-agent/-/issues/510) KAS to listen on and publish all non-loopback IP addresses and filter out IPv4 and IPv6 addresses based on the value of `private_api_listen_network`.
{{< /history >}}
A KAS node can determine what IP addresses are available based on the `private_api_listen_network` and
`private_api_listen_address` settings:
- If `private_api_listen_address` is set to a fixed IP address and port number (for example, `ip:port`), it uses this IP address.
- If `private_api_listen_address` has no IP address (for example, `:8155`), or has an unspecified IP address
(for example, `[::]:8155` or `0.0.0.0:8155`), KAS assigns all non-loopback and non-link-local IP addresses to the node.
IPv4 and IPv6 addresses are filtered based on the value of `private_api_listen_network`.
- If `private_api_listen_address` is a `hostname:port` (for example, `kas-N-private-api.gitlab.example.com:8155`), KAS
resolves the DNS name and assigns all IP addresses to the node.
In this mode, KAS listens only on the first IP address (this behavior is defined by the [Go standard library](https://pkg.go.dev/net#Listen)).
IPv4 and IPv6 addresses are filtered based on the value of `private_api_listen_network`.
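The classification rules above can be sketched in Python. This is a simplified illustration of how a listen address might be interpreted, not the actual KAS code; the function names are invented for the example.

```python
import ipaddress

def is_unspecified(listen_address):
    """True when the host part is empty or an unspecified address, meaning
    the node would publish all non-loopback, non-link-local IP addresses."""
    host, _, _port = listen_address.rpartition(":")
    host = host.strip("[]")  # allow bracketed IPv6 literals like [::]:8155
    if host == "":
        return True
    try:
        return ipaddress.ip_address(host).is_unspecified
    except ValueError:
        return False  # a hostname: KAS resolves it with DNS instead

def publishable(addresses):
    """Filter out loopback and link-local addresses."""
    return [a for a in addresses
            if not ipaddress.ip_address(a).is_loopback
            and not ipaddress.ip_address(a).is_link_local]
```

For example, `is_unspecified(":8155")` and `is_unspecified("[::]:8155")` are true, while a fixed `10.0.0.5:8155` or a DNS name is not.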
Before exposing the private API address of a KAS on all IP addresses, make sure this action does not conflict with your organization's security policy.
The private API endpoint requires a valid authentication token for all requests.
For each KAS node, edit the file at `/etc/gitlab/gitlab.rb`:
Example 1. Listen on all IPv4 and IPv6 interfaces:
```ruby
# gitlab_kas['private_api_listen_network'] = 'tcp' # this is the default value, no need to set it.
gitlab_kas['private_api_listen_address'] = ':8155' # Listen on all IPv4 and IPv6 interfaces
```
Example 2. Listen on all IPv4 interfaces:
```ruby
gitlab_kas['private_api_listen_network'] = 'tcp4'
gitlab_kas['private_api_listen_address'] = ':8155'
```
Example 3. Listen on all IPv6 interfaces:
```ruby
gitlab_kas['private_api_listen_network'] = 'tcp6'
gitlab_kas['private_api_listen_address'] = ':8155'
```
You can use environment variables to override the scheme and port that
construct the `OWN_PRIVATE_API_URL`:
```ruby
gitlab_kas['env'] = {
# 'OWN_PRIVATE_API_PORT' => '8155',
# 'OWN_PRIVATE_API_SCHEME' => 'grpc',
}
```
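The override behavior can be illustrated with a short sketch of how the private API URL might be assembled from a discovered IP address, the listener port, and the optional environment overrides. This is an assumption-level illustration, not the KAS implementation.

```python
def own_private_api_url(ip, listen_port, scheme=None, port=None):
    """Sketch: OWN_PRIVATE_API_SCHEME and OWN_PRIVATE_API_PORT override the
    defaults (grpc scheme, private_api_listen_address port)."""
    scheme = scheme or "grpc"              # default scheme
    port = port or listen_port             # default: the listener's port
    host = f"[{ip}]" if ":" in ip else ip  # bracket IPv6 literals
    return f"{scheme}://{host}:{port}"

print(own_private_api_url("10.0.0.5", 8155))                             # grpc://10.0.0.5:8155
print(own_private_api_url("10.0.0.5", 8155, scheme="grpcs", port=9000))  # grpcs://10.0.0.5:9000
```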
##### Agent server node settings
| Setting | Description |
|-----------------------------------------------------|-------------|
| `gitlab_kas['private_api_listen_network']` | The network family KAS listens on. Defaults to `tcp` for both IPv4 and IPv6 networks. Set to `tcp4` for IPv4 or `tcp6` for IPv6. |
| `gitlab_kas['private_api_listen_address']` | The address the KAS listens on. Set to `0.0.0.0:8155` or to an IP and port reachable by other nodes in the cluster. |
| `gitlab_kas['api_secret_key']` | The shared secret used for authentication between KAS and GitLab. The value must be Base64-encoded and exactly 32 bytes long. |
| `gitlab_kas['private_api_secret_key']` | The shared secret used for authentication between different KAS instances. The value must be Base64-encoded and exactly 32 bytes long. |
| `OWN_PRIVATE_API_SCHEME` | Optional value used to specify what scheme to use when constructing `OWN_PRIVATE_API_URL`. Can be `grpc` or `grpcs`. |
| `OWN_PRIVATE_API_URL` | The environment variable used by KAS for service discovery. Set to the hostname or IP address of the node you're configuring. The node must be reachable by other nodes in the cluster. |
| `OWN_PRIVATE_API_HOST` | Optional value used to verify the TLS certificate hostname. <sup>1</sup> A client compares this value to the hostname in the server's TLS certificate file. |
| `OWN_PRIVATE_API_PORT` | Optional value used to specify what port to use when constructing `OWN_PRIVATE_API_URL`. |
| `OWN_PRIVATE_API_CIDR` | Optional value used to specify which IP addresses from the available networks to use when constructing `OWN_PRIVATE_API_URL`. |
| `gitlab_kas['client_timeout_seconds']` | The timeout for the client to connect to the KAS. |
| `gitlab_kas_external_url` | The user-facing URL for the in-cluster `agentk`. Can be a fully qualified domain or subdomain, <sup>2</sup> or a GitLab external URL. <sup>3</sup> If blank, defaults to a GitLab external URL. |
| `gitlab_rails['gitlab_kas_external_url']` | The user-facing URL for the in-cluster `agentk`. If blank, defaults to the `gitlab_kas_external_url`. |
| `gitlab_rails['gitlab_kas_external_k8s_proxy_url']` | The user-facing URL for Kubernetes API proxying. If blank, defaults to a URL based on `gitlab_kas_external_url`. |
| `gitlab_rails['gitlab_kas_internal_url']` | The internal URL the GitLab backend uses to communicate with KAS. |
**Footnotes**:
1. TLS for outbound connections is enabled when `OWN_PRIVATE_API_URL` or `OWN_PRIVATE_API_SCHEME` starts with `grpcs`.
1. For example, `wss://kas.gitlab.example.com/`.
1. For example, `wss://gitlab.example.com/-/kubernetes-agent/`.
### For GitLab Helm Chart
See [how to use the GitLab-KAS chart](https://docs.gitlab.com/charts/charts/gitlab/kas/).
## Kubernetes API proxy cookie
{{< history >}}
- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/104504) in GitLab 15.10 [with feature flags](../feature_flags/_index.md) named `kas_user_access` and `kas_user_access_project`. Disabled by default.
- Feature flags `kas_user_access` and `kas_user_access_project` [enabled](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/123479) in GitLab 16.1.
- Feature flags `kas_user_access` and `kas_user_access_project` [removed](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/125835) in GitLab 16.2.
{{< /history >}}
KAS proxies Kubernetes API requests to the GitLab agent for Kubernetes with either:
- A [CI/CD job](https://gitlab.com/gitlab-org/cluster-integration/gitlab-agent/-/blob/master/doc/kubernetes_ci_access.md).
- [GitLab user credentials](https://gitlab.com/gitlab-org/cluster-integration/gitlab-agent/-/blob/master/doc/kubernetes_user_access.md).
To authenticate with user credentials, Rails sets a cookie for the GitLab frontend.
This cookie is called `_gitlab_kas` and it contains an encrypted
session ID, like the [`_gitlab_session` cookie](../../user/profile/_index.md#cookies-used-for-sign-in).
The `_gitlab_kas` cookie must be sent to the KAS proxy endpoint with every request
to authenticate and authorize the user.
## Enable receptive agents
{{< details >}}
- Tier: Ultimate
- Offering: GitLab Self-Managed
{{< /details >}}
{{< history >}}
- [Introduced](https://gitlab.com/groups/gitlab-org/-/epics/12180) in GitLab 17.4.
{{< /history >}}
[Receptive agents](../../user/clusters/agent/_index.md#receptive-agents) allow GitLab to integrate with Kubernetes clusters
that cannot establish a network connection to the GitLab instance, but can be connected to by GitLab.
To enable receptive agents:
1. On the left sidebar, at the bottom, select **Admin**.
1. Select **Settings > General**.
1. Expand **GitLab Agent for Kubernetes**.
1. Turn on the **Enable receptive mode** toggle.
## Configure Kubernetes API proxy response header allowlist
{{< history >}}
- [Introduced](https://gitlab.com/gitlab-org/cluster-integration/gitlab-agent/-/issues/642) in GitLab 18.3 [with a flag](../../administration/feature_flags/_index.md) named `kas_k8s_api_proxy_response_header_allowlist`. Disabled by default.
{{< /history >}}
{{< alert type="flag" >}}
The availability of this feature is controlled by a feature flag.
For more information, see the history.
This feature is available for testing, but not ready for production use.
{{< /alert >}}
The Kubernetes API proxy in KAS uses an allowlist for the response headers.
Secure and well-known Kubernetes and HTTP headers are allowed by default.
For a list of allowed response headers, see the [response header allowlist](https://gitlab.com/gitlab-org/cluster-integration/gitlab-agent/-/blob/master/internal/module/kubernetes_api/server/proxy_headers.go).
If you require response headers that are not in the default allowlist, you can add them in the KAS configuration.
To add extra allowed response headers:
```yaml
agent:
kubernetes_api:
extra_allowed_response_headers:
- 'X-My-Custom-Header-To-Allow'
```
Support for the addition of more response headers is tracked in
[issue 550614](https://gitlab.com/gitlab-org/gitlab/-/issues/550614).
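The allowlist behavior can be sketched as follows. This Python sketch only illustrates case-insensitive header filtering with an extra allowlist; the real implementation is the Go code linked above.

```python
def filter_response_headers(headers, allowlist, extra_allowed=()):
    """Keep only response headers whose names are allowlisted.
    Header names are compared case-insensitively, per HTTP semantics."""
    allowed = {h.lower() for h in allowlist} | {h.lower() for h in extra_allowed}
    return {name: value for name, value in headers.items()
            if name.lower() in allowed}

headers = {"Content-Type": "application/json", "X-My-Custom-Header-To-Allow": "1"}
print(filter_response_headers(headers, ["Content-Type"]))
print(filter_response_headers(headers, ["Content-Type"],
                              extra_allowed=["X-My-Custom-Header-To-Allow"]))
```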
## Troubleshooting
If you have issues while using the agent server for Kubernetes, view the
service logs by running the following command:
```shell
kubectl logs -f -l=app=kas -n <YOUR-GITLAB-NAMESPACE>
```
In Linux package installations, find the logs in `/var/log/gitlab/gitlab-kas/`.
You can also [troubleshoot issues with individual agents](../../user/clusters/agent/troubleshooting.md).
### Configuration file not found
If you get the following error message:
```plaintext
time="2020-10-29T04:44:14Z" level=warning msg="Config: failed to fetch" agent_id=2 error="configuration file not found: \".gitlab/agents/test-agent/config.yaml\
```
The path is incorrect for either:
- The repository where the agent was registered.
- The agent configuration file.
To fix this issue, ensure that the paths are correct.
### Error: `dial tcp <GITLAB_INTERNAL_IP>:443: connect: connection refused`
If you are running GitLab Self-Managed and:
- The instance isn't running behind an SSL-terminating proxy.
- The instance doesn't have HTTPS configured on the GitLab instance itself.
- The instance's hostname resolves locally to its internal IP address.
When the agent server tries to connect to the GitLab API, the following error might occur:
```json
{"level":"error","time":"2021-08-16T14:56:47.289Z","msg":"GetAgentInfo()","correlation_id":"01FD7QE35RXXXX8R47WZFBAXTN","grpc_service":"gitlab.agent.reverse_tunnel.rpc.ReverseTunnel","grpc_method":"Connect","error":"Get \"https://gitlab.example.com/api/v4/internal/kubernetes/agent_info\": dial tcp 172.17.0.4:443: connect: connection refused"}
```
To fix this issue for Linux package installations,
set the following parameter in `/etc/gitlab/gitlab.rb`. Replace `gitlab.example.com` with your GitLab instance's hostname:
```ruby
gitlab_kas['gitlab_address'] = 'http://gitlab.example.com'
```
### Error: `x509: certificate signed by unknown authority`
If you encounter this error when trying to reach the GitLab URL, it means KAS doesn't trust the GitLab certificate.
You might see a similar error in the KAS logs of your GitLab application server:
```json
{"level":"error","time":"2023-03-07T20:19:48.151Z","msg":"AgentInfo()","grpc_service":"gitlab.agent.agent_configuration.rpc.AgentConfiguration","grpc_method":"GetConfiguration","error":"Get \"https://gitlab.example.com/api/v4/internal/kubernetes/agent_info\": x509: certificate signed by unknown authority"}
```
To fix this error, install the public certificate of your internal CA in the `/etc/gitlab/trusted-certs` directory.
Alternatively, you can configure KAS to read the certificate from a custom directory. To do this, add the following configuration to the file at `/etc/gitlab/gitlab.rb`:
```ruby
gitlab_kas['env'] = {
'SSL_CERT_DIR' => "/opt/gitlab/embedded/ssl/certs/"
}
```
To apply the changes:
1. Reconfigure GitLab:
```shell
sudo gitlab-ctl reconfigure
```
1. Restart the agent server:
```shell
sudo gitlab-ctl restart gitlab-kas
```
### Error: `GRPC::DeadlineExceeded in Clusters::Agents::NotifyGitPushWorker`
This error likely occurs when the client does not receive a response within the default timeout period (5 seconds). To resolve the issue, you can increase the client timeout by modifying the `/etc/gitlab/gitlab.rb` configuration file.
#### Steps to resolve
1. Add or update the following configuration to increase the timeout value:
```ruby
gitlab_kas['client_timeout_seconds'] = "10"
```
1. Apply the changes by reconfiguring GitLab:
```shell
gitlab-ctl reconfigure
```
{{< alert type="note" >}}
You can adjust the timeout value to suit your needs. Test the change to ensure the issue is resolved without impacting system performance.
{{< /alert >}}
### Error: `Blocked Kubernetes API proxy response header`
If HTTP response headers are lost when sent from the Kubernetes cluster to the user through the Kubernetes API proxy, check the KAS logs or Sentry instance for the following error:
```plaintext
Blocked Kubernetes API proxy response header. Please configure extra allowed headers for your instance in the KAS config with `extra_allowed_response_headers` and have a look at the troubleshooting guide at https://docs.gitlab.com/administration/clusters/kas/#troubleshooting.
```
This error means that the Kubernetes API proxy blocked response headers because
they are not defined in the response header allowlist.
For more information on adding response headers,
see [configure the response header allowlist](#configure-kubernetes-api-proxy-response-header-allowlist).
Support for the addition of more response headers is tracked in
[issue 550614](https://gitlab.com/gitlab-org/gitlab/-/issues/550614).
---
url: https://docs.gitlab.com/administration/spamcheck
repo_url: https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/administration/spamcheck.md
date_extracted: 2025-08-13
stage: GitLab Delivery
group: Self Managed
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Spamcheck anti-spam service
breadcrumbs:
- doc
- administration
- reporting
---
{{< details >}}
- Tier: Free, Premium, Ultimate
- Offering: GitLab Self-Managed
{{< /details >}}
{{< alert type="warning" >}}
Spamcheck is available to all tiers, but only on instances using GitLab Enterprise Edition (EE). For [licensing reasons](https://gitlab.com/gitlab-org/omnibus-gitlab/-/issues/6259#note_726605397), it is not included in the GitLab Community Edition (CE) package. You can [migrate from CE to EE](../../update/package/convert_to_ee.md).
{{< /alert >}}
[Spamcheck](https://gitlab.com/gitlab-org/gl-security/security-engineering/security-automation/spam/spamcheck) is an anti-spam engine
originally developed by GitLab to combat the rising amount of spam on GitLab.com,
and later made public for use on GitLab Self-Managed instances.
## Enable Spamcheck
Spamcheck is only available for package-based installations:
1. Edit `/etc/gitlab/gitlab.rb` and enable Spamcheck:
```ruby
spamcheck['enable'] = true
```
1. Reconfigure GitLab:
```shell
sudo gitlab-ctl reconfigure
```
1. Verify that the new services `spamcheck` and `spam-classifier` are
up and running:
```shell
sudo gitlab-ctl status
```
## Configure GitLab to use Spamcheck
1. On the left sidebar, at the bottom, select **Admin**.
1. Select **Settings > Reporting**.
1. Expand **Spam and Anti-bot Protection**.
1. Update the Spam Check settings:
1. Select the **Enable Spam Check via external API endpoint** checkbox.
1. For **URL of the external Spam Check endpoint** use `grpc://localhost:8001`.
1. Leave **Spam Check API key** blank.
1. Select **Save changes**.
{{< alert type="note" >}}
In single-node instances, Spamcheck runs over `localhost` and is therefore
unauthenticated. In multi-node instances, where GitLab runs on one server and
Spamcheck runs on another server listening on a public endpoint, you should
enforce some form of authentication by placing a reverse proxy in front of the
Spamcheck service and using it with an API key. For example, use `JWT`
authentication and specify a bearer token as the API key.
[Native authentication for Spamcheck is in the works](https://gitlab.com/gitlab-com/gl-security/engineering-and-research/automation-team/spam/spamcheck/-/issues/171).
{{< /alert >}}
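As an illustration of the JWT suggestion above, here is a minimal Python sketch of HS256 bearer-token verification that a reverse proxy in front of Spamcheck could perform. This is not part of Spamcheck or GitLab; the function names and secret are invented for the example, and in production you should use your proxy's authentication modules or an established JWT library.

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> bytes:
    # Base64url without padding, as used in JWTs.
    return base64.urlsafe_b64encode(data).rstrip(b"=")

def make_jwt(payload: dict, secret: bytes) -> str:
    """Build a minimal HS256 JWT (header.payload.signature)."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    signature = b64url(hmac.new(secret, header + b"." + body, hashlib.sha256).digest())
    return (header + b"." + body + b"." + signature).decode()

def verify_bearer(authorization_header: str, secret: bytes) -> bool:
    """Reject the request unless the Authorization header carries a valid token."""
    if not authorization_header.startswith("Bearer "):
        return False
    token = authorization_header[len("Bearer "):]
    header, body, signature = token.split(".")
    expected = b64url(hmac.new(secret, f"{header}.{body}".encode(),
                               hashlib.sha256).digest()).decode()
    return hmac.compare_digest(signature, expected)

secret = b"shared-secret"
token = make_jwt({"iss": "gitlab"}, secret)
print(verify_bearer("Bearer " + token, secret))       # True
print(verify_bearer("Bearer bad.token.sig", secret))  # False
```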
## Running Spamcheck over TLS
The Spamcheck service on its own cannot communicate directly over TLS with GitLab.
However, you can deploy Spamcheck behind a reverse proxy that performs TLS
termination. In this scenario, GitLab communicates with Spamcheck over TLS
if you specify the `tls://` scheme instead of `grpc://` for the external
Spamcheck URL in the **Admin** area settings.
---
url: https://docs.gitlab.com/administration/git_abuse_rate_limit
repo_url: https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/administration/git_abuse_rate_limit.md
date_extracted: 2025-08-13
stage: Software Supply Chain Security
group: Authorization
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
gitlab_dedicated: true
title: Git abuse rate limit (administration)
breadcrumbs:
- doc
- administration
- reporting
---
{{< details >}}
- Tier: Ultimate
- Offering: GitLab Self-Managed, GitLab Dedicated
{{< /details >}}
{{< history >}}
- [Introduced](https://gitlab.com/groups/gitlab-org/-/epics/8066) in GitLab 15.2 [with a flag](../feature_flags/_index.md) named `git_abuse_rate_limit_feature_flag`. Disabled by default.
- [Generally available](https://gitlab.com/gitlab-org/gitlab/-/issues/394996) in GitLab 15.11. Feature flag `git_abuse_rate_limit_feature_flag` removed.
{{< /history >}}
This is the administration documentation. For information about Git abuse rate limiting for a group, see the [group documentation](../../user/group/reporting/git_abuse_rate_limit.md).
Git abuse rate limiting is a feature to automatically [ban users](../moderate_users.md#ban-and-unban-users) who download, clone, or fork more than a specified number of repositories in any project in the instance in a given time frame. Banned users cannot sign in to the instance and cannot access any non-public group via HTTP or SSH. The rate limit also applies to users who authenticate with a [personal](../../user/profile/personal_access_tokens.md) or [group access token](../../user/group/settings/group_access_tokens.md).
Git abuse rate limiting does not apply to instance administrators, [deploy tokens](../../user/project/deploy_tokens/_index.md), or [deploy keys](../../user/project/deploy_keys/_index.md).
How GitLab determines a user's rate limit is under development.
GitLab team members can view more information in this confidential epic:
`https://gitlab.com/groups/gitlab-org/modelops/anti-abuse/-/epics/14`.
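The limit described above (ban a user who downloads more than a set number of unique repositories within a reporting period) can be sketched in Python. This is an illustrative model only, not the actual GitLab implementation; the class and parameter names are invented.

```python
import time

class GitAbuseRateLimiter:
    """Sketch: flag a user who downloads more than `max_repos` unique
    repositories within `period` seconds."""

    def __init__(self, max_repos, period):
        self.max_repos = max_repos
        self.period = period
        self.downloads = {}  # user -> {repo: last_download_timestamp}

    def record_download(self, user, repo, now=None):
        """Record a download; return True if the user exceeded the limit."""
        now = time.time() if now is None else now
        repos = self.downloads.setdefault(user, {})
        repos[repo] = now
        # Drop repositories last downloaded outside the reporting period.
        cutoff = now - self.period
        for stale in [r for r, t in repos.items() if t < cutoff]:
            del repos[stale]
        return len(repos) > self.max_repos

limiter = GitAbuseRateLimiter(max_repos=2, period=600)
print(limiter.record_download("alice", "repo-a", now=0))   # False
print(limiter.record_download("alice", "repo-b", now=10))  # False
print(limiter.record_download("alice", "repo-c", now=20))  # True: 3 unique repos in 600 s
```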
## Configure Git abuse rate limiting
1. On the left sidebar, at the bottom, select **Admin**.
1. Select **Settings > Reporting**.
1. Expand **Git abuse rate limit**.
1. Update the Git abuse rate limit settings:
1. Enter a number in the **Number of repositories** field, greater than or equal to `0` and less than or equal to `10,000`. This number specifies the maximum number of unique repositories a user can download in the specified time period before they're banned. When set to `0`, Git abuse rate limiting is disabled.
1. Enter a number in the **Reporting time period (seconds)** field, greater than or equal to `0` and less than or equal to `864,000` (10 days). This number specifies the time window, in seconds, in which a user can download the maximum number of repositories before they're banned. When set to `0`, Git abuse rate limiting is disabled.
1. Optional. Exclude up to `100` users by adding them to the **Excluded users** field. Excluded users are not automatically banned.
1. Add up to `100` users to the **Send notifications to** field. You must select at least one user. All application administrators are selected by default.
1. Optional. Turn on the **Automatically ban users from this namespace when they exceed the specified limits** toggle to enable automatic banning.
1. Select **Save changes**.
## Automatic ban notifications
If automatic banning is disabled, a user is not banned automatically when they exceed the limit. However, notifications are still sent to the users listed under **Send notifications to**. You can use this setup to determine the correct values of the rate limit settings before enabling automatic banning.
If automatic banning is enabled, an email notification is sent when a user is about to be banned, and the user is automatically banned from the GitLab instance.
## Unban a user
1. On the left sidebar, at the bottom, select **Admin**.
1. Select **Overview > Users**.
1. Select the **Banned** tab and search for the account you want to unban.
1. From the **User administration** dropdown list select **Unban user**.
1. On the confirmation dialog, select **Unban user**.
---
url: https://docs.gitlab.com/administration/ip_addr_restrictions
repo_url: https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/administration/ip_addr_restrictions.md
date_extracted: 2025-08-13
stage: Software Supply Chain Security
group: Authorization
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: IP address restrictions
breadcrumbs:
- doc
- administration
- reporting
---
{{< details >}}
- Tier: Free, Premium, Ultimate
- Offering: GitLab Self-Managed, GitLab Dedicated
{{< /details >}}
IP address restrictions help prevent malicious users from hiding their activities behind multiple IP addresses.
GitLab maintains a list of the unique IP addresses used by a user to make requests over a specified period. When the
specified limit is reached, any requests made by the user from a new IP address are rejected with a `403 Forbidden` error.
IP addresses are cleared from the list when no further requests have been made by the user from the IP address in the specified time period.
{{< alert type="note" >}}
When a runner runs a CI/CD job as a particular user, the runner IP address is also stored against the user's list of
unique IP addresses. Therefore, the IP addresses per user limit should take into account the number of configured active runners.
{{< /alert >}}
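The tracking behavior described above can be sketched in Python: each user has a set of IP addresses with a last-request timestamp, idle addresses expire, and a request from a new address over the limit is rejected. This is an illustrative model, not the GitLab implementation; the class and parameter names are invented.

```python
class UniqueIPLimiter:
    """Sketch: track unique IP addresses per user, expire idle ones,
    and reject requests from new addresses beyond the limit."""

    def __init__(self, ips_per_user, expiry_seconds):
        self.ips_per_user = ips_per_user
        self.expiry_seconds = expiry_seconds
        self.seen = {}  # user -> {ip: last_request_timestamp}

    def allow_request(self, user, ip, now):
        ips = self.seen.setdefault(user, {})
        # Expire addresses with no requests in the expiry window.
        cutoff = now - self.expiry_seconds
        for stale in [a for a, t in ips.items() if t < cutoff]:
            del ips[stale]
        if ip not in ips and len(ips) >= self.ips_per_user:
            return False  # would be rejected with 403 Forbidden
        ips[ip] = now
        return True

limiter = UniqueIPLimiter(ips_per_user=2, expiry_seconds=3600)
print(limiter.allow_request("alice", "10.0.0.1", now=0))     # True
print(limiter.allow_request("alice", "10.0.0.2", now=10))    # True
print(limiter.allow_request("alice", "10.0.0.3", now=20))    # False: third IP
print(limiter.allow_request("alice", "10.0.0.3", now=4000))  # True: first two expired
```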
## Configure IP address restrictions
1. On the left sidebar, at the bottom, select **Admin**.
1. Select **Settings > Reporting**.
1. Expand **Spam and Anti-bot Protection**.
1. Update the IP address restrictions settings:
1. Select the **Limit sign in from multiple IP addresses** checkbox to enable IP address restrictions.
1. Enter a number in the **IP addresses per user** field, greater than or equal to `1`. This number specifies the
maximum number of unique IP addresses a user can access GitLab from in the specified time period before requests
from a new IP address are rejected.
1. Enter a number in the **IP address expiration time** field, greater than or equal to `0`. This number specifies the
time in seconds an IP address counts towards the limit for a user, taken from the time the last request was made.
1. Select **Save changes**.
---
url: https://docs.gitlab.com/administration/list
repo_url: https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/administration/list.md
date_extracted: 2025-08-13
stage: none
group: unassigned
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
description: Complete list of all feature flags in GitLab.
title: All feature flags in GitLab
layout: feature_flags
breadcrumbs:
- doc
- administration
- feature_flags
---
{{< details >}}
- Tier: Free, Premium, Ultimate
- Offering: GitLab Self-Managed
{{< /details >}}
GitLab provides feature flags to turn specific features on or off.
This page contains a list of all feature flags provided by GitLab. In GitLab Self-Managed,
GitLab administrators can [change the state of these feature flags](_index.md).
For help developing custom feature flags, see
[Create a feature flag](../../operations/feature_flags.md#create-a-feature-flag).
<!-- markdownlint-disable MD044 -->
<!-- MD044/proper-names test disabled after this line to make page compatible with markdownlint-cli 0.29.0. -->
<!-- See https://docs.gitlab.com/ee/development/documentation/testing/markdownlint.html#disable-markdownlint-tests -->
<div class="d-none">
<strong>If you don't see the feature flag tables below, view them at <a href="https://docs.gitlab.com/ee/user/feature_flags.html">docs.gitlab.com</a>.</strong>
</div>
<!-- the div tag will not display on the docs site but will display on /help -->
<!-- markdownlint-enable MD044 -->
{{< feature-flags >}}