---
stage: Data Access
group: Database Frameworks
info: Any user with at least the Maintainer role can merge updates to this content. For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: SQL views
---
## Overview
At GitLab, we use SQL views as an abstraction layer over PostgreSQL's system catalogs (`pg_*` tables). This makes it easier to query the system catalogs from Rails.
## Example
For example, the SQL view `postgres_sequences` is an abstraction layer over `pg_sequence` and other `pg_*` tables. It's queried using the following Rails model:
```ruby
module Gitlab
module Database
# Backed by the postgres_sequences view
class PostgresSequence < SharedModel
self.primary_key = :seq_name
scope :by_table_name, ->(table_name) { where(table_name: table_name) }
scope :by_col_name, ->(col_name) { where(col_name: col_name) }
end
end
end
```
This allows us to manage database maintenance tasks through Ruby code:
```ruby
Gitlab::Database::PostgresSequence.by_table_name('web_hook_logs')
=> #<Gitlab::Database::PostgresSequence:0x0000000301a1d7a0
seq_name: "web_hook_logs_id_seq",
table_name: "web_hook_logs",
col_name: "id",
seq_max: 9223372036854775807,
seq_min: 1,
seq_start: 1>
```
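The view itself is created in a regular migration. The following is a minimal, hypothetical sketch of what such a migration could look like; the column list, joins, and migration base class are illustrative and do not match the production view definition exactly:
```ruby
# Hypothetical migration sketch: the real view pulls additional columns from other pg_* catalogs.
class CreatePostgresSequencesView < ActiveRecord::Migration[7.0]
  def up
    execute(<<~SQL)
      CREATE VIEW postgres_sequences AS
      SELECT seq_pg_class.relname AS seq_name,
             dep_pg_class.relname AS table_name,
             pg_attribute.attname AS col_name,
             pg_sequence.seqmax   AS seq_max,
             pg_sequence.seqmin   AS seq_min,
             pg_sequence.seqstart AS seq_start
      FROM pg_sequence
      JOIN pg_class seq_pg_class ON seq_pg_class.oid = pg_sequence.seqrelid
      LEFT JOIN pg_depend ON pg_depend.objid = seq_pg_class.oid AND pg_depend.refclassid = 'pg_class'::regclass
      LEFT JOIN pg_class dep_pg_class ON dep_pg_class.oid = pg_depend.refobjid
      LEFT JOIN pg_attribute ON pg_attribute.attrelid = dep_pg_class.oid AND pg_attribute.attnum = pg_depend.refobjsubid
    SQL
  end

  def down
    execute('DROP VIEW postgres_sequences')
  end
end
```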
## Benefits
Using these views provides several advantages:
1. **ActiveRecord Integration**: Complex PostgreSQL metadata queries are wrapped in familiar ActiveRecord models.
1. **Maintenance Automation**: Enables automated database maintenance tasks through Ruby code.
1. **Monitoring**: Simplifies database health monitoring and metrics collection.
1. **Consistency**: Provides a standardized interface for database operations.
## Drawbacks
1. **Performance overhead**: Views can introduce additional query overhead due to materialization and computation on access.
1. **Debugging complexity**: Debugging can become more challenging because you need to trace through both the Ruby/Rails layer and the PostgreSQL layer.
1. **Migration challenges**: Views need to be managed carefully during schema migrations. If underlying tables change, you need to ensure views are updated accordingly. Rails migrations don't handle views as seamlessly as they handle regular tables.
1. **Maintenance overhead**: Views add another layer to maintain in your database schema.
1. **Testing complexity**: Testing code that relies on views often requires more testing setup.
## Guidelines
When working with views, always use ActiveRecord models with appropriate scopes and relationships instead of raw SQL queries. Views are read-only by design. When adding new views, ensure proper migrations, models, tests, and documentation are in place.
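For instance, prefer going through the model and its scopes rather than running ad-hoc SQL against the view (a brief illustration using the model shown above):
```ruby
# Preferred: use the ActiveRecord model and its scopes.
sequence = Gitlab::Database::PostgresSequence.by_table_name('web_hook_logs').first

# Avoid: querying the view with raw SQL.
# ApplicationRecord.connection.select_all(
#   "SELECT * FROM postgres_sequences WHERE table_name = 'web_hook_logs'"
# )
```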
## Testing
When testing views, use the `swapout_view_for_table` helper to temporarily replace a view with a table.
This way you can use factories to create records similar to ones returned by the view.
```ruby
RSpec.describe Gitlab::Database::PostgresSequence do
include Database::DatabaseHelpers
before do
swapout_view_for_table(:postgres_sequences, connection: ApplicationRecord.connection)
end
end
```
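With the view swapped out for a table, the spec can insert rows directly through the model and exercise its scopes. A minimal, hypothetical example; the attributes mirror the columns shown earlier:
```ruby
it 'finds sequences by table name' do
  # The view is now backed by a real table, so writes are possible in the spec.
  described_class.create!(
    seq_name: 'web_hook_logs_id_seq',
    table_name: 'web_hook_logs',
    col_name: 'id'
  )

  expect(described_class.by_table_name('web_hook_logs').count).to eq(1)
end
```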
## Further Reading
- [PostgreSQL System Catalogs](https://www.postgresql.org/docs/16/catalogs.html)
- [PostgreSQL Views](https://www.postgresql.org/docs/16/sql-createview.html)
---
stage: Data Access
group: Database Frameworks
info: Any user with at least the Maintainer role can merge updates to this content. For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Iterating tables in batches
---
Rails provides a method called `in_batches` that can be used to iterate over
rows in batches. For example:
```ruby
User.in_batches(of: 10) do |relation|
relation.update_all(updated_at: Time.now)
end
```
Unfortunately, this method is implemented in a way that is not very efficient,
both query and memory usage wise.
To work around this you can include the `EachBatch` module into your models,
then use the `each_batch` class method. For example:
```ruby
class User < ActiveRecord::Base
include EachBatch
end
User.each_batch(of: 10) do |relation|
relation.update_all(updated_at: Time.now)
end
```
This produces queries such as:
```plaintext
User Load (0.7ms) SELECT "users"."id" FROM "users" WHERE ("users"."id" >= 41654) ORDER BY "users"."id" ASC LIMIT 1 OFFSET 1000
(0.7ms) SELECT COUNT(*) FROM "users" WHERE ("users"."id" >= 41654) AND ("users"."id" < 42687)
```
The API of this method is similar to `in_batches`, though it doesn't support
all of the arguments that `in_batches` supports. You should always use
`each_batch` unless you have a specific need for `in_batches`.
## Iterating over non-unique columns
You should not use the `each_batch` method with a non-unique column (in the context of the relation) as it
[may result in an infinite loop](https://gitlab.com/gitlab-org/gitlab/-/issues/285097).
Additionally, the inconsistent batch sizes cause performance issues when you
iterate over non-unique columns. Even when you apply a max batch size
when iterating over an attribute, there's no guarantee that the resulting
batches don't surpass it. The following snippet demonstrates this situation: when you attempt to select
`Ci::Build` entries for users with `id` between `1` and `10,000`, the database returns
`1,215,178` matching rows.
```ruby
[ gstg ] production> Ci::Build.where(user_id: (1..10_000)).size
=> 1215178
```
This happens because the built relation is translated into the following query:
```ruby
[ gstg ] production> puts Ci::Build.where(user_id: (1..10_000)).to_sql
SELECT "ci_builds".* FROM "ci_builds" WHERE "ci_builds"."type" = 'Ci::Build' AND "ci_builds"."user_id" BETWEEN 1 AND 10000
=> nil
```
Queries that filter a non-unique column by a range (`WHERE "ci_builds"."user_id" BETWEEN ? AND ?`)
limit the number of distinct values in the range (`10,000` in the previous example), but not the
size of the returned dataset. When taking `n` possible attribute values, one can't tell for sure
that the number of records containing them is less than `n`.
### Loose-index scan with `distinct_each_batch`
When iterating over a non-unique column is necessary, use the `distinct_each_batch` helper
method. The helper uses the [loose-index scan technique](https://wiki.postgresql.org/wiki/Loose_indexscan)
(skip-index scan) to skip duplicated values within a database index.
Example: iterating over distinct `author_id` in the Issue model
```ruby
Issue.distinct_each_batch(column: :author_id, of: 1000) do |relation|
users = User.where(id: relation.select(:author_id)).to_a
end
```
The technique provides stable performance between the batches regardless of the data distribution.
The `relation` object returns an ActiveRecord scope where only the given `column` is available.
Other columns are not loaded.
The underlying database queries use recursive CTEs, which adds extra overhead. We therefore advise using
smaller batch sizes than those used for a standard `each_batch` iteration.
## Column definition
`EachBatch` uses the primary key of the model by default for the iteration. This works in most
cases; however, in some cases you might want to use a different column for the iteration.
```ruby
Project.distinct.each_batch(column: :creator_id, of: 10) do |relation|
puts User.where(id: relation.select(:creator_id)).map(&:id)
end
```
The query above iterates over the project creators and prints them out without duplications.
{{< alert type="note" >}}
If the column is not unique (no unique index definition), you must call the `distinct` method on
the relation. Using a non-unique column without `distinct` may cause `each_batch` to fall into
an endless loop, as described in this
[issue](https://gitlab.com/gitlab-org/gitlab/-/issues/285097).
{{< /alert >}}
## `EachBatch` in data migrations
When dealing with data migrations, the preferred way to iterate over a large volume of data is using
`EachBatch`.
A special case of data migration is a [batched background migration](batched_background_migrations.md)
where the actual data modification is executed in a background job. The migration code that
determines the data ranges (slices) and schedules the background jobs uses `each_batch`.
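For the regular (non-batched) case, a data migration that iterates with `EachBatch` often looks roughly like the following sketch. The table, columns, and migration base class are hypothetical; real GitLab migrations use their own base class and helpers:
```ruby
class BackfillExampleFlag < ActiveRecord::Migration[7.0]
  disable_ddl_transaction!

  BATCH_SIZE = 1_000

  # Migration-local model so the migration does not depend on application code.
  class Route < ActiveRecord::Base
    include EachBatch

    self.table_name = 'routes'
  end

  def up
    Route.each_batch(of: BATCH_SIZE) do |relation|
      relation.where(example_flag: nil).update_all(example_flag: false)
    end
  end

  def down
    # no-op: the backfilled values are safe to keep
  end
end
```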
## Efficient usage of `each_batch`
`EachBatch` helps to iterate over large tables. It's important to highlight that `EachBatch`
does not magically solve all iteration-related performance problems, and it might not help at
all in some scenarios. From the database point of view, correctly configured database indexes are
also necessary to make `EachBatch` perform well.
### Example 1: Simple iteration
Let's consider that we want to iterate over the `users` table and print the `User` records to the
standard output. The `users` table contains millions of records, thus running one query to fetch
the users likely times out.
The following table is a simplified version of the `users` table, containing only a few rows. There are some
gaps in the `id` column to make the example a bit more realistic (a few records were
already deleted). One index exists on the `id` field:
| `ID` | `sign_in_count` | `created_at` |
|-------|:----------------|--------------|
| `1` | `1` | 2020-01-01 |
| `2` | `4` | 2020-01-01 |
| `9` | `1` | 2020-01-03 |
| `300` | `5` | 2020-01-03 |
| `301` | `9` | 2020-01-03 |
| `302` | `8` | 2020-01-03 |
| `303` | `2` | 2020-01-03 |
| `350` | `1` | 2020-01-03 |
| `351` | `3` | 2020-01-04 |
| `352` | `0` | 2020-01-05 |
| `353` | `9` | 2020-01-11 |
| `354` | `3` | 2020-01-12 |
Loading all users into memory (avoid):
```ruby
users = User.all
users.each { |user| puts user.inspect }
```
Use `each_batch`:
```ruby
# Note: for this example I picked 5 as the batch size, the default is 1_000
User.each_batch(of: 5) do |relation|
relation.each { |user| puts user.inspect }
end
```
#### How `each_batch` works
As the first step, it finds the lowest `id` (start `id`) in the table by executing the following
database query:
```sql
SELECT "users"."id" FROM "users" ORDER BY "users"."id" ASC LIMIT 1
```

Notice that the query only reads data from the index (`INDEX ONLY SCAN`); the table is not
accessed. Database indexes are sorted, so taking out the first item is a very cheap operation.
The next step is to find the next `id` (end `id`) which should respect the batch size
configuration. In this example we used a batch size of 5. `EachBatch` uses the `OFFSET` clause
to get a "shifted" `id` value.
```sql
SELECT "users"."id" FROM "users" WHERE "users"."id" >= 1 ORDER BY "users"."id" ASC LIMIT 1 OFFSET 5
```

Again, the query only looks into the index. The `OFFSET 5` takes out the sixth `id` value: this
query reads a maximum of six items from the index regardless of the table size or the iteration
count.
At this point, we know the `id` range for the first batch. Now it's time to construct the query
for the `relation` block.
```sql
SELECT "users".* FROM "users" WHERE "users"."id" >= 1 AND "users"."id" < 302
```

Notice the `<` sign. Previously, six items were read from the index, and in this query the last
value is "excluded". The query looks at the index to get the location of the five `user`
rows on disk and reads the rows from the table. The returned array is processed in Ruby.
The first iteration is done. For the next iteration, the last `id` value is reused from the
previous iteration to find out the next end `id` value.
```sql
SELECT "users"."id" FROM "users" WHERE "users"."id" >= 302 ORDER BY "users"."id" ASC LIMIT 1 OFFSET 5
```

Now we can easily construct the `users` query for the second iteration.
```sql
SELECT "users".* FROM "users" WHERE "users"."id" >= 302 AND "users"."id" < 353
```

### Example 2: Iteration with filters
Building on top of the previous example, we want to print users with zero sign-in count. We keep
track of the number of sign-ins in the `sign_in_count` column so we write the following code:
```ruby
users = User.where(sign_in_count: 0)
users.each_batch(of: 5) do |relation|
relation.each { |user| puts user.inspect }
end
```
`each_batch` produces the following SQL query for the start `id` value:
```sql
SELECT "users"."id" FROM "users" WHERE "users"."sign_in_count" = 0 ORDER BY "users"."id" ASC LIMIT 1
```
Selecting only the `id` column and ordering by `id` forces the database to use the
index on the `id` column (the primary key index). However, we also have an extra condition on the
`sign_in_count` column. The column is not part of the index, so the database needs to look into
the actual table to find the first matching row.

{{< alert type="note" >}}
The number of scanned rows depends on the data distribution in the table.
{{< /alert >}}
- Best case scenario: the first user has never logged in. The database reads only one row.
- Worst case scenario: every user has logged in at least once. The database reads all rows.
In this particular example, the database had to read 10 rows (regardless of our batch size setting)
to determine the first `id` value. In a "real-world" application it's hard to predict whether the
filtering causes problems or not. In the case of GitLab, verifying the data on a
production replica is a good start, but keep in mind that data distribution on GitLab.com can be
different from GitLab Self-Managed instances.
#### Improve filtering with `each_batch`
##### Specialized conditional index
```sql
CREATE INDEX index_on_users_never_logged_in ON users (id) WHERE sign_in_count = 0
```
This is how our table and the newly created index look:

This index definition covers the conditions on the `id` and `sign_in_count` columns, and thus makes the
`each_batch` queries very effective (similar to the simple iteration example).
It's rare for a user to have never signed in, so we anticipate a small index size. Including only the
`id` in the index definition also helps to keep the index size small.
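In a GitLab migration, such a partial index would typically be created concurrently. A hedged sketch; the index name matches the SQL above, while the migration base class version should be checked against the current guidelines:
```ruby
class AddIndexOnUsersNeverLoggedIn < Gitlab::Database::Migration[2.2]
  disable_ddl_transaction!

  INDEX_NAME = 'index_on_users_never_logged_in'

  def up
    add_concurrent_index :users, :id, where: 'sign_in_count = 0', name: INDEX_NAME
  end

  def down
    remove_concurrent_index_by_name :users, INDEX_NAME
  end
end
```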
##### Index on columns
Later on, we might want to iterate over the table filtering for different `sign_in_count` values. In
those cases, we cannot use the previously suggested conditional index, because the `WHERE` condition
does not match our new filter (`sign_in_count > 10`).
To address this problem, we have two options:
- Create another, conditional index to cover the new query.
- Replace the index with a more generalized configuration.
{{< alert type="note" >}}
Having multiple indexes on the same table and on the same columns could be a performance bottleneck
when writing data.
{{< /alert >}}
Let's consider the following index (avoid):
```sql
CREATE INDEX index_on_users_never_logged_in ON users (id, sign_in_count)
```
The index definition starts with the `id` column, which makes the index very inefficient from a data
selectivity point of view.
```sql
SELECT "users"."id" FROM "users" WHERE "users"."sign_in_count" = 0 ORDER BY "users"."id" ASC LIMIT 1
```
Executing the query above results in an `INDEX ONLY SCAN`. However, the query still needs to
iterate over an unknown number of entries in the index, and then find the first item where the
`sign_in_count` is `0`.

We can improve the query significantly by swapping the columns in the index definition (prefer).
```sql
CREATE INDEX index_on_users_never_logged_in ON users (sign_in_count, id)
```

The following index definition does not work well with `each_batch` (avoid).
```sql
CREATE INDEX index_on_users_never_logged_in ON users (sign_in_count)
```
Since `each_batch` builds range queries based on the `id` column, this index cannot be used
efficiently. The database reads the rows from the table or uses a bitmap search where the primary
key index is also read.
##### "Slow" iteration
Slow iteration means that we use a good index configuration to iterate over the table and
apply filtering on the yielded relation.
```ruby
User.each_batch(of: 5) do |relation|
  relation.where(sign_in_count: 0).each { |user| puts user.inspect }
end
```
The iteration uses the primary key index (on the `id` column) which makes it safe from statement
timeouts. The filter (`sign_in_count: 0`) is applied on the `relation` where the `id` is already
constrained (range). The number of rows is limited.
Slow iteration generally takes more time to finish. The iteration count is higher and
one iteration could yield fewer records than the batch size. Iterations may even yield
0 records. This is not an optimal solution; however, in some cases (especially when
dealing with large tables) this is the only viable option.
### Using Subqueries
Using subqueries in your `each_batch` query does not work well in most cases. Consider the following example:
```ruby
projects = Project.where(creator_id: Issue.where(confidential: true).select(:author_id))
projects.each_batch do |relation|
# do something
end
```
The iteration uses the `id` column of the `projects` table. The batching does not affect the
subquery. This means that for each iteration, the subquery is executed by the database. This adds a
constant "load" on the query, which often ends up in statement timeouts. We have an unknown number
of [confidential issues](../../user/project/issues/confidential_issues.md); the execution time
and the accessed database rows depend on the data distribution in the `issues` table.
{{< alert type="note" >}}
Using subqueries works only when the subquery returns a small number of rows.
{{< /alert >}}
#### Improving Subqueries
When dealing with subqueries, a slow iteration approach could work: the filter on `creator_id`
can be part of the generated `relation` object.
```ruby
projects = Project.all
projects.each_batch do |relation|
relation.where(creator_id: Issue.where(confidential: true).select(:author_id))
end
```
If the query on the `issues` table itself is not performant enough, a nested loop could be
constructed. Try to avoid it when possible.
```ruby
projects = Project.all
projects.each_batch do |relation|
issues = Issue.where(confidential: true)
issues.each_batch do |issues_relation|
relation.where(creator_id: issues_relation.select(:author_id))
end
end
```
If we know that the `issues` table has many more rows than `projects`, it would make sense to flip
the queries, where the `issues` table is batched first.
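A hedged sketch of the flipped approach, batching the `issues` table and applying the `confidential` filter on the yielded relation (how the resulting `projects` relation is consumed depends on the use case):
```ruby
Issue.each_batch do |issues_relation|
  # Apply the filter inside the batch, following the slow iteration approach.
  author_ids = issues_relation.where(confidential: true).select(:author_id)
  projects = Project.where(creator_id: author_ids)
  # do something with the projects relation
end
```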
### Using `JOIN` and `EXISTS`
When to use `JOINS`:
- When there's a 1:1 or 1:N relationship between the tables where we know that the joined record
(almost) always exists. This works well for "extension-like" tables:
- `projects` - `project_settings`
- `users` - `user_details`
- `users` - `user_statuses`
- `LEFT JOIN` works well in this case. Conditions on the joined table need to go to the yielded
relation so the iteration is not affected by the data distribution in the joined table.
Example:
```ruby
User.each_batch do |relation|
relation
.joins("LEFT JOIN personal_access_tokens on personal_access_tokens.user_id = users.id")
.where("personal_access_tokens.name = 'name'")
end
```
`EXISTS` queries should be added only to the inner `relation` of the `each_batch` query:
```ruby
User.each_batch do |relation|
relation.where("EXISTS (SELECT 1 FROM ...")
end
```
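For completeness, a hedged example of what a full `EXISTS` condition on the yielded relation might look like (the joined table and column names are illustrative):
```ruby
User.each_batch do |relation|
  relation.where(
    'EXISTS (SELECT 1 FROM personal_access_tokens WHERE personal_access_tokens.user_id = users.id)'
  )
end
```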
### Complex queries on the relation object
When the `relation` object has several extra conditions, the execution plans might become
"unstable".
Example:
```ruby
Issue.each_batch do |relation|
relation
.joins(:metrics)
.joins(:merge_requests_closing_issues)
.where("id IN (SELECT ...)")
.where(confidential: true)
end
```
Here, we expect that the `relation` query reads at most `BATCH_SIZE` records and then
filters down the results according to the provided conditions. The planner might decide that
using a bitmap index lookup with the index on the `confidential` column is a better way to
execute the query. This can cause an unexpectedly high amount of rows to be read and the
query could time out.
Problem: we know for sure that the relation returns at most `BATCH_SIZE` records;
however, the planner does not know this.
A common table expression (CTE) trick forces the range query to execute first:
```ruby
Issue.each_batch(of: 1000) do |relation|
cte = Gitlab::SQL::CTE.new(:batched_relation, relation.limit(1000))
scope = cte
.apply_to(Issue.all)
.joins(:metrics)
.joins(:merge_requests_closing_issues)
.where("id IN (SELECT ...)")
.where(confidential: true)
puts scope.to_a
end
```
### Counting records
For tables with a large amount of data, counting records through queries can result
in timeouts. The `EachBatch` module provides an alternative way to iteratively count
records. The downside of using `each_batch` is the extra count query which is executed
on the yielded relation object.
The `each_batch_count` method is a more efficient approach that eliminates the need
for the extra count query. By invoking this method, the iteration process can be
paused and resumed as needed. This feature is particularly useful in situations
where error budget violations are triggered after five minutes, such as when performing
counting operations within Sidekiq workers.
To illustrate, counting records using `EachBatch` involves invoking an additional
count query as follows:
```ruby
count = 0
Issue.each_batch do |relation|
count += relation.count
end
puts count
```
On the other hand, the `each_batch_count` method enables the counting process to be
performed more efficiently (counting is part of the iteration query) without invoking
an extra count query:
```ruby
count, _last_value = Issue.each_batch_count # last value can be ignored here
```
Furthermore, the `each_batch_count` method allows the counting process to be paused
and resumed at any point. This capability is demonstrated in the following code snippet:
```ruby
stop_at = Time.current + 3.minutes
count, last_value = Issue.each_batch_count do
stop_at.past? # condition for stopping the counting
end
# Continue the counting later
stop_at = Time.current + 3.minutes
count, last_value = Issue.each_batch_count(last_count: count, last_value: last_value) do
stop_at.past?
end
```
### `EachBatch` vs `BatchCount`
When adding new counters for Service Ping, the preferred way to count records is using the
`Gitlab::Database::BatchCount` class. The iteration logic implemented in `BatchCount`
has performance characteristics similar to `EachBatch`. Most of the tips and suggestions
for improving `each_batch` mentioned above apply to `BatchCount` as well.
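For reference, a typical Service Ping counter calls the batch counter directly; a minimal sketch (options such as custom ranges and batch sizes are omitted):
```ruby
# Count all issues without issuing a single long-running COUNT(*).
Gitlab::Database::BatchCount.batch_count(Issue)

# Count distinct values of a column in batches.
Gitlab::Database::BatchCount.batch_distinct_count(Issue, :author_id)
```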
## Iterate with keyset pagination
There are a few special cases where iterating with `EachBatch` does not work. `EachBatch`
requires one distinct column (usually the primary key), which makes the iteration impossible
for timestamp columns and tables with composite primary keys.
Where `EachBatch` does not work, you can use
[keyset pagination](pagination_guidelines.md#keyset-pagination) to iterate over the
table or a range of rows. The scaling and performance characteristics are very similar to
`EachBatch`.
Examples:
- Iterate over the table in a specific order (timestamp columns) in combination with a tie-breaker
if the column used to sort by does not contain unique values.
- Iterate over the table with composite primary keys.
### Iterate over the issues in a project by creation date
You can use keyset pagination to iterate over any database column in a specific order (for example,
`created_at DESC`). To ensure consistent order of the returned records with the same values for
`created_at`, use a tie-breaker column with unique values (for example, `id`).
Assume you have the following index in the `issues` table:
```sql
"idx_issues_on_project_id_and_created_at_and_id" btree (project_id, created_at, id)
```
### Fetching records for further processing
The following snippet iterates over issue records within the project using the specified order
(`created_at, id`).
```ruby
scope = Issue.where(project_id: 278964).order(:created_at, :id) # id is the tie-breaker
iterator = Gitlab::Pagination::Keyset::Iterator.new(scope: scope)
iterator.each_batch(of: 100) do |records|
puts records.map(&:id)
end
```
You can add extra filters to the query. This example only lists the issue IDs created in the last
30 days:
```ruby
scope = Issue.where(project_id: 278964).where('created_at > ?', 30.days.ago).order(:created_at, :id) # id is the tie-breaker
iterator = Gitlab::Pagination::Keyset::Iterator.new(scope: scope)
iterator.each_batch(of: 100) do |records|
puts records.map(&:id)
end
```
### Updating records in the batch
For complex `ActiveRecord` queries, the `.update_all` method does not work well, because it
generates an incorrect `UPDATE` statement.
You can use raw SQL for updating records in batches:
```ruby
scope = Issue.where(project_id: 278964).order(:created_at, :id) # id is the tie-breaker
iterator = Gitlab::Pagination::Keyset::Iterator.new(scope: scope)
iterator.each_batch(of: 100) do |records|
ApplicationRecord.connection.execute("UPDATE issues SET updated_at=NOW() WHERE issues.id in (#{records.dup.reselect(:id).to_sql})")
end
```
{{< alert type="note" >}}
To keep the iteration stable and predictable, avoid updating the columns in the `ORDER BY` clause.
{{< /alert >}}
### Iterate over the `merge_request_diff_commits` table
The `merge_request_diff_commits` table uses a composite primary key (`merge_request_diff_id, relative_order`),
which makes `EachBatch` impossible to use efficiently.
To paginate over the `merge_request_diff_commits` table, you can use the following snippet:
```ruby
# Custom order object configuration:
order = Gitlab::Pagination::Keyset::Order.build([
Gitlab::Pagination::Keyset::ColumnOrderDefinition.new(
attribute_name: 'merge_request_diff_id',
order_expression: MergeRequestDiffCommit.arel_table[:merge_request_diff_id].asc,
nullable: :not_nullable
),
Gitlab::Pagination::Keyset::ColumnOrderDefinition.new(
attribute_name: 'relative_order',
order_expression: MergeRequestDiffCommit.arel_table[:relative_order].asc,
nullable: :not_nullable
)
])
MergeRequestDiffCommit.include(FromUnion) # keyset pagination generates UNION queries
scope = MergeRequestDiffCommit.order(order)
iterator = Gitlab::Pagination::Keyset::Iterator.new(scope: scope)
iterator.each_batch(of: 100) do |records|
puts records.map { |record| [record.merge_request_diff_id, record.relative_order] }.inspect
end
```
### Order object configuration
Keyset pagination works well with simple `ActiveRecord` `order` scopes
([first example](#iterate-over-the-issues-in-a-project-by-creation-date)).
However, in special cases, you need to describe the columns in the `ORDER BY` clause (second example)
for the underlying keyset pagination library. When the `ORDER BY` configuration cannot be
automatically determined by the keyset pagination library, an error is raised.
The code comments of the
[`Gitlab::Pagination::Keyset::Order`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/pagination/keyset/order.rb)
and [`Gitlab::Pagination::Keyset::ColumnOrderDefinition`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/pagination/keyset/column_order_definition.rb)
classes give an overview of the possible options for configuring the `ORDER BY` clause. You can
also find a few code examples in the
[keyset pagination](keyset_pagination.md#complex-order-configuration) documentation.
---
stage: Data Access
group: Database Frameworks
info: "See the Technical Writers assigned to Development Guidelines: https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments-to-development-guidelines"
title: Batched background migrations
---
Batched background migrations should be used to perform data migrations whenever a
migration exceeds [the time limits](../migration_style_guide.md#how-long-a-migration-should-take)
in our guidelines. For example, you can use batched background
migrations to migrate data that's stored in a single JSON column
to a separate table instead.
{{< alert type="note" >}}
Batched background migrations replaced the legacy background migrations framework.
Refer to that framework's documentation for any changes that involve it.
{{< /alert >}}
{{< alert type="note" >}}
The batched background migrations framework has ChatOps support. Using ChatOps, GitLab engineers can interact with the batched background migrations present in the system.
{{< /alert >}}
## When to use batched background migrations
Use a batched background migration when you migrate data in tables containing
so many rows that the process would exceed
[the time limits in our guidelines](../migration_style_guide.md#how-long-a-migration-should-take)
if performed using a regular Rails migration.
- Batched background migrations should be used when migrating data in
[high-traffic tables](../migration_style_guide.md#high-traffic-tables).
- Batched background migrations may also be used when executing numerous single-row queries
for every item on a large dataset. Typically, for single-record patterns, runtime is
largely dependent on the size of the dataset. Split the dataset accordingly,
and put it into background migrations.
- Don't use batched background migrations to perform schema migrations.
Background migrations can help when:
- Migrating events from one table to multiple separate tables.
- Populating one column based on JSON stored in another column.
- Migrating data that depends on the output of external services. (For example, an API.)
### Notes
- If the batched background migration is part of an important upgrade, it must be announced
in the release post. Discuss with your Project Manager if you're unsure if the migration falls
into this category.
- You should use the [generator](#generate-a-batched-background-migration) to create batched background migrations,
so that required files are created by default.
## How batched background migrations work
Batched background migrations (BBM) are subclasses of
`Gitlab::BackgroundMigration::BatchedMigrationJob` that define a `perform` method.
As the first step, a regular migration creates a `batched_background_migrations`
record with the BBM class and the required arguments. By default,
the record is created in an active state, and active migrations are picked up
by a Sidekiq worker to execute the actual batched migration.
All migration classes must be defined in the namespace `Gitlab::BackgroundMigration`. Place the files
in the directory `lib/gitlab/background_migration/`.
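For illustration, a minimal job class might look like the sketch below. The class name, column, and update are hypothetical; real jobs also declare `operation_name` and `feature_category`, as shown in the examples later in this document.
```ruby
# lib/gitlab/background_migration/backfill_some_column.rb
# Minimal sketch only; the class name and the update it performs are made up.
module Gitlab
  module BackgroundMigration
    class BackfillSomeColumn < BatchedMigrationJob
      operation_name :backfill_some_column
      feature_category :database

      def perform
        # each_sub_batch yields relations scoped to the current sub-batch.
        each_sub_batch do |sub_batch|
          sub_batch.where(some_column: nil).update_all(some_column: 0)
        end
      end
    end
  end
end
```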
### Execution mechanism
Batched background migrations are picked from the queue in the order they are enqueued. Multiple migrations are fetched
and executed in parallel, as long as they are in an active state and do not target the same database table.
The default number of migrations processed in parallel is 2; for GitLab.com, this limit is configured to 4.
Once a migration is picked for execution, a job is created for the specific batch. After each job execution, the migration's
batch size may be increased or decreased, based on the performance of the last 20 jobs.
```plantuml
@startuml
hide empty description
skinparam ConditionEndStyle hline
left to right direction
rectangle "Batched background migration queue" as migrations {
rectangle "Migration N (active)" as migrationn
rectangle "Migration 1 (completed)" as migration1
rectangle "Migration 2 (active)" as migration2
rectangle "Migration 3 (on hold)" as migration3
rectangle "Migration 4 (active)" as migration4
migration1 -[hidden]> migration2
migration2 -[hidden]> migration3
migration3 -[hidden]> migration4
migration4 -[hidden]> migrationn
}
rectangle "Execution Workers" as workers {
rectangle "Execution Worker 1 (busy)" as worker1
rectangle "Execution Worker 2 (available)" as worker2
worker1 -[hidden]> worker2
}
migration2 --> [Scheduling Worker]
migration4 --> [Scheduling Worker]
[Scheduling Worker] --> worker2
@enduml
```
As soon as a worker is available, the BBM is processed by the runner.
```plantuml
@startuml
hide empty description
start
rectangle Runner {
:Migration;
if (Have reached batching bounds?) then (Yes)
if (Have jobs to retry?) then (Yes)
:Fetch the batched job;
else (No)
:Finish active migration;
stop
endif
else (No)
:Create a batched job;
endif
:Execute batched job;
:Evaluate DB health;
note right: Checks for table autovacuum, Patroni Apdex, Write-ahead logging
if (Evaluation signs to stop?) then (Yes)
:Put migration on hold;
else (No)
:Optimize migration;
endif
}
@enduml
```
### Idempotence
Batched background migrations are executed in the context of a Sidekiq process.
The usual Sidekiq rules apply, especially the rule that jobs should be small
and idempotent. Ensure that data integrity is guaranteed if your migration job is retried.
See [Sidekiq best practices guidelines](https://github.com/mperham/sidekiq/wiki/Best-Practices)
for more details.
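As a sketch of what idempotence means in practice (table and column names are hypothetical), prefer writes that converge to the same end state no matter how many times the batch runs:
```ruby
# Hypothetical example: re-running the same batch after a retry leaves rows unchanged,
# because the update only touches rows that still need it and derives the new value
# from existing data rather than from the current time or a counter.
def perform
  each_sub_batch do |sub_batch|
    sub_batch.where(normalized_name: nil).update_all('normalized_name = LOWER(name)')
  end
end
```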
### Migration optimization
After each job execution, a verification takes place to check if the migration can be optimized.
The underlying optimization mechanism is based on the concept of time efficiency. It calculates
the exponential moving average of time efficiencies for the last N jobs and updates the batch
size of the batched background migration to its optimal value.
This mechanism, however, makes it hard to provide an accurate estimation of the total
execution time of the migration when using the [database migration pipeline](database_migration_pipeline.md).
We are discussing ways to fix this problem in
[this issue](https://gitlab.com/gitlab-org/database-team/gitlab-com-database-testing/-/issues/162).
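As a rough, non-authoritative illustration of the idea (not the actual implementation), the smoothing and batch-size adjustment can be pictured like this:
```ruby
# Illustration only: smooth the "time efficiency" of recent jobs (actual runtime
# relative to the target runtime) with an exponential moving average, then scale
# the batch size toward the target. All values are made up.
alpha = 0.4
efficiencies = [0.7, 0.8, 0.9, 0.85] # recent jobs ran faster than the target (< 1.0)

ewma = efficiencies.reduce { |average, value| alpha * value + (1 - alpha) * average }

current_batch_size = 1_000
new_batch_size = (current_batch_size / ewma).round
# => larger batches when jobs run faster than the target, smaller when they run slower
```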
### Job retry mechanism
The batched background migrations retry mechanism ensures that a job is executed again in case of failure.
The following diagram shows the different stages of our retry mechanism:
```plantuml
@startuml
hide empty description
note as N1
can_split?:
the failure is due to a query timeout
end note
[*] --> Running
Running --> Failed
note on link
if number of retries <= MAX_ATTEMPTS
end note
Running --> Succeeded
Failed --> Running
note on link
if number of retries > MAX_ATTEMPTS
and can_split? == true
then two jobs with smaller
batch size will be created
end note
Failed --> [*]
Succeeded --> [*]
@enduml
```
- `MAX_ATTEMPTS` is defined in the [`Gitlab::Database::BackgroundMigration`](https://gitlab.com/gitlab-org/gitlab/blob/master/lib/gitlab/database/background_migration/batched_job.rb)
class.
- `can_split?` is defined in the [`Gitlab::Database::BatchedJob`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/database/background_migration/batched_job.rb) class.
### Failed batched background migrations
The whole batched background migration is marked as `failed`
(`/chatops run batched_background_migrations status MIGRATION_ID` shows
the migration as `failed`) if any of the following is true:
- There are no more jobs to consume, and there are failed jobs.
- More than [half of the jobs failed since the background migration was started](https://gitlab.com/gitlab-org/gitlab/blob/master/lib/gitlab/database/background_migration/batched_migration.rb#L160).
### Throttling batched migrations
Because batched migrations are update heavy and there have been incidents due to the heavy load from these migrations while the database was underperforming, a throttling mechanism exists to mitigate future incidents.
These database indicators are checked to throttle a migration. Upon receiving a
stop signal, the migration is paused for a set time (10 minutes):
- WAL queue pending archival crossing the threshold.
- Active autovacuum on the tables the migration works on (enabled by default as of GitLab 18.0).
- Patroni apdex SLI dropping below the SLO.
- WAL rate crossing the threshold.
There is an ongoing effort to add more indicators to further enhance the
database health check framework. For more details, see
[epic 7594](https://gitlab.com/groups/gitlab-org/-/epics/7594).
#### How to disable/enable autovacuum indicator on tables
As of GitLab 18.0, this health indicator is enabled by default. To disable it, run the following command in the Rails console:
```ruby
Feature.disable(:batched_migrations_health_status_autovacuum)
```
Alternatively, if you want to enable it again, run the following command in the Rails console:
```ruby
Feature.enable(:batched_migrations_health_status_autovacuum)
```
### Isolation
Batched background migrations must be isolated and cannot use application code (for example,
models defined in `app/models` except the `ApplicationRecord` classes).
Because these migrations can take a long time to run, it's possible
for new versions to deploy while the migrations are still running.
### Depending on migrated data
Unlike a regular or a post-deployment migration, waiting for the next release is not enough to guarantee that the data was fully migrated.
That means you shouldn't depend on the data until the BBM is finished. If having 100% of the data migrated is a requirement,
the `ensure_batched_background_migration_is_finished` helper can be used to guarantee that the migration has finished and the
data fully migrated. ([See an example](https://gitlab.com/gitlab-org/gitlab/-/blob/41fbe34a4725a4e357a83fda66afb382828767b2/db/post_migrate/20210707210916_finalize_ci_stages_bigint_conversion.rb#L13-18)).
## How to
### Generate a batched background migration
The custom generator `batched_background_migration` scaffolds necessary files and
accepts `table_name`, `column_name`, and `feature_category` as arguments. When
choosing the `column_name`, ensure that you are using a column type that can be iterated over distinctly,
preferably the table's primary key. The table will be iterated over based on the column defined here.
For more information, see [Batch over non-distinct columns](#batch-over-non-distinct-columns).
Usage:
```shell
bundle exec rails g batched_background_migration my_batched_migration --table_name=<table-name> --column_name=<column-name> --feature_category=<feature-category>
```
This command creates the following files:
- `db/post_migrate/20230214231008_queue_my_batched_migration.rb`
- `spec/migrations/20230214231008_queue_my_batched_migration_spec.rb`
- `lib/gitlab/background_migration/my_batched_migration.rb`
- `spec/lib/gitlab/background_migration/my_batched_migration_spec.rb`
### Enqueue a batched background migration
Queueing a batched background migration should be done in a post-deployment
migration. Use this `queue_batched_background_migration` example, queueing the
migration to be executed in batches. Replace the class name and arguments with the values
from your migration:
```ruby
queue_batched_background_migration(
JOB_CLASS_NAME,
TABLE_NAME,
JOB_ARGUMENTS
)
```
{{< alert type="note" >}}
This helper raises an error if the number of provided job arguments does not match
the number of [job arguments](#use-job-arguments) defined in `JOB_CLASS_NAME`.
{{< /alert >}}
Make sure the newly-created data is either migrated, or
saved in both the old and new version upon creation. Removals in
turn can be handled by defining foreign keys with cascading deletes.
### Finalize a batched background migration
Finalizing a batched background migration is done by calling
`ensure_batched_background_migration_is_finished`, but only if the migration was added
in or before the last required stop. This ensures a smooth upgrade process for
GitLab Self-Managed instances.
It is important to finalize all batched background migrations when it is safe
to do so. Leaving around old batched background migrations is a form of
technical debt that needs to be maintained in tests and in application
behavior.
{{< alert type="note" >}}
You cannot depend on any batched background migration being completed until after it is finalized.
{{< /alert >}}
We recommend that batched background migrations are finalized after all of the
following conditions are met:
- The batched background migration is completed on GitLab.com
- The batched background migration was added in or before the last [required stop](required_stops.md). For example if 17.8 is a required stop and the migration was added in 17.7, the [finalizing migration can be added in 17.9](required_stops.md#long-running-migrations-being-finalized).
The `ensure_batched_background_migration_is_finished` call must exactly match
the migration that was used to enqueue it. Pay careful attention to:
- The job arguments: Needs to exactly match or it will not find the queued migration
- The `gitlab_schema`: Needs to exactly match or it will not find the queued
migration. Even if the `gitlab_schema` of the table has changed from
`gitlab_main` to `gitlab_main_cell` in the meantime you must finalize it
with `gitlab_main` if that's what was used when queueing the batched
background migration.
When finalizing a batched background migration you also need to update the
`finalized_by` in the corresponding `db/docs/batched_background_migrations`
file. The value should be the timestamp/version of the migration you added to
finalize it.
See the below [Examples](#examples) for specific details on what the actual
migration code should be.
{{< alert type="note" >}}
If the migration is being finalized before one required stop has passed since it was enqueued, an early finalization
error is raised. If the migration must be finalized before one required stop has passed,
use the `skip_early_finalization_validation: true` option to skip this check.
{{< /alert >}}
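For illustration, a finalization that intentionally skips this check might look like the following sketch; the job name, table, and column are hypothetical:
```ruby
# Sketch only: finalize a hypothetical BBM before a required stop has passed,
# explicitly skipping the early finalization validation mentioned above.
def up
  ensure_batched_background_migration_is_finished(
    job_class_name: 'BackfillSomeColumn',
    table_name: :some_table,
    column_name: :id,
    job_arguments: [],
    finalize: true,
    skip_early_finalization_validation: true
  )
end
```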
### Deleting batched background migration code
Once a batched background migration has completed, is finalized and has not been [re-queued](#re-queue-batched-background-migrations),
the migration code in `lib/gitlab/background_migration/` and its associated tests can be deleted after the next required stop following
the finalization.
Here is an example scenario:
- 17.3 and 17.5 are required stops.
- In 17.1 the batched background migration is queued.
- In 17.4 the migration may be finalized, provided that it's completed in GitLab.com.
- In 17.6 the code related to the migration may be deleted.
Batched background migration code is routinely deleted when [migrations are squashed](migration_squashing.md).
### Re-queue batched background migrations
A batched background migration might need to be re-run for one of several
reasons:
- The migration contains a bug ([example](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/93546)).
- The migration cleaned up data but the data became de-normalized again due to a
bypass in application logic ([example](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/123002)).
- The batch size of the original migration causes the migration to fail ([example](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/121404)).
To requeue a batched background migration, you must:
- No-op the contents of the `#up` and `#down` methods of the
original migration file. Otherwise, the batched background migration is created,
deleted, then created again on systems that are upgrading multiple patch
releases at once.
- Add a new post-deployment migration that re-runs the batched background
migration.
- In the new post-deployment migration, delete the existing batched background
migration using the `delete_batched_background_migration` method at the start
of the `#up` method to ensure that any existing runs are cleaned up.
- Update the `db/docs/batched_background_migration/*.yml` file from the original
migration to include information about the requeue.
#### Example
**Original Migration**:
```ruby
# frozen_string_literal: true
class QueueResolveVulnerabilitiesForRemovedAnalyzers < Gitlab::Database::Migration[2.2]
milestone '17.3'
MIGRATION = "ResolveVulnerabilitiesForRemovedAnalyzers"
def up
# no-op because there was a bug in the original migration, which has been
# fixed by
end
def down
# no-op because there was a bug in the original migration, which has been
# fixed in https://gitlab.com/gitlab-org/gitlab/-/merge_requests/162527
end
end
```
**Requeued migration**:
```ruby
# frozen_string_literal: true
class RequeueResolveVulnerabilitiesForRemovedAnalyzers < Gitlab::Database::Migration[2.2]
milestone '17.4'
restrict_gitlab_migration gitlab_schema: :gitlab_main
MIGRATION = "ResolveVulnerabilitiesForRemovedAnalyzers"
BATCH_SIZE = 10_000
SUB_BATCH_SIZE = 100
def up
# Clear previous background migration execution from QueueResolveVulnerabilitiesForRemovedAnalyzers
delete_batched_background_migration(MIGRATION, :vulnerability_reads, :id, [])
queue_batched_background_migration(
MIGRATION,
:vulnerability_reads,
:id,
batch_size: BATCH_SIZE,
sub_batch_size: SUB_BATCH_SIZE
)
end
def down
delete_batched_background_migration(MIGRATION, :vulnerability_reads, :id, [])
end
end
```
**Batched migration dictionary**:
The `milestone` and `queued_migration_version` should be the ones of the requeued migration (in this example, `RequeueResolveVulnerabilitiesForRemovedAnalyzers`).
```yaml
---
migration_job_name: ResolveVulnerabilitiesForRemovedAnalyzers
description: Resolves all detected vulnerabilities for removed analyzers.
feature_category: static_application_security_testing
introduced_by_url: https://gitlab.com/gitlab-org/gitlab/-/merge_requests/162691
milestone: '17.4'
queued_migration_version: 20240814085540
finalized_by: # version of the migration that finalized this BBM
```
### Stop and remove batched background migrations
A batched background migration in running state can be stopped and removed for several reasons:
- When the migration is no longer relevant or required as the product use case changed.
- The migration has to be superseded with another migration with a different logic.
To stop and remove an in-progress batched background migration, you must:
- In Release N, no-op the contents of the `#up` and `#down` methods of the scheduling database migration.
```ruby
class BackfillNamespaceType < Gitlab::Database::Migration[2.1]
# Reason why we don't need the BBM anymore. For example: this BBM is no longer needed because it will be superseded by another BBM with different logic.
def up; end
def down; end
end
```
- In Release N, add a regular migration, to delete the existing batched migration.
Delete the existing batched background migration using the `delete_batched_background_migration` method at the
start of the `#up` method to ensure that any existing runs are cleaned up.
```ruby
class CleanupBackfillNamespaceType < Gitlab::Database::Migration[2.1]
MIGRATION = "MyMigrationClass"
restrict_gitlab_migration gitlab_schema: :gitlab_main
def up
delete_batched_background_migration(MIGRATION, :vulnerabilities, :id, [])
end
def down; end
end
```
- In Release N, also delete the migration class file (`lib/gitlab/background_migration/my_batched_migration.rb`) and its specs.
All the above steps can be implemented in a single MR.
### Use job arguments
`BatchedMigrationJob` provides the `job_arguments` helper method for job classes to define the job arguments they need.
Batched migrations scheduled with `queue_batched_background_migration` **must** use the helper to define the job arguments:
```ruby
queue_batched_background_migration(
'CopyColumnUsingBackgroundMigrationJob',
TABLE_NAME,
'name', 'name_convert_to_text'
)
```
{{< alert type="note" >}}
If the number of defined job arguments does not match the number of job arguments provided when
scheduling the migration, `queue_batched_background_migration` raises an error.
{{< /alert >}}
In this example, `copy_from` returns `name`, and `copy_to` returns `name_convert_to_text`:
```ruby
class CopyColumnUsingBackgroundMigrationJob < BatchedMigrationJob
job_arguments :copy_from, :copy_to
operation_name :update_all
def perform
from_column = connection.quote_column_name(copy_from)
to_column = connection.quote_column_name(copy_to)
assignment_clause = "#{to_column} = #{from_column}"
each_sub_batch do |relation|
relation.update_all(assignment_clause)
end
end
end
```
### Use filters
By default, when creating background jobs to perform the migration, batched background migrations
iterate over the full specified table. This iteration is done using the
[`PrimaryKeyBatchingStrategy`](https://gitlab.com/gitlab-org/gitlab/-/blob/c9dabd1f4b8058eece6d8cb4af95e9560da9a2ee/lib/gitlab/database/migrations/batched_background_migration_helpers.rb#L17). If the table has 1000 records
and the batch size is 100, the work is batched into 10 jobs. For illustrative purposes,
`EachBatch` is used like this:
```ruby
# PrimaryKeyBatchingStrategy
Namespace.each_batch(of: 100) do |relation|
relation.where(type: nil).update_all(type: 'User') # this happens in each background job
end
```
#### Using a composite or partial index to iterate a subset of the table
When applying additional filters, it is important to ensure they are properly
[covered by an index](iterating_tables_in_batches.md#example-2-iteration-with-filters)
to optimize `EachBatch` performance.
In the below examples we need an index on `(type, id)` or `id WHERE type IS NULL`
to support the filters. See
the [`EachBatch` documentation](iterating_tables_in_batches.md) for more information.
If you have a suitable index and you want to iterate only a subset of the table
you can apply a `where` clause before the `each_batch` like:
```ruby
# Works well if there is an index like either of:
# - `id WHERE type IS NULL`
# - `(type, id)`
# Does not work well otherwise.
Namespace.where(type: nil).each_batch(of: 100) do |relation|
relation.update_all(type: 'User')
end
```
An advantage of this approach is that you get consistent batch sizes. But it is
only suitable where there is an index that matches the `where` clauses as well
as the batching strategy.
`BatchedMigrationJob` provides a `scope_to` helper method to apply additional filters and achieve this:
1. Create a new migration job class that inherits from `BatchedMigrationJob` and defines the additional filter:
```ruby
class BackfillNamespaceType < BatchedMigrationJob
# Works well if there is an index like either of:
# - `id WHERE type IS NULL`
# - `(type, id)`
# Does not work well otherwise.
scope_to ->(relation) { relation.where(type: nil) }
operation_name :update_all
feature_category :source_code_management
def perform
each_sub_batch do |sub_batch|
sub_batch.update_all(type: 'User')
end
end
end
```
{{< alert type="note" >}}
For EE migrations that define `scope_to`, ensure the module extends `ActiveSupport::Concern`.
Otherwise, records are processed without taking the scope into consideration.
{{< /alert >}}
1. In the post-deployment migration, enqueue the batched background migration:
```ruby
class BackfillNamespaceType < Gitlab::Database::Migration[2.1]
MIGRATION = 'BackfillNamespaceType'
restrict_gitlab_migration gitlab_schema: :gitlab_main
def up
queue_batched_background_migration(
MIGRATION,
:namespaces,
:id
)
end
def down
delete_batched_background_migration(MIGRATION, :namespaces, :id, [])
end
end
```
### Access data for multiple databases
Unlike regular migrations, background migrations have access to multiple databases
and can be used to efficiently access and update data across them. To properly indicate
which database to use, define the ActiveRecord models inline in the migration code.
Such models should use the correct [`ApplicationRecord`](multiple_databases.md#gitlab-schema)
base class, depending on which database the table is located in. Usage of `ActiveRecord::Base`
is disallowed because it does not explicitly describe which database is used to access the given table.
```ruby
# good
class Gitlab::BackgroundMigration::ExtractIntegrationsUrl
class Project < ::ApplicationRecord
self.table_name = 'projects'
end
class Build < ::Ci::ApplicationRecord
self.table_name = 'ci_builds'
end
end
# bad
class Gitlab::BackgroundMigration::ExtractIntegrationsUrl
class Project < ActiveRecord::Base
self.table_name = 'projects'
end
class Build < ActiveRecord::Base
self.table_name = 'ci_builds'
end
end
```
Similarly, the usage of `ActiveRecord::Base.connection` is disallowed; replace it
with the connection of the appropriate model instead.
```ruby
# good
Project.connection.execute("SELECT * FROM projects")
# acceptable
ApplicationRecord.connection.execute("SELECT * FROM projects")
# bad
ActiveRecord::Base.connection.execute("SELECT * FROM projects")
```
### Batch over non-distinct columns
The default batching strategy provides an efficient way to iterate over primary key columns.
However, if you need to iterate over columns where values are not unique, you must use a
different batching strategy.
The `LooseIndexScanBatchingStrategy` batching strategy uses a special version of [`EachBatch`](iterating_tables_in_batches.md#loose-index-scan-with-distinct_each_batch)
to provide efficient and stable iteration over the distinct column values.
This example shows a batched background migration where the `issues.project_id` column is used as
the batching column.
Database post-migration:
```ruby
class ProjectsWithIssuesMigration < Gitlab::Database::Migration[2.1]
MIGRATION = 'BatchProjectsWithIssues'
BATCH_SIZE = 5000
SUB_BATCH_SIZE = 500
restrict_gitlab_migration gitlab_schema: :gitlab_main
disable_ddl_transaction!
def up
queue_batched_background_migration(
MIGRATION,
:issues,
:project_id,
batch_size: BATCH_SIZE,
batch_class_name: 'LooseIndexScanBatchingStrategy', # Override the default batching strategy
sub_batch_size: SUB_BATCH_SIZE
)
end
def down
delete_batched_background_migration(MIGRATION, :issues, :project_id, [])
end
end
```
Implementing the background migration class:
```ruby
module Gitlab
module BackgroundMigration
class BatchProjectsWithIssues < Gitlab::BackgroundMigration::BatchedMigrationJob
include Gitlab::Database::DynamicModelHelpers
operation_name :backfill_issues
def perform
distinct_each_batch do |batch|
project_ids = batch.pluck(batch_column)
# do something with the distinct project_ids
end
end
end
end
end
```
{{< alert type="note" >}}
[Additional filters](#use-filters) defined with `scope_to` are ignored by `LooseIndexScanBatchingStrategy` and `distinct_each_batch`.
{{< /alert >}}
### Calculate overall time estimation of a batched background migration
It's possible to estimate how long a BBM takes to complete. GitLab already provides an estimation through the `db:gitlabcom-database-testing` pipeline.
This estimation is based on sampling production data in a test environment and represents the maximum time that the migration could take, not necessarily
the actual time that the migration takes. In certain scenarios, the estimations provided by the `db:gitlabcom-database-testing` pipeline may not account for
all the singularities of the records being migrated, making further calculations necessary. In that case, the formula
`interval * total tuple count / max batch size` can be used to determine an approximate estimation of how long the migration takes,
where `interval` and `max batch size` refer to options defined for the job, and `total tuple count` is the number of records to be migrated.
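As a worked example with made-up numbers, a 2-minute interval, 10 million records, and a maximum batch size of 10,000 gives roughly 2,000 minutes, or about a day and a half:
```ruby
# Hypothetical values, purely to illustrate the formula above.
interval       = 2 * 60       # seconds between jobs
total_tuples   = 10_000_000   # records to migrate
max_batch_size = 10_000       # upper bound for the optimized batch size

estimated_seconds = interval * (total_tuples / max_batch_size)
estimated_seconds / 3600.0
# => ~33 hours of wall-clock time
```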
{{< alert type="note" >}}
Estimations may be affected by the [migration optimization mechanism](#migration-optimization).
{{< /alert >}}
### Cleaning up a batched background migration
{{< alert type="note" >}}
Cleaning up any remaining background migrations must be done in either a major
or minor release. You must not do this in a patch release.
{{< /alert >}}
Because background migrations can take a long time, you can't immediately clean
things up after queueing them. For example, you can't drop a column used in the
migration process, as jobs would fail. You must add a separate _post-deployment_
migration in a future release that finishes any remaining
jobs before cleaning things up. (For example, removing a column.)
To migrate the data from column `foo` (containing a big JSON blob) to column `bar`
(containing a string), you would:
1. Release A:
1. Create a migration class that performs the migration for a row with a given ID.
1. Update new rows using one of these techniques:
- Create a new trigger for copy operations that don't need application logic.
- Handle this operation in the model/service as the records are created or updated.
- Create a new custom background job that updates the records.
1. Queue the batched background migration for all existing rows in a post-deployment migration.
1. Release B:
1. Add a post-deployment migration that checks if the batched background migration is completed.
1. Deploy code so that the application starts using the new column and stops updating new records.
1. Remove the old column.
Bumping the [import/export version](../../user/project/settings/import_export.md) may
be required, if importing a project from a prior version of GitLab requires the
data to be in the new format.
### Add indexes to support batched background migrations
Sometimes it is necessary to add a new or temporary index to support a batched background migration.
To do this, create the index in a post-deployment migration that precedes the post-deployment
migration that queues the background migration.
See the documentation for [adding database indexes](adding_database_indexes.md#analyzing-a-new-index-before-a-batched-background-migration)
for additional information about some cases that require special attention to allow the index to be used directly after
creation.
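A hedged sketch of such a migration, with hypothetical table, column, and index names, using the standard `add_concurrent_index` helper:
```ruby
# Post-deployment migration that precedes the one queueing the BBM.
# Table, column, and index names are illustrative only.
class AddTmpIndexForSomeBackfill < Gitlab::Database::Migration[2.2]
  disable_ddl_transaction!

  INDEX_NAME = 'tmp_index_some_table_on_some_column'

  def up
    add_concurrent_index :some_table, :some_column, name: INDEX_NAME
  end

  def down
    remove_concurrent_index_by_name :some_table, INDEX_NAME
  end
end
```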
### Execute a particular batch on the database testing pipeline
{{< alert type="note" >}}
Only [database maintainers](https://gitlab.com/groups/gitlab-org/maintainers/database/-/group_members?with_inherited_permissions=exclude) can view the database testing pipeline artifacts. Ask one for help if you need to use this method.
{{< /alert >}}
Let's assume that a batched background migration failed on a particular batch on GitLab.com and you want to figure out which query failed and why. At the moment, we don't have a good way to retrieve query information (especially the query parameters) and rerunning the entire migration with more logging would be a long process.
Fortunately you can leverage our [database migration pipeline](database_migration_pipeline.md) to rerun a particular batch with additional logging and/or fix to see if it solves the problem.
For an example see [Draft: `Test PG::CardinalityViolation` fix](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/110910) but make sure to read the entire section.
To do that, you need to:
1. [Find the batch `start_id` and `end_id`](#find-the-batch-start_id-and-end_id)
1. [Create a regular migration](#create-a-regular-migration)
1. [Apply a workaround for our migration helpers](#apply-a-workaround-for-our-migration-helpers-optional) (optional)
1. [Start the database migration pipeline](#start-the-database-migration-pipeline)
#### Find the batch `start_id` and `end_id`
You should be able to find those in [Kibana](#viewing-failure-error-logs).
#### Create a regular migration
Schedule the batch in the `up` block of a regular migration:
```ruby
def up
instance = Gitlab::BackgroundMigration::YourBackgroundMigrationClass.new(
start_id: <batch start_id>,
end_id: <batch end_id>,
batch_table: <table name>,
batch_column: <batching column>,
sub_batch_size: <sub batch size>,
pause_ms: <milliseconds between batches>,
job_arguments: <job arguments if any>,
connection: connection
)
instance.perform
end
def down
# no-op
end
```
#### Apply a workaround for our migration helpers (optional)
If your batched background migration touches tables from a schema other than the one you specified with the `restrict_gitlab_migration` helper (for example, the scheduling migration has `restrict_gitlab_migration gitlab_schema: :gitlab_main` but the background job uses tables from the `:gitlab_ci` schema), then the migration fails. To prevent that from happening, you must monkey patch the database helpers so they don't fail the testing pipeline job:
1. Add the schema names to [`RestrictGitlabSchema`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/database/migration_helpers/restrict_gitlab_schema.rb#L57)
```diff
diff --git a/lib/gitlab/database/migration_helpers/restrict_gitlab_schema.rb b/lib/gitlab/database/migration_helpers/restrict_gitlab_schema.rb
index b8d1d21a0d2d2a23d9e8c8a0a17db98ed1ed40b7..912e20659a6919f771045178c66828563cb5a4a1 100644
--- a/lib/gitlab/database/migration_helpers/restrict_gitlab_schema.rb
+++ b/lib/gitlab/database/migration_helpers/restrict_gitlab_schema.rb
@@ -55,7 +55,7 @@ def unmatched_schemas
end
def allowed_schemas_for_connection
- Gitlab::Database.gitlab_schemas_for_connection(connection)
+ Gitlab::Database.gitlab_schemas_for_connection(connection) << :gitlab_ci
end
end
end
```
1. Add the schema names to [`RestrictAllowedSchemas`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/database/query_analyzers/restrict_allowed_schemas.rb#L82)
```diff
diff --git a/lib/gitlab/database/query_analyzers/restrict_allowed_schemas.rb b/lib/gitlab/database/query_analyzers/restrict_allowed_schemas.rb
index 4ae3622479f0800c0553959e132143ec9051898e..d556ec7f55adae9d46a56665ce02de782cb09f2d 100644
--- a/lib/gitlab/database/query_analyzers/restrict_allowed_schemas.rb
+++ b/lib/gitlab/database/query_analyzers/restrict_allowed_schemas.rb
@@ -79,7 +79,7 @@ def restrict_to_dml_only(parsed)
tables = self.dml_tables(parsed)
schemas = self.dml_schemas(tables)
- if (schemas - self.allowed_gitlab_schemas).any?
+ if (schemas - (self.allowed_gitlab_schemas << :gitlab_ci)).any?
raise DMLAccessDeniedError, \
"Select/DML queries (SELECT/UPDATE/DELETE) do access '#{tables}' (#{schemas.to_a}) " \
"which is outside of list of allowed schemas: '#{self.allowed_gitlab_schemas}'. " \
```
#### Start the database migration pipeline
Create a Draft merge request with your changes and trigger the manual `db:gitlabcom-database-testing` job.
### Establish dependencies
In some instances, a migration depends on the completion of previously enqueued BBMs. If those BBMs are
still running, the dependent migration fails. For example, introducing a unique index on a large table can depend on
a previously enqueued BBM to handle any duplicate records.
The following process has been configured to make dependencies more evident while writing a migration.
- The version of the migration that queued the BBM is stored in the `batched_background_migrations` table and in the BBM dictionary file.
- The `DEPENDENT_BATCHED_BACKGROUND_MIGRATIONS` constant is added (commented out by default) in each migration file.
To establish a dependency, add the `queued_migration_version` of the dependent BBMs. Otherwise, remove
the commented line.
- The `Migration::UnfinishedDependencies` cop complains if the dependent BBMs are not yet finished. It determines
whether they are finished by looking up the `finalized_by` key in the
[BBM dictionary](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/generators/batched_background_migration/templates/batched_background_migration_dictionary.template).
Example:
```ruby
# db/post_migrate/20231113120650_queue_backfill_routes_namespace_id.rb
class QueueBackfillRoutesNamespaceId < Gitlab::Database::Migration[2.1]
MIGRATION = 'BackfillRouteNamespaceId'
restrict_gitlab_migration gitlab_schema: :gitlab_main
...
...
def up
queue_batched_background_migration(
MIGRATION,
...
)
end
end
```
```ruby
# This depends on the finalization of QueueBackfillRoutesNamespaceId BBM
class AddNotNullToRoutesNamespaceId < Gitlab::Database::Migration[2.1]
DEPENDENT_BATCHED_BACKGROUND_MIGRATIONS = ["20231113120650"]
def up
add_not_null_constraint :routes, :namespace_id
end
def down
remove_not_null_constraint :routes, :namespace_id
end
end
```
## Managing
{{< alert type="note" >}}
BBM management takes place through `chatops` integration, which is limited to GitLab team members only.
{{< /alert >}}
### List batched background migrations
To list the batched background migrations in the system, run this command:
`/chatops run batched_background_migrations list`
This command supports the following options:
- Database selection:
- `--database DATABASE_NAME`: Connects to the given database:
- `main`: Uses the main database (default).
- `ci`: Uses the CI database.
- Environment selection:
- `--dev`: Uses the `dev` environment.
- `--staging`: Uses the `staging` environment.
- `--staging_ref`: Uses the `staging_ref` environment.
- `--production`: Uses the `production` environment (default).
- Filter by job class
- `--job-class-name JOB_CLASS_NAME`: Only list jobs for the given job class.
- This is the `migration_job_name` in the YAML definition of the background migration.
Output example:

{{< alert type="note" >}}
ChatOps returns 20 batched background migrations ordered by `created_at` (descending).
{{< /alert >}}
### Monitor the progress and status of a batched background migration
To see the status and progress of a specific batched background migration, run this command:
`/chatops run batched_background_migrations status MIGRATION_ID`
This command supports the following options:
- Database selection:
- `--database DATABASE_NAME`: Connects to the given database:
- `main`: Uses the main database (default).
- `ci`: Uses the CI database.
- Environment selection:
- `--dev`: Uses the `dev` environment.
- `--staging`: Uses the `staging` environment.
- `--staging_ref`: Uses the `staging_ref` environment.
- `--production`: Uses the `production` environment (default).
Output example:

`Progress` represents the percentage of the background migration that has been completed.
Definitions of the batched background migration states:
- **Active**: Either:
- Ready to be picked by the runner.
- Running batched jobs.
- **Finalizing**: Running batched jobs.
- **Failed**: Failed batched background migration.
- **Finished**: All jobs were executed successfully and the batched background migration is complete.
- **Paused**: Not visible to the runner.
- **Finalized**: Batched migration was verified with
[`ensure_batched_background_migration_is_finished`](#finalize-a-batched-background-migration) and is complete.
### Pause a batched background migration
If you want to pause a batched background migration, you need to run the following command:
`/chatops run batched_background_migrations pause MIGRATION_ID`
This command supports the following options:
- Database selection:
- `--database DATABASE_NAME`: Connects to the given database:
- `main`: Uses the main database (default).
- `ci`: Uses the CI database.
- Environment selection:
- `--dev`: Uses the `dev` environment.
- `--staging`: Uses the `staging` environment.
- `--staging_ref`: Uses the `staging_ref` environment.
- `--production`: Uses the `production` environment (default).
Output example:

{{< alert type="note" >}}
You can pause only `active` batched background migrations.
{{< /alert >}}
### Resume a batched background migration
If you want to resume a batched background migration, you need to run the following command:
`/chatops run batched_background_migrations resume MIGRATION_ID`
This command supports the following options:
- Database selection:
- `--database DATABASE_NAME`: Connects to the given database:
- `main`: Uses the main database (default).
- `ci`: Uses the CI database.
- Environment selection:
- `--dev`: Uses the `dev` environment.
- `--staging`: Uses the `staging` environment.
- `--staging_ref`: Uses the `staging_ref` environment.
- `--production`: Uses the `production` environment (default).
Output example:

{{< alert type="note" >}}
You can resume only `active` batched background migrations.
{{< /alert >}}
### Enable or disable background migrations
In extremely limited circumstances, a GitLab administrator can disable either or
both of these [feature flags](../../administration/feature_flags/_index.md):
- `execute_background_migrations`
- `execute_batched_migrations_on_schedule`
These flags are enabled by default. Disable them only as a last resort
to limit database operations in special circumstances, like database host maintenance.
{{< alert type="warning" >}}
Do not disable either of these flags unless you fully understand the ramifications. If you disable
the `execute_background_migrations` or `execute_batched_migrations_on_schedule` feature flag,
GitLab upgrades might fail and data loss might occur.
{{< /alert >}}
## Batched background migrations for EE-only features
All the background migration classes for EE-only features should be present in GitLab FOSS.
For this purpose, create an empty class for GitLab FOSS, and extend it for GitLab EE
as explained in the guidelines for
[implementing Enterprise Edition features](../ee_features.md#code-in-libgitlabbackground_migration).
{{< alert type="note" >}}
Background migration classes for EE-only features that use job arguments should define them
in the GitLab FOSS class. Definitions are required to prevent job arguments validation from failing when
the migration is scheduled in the GitLab FOSS context.
{{< /alert >}}
You can use the [generator](#generate-a-batched-background-migration) to generate an EE-only migration scaffold by passing
`--ee-only` flag when generating a new batched background migration.
## Debug
### Viewing failure error logs
You can view failures in two ways:
- Via GitLab logs:
1. After running a batched background migration, if any jobs fail,
view the logs in [Kibana](https://log.gprd.gitlab.net/goto/4cb43f40-f861-11ec-b86b-d963a1a6788e).
View the production Sidekiq log and filter for:
- `json.new_state: failed`
- `json.job_class_name: <Batched Background Migration job class name>`
- `json.job_arguments: <Batched Background Migration job class arguments>`
1. Review the `json.exception_class` and `json.exception_message` values to help
understand why the jobs failed.
1. Remember the retry mechanism. A logged failure does not necessarily mean the job ultimately failed.
Always check the last status of the job.
- Via database:
1. Get the batched background migration `CLASS_NAME`.
1. Execute the following query in the PostgreSQL console:
```sql
SELECT migration.id, migration.job_class_name, transition_logs.exception_class, transition_logs.exception_message
FROM batched_background_migrations as migration
INNER JOIN batched_background_migration_jobs as jobs
ON jobs.batched_background_migration_id = migration.id
INNER JOIN batched_background_migration_job_transition_logs as transition_logs
ON transition_logs.batched_background_migration_job_id = jobs.id
WHERE transition_logs.next_status = '2' AND migration.job_class_name = 'CLASS_NAME';
```
## Testing
Writing tests is required for:
- The batched background migrations' queueing migration.
- The batched background migration itself.
- A cleanup migration.
The `:migration` and `schema: :latest` RSpec tags are automatically set for
background migration specs. Refer to the
[Testing Rails migrations](../testing_guide/testing_migrations_guide.md#testing-a-non-activerecordmigration-class)
style guide.
Remember that `before` and `after` RSpec hooks
migrate your database down and up. These hooks can result in other batched background
migrations being called. Using `spy` test doubles with
`have_received` is encouraged, instead of using regular test doubles, because
your expectations defined in an `it` block can conflict with what is
called in RSpec hooks. Refer to [issue #35351](https://gitlab.com/gitlab-org/gitlab/-/issues/18839)
for more details.
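A minimal job spec sketch, assuming the `BackfillRouteNamespaceId` job from the examples below and hypothetical table data; the constructor arguments mirror the ones shown in [Create a regular migration](#create-a-regular-migration):
```ruby
RSpec.describe Gitlab::BackgroundMigration::BackfillRouteNamespaceId, feature_category: :source_code_management do
  # `table` builds a plain model for the given table at the spec's schema version.
  let(:routes) { table(:routes) }

  let(:migration) do
    described_class.new(
      start_id: routes.minimum(:id),
      end_id: routes.maximum(:id),
      batch_table: :routes,
      batch_column: :id,
      sub_batch_size: 100,
      pause_ms: 0,
      connection: ApplicationRecord.connection
    )
  end

  it 'copies source_id into namespace_id' do
    # Column values are illustrative; the real table has more required columns.
    route = routes.create!(path: 'foo', name: 'foo', source_id: 42, source_type: 'Namespace')

    migration.perform

    expect(route.reload.namespace_id).to eq(42)
  end
end
```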
## Best practices
1. Know how much data you're dealing with.
1. Make sure the batched background migration jobs are idempotent.
1. Confirm the tests you write are not false positives.
1. If the data being migrated is critical and cannot be lost, the
clean-up migration must also check the final state of the data before completing.
1. Discuss the numbers with a database specialist. The migration may add
more pressure on DB than you expect. Measure on staging,
or ask someone to measure on production.
1. Know how much time is required to run the batched background migration.
1. Be careful when silently rescuing exceptions inside job classes. This may lead to
jobs being marked as successful, even in a failure scenario.
```ruby
# good
def perform
each_sub_batch do |sub_batch|
sub_batch.update_all(name: 'My Name')
end
end
# acceptable
def perform
each_sub_batch do |sub_batch|
sub_batch.update_all(name: 'My Name')
rescue Exception => error
logger.error(message: error.message, class: error.class)
raise
end
end
# bad
def perform
each_sub_batch do |sub_batch|
sub_batch.update_all(name: 'My Name')
rescue Exception => error
logger.error(message: error.message, class: self.class.name)
end
end
```
1. If possible, update the entire sub-batch in a single query
instead of updating each model separately.
This can be achieved in different ways, depending on the scenario.
- Generate an `UPDATE` query, and use `FROM` to join the tables
that provide the necessary values
([example](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/184051)).
- Generate an `UPDATE` query, and use `FROM(VALUES( ...))` to
pass values calculated beforehand
([example](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/177993)).
- Pass all keys and values to `ActiveRecord::Relation#update`.
```ruby
# good
def perform
each_sub_batch do |sub_batch|
connection.execute <<~SQL
UPDATE fork_networks
SET organization_id = projects.organization_id
FROM projects
WHERE fork_networks.id IN (#{sub_batch.pluck(:id).join(', ')})
AND fork_networks.root_project_id = projects.id
AND fork_networks.organization_id IS NULL
SQL
end
end
# bad
def perform
each_sub_batch do |sub_batch|
sub_batch.each do |fork_network|
fork_network.update!(organization_id: fork_network.root_project.organization_id)
end
end
end
```
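For the `FROM (VALUES (...))` variant mentioned in the list above, a hedged sketch could look like the following; the table, columns, and the value calculated in Ruby are made up:
```ruby
# Sketch only: values are computed in Ruby beforehand and applied in one UPDATE.
def perform
  each_sub_batch do |sub_batch|
    rows = sub_batch.pluck(:id, :name).map do |id, name|
      "(#{Integer(id)}, #{connection.quote(name.to_s.strip.downcase)})"
    end

    next if rows.empty?

    connection.execute(<<~SQL)
      UPDATE some_table
      SET normalized_name = input.normalized_name
      FROM (VALUES #{rows.join(', ')}) AS input(id, normalized_name)
      WHERE some_table.id = input.id
    SQL
  end
end
```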
### Use of scope_to
When writing a batched background migration class, you have the option to define a `scope_to` block. This block adds an additional qualifier to the query that determines the minimum and maximum range for each batch.
By default, the batching range is determined using the primary key index, which is highly efficient. However, using `scope_to` means the query must consider only rows matching the given condition, potentially impacting performance.
Consider the following simple query:
```sql
SELECT id FROM users WHERE id BETWEEN 1 AND 3000;
```
This query is fast because the `id` column is indexed. PostgreSQL can use an index-only scan to return results efficiently. The query plan might look like this:
```plain
QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------
Index Only Scan using users_pkey on users (cost=0.44..307.24 rows=2751 width=4) (actual time=0.016..177.028 rows=2654 loops=1)
Index Cond: ((id >= 1) AND (id <= 3000))
Heap Fetches: 219
Planning Time: 0.183 ms
Execution Time: 177.158 ms
```
Now, let's apply a scope:
```ruby
scope_to ->(relation) { relation.where(theme_id: 4) }
```
This results in the following query:
```sql
SELECT id FROM users WHERE id BETWEEN 1 AND 3000 AND theme_id = 4;
```
The associated query plan is less efficient:
```plain
QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------
Index Scan using users_pkey on users (cost=0.44..3773.66 rows=10 width=4) (actual time=8.047..2290.528 rows=28 loops=1)
Index Cond: ((id >= 1) AND (id <= 3000))
Filter: (theme_id = 4)
Rows Removed by Filter: 2626
Planning Time: 1.292 ms
Execution Time: 2290.582 ms
```
In this case, PostgreSQL uses an index scan on `id` but applies the `theme_id` filter after row access. This causes many rows to be discarded after retrieval, resulting in degraded performance, over 12x slower in this case.
#### When to override
Use `scope_to` **only when the scoped column is indexed**, and ideally, the batching query avoids filtering out rows.
A strong indicator of good performance is the absence of the `Rows Removed by Filter` line in the query plan.
Let's improve performance by indexing the `theme_id` column:
```sql
CREATE INDEX idx_users_theme_id ON users (theme_id);
```
Re-running the same query produces this plan:
```plain
QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------
Bitmap Heap Scan on users (cost=691.28..706.53 rows=10 width=4) (actual time=13.532..13.578 rows=28 loops=1)
Recheck Cond: ((id >= 1) AND (id <= 3000) AND (theme_id = 4))
Heap Blocks: exact=28
Buffers: shared hit=41 read=62
I/O Timings: shared read=0.721
-> BitmapAnd (cost=691.28..691.28 rows=10 width=0) (actual time=13.509..13.511 rows=0 loops=1)
Buffers: shared hit=13 read=62
I/O Timings: shared read=0.721
-> Bitmap Index Scan on users_pkey (cost=0.00..45.95 rows=2751 width=0) (actual time=0.390..0.390 rows=2654 loops=1)
Index Cond: ((id >= 1) AND (id <= 3000))
Buffers: shared hit=10
-> Bitmap Index Scan on idx_users_theme_id (cost=0.00..645.08 rows=73352 width=0) (actual time=12.933..12.933 rows=69872 loops=1)
Index Cond: (theme_id = 4)
Buffers: shared hit=3 read=62
I/O Timings: shared read=0.721
Planning:
Buffers: shared hit=35 read=1 dirtied=2
I/O Timings: shared read=0.045
Planning Time: 0.514 ms
Execution Time: 13.634 ms
```
#### Summary
Use `scope_to` **only** when:
- The scoped column is backed by an index.
- Query plans avoid significant row filtering (`Rows Removed by Filter` is low or absent).
- Batching remains efficient under real data loads.
Otherwise, scoping can drastically reduce performance.
## Examples
### Routes use-case
The `routes` table has a `source_type` field that's used for a polymorphic relationship.
As part of a database redesign, we're removing the polymorphic relationship. One step of
the work is migrating data from the `source_id` column into a new singular foreign key.
Because we intend to delete old rows later, there's no need to update them as part of the
background migration.
1. Start by using the generator to create batched background migration files:
```shell
bundle exec rails g batched_background_migration BackfillRouteNamespaceId --table_name=routes --column_name=id --feature_category=source_code_management
```
1. Update the migration job (subclass of `BatchedMigrationJob`) to copy `source_id` values to `namespace_id`:
```ruby
class Gitlab::BackgroundMigration::BackfillRouteNamespaceId < BatchedMigrationJob
# For illustration purposes, if we were to use a local model we could
# define it like below, using an `ApplicationRecord` as the base class
# class Route < ::ApplicationRecord
# self.table_name = 'routes'
# end
operation_name :update_all
feature_category :source_code_management
def perform
each_sub_batch(
batching_scope: -> (relation) { relation.where("source_type <> 'UnusedType'") }
) do |sub_batch|
sub_batch.update_all('namespace_id = source_id')
end
end
end
```
{{< alert type="note" >}}
Job classes inherit from `BatchedMigrationJob` to ensure they are
correctly handled by the batched migration framework. Any subclass of
`BatchedMigrationJob` is initialized with the necessary arguments to
execute the batch, and a connection to the tracking database.
{{< /alert >}}
1. Create a database migration that adds a new trigger to the database. Example:
```ruby
class AddTriggerToRoutesToCopySourceIdToNamespaceId < Gitlab::Database::Migration[2.1]
FUNCTION_NAME = 'example_function'
TRIGGER_NAME = 'example_trigger'
def up
execute(<<~SQL)
CREATE OR REPLACE FUNCTION #{FUNCTION_NAME}() RETURNS trigger
LANGUAGE plpgsql
AS $$
BEGIN
NEW."namespace_id" = NEW."source_id"
RETURN NEW;
END;
$$;
CREATE TRIGGER #{TRIGGER_NAME} AFTER INSERT OR UPDATE
ON routes
FOR EACH ROW EXECUTE FUNCTION #{FUNCTION_NAME}();
SQL
end
def down
drop_trigger(TRIGGER_NAME, :routes)
drop_function(FUNCTION_NAME)
end
end
```
1. Update the created post-deployment migration with required batch sizes:
```ruby
class QueueBackfillRoutesNamespaceId < Gitlab::Database::Migration[2.1]
MIGRATION = 'BackfillRouteNamespaceId'
BATCH_SIZE = 1000
SUB_BATCH_SIZE = 100
restrict_gitlab_migration gitlab_schema: :gitlab_main
def up
queue_batched_background_migration(
MIGRATION,
:routes,
:id,
batch_size: BATCH_SIZE,
sub_batch_size: SUB_BATCH_SIZE
)
end
def down
delete_batched_background_migration(MIGRATION, :routes, :id, [])
end
end
```
```yaml
# db/docs/batched_background_migrations/backfill_route_namespace_id.yml
---
migration_job_name: BackfillRouteNamespaceId
description: Copies source_id values from routes to namespace_id
feature_category: source_code_management
introduced_by_url: "https://mr_url"
milestone: 16.6
queued_migration_version: 20231113120650
finalized_by: # version of the migration that ensured this bbm
```
{{< alert type="note" >}}
When queuing a batched background migration, you need to restrict
the schema to the database where you make the actual changes.
In this case, we are updating `routes` records, so we set
`restrict_gitlab_migration gitlab_schema: :gitlab_main`. If, however,
you need to perform a CI data migration, you would set
`restrict_gitlab_migration gitlab_schema: :gitlab_ci`.
{{< /alert >}}
After deployment, our application:
- Continues using the data as before.
- Ensures that both existing and new data are migrated.
1. Add a new post-deployment migration that checks that the batched background migration is complete. Also update
`finalized_by` attribute in BBM dictionary with the version of this migration.
```ruby
class FinalizeBackfillRouteNamespaceId < Gitlab::Database::Migration[2.1]
disable_ddl_transaction!
restrict_gitlab_migration gitlab_schema: :gitlab_main
def up
ensure_batched_background_migration_is_finished(
job_class_name: 'BackfillRouteNamespaceId',
table_name: :routes,
column_name: :id,
job_arguments: [],
finalize: true
)
end
def down
# no-op
end
end
```
```yaml
# db/docs/batched_background_migrations/backfill_route_namespace_id.yml
---
migration_job_name: BackfillRouteNamespaceId
description: Copies source_id values from routes to namespace_id
feature_category: source_code_management
introduced_by_url: "https://mr_url"
milestone: 16.6
queued_migration_version: 20231113120650
finalized_by: 20231115120912
```
{{< alert type="note" >}}
If the batched background migration is not finished, the system will
execute the batched background migration inline. If you don't want
to see this behavior, you need to pass `finalize: false`.
{{< /alert >}}
If the application does not depend on the data being 100% migrated (for
instance, the data is advisory, and not mission-critical), then you can skip this
final step. This step confirms that the migration is completed, and all of the rows were migrated.
1. Add a database migration to remove the trigger.
```ruby
class RemoveNamespaceIdTriggerFromRoutes < Gitlab::Database::Migration[2.1]
FUNCTION_NAME = 'example_function'
TRIGGER_NAME = 'example_trigger'
def up
drop_trigger(TRIGGER_NAME, :routes)
drop_function(FUNCTION_NAME)
end
def down
# Should reverse the trigger and the function in the up method of the migration that added it
end
end
```
After the batched migration is completed, you can safely depend on the
data in `routes.namespace_id` being populated.
|
---
stage: Data Access
group: Database Frameworks
info: 'See the Technical Writers assigned to Development Guidelines: https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments-to-development-guidelines'
title: Batched background migrations
breadcrumbs:
- doc
- development
- database
---
Batched background migrations should be used to perform data migrations whenever a
migration exceeds [the time limits](../migration_style_guide.md#how-long-a-migration-should-take)
in our guidelines. For example, you can use batched background
migrations to migrate data that's stored in a single JSON column
to a separate table instead.
{{< alert type="note" >}}
Batched background migrations replaced the legacy background migrations framework.
Check that documentation in reference to any changes involving that framework.
{{< /alert >}}
{{< alert type="note" >}}
The batched background migrations framework has ChatOps support. Using ChatOps, GitLab engineers can interact with the batched background migrations present in the system.
{{< /alert >}}
## When to use batched background migrations
Use a batched background migration when you migrate data in tables containing
so many rows that the process would exceed
[the time limits in our guidelines](../migration_style_guide.md#how-long-a-migration-should-take)
if performed using a regular Rails migration.
- Batched background migrations should be used when migrating data in
[high-traffic tables](../migration_style_guide.md#high-traffic-tables).
- Batched background migrations may also be used when executing numerous single-row queries
for every item on a large dataset. Typically, for single-record patterns, runtime is
largely dependent on the size of the dataset. Split the dataset accordingly,
and put it into background migrations.
- Don't use batched background migrations to perform schema migrations.
Background migrations can help when:
- Migrating events from one table to multiple separate tables.
- Populating one column based on JSON stored in another column.
- Migrating data that depends on the output of external services. (For example, an API.)
### Notes
- If the batched background migration is part of an important upgrade, it must be announced
in the release post. Discuss with your Project Manager if you're unsure if the migration falls
into this category.
- You should use the [generator](#generate-a-batched-background-migration) to create batched background migrations,
so that required files are created by default.
## How batched background migrations work
Batched background migrations (BBM) are subclasses of
`Gitlab::BackgroundMigration::BatchedMigrationJob` that define a `perform` method.
As the first step, a regular migration creates a `batched_background_migrations`
record with the BBM class and the required arguments. By default,
`batched_background_migrations` is in an active state, and those are picked up
by the Sidekiq worker to execute the actual batched migration.
All migration classes must be defined in the namespace `Gitlab::BackgroundMigration`. Place the files
in the directory `lib/gitlab/background_migration/`.
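For orientation, here is a minimal sketch of such a job class. The table, column, and feature category are hypothetical; the individual pieces (`operation_name`, `each_sub_batch`, job arguments, and scoping) are covered in the sections that follow.
```ruby
# lib/gitlab/background_migration/backfill_example_column.rb
module Gitlab
  module BackgroundMigration
    # Hypothetical job: set a default value on a nullable column.
    class BackfillExampleColumn < BatchedMigrationJob
      operation_name :update_all
      feature_category :source_code_management

      def perform
        each_sub_batch do |sub_batch|
          sub_batch.update_all(example_column: true)
        end
      end
    end
  end
end
```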
### Execution mechanism
Batched background migrations are picked from the queue in the order they are enqueued. Multiple migrations are fetched
and executed in parallel, as long as they are in an active state and do not target the same database table.
The default number of migrations processed in parallel is 2; for GitLab.com, this limit is configured to 4.
Once a migration is picked for execution, a job is created for the specific batch. After each job execution, the migration's
batch size may be increased or decreased, based on the performance of the last 20 jobs.
```plantuml
@startuml
hide empty description
skinparam ConditionEndStyle hline
left to right direction
rectangle "Batched background migration queue" as migrations {
rectangle "Migration N (active)" as migrationn
rectangle "Migration 1 (completed)" as migration1
rectangle "Migration 2 (active)" as migration2
rectangle "Migration 3 (on hold)" as migration3
rectangle "Migration 4 (active)" as migration4
migration1 -[hidden]> migration2
migration2 -[hidden]> migration3
migration3 -[hidden]> migration4
migration4 -[hidden]> migrationn
}
rectangle "Execution Workers" as workers {
rectangle "Execution Worker 1 (busy)" as worker1
rectangle "Execution Worker 2 (available)" as worker2
worker1 -[hidden]> worker2
}
migration2 --> [Scheduling Worker]
migration4 --> [Scheduling Worker]
[Scheduling Worker] --> worker2
@enduml
```
As soon as a worker is available, the BBM is processed by the runner.
```plantuml
@startuml
hide empty description
start
rectangle Runner {
:Migration;
if (Have reached batching bounds?) then (Yes)
if (Have jobs to retry?) then (Yes)
:Fetch the batched job;
else (No)
:Finish active migration;
stop
endif
else (No)
:Create a batched job;
endif
:Execute batched job;
:Evaluate DB health;
note right: Checks for table autovacuum, Patroni Apdex, Write-ahead logging
if (Evaluation signs to stop?) then (Yes)
:Put migration on hold;
else (No)
:Optimize migration;
endif
}
@enduml
```
### Idempotence
Batched background migrations are executed in the context of a Sidekiq process.
The usual Sidekiq rules apply, especially the rule that jobs should be small
and idempotent. Ensure that if your migration job is retried, data
integrity is guaranteed.
See [Sidekiq best practices guidelines](https://github.com/mperham/sidekiq/wiki/Best-Practices)
for more details.
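For example (hypothetical column names), a job that writes absolute values and skips rows that are already migrated can safely be retried, while a job that increments a counter cannot:
```ruby
# good: idempotent. Re-running the job leaves already-migrated rows unchanged,
# because the written value does not depend on how many times the job ran and
# already-migrated rows are filtered out.
def perform
  each_sub_batch do |sub_batch|
    sub_batch.where(namespace_id: nil).update_all('namespace_id = source_id')
  end
end

# bad: not idempotent. A retry after a partial failure counts some rows twice.
def perform
  each_sub_batch do |sub_batch|
    sub_batch.update_all('events_count = events_count + 1')
  end
end
```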
### Migration optimization
After each job execution, a verification takes place to check if the migration can be optimized.
The underlying optimization mechanism is based on the concept of time efficiency. It calculates
the exponential moving average of time efficiencies for the last N jobs and updates the batch
size of the batched background migration to its optimal value.
This mechanism, however, makes it hard for us to provide an accurate estimation for total
execution time of the migration when using the [database migration pipeline](database_migration_pipeline.md).
We are discussing ways to fix this problem in
[this issue](https://gitlab.com/gitlab-org/database-team/gitlab-com-database-testing/-/issues/162).
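The following toy script only illustrates the idea; it is not the actual implementation, and the smoothing factor, target duration, and job durations are made-up values. Jobs that finish faster than the target nudge the batch size up, slower jobs nudge it down.
```ruby
TARGET_DURATION = 2.5 * 60 # seconds per job (assumed target)
SMOOTHING = 0.4            # assumed exponential smoothing factor

# Exponential moving average: recent values weigh more than older ones.
def exponential_moving_average(values, smoothing)
  values.inject { |average, value| smoothing * value + (1 - smoothing) * average }
end

recent_durations = [180, 150, 140, 130] # seconds taken by the last jobs
efficiencies = recent_durations.map { |duration| TARGET_DURATION / duration }

current_batch_size = 1_000
optimal_batch_size = (current_batch_size * exponential_moving_average(efficiencies, SMOOTHING)).round
puts optimal_batch_size # slightly above 1_000, because recent jobs ran faster than the target
```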
### Job retry mechanism
The batched background migrations retry mechanism ensures that a job is executed again in case of failure.
The following diagram shows the different stages of our retry mechanism:
```plantuml
@startuml
hide empty description
note as N1
can_split?:
the failure is due to a query timeout
end note
[*] --> Running
Running --> Failed
note on link
if number of retries <= MAX_ATTEMPTS
end note
Running --> Succeeded
Failed --> Running
note on link
if number of retries > MAX_ATTEMPTS
and can_split? == true
then two jobs with smaller
batch size will be created
end note
Failed --> [*]
Succeeded --> [*]
@enduml
```
- `MAX_ATTEMPTS` is defined in the [`Gitlab::Database::BackgroundMigration`](https://gitlab.com/gitlab-org/gitlab/blob/master/lib/gitlab/database/background_migration/batched_job.rb)
class.
- `can_split?` is defined in the [`Gitlab::Database::BatchedJob`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/database/background_migration/batched_job.rb) class.
### Failed batched background migrations
The whole batched background migration is marked as `failed`
(`/chatops run batched_background_migrations status MIGRATION_ID` shows
the migration as `failed`) if any of the following is true:
- There are no more jobs to consume, and there are failed jobs.
- More than [half of the jobs failed since the background migration was started](https://gitlab.com/gitlab-org/gitlab/blob/master/lib/gitlab/database/background_migration/batched_migration.rb#L160).
### Throttling batched migrations
Because batched migrations are update heavy and there have been incidents due to the heavy load from these migrations while the database was underperforming, a throttling mechanism exists to mitigate future incidents.
These database indicators are checked to throttle a migration. Upon receiving a
stop signal, the migration is paused for a set time (10 minutes):
- WAL queue pending archival crossing the threshold.
- Active autovacuum on the tables the migration works on (enabled by default as of GitLab 18.0).
- Patroni apdex SLI dropping below the SLO.
- WAL rate crossing the threshold.
There is an ongoing effort to add more indicators to further enhance the
database health check framework. For more details, see
[epic 7594](https://gitlab.com/groups/gitlab-org/-/epics/7594).
#### How to disable/enable autovacuum indicator on tables
As of GitLab 18.0, this health indicator is enabled by default. To disable it, run the following command in the Rails console:
```ruby
Feature.disable(:batched_migrations_health_status_autovacuum)
```
Alternatively, to enable it again, run the following command in the Rails console:
```ruby
Feature.enable(:batched_migrations_health_status_autovacuum)
```
### Isolation
Batched background migrations must be isolated and cannot use application code (for example,
models defined in `app/models` except the `ApplicationRecord` classes).
Because these migrations can take a long time to run, it's possible
for new versions to deploy while the migrations are still running.
### Depending on migrated data
Unlike with a regular or a post-deployment migration, waiting for the next release is not enough to guarantee that the data was fully migrated.
That means that you shouldn't depend on the data until the BBM is finished. If having 100% of the data migrated is a requirement,
then the `ensure_batched_background_migration_is_finished` helper can be used to guarantee that the migration finished and the
data was fully migrated. ([See an example](https://gitlab.com/gitlab-org/gitlab/-/blob/41fbe34a4725a4e357a83fda66afb382828767b2/db/post_migrate/20210707210916_finalize_ci_stages_bigint_conversion.rb#L13-18)).
## How to
### Generate a batched background migration
The custom generator `batched_background_migration` scaffolds necessary files and
accepts `table_name`, `column_name`, and `feature_category` as arguments. When
choosing the `column_name`, ensure that you are using a column type that can be iterated over distinctly,
preferably the table's primary key. The table will be iterated over based on the column defined here.
For more information, see [Batch over non-distinct columns](#batch-over-non-distinct-columns).
Usage:
```shell
bundle exec rails g batched_background_migration my_batched_migration --table_name=<table-name> --column_name=<column-name> --feature_category=<feature-category>
```
This command creates the following files:
- `db/post_migrate/20230214231008_queue_my_batched_migration.rb`
- `spec/migrations/20230214231008_queue_my_batched_migration_spec.rb`
- `lib/gitlab/background_migration/my_batched_migration.rb`
- `spec/lib/gitlab/background_migration/my_batched_migration_spec.rb`
### Enqueue a batched background migration
Queueing a batched background migration should be done in a post-deployment
migration. Use this `queue_batched_background_migration` example, queueing the
migration to be executed in batches. Replace the class name and arguments with the values
from your migration:
```ruby
queue_batched_background_migration(
JOB_CLASS_NAME,
TABLE_NAME,
JOB_ARGUMENTS
)
```
{{< alert type="note" >}}
This helper raises an error if the number of provided job arguments does not match
the number of [job arguments](#use-job-arguments) defined in `JOB_CLASS_NAME`.
{{< /alert >}}
Make sure the newly created data is either migrated, or
saved in both the old and new versions upon creation. Removals, in
turn, can be handled by defining foreign keys with cascading deletes.
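For example, a hypothetical model callback (or a database trigger, as shown in the [Examples](#examples) section) can keep the new column populated for records created or updated while the migration is still running:
```ruby
# Hypothetical dual-write while the batched background migration runs:
# new and updated records get the new column populated immediately,
# and the batched background migration backfills the remaining rows.
class Route < ApplicationRecord
  before_save :copy_source_id_to_namespace_id

  private

  def copy_source_id_to_namespace_id
    self.namespace_id ||= source_id
  end
end
```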
### Finalize a batched background migration
Finalizing a batched background migration is done by calling
`ensure_batched_background_migration_is_finished`, but only if the migration was added
in or before the last required stop. This ensures a smooth upgrade process for
GitLab Self-Managed instances.
It is important to finalize all batched background migrations when it is safe
to do so. Leaving old batched background migrations around is a form of
technical debt that needs to be maintained in tests and in application
behavior.
{{< alert type="note" >}}
You cannot depend on any batched background migration being completed until after it is finalized.
{{< /alert >}}
We recommend that batched background migrations are finalized after all of the
following conditions are met:
- The batched background migration is completed on GitLab.com
- The batched background migration was added in or before the last [required stop](required_stops.md). For example if 17.8 is a required stop and the migration was added in 17.7, the [finalizing migration can be added in 17.9](required_stops.md#long-running-migrations-being-finalized).
The `ensure_batched_background_migration_is_finished` call must exactly match
the migration that was used to enqueue it. Pay careful attention to:
- The job arguments: they need to exactly match, or the helper does not find the queued migration.
- The `gitlab_schema`: it needs to exactly match, or the helper does not find the queued
  migration. Even if the `gitlab_schema` of the table has changed from
  `gitlab_main` to `gitlab_main_cell` in the meantime, you must finalize it
  with `gitlab_main` if that's what was used when queueing the batched
  background migration.
When finalizing a batched background migration you also need to update the
`finalized_by` in the corresponding `db/docs/batched_background_migrations`
file. The value should be the timestamp/version of the migration you added to
finalize it.
See the below [Examples](#examples) for specific details on what the actual
migration code should be.
{{< alert type="note" >}}
If the migration is being finalized before one required stop has passed since it was enqueued, an early finalization
error is raised. If the migration must be finalized before one required stop,
use the `skip_early_finalization_validation: true` option to skip this check.
{{< /alert >}}
### Deleting batched background migration code
Once a batched background migration has completed, is finalized and has not been [re-queued](#re-queue-batched-background-migrations),
the migration code in `lib/gitlab/background_migration/` and its associated tests can be deleted after the next required stop following
the finalization.
Here is an example scenario:
- 17.3 and 17.5 are required stops.
- In 17.1 the batched background migration is queued.
- In 17.4 the migration may be finalized, provided that it's completed in GitLab.com.
- In 17.6 the code related to the migration may be deleted.
Batched background migration code is routinely deleted when [migrations are squashed](migration_squashing.md).
### Re-queue batched background migrations
A batched background migration might need to be re-run for one of several
reasons:
- The migration contains a bug ([example](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/93546)).
- The migration cleaned up data but the data became de-normalized again due to a
bypass in application logic ([example](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/123002)).
- The batch size of the original migration causes the migration to fail ([example](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/121404)).
To requeue a batched background migration, you must:
- No-op the contents of the `#up` and `#down` methods of the
original migration file. Otherwise, the batched background migration is created,
deleted, then created again on systems that are upgrading multiple patch
releases at once.
- Add a new post-deployment migration that re-runs the batched background
migration.
- In the new post-deployment migration, delete the existing batched background
migration using the `delete_batched_background_migration` method at the start
of the `#up` method to ensure that any existing runs are cleaned up.
- Update the `db/docs/batched_background_migration/*.yml` file from the original
migration to include information about the requeue.
#### Example
**Original Migration**:
```ruby
# frozen_string_literal: true
class QueueResolveVulnerabilitiesForRemovedAnalyzers < Gitlab::Database::Migration[2.2]
milestone '17.3'
MIGRATION = "ResolveVulnerabilitiesForRemovedAnalyzers"
def up
# no-op because there was a bug in the original migration, which has been
# fixed by
end
def down
# no-op because there was a bug in the original migration, which has been
# fixed in https://gitlab.com/gitlab-org/gitlab/-/merge_requests/162527
end
end
```
**Requeued migration**:
```ruby
# frozen_string_literal: true
class RequeueResolveVulnerabilitiesForRemovedAnalyzers < Gitlab::Database::Migration[2.2]
milestone '17.4'
restrict_gitlab_migration gitlab_schema: :gitlab_main
MIGRATION = "ResolveVulnerabilitiesForRemovedAnalyzers"
BATCH_SIZE = 10_000
SUB_BATCH_SIZE = 100
def up
# Clear previous background migration execution from QueueResolveVulnerabilitiesForRemovedAnalyzers
delete_batched_background_migration(MIGRATION, :vulnerability_reads, :id, [])
queue_batched_background_migration(
MIGRATION,
:vulnerability_reads,
:id,
batch_size: BATCH_SIZE,
sub_batch_size: SUB_BATCH_SIZE
)
end
def down
delete_batched_background_migration(MIGRATION, :vulnerability_reads, :id, [])
end
end
```
**Batched migration dictionary**:
The `milestone` and `queued_migration_version` should be those of the requeued migration (in this example, `RequeueResolveVulnerabilitiesForRemovedAnalyzers`).
```yaml
---
migration_job_name: ResolveVulnerabilitiesForRemovedAnalyzers
description: Resolves all detected vulnerabilities for removed analyzers.
feature_category: static_application_security_testing
introduced_by_url: https://gitlab.com/gitlab-org/gitlab/-/merge_requests/162691
milestone: '17.4'
queued_migration_version: 20240814085540
finalized_by: # version of the migration that finalized this BBM
```
### Stop and remove batched background migrations
A batched background migration in running state can be stopped and removed for several reasons:
- When the migration is no longer relevant or required as the product use case changed.
- The migration has to be superseded with another migration with a different logic.
To stop and remove an in-progress batched background migration, you must:
- In Release N, no-op the contents of the `#up` and `#down` methods of the scheduling database migration.
```ruby
class BackfillNamespaceType < Gitlab::Database::Migration[2.1]
# Reason why we don't need the BBM anymore. For example: this BBM is no longer needed because it will be superseded by another BBM with different logic.
def up; end
def down; end
end
```
- In Release N, add a regular migration, to delete the existing batched migration.
Delete the existing batched background migration using the `delete_batched_background_migration` method at the
start of the `#up` method to ensure that any existing runs are cleaned up.
```ruby
class CleanupBackfillNamespaceType < Gitlab::Database::Migration[2.1]
MIGRATION = "MyMigrationClass"
restrict_gitlab_migration gitlab_schema: :gitlab_main
def up
delete_batched_background_migration(MIGRATION, :vulnerabilities, :id, [])
end
def down; end
end
```
- In Release N, also delete the migration class file (`lib/gitlab/background_migration/my_batched_migration.rb`) and its specs.
All the above steps can be implemented in a single MR.
### Use job arguments
`BatchedMigrationJob` provides the `job_arguments` helper method for job classes to define the job arguments they need.
Batched migrations scheduled with `queue_batched_background_migration` **must** use the helper to define the job arguments:
```ruby
queue_batched_background_migration(
'CopyColumnUsingBackgroundMigrationJob',
TABLE_NAME,
'name', 'name_convert_to_text'
)
```
{{< alert type="note" >}}
If the number of defined job arguments does not match the number of job arguments provided when
scheduling the migration, `queue_batched_background_migration` raises an error.
{{< /alert >}}
In this example, `copy_from` returns `name`, and `copy_to` returns `name_convert_to_text`:
```ruby
class CopyColumnUsingBackgroundMigrationJob < BatchedMigrationJob
job_arguments :copy_from, :copy_to
operation_name :update_all
def perform
from_column = connection.quote_column_name(copy_from)
to_column = connection.quote_column_name(copy_to)
assignment_clause = "#{to_column} = #{from_column}"
each_sub_batch do |relation|
relation.update_all(assignment_clause)
end
end
end
```
### Use filters
By default, when creating background jobs to perform the migration, batched background migrations
iterate over the full specified table. This iteration is done using the
[`PrimaryKeyBatchingStrategy`](https://gitlab.com/gitlab-org/gitlab/-/blob/c9dabd1f4b8058eece6d8cb4af95e9560da9a2ee/lib/gitlab/database/migrations/batched_background_migration_helpers.rb#L17). If the table has 1000 records
and the batch size is 100, the work is batched into 10 jobs. For illustrative purposes,
`EachBatch` is used like this:
```ruby
# PrimaryKeyBatchingStrategy
Namespace.each_batch(of: 100) do |relation|
relation.where(type: nil).update_all(type: 'User') # this happens in each background job
end
```
#### Using a composite or partial index to iterate a subset of the table
When applying additional filters, it is important to ensure they are properly
[covered by an index](iterating_tables_in_batches.md#example-2-iteration-with-filters)
to optimize `EachBatch` performance.
In the below examples we need an index on `(type, id)` or `id WHERE type IS NULL`
to support the filters. See
the [`EachBatch` documentation](iterating_tables_in_batches.md) for more information.
If you have a suitable index and you want to iterate only a subset of the table
you can apply a `where` clause before the `each_batch` like:
```ruby
# Works well if there is an index like either of:
# - `id WHERE type IS NULL`
# - `(type, id)`
# Does not work well otherwise.
Namespace.where(type: nil).each_batch(of: 100) do |relation|
relation.update_all(type: 'User')
end
```
An advantage of this approach is that you get consistent batch sizes. But it is
only suitable where there is an index that matches the `where` clauses as well
as the batching strategy.
`BatchedMigrationJob` provides a `scope_to` helper method to apply additional filters and achieve this:
1. Create a new migration job class that inherits from `BatchedMigrationJob` and defines the additional filter:
```ruby
class BackfillNamespaceType < BatchedMigrationJob
# Works well if there is an index like either of:
# - `id WHERE type IS NULL`
# - `(type, id)`
# Does not work well otherwise.
scope_to ->(relation) { relation.where(type: nil) }
operation_name :update_all
feature_category :source_code_management
def perform
each_sub_batch do |sub_batch|
sub_batch.update_all(type: 'User')
end
end
end
```
{{< alert type="note" >}}
For EE migrations that define `scope_to`, ensure the module extends `ActiveSupport::Concern`.
Otherwise, records are processed without taking the scope into consideration.
{{< /alert >}}
1. In the post-deployment migration, enqueue the batched background migration:
```ruby
class BackfillNamespaceType < Gitlab::Database::Migration[2.1]
MIGRATION = 'BackfillNamespaceType'
restrict_gitlab_migration gitlab_schema: :gitlab_main
def up
queue_batched_background_migration(
MIGRATION,
:namespaces,
:id
)
end
def down
delete_batched_background_migration(MIGRATION, :namespaces, :id, [])
end
end
```
### Access data for multiple databases
Unlike regular migrations, background migrations do have access to multiple databases
and can be used to efficiently access and update data across them. To indicate which
database to use, define ActiveRecord models inline in the migration code.
Each such model should use the correct [`ApplicationRecord`](multiple_databases.md#gitlab-schema)
base class, depending on the database in which the table is located. Usage of `ActiveRecord::Base`
is disallowed because it does not explicitly describe which database is used to access the given table.
```ruby
# good
class Gitlab::BackgroundMigration::ExtractIntegrationsUrl
class Project < ::ApplicationRecord
self.table_name = 'projects'
end
class Build < ::Ci::ApplicationRecord
self.table_name = 'ci_builds'
end
end
# bad
class Gitlab::BackgroundMigration::ExtractIntegrationsUrl
class Project < ActiveRecord::Base
self.table_name = 'projects'
end
class Build < ActiveRecord::Base
self.table_name = 'ci_builds'
end
end
```
Similarly, usage of `ActiveRecord::Base.connection` is disallowed and should be
replaced, preferably with the model's connection.
```ruby
# good
Project.connection.execute("SELECT * FROM projects")
# acceptable
ApplicationRecord.connection.execute("SELECT * FROM projects")
# bad
ActiveRecord::Base.connection.execute("SELECT * FROM projects")
```
### Batch over non-distinct columns
The default batching strategy provides an efficient way to iterate over primary key columns.
However, if you need to iterate over columns where values are not unique, you must use a
different batching strategy.
The `LooseIndexScanBatchingStrategy` batching strategy uses a special version of [`EachBatch`](iterating_tables_in_batches.md#loose-index-scan-with-distinct_each_batch)
to provide efficient and stable iteration over the distinct column values.
This example shows a batched background migration where the `issues.project_id` column is used as
the batching column.
Database post-migration:
```ruby
class ProjectsWithIssuesMigration < Gitlab::Database::Migration[2.1]
MIGRATION = 'BatchProjectsWithIssues'
BATCH_SIZE = 5000
SUB_BATCH_SIZE = 500
restrict_gitlab_migration gitlab_schema: :gitlab_main
disable_ddl_transaction!
def up
queue_batched_background_migration(
MIGRATION,
:issues,
:project_id,
batch_size: BATCH_SIZE,
batch_class_name: 'LooseIndexScanBatchingStrategy', # Override the default batching strategy
sub_batch_size: SUB_BATCH_SIZE
)
end
def down
delete_batched_background_migration(MIGRATION, :issues, :project_id, [])
end
end
```
Implementing the background migration class:
```ruby
module Gitlab
module BackgroundMigration
class BatchProjectsWithIssues < Gitlab::BackgroundMigration::BatchedMigrationJob
include Gitlab::Database::DynamicModelHelpers
operation_name :backfill_issues
def perform
distinct_each_batch do |batch|
project_ids = batch.pluck(batch_column)
# do something with the distinct project_ids
end
end
end
end
end
```
{{< alert type="note" >}}
[Additional filters](#use-filters) defined with `scope_to` are ignored by `LooseIndexScanBatchingStrategy` and `distinct_each_batch`.
{{< /alert >}}
### Calculate overall time estimation of a batched background migration
It's possible to estimate how long a BBM takes to complete. GitLab already provides an estimation through the `db:gitlabcom-database-testing` pipeline.
This estimation is built by sampling production data in a test environment and represents the maximum time that the migration could take, not necessarily
the actual time the migration takes. In certain scenarios, the estimation provided by the `db:gitlabcom-database-testing` pipeline may not account for
all the singularities of the records being migrated, making further calculations necessary. In that case, the formula
`interval * number of records / max batch size` can be used to determine an approximate estimation of how long the migration takes,
where `interval` and `max batch size` refer to options defined for the job, and `number of records` (the total tuple count) is the number of records to be migrated.
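For example, assuming a 2-minute interval between batches, a maximum batch size of 10,000 rows, and 50 million rows to migrate:
```ruby
interval       = 2          # minutes between batches (assumed)
max_batch_size = 10_000     # rows per batch (assumed)
total_rows     = 50_000_000 # rows to migrate (assumed)

estimated_minutes = interval * total_rows / max_batch_size
# => 10_000 minutes, or roughly 7 days
```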
{{< alert type="note" >}}
Estimations may be affected by the [migration optimization mechanism](#migration-optimization).
{{< /alert >}}
### Cleaning up a batched background migration
{{< alert type="note" >}}
Cleaning up any remaining background migrations must be done in either a major
or minor release. You must not do this in a patch release.
{{< /alert >}}
Because background migrations can take a long time, you can't immediately clean
things up after queueing them. For example, you can't drop a column used in the
migration process, as jobs would fail. You must add a separate _post-deployment_
migration in a future release that finishes any remaining
jobs before cleaning things up. (For example, removing a column.)
To migrate the data from column `foo` (containing a big JSON blob) to column `bar`
(containing a string), you would:
1. Release A:
1. Create a migration class that performs the migration for a row with a given ID.
1. Update new rows using one of these techniques:
- Create a new trigger for copy operations that don't need application logic.
- Handle this operation in the model/service as the records are created or updated.
- Create a new custom background job that updates the records.
1. Queue the batched background migration for all existing rows in a post-deployment migration.
1. Release B:
1. Add a post-deployment migration that checks if the batched background migration is completed.
1. Deploy code so that the application starts using the new column and stops updating the old one.
1. Remove the old column.
Bumping the [import/export version](../../user/project/settings/import_export.md) may
be required, if importing a project from a prior version of GitLab requires the
data to be in the new format.
### Add indexes to support batched background migrations
Sometimes it is necessary to add a new or temporary index to support a batched background migration.
To do this, create the index in a post-deployment migration that precedes the post-deployment
migration that queues the background migration.
See the documentation for [adding database indexes](adding_database_indexes.md#analyzing-a-new-index-before-a-batched-background-migration)
for additional information about some cases that require special attention to allow the index to be used directly after
creation.
### Execute a particular batch on the database testing pipeline
{{< alert type="note" >}}
Only [database maintainers](https://gitlab.com/groups/gitlab-org/maintainers/database/-/group_members?with_inherited_permissions=exclude) can view the database testing pipeline artifacts. Ask one for help if you need to use this method.
{{< /alert >}}
Let's assume that a batched background migration failed on a particular batch on GitLab.com and you want to figure out which query failed and why. At the moment, we don't have a good way to retrieve query information (especially the query parameters) and rerunning the entire migration with more logging would be a long process.
Fortunately you can leverage our [database migration pipeline](database_migration_pipeline.md) to rerun a particular batch with additional logging and/or fix to see if it solves the problem.
For an example see [Draft: `Test PG::CardinalityViolation` fix](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/110910) but make sure to read the entire section.
To do that, you need to:
1. [Find the batch `start_id` and `end_id`](#find-the-batch-start_id-and-end_id)
1. [Create a regular migration](#create-a-regular-migration)
1. [Apply a workaround for our migration helpers](#apply-a-workaround-for-our-migration-helpers-optional) (optional)
1. [Start the database migration pipeline](#start-the-database-migration-pipeline)
#### Find the batch `start_id` and `end_id`
You should be able to find those in [Kibana](#viewing-failure-error-logs).
#### Create a regular migration
Schedule the batch in the `up` block of a regular migration:
```ruby
def up
instance = Gitlab::BackgroundMigration::YourBackgroundMigrationClass.new(
start_id: <batch start_id>,
end_id: <batch end_id>,
batch_table: <table name>,
batch_column: <batching column>,
sub_batch_size: <sub batch size>,
pause_ms: <milliseconds between batches>,
job_arguments: <job arguments if any>,
connection: connection
)
instance.perform
end
def down
# no-op
end
```
#### Apply a workaround for our migration helpers (optional)
If your batched background migration touches tables from a schema other than the one you specified by using the `restrict_gitlab_migration` helper (example: the scheduling migration has `restrict_gitlab_migration gitlab_schema: :gitlab_main` but the background job uses tables from the `:gitlab_ci` schema), then the migration fails. To prevent that from happening, you must monkey patch the database helpers so they don't fail the testing pipeline job:
1. Add the schema names to [`RestrictGitlabSchema`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/database/migration_helpers/restrict_gitlab_schema.rb#L57)
```diff
diff --git a/lib/gitlab/database/migration_helpers/restrict_gitlab_schema.rb b/lib/gitlab/database/migration_helpers/restrict_gitlab_schema.rb
index b8d1d21a0d2d2a23d9e8c8a0a17db98ed1ed40b7..912e20659a6919f771045178c66828563cb5a4a1 100644
--- a/lib/gitlab/database/migration_helpers/restrict_gitlab_schema.rb
+++ b/lib/gitlab/database/migration_helpers/restrict_gitlab_schema.rb
@@ -55,7 +55,7 @@ def unmatched_schemas
end
def allowed_schemas_for_connection
- Gitlab::Database.gitlab_schemas_for_connection(connection)
+ Gitlab::Database.gitlab_schemas_for_connection(connection) << :gitlab_ci
end
end
end
```
1. Add the schema names to [`RestrictAllowedSchemas`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/database/query_analyzers/restrict_allowed_schemas.rb#L82)
```diff
diff --git a/lib/gitlab/database/query_analyzers/restrict_allowed_schemas.rb b/lib/gitlab/database/query_analyzers/restrict_allowed_schemas.rb
index 4ae3622479f0800c0553959e132143ec9051898e..d556ec7f55adae9d46a56665ce02de782cb09f2d 100644
--- a/lib/gitlab/database/query_analyzers/restrict_allowed_schemas.rb
+++ b/lib/gitlab/database/query_analyzers/restrict_allowed_schemas.rb
@@ -79,7 +79,7 @@ def restrict_to_dml_only(parsed)
tables = self.dml_tables(parsed)
schemas = self.dml_schemas(tables)
- if (schemas - self.allowed_gitlab_schemas).any?
+ if (schemas - (self.allowed_gitlab_schemas << :gitlab_ci)).any?
raise DMLAccessDeniedError, \
"Select/DML queries (SELECT/UPDATE/DELETE) do access '#{tables}' (#{schemas.to_a}) " \
"which is outside of list of allowed schemas: '#{self.allowed_gitlab_schemas}'. " \
```
#### Start the database migration pipeline
Create a Draft merge request with your changes and trigger the manual `db:gitlabcom-database-testing` job.
### Establish dependencies
In some instances, migrations depend on the completion of previously enqueued BBMs. If the BBMs are
still running, the dependent migration fails. For example, introducing a unique index on a large table can depend on
a previously enqueued BBM to handle any duplicate records.
The following process has been configured to make dependencies more evident while writing a migration.
- The version of the migration that queued the BBM is stored in the `batched_background_migrations` table and in the BBM dictionary file.
- A `DEPENDENT_BATCHED_BACKGROUND_MIGRATIONS` constant is added (commented out by default) in each migration file.
  To establish the dependency, add the `queued_migration_version` of the dependent BBMs. If there is no dependency, remove
  the commented line.
- `Migration::UnfinishedDependencies` cop complains if the dependent BBMs are not yet finished. It determines
whether they got finished by looking up the `finalized_by` key in the
[BBM dictionary](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/generators/batched_background_migration/templates/batched_background_migration_dictionary.template).
Example:
```ruby
# db/post_migrate/20231113120650_queue_backfill_routes_namespace_id.rb
class QueueBackfillRoutesNamespaceId < Gitlab::Database::Migration[2.1]
MIGRATION = 'BackfillRouteNamespaceId'
restrict_gitlab_migration gitlab_schema: :gitlab_main
...
...
def up
queue_batched_background_migration(
MIGRATION,
...
)
end
end
```
```ruby
# This depends on the finalization of QueueBackfillRoutesNamespaceId BBM
class AddNotNullToRoutesNamespaceId < Gitlab::Database::Migration[2.1]
DEPENDENT_BATCHED_BACKGROUND_MIGRATIONS = ["20231113120650"]
def up
add_not_null_constraint :routes, :namespace_id
end
def down
remove_not_null_constraint :routes, :namespace_id
end
end
```
## Managing
{{< alert type="note" >}}
BBM management takes place through `chatops` integration, which is limited to GitLab team members only.
{{< /alert >}}
### List batched background migrations
To list the batched background migrations in the system, run this command:
`/chatops run batched_background_migrations list`
This command supports the following options:
- Database selection:
- `--database DATABASE_NAME`: Connects to the given database:
- `main`: Uses the main database (default).
- `ci`: Uses the CI database.
- Environment selection:
- `--dev`: Uses the `dev` environment.
- `--staging`: Uses the `staging` environment.
- `--staging_ref`: Uses the `staging_ref` environment.
- `--production` : Uses the `production` environment (default).
- Filter by job class
- `--job-class-name JOB_CLASS_NAME`: Only list jobs for the given job class.
- This is the `migration_job_name` in the YAML definition of the background migration.
Output example:

{{< alert type="note" >}}
ChatOps returns 20 batched background migrations ordered by `created_at` (DESC).
{{< /alert >}}
### Monitor the progress and status of a batched background migration
To see the status and progress of a specific batched background migration, run this command:
`/chatops run batched_background_migrations status MIGRATION_ID`
This command supports the following options:
- Database selection:
- `--database DATABASE_NAME`: Connects to the given database:
- `main`: Uses the main database (default)
- `ci`: Uses the CI database
- Environment selection:
- `--dev`: Uses the `dev` environment.
- `--staging`: Uses the `staging` environment.
- `--staging_ref`: Uses the `staging_ref` environment.
- `--production` : Uses the `production` environment (default).
Output example:

`Progress` represents the percentage of the background migration that has been completed.
Definitions of the batched background migration states:
- **Active**: Either:
- Ready to be picked by the runner.
- Running batched jobs.
- **Finalizing**: Running batched jobs.
- **Failed**: Failed batched background migration.
- **Finished**: All jobs were executed successfully and the batched background migration is complete.
- **Paused**: Not visible to the runner.
- **Finalized**: Batched migration was verified with
[`ensure_batched_background_migration_is_finished`](#finalize-a-batched-background-migration) and is complete.
### Pause a batched background migration
If you want to pause a batched background migration, you need to run the following command:
`/chatops run batched_background_migrations pause MIGRATION_ID`
This command supports the following options:
- Database selection:
- `--database DATABASE_NAME`: Connects to the given database:
- `main`: Uses the main database (default).
- `ci`: Uses the CI database.
- Environment selection:
- `--dev`: Uses the `dev` environment.
- `--staging`: Uses the `staging` environment.
- `--staging_ref`: Uses the `staging_ref` environment.
- `--production` : Uses the `production` environment (default).
Output example:

{{< alert type="note" >}}
You can pause only `active` batched background migrations.
{{< /alert >}}
### Resume a batched background migration
If you want to resume a batched background migration, you need to run the following command:
`/chatops run batched_background_migrations resume MIGRATION_ID`
This command supports the following options:
- Database selection:
- `--database DATABASE_NAME`: Connects to the given database:
- `main`: Uses the main database (default).
- `ci`: Uses the CI database.
- Environment selection:
- `--dev`: Uses the `dev` environment.
- `--staging`: Uses the `staging` environment.
- `--staging_ref`: Uses the `staging_ref` environment.
- `--production` : Uses the `production` environment (default).
Output example:

{{< alert type="note" >}}
You can resume only `active` batched background migrations.
{{< /alert >}}
### Enable or disable background migrations
In extremely limited circumstances, a GitLab administrator can disable either or
both of these [feature flags](../../administration/feature_flags/_index.md):
- `execute_background_migrations`
- `execute_batched_migrations_on_schedule`
These flags are enabled by default. Disable them only as a last resort
to limit database operations in special circumstances, like database host maintenance.
{{< alert type="warning" >}}
Do not disable either of these flags unless you fully understand the ramifications. If you disable
the `execute_background_migrations` or `execute_batched_migrations_on_schedule` feature flag,
GitLab upgrades might fail and data loss might occur.
{{< /alert >}}
## Batched background migrations for EE-only features
All the background migration classes for EE-only features should be present in GitLab FOSS.
For this purpose, create an empty class for GitLab FOSS, and extend it for GitLab EE
as explained in the guidelines for
[implementing Enterprise Edition features](../ee_features.md#code-in-libgitlabbackground_migration).
{{< alert type="note" >}}
Background migration classes for EE-only features that use job arguments should define them
in the GitLab FOSS class. Definitions are required to prevent job arguments validation from failing when
the migration is scheduled in the GitLab FOSS context.
{{< /alert >}}
You can use the [generator](#generate-a-batched-background-migration) to generate an EE-only migration scaffold by passing
`--ee-only` flag when generating a new batched background migration.
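A minimal sketch of this layout, using a hypothetical `BackfillEeExampleColumn` job: the FOSS class is an empty shell that declares the job, and the EE module (prepended through `prepend_mod`, following the guidelines linked above) provides the actual implementation.
```ruby
# lib/gitlab/background_migration/backfill_ee_example_column.rb (FOSS)
module Gitlab
  module BackgroundMigration
    class BackfillEeExampleColumn < BatchedMigrationJob
      operation_name :update_all
      feature_category :source_code_management

      def perform; end # no-op in FOSS
    end
  end
end

Gitlab::BackgroundMigration::BackfillEeExampleColumn.prepend_mod
```

```ruby
# ee/lib/ee/gitlab/background_migration/backfill_ee_example_column.rb (EE)
module EE
  module Gitlab
    module BackgroundMigration
      module BackfillEeExampleColumn
        extend ActiveSupport::Concern

        def perform
          each_sub_batch do |sub_batch|
            sub_batch.update_all(example_column: true)
          end
        end
      end
    end
  end
end
```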
## Debug
### Viewing failure error logs
You can view failures in two ways:
- Via GitLab logs:
1. After running a batched background migration, if any jobs fail,
view the logs in [Kibana](https://log.gprd.gitlab.net/goto/4cb43f40-f861-11ec-b86b-d963a1a6788e).
View the production Sidekiq log and filter for:
- `json.new_state: failed`
- `json.job_class_name: <Batched Background Migration job class name>`
- `json.job_arguments: <Batched Background Migration job class arguments>`
1. Review the `json.exception_class` and `json.exception_message` values to help
understand why the jobs failed.
1. Remember the retry mechanism. A failed attempt does not mean the job ultimately failed.
Always check the last status of the job.
- Via database:
1. Get the batched background migration `CLASS_NAME`.
1. Execute the following query in the PostgreSQL console:
```sql
SELECT migration.id, migration.job_class_name, transition_logs.exception_class, transition_logs.exception_message
FROM batched_background_migrations as migration
INNER JOIN batched_background_migration_jobs as jobs
ON jobs.batched_background_migration_id = migration.id
INNER JOIN batched_background_migration_job_transition_logs as transition_logs
ON transition_logs.batched_background_migration_job_id = jobs.id
WHERE transition_logs.next_status = '2' AND migration.job_class_name = 'CLASS_NAME';
```
## Testing
Writing tests is required for:
- The batched background migrations' queueing migration.
- The batched background migration itself.
- A cleanup migration.
The `:migration` and `schema: :latest` RSpec tags are automatically set for
background migration specs. Refer to the
[Testing Rails migrations](../testing_guide/testing_migrations_guide.md#testing-a-non-activerecordmigration-class)
style guide.
Remember that `before` and `after` RSpec hooks
migrate your database down and up. These hooks can result in other batched background
migrations being called. Using `spy` test doubles with
`have_received` is encouraged, instead of using regular test doubles, because
your expectations defined in an `it` block can conflict with what is
called in RSpec hooks. Refer to [issue #35351](https://gitlab.com/gitlab-org/gitlab/-/issues/18839)
for more details.
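As a starting point, here is a minimal sketch of a job-class spec. It reuses the `BackfillRouteNamespaceId` job from the [Examples](#examples) section and the constructor arguments shown in [Create a regular migration](#create-a-regular-migration); the fixture values are hypothetical and depend on the current `routes` schema.
```ruby
require 'spec_helper'

RSpec.describe Gitlab::BackgroundMigration::BackfillRouteNamespaceId, feature_category: :source_code_management do
  let(:routes) { table(:routes) }

  let!(:route) do
    routes.create!(id: 1, source_id: 42, source_type: 'Namespace', path: 'example', name: 'example')
  end

  let(:migration) do
    described_class.new(
      start_id: route.id,
      end_id: route.id,
      batch_table: :routes,
      batch_column: :id,
      sub_batch_size: 10,
      pause_ms: 0,
      connection: ApplicationRecord.connection
    )
  end

  it 'copies source_id into namespace_id' do
    expect { migration.perform }.to change { route.reload.namespace_id }.from(nil).to(42)
  end
end
```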
## Best practices
1. Know how much data you're dealing with.
1. Make sure the batched background migration jobs are idempotent.
1. Confirm the tests you write are not false positives.
1. If the data being migrated is critical and cannot be lost, the
clean-up migration must also check the final state of the data before completing.
1. Discuss the numbers with a database specialist. The migration may add
more pressure on DB than you expect. Measure on staging,
or ask someone to measure on production.
1. Know how much time is required to run the batched background migration.
1. Be careful when silently rescuing exceptions inside job classes. This may lead to
jobs being marked as successful, even in a failure scenario.
```ruby
# good
def perform
each_sub_batch do |sub_batch|
sub_batch.update_all(name: 'My Name')
end
end
# acceptable
def perform
each_sub_batch do |sub_batch|
sub_batch.update_all(name: 'My Name')
rescue Exception => error
logger.error(message: error.message, class: error.class)
raise
end
end
# bad
def perform
each_sub_batch do |sub_batch|
sub_batch.update_all(name: 'My Name')
rescue Exception => error
logger.error(message: error.message, class: self.class.name)
end
end
```
1. If possible, update the entire sub-batch in a single query
instead of updating each record separately.
This can be achieved in different ways, depending on the scenario.
- Generate an `UPDATE` query, and use `FROM` to join the tables
that provide the necessary values
([example](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/184051)).
- Generate an `UPDATE` query, and use `FROM(VALUES( ...))` to
pass values calculated beforehand
([example](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/177993)).
- Pass all keys and values to `ActiveRecord::Relation#update`.
```ruby
# good
def perform
each_sub_batch do |sub_batch|
connection.execute <<~SQL
UPDATE fork_networks
SET organization_id = projects.organization_id
FROM projects
WHERE fork_networks.id IN (#{sub_batch.pluck(:id).join(', ')})
AND fork_networks.root_project_id = projects.id
AND fork_networks.organization_id IS NULL
SQL
end
end
# bad
def perform
each_sub_batch do |sub_batch|
sub_batch.each do |fork_network|
fork_network.update!(organization_id: fork_network.root_project.organization_id)
end
end
end
```
### Use of scope_to
When writing a batched background migration class, you have the option to define a `scope_to` block. This block adds an additional qualifier to the query that determines the minimum and maximum range for each batch.
By default, the batching range is determined using the primary key index, which is highly efficient. However, using `scope_to` means the query must consider only rows matching the given condition, potentially impacting performance.
Consider the following simple query:
```sql
SELECT id FROM users WHERE id BETWEEN 1 AND 3000;
```
This query is fast because the `id` column is indexed. PostgreSQL can use an index-only scan to return results efficiently. The query plan might look like this:
```plain
QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------
Index Only Scan using users_pkey on users (cost=0.44..307.24 rows=2751 width=4) (actual time=0.016..177.028 rows=2654 loops=1)
Index Cond: ((id >= 1) AND (id <= 3000))
Heap Fetches: 219
Planning Time: 0.183 ms
Execution Time: 177.158 ms
```
Now, let's apply a scope:
```ruby
scope_to ->(relation) { relation.where(theme_id: 4) }
```
This results in the following query:
```sql
SELECT id FROM users WHERE id BETWEEN 1 AND 3000 AND theme_id = 4;
```
The associated query plan is less efficient:
```plain
QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------
Index Scan using users_pkey on users (cost=0.44..3773.66 rows=10 width=4) (actual time=8.047..2290.528 rows=28 loops=1)
Index Cond: ((id >= 1) AND (id <= 3000))
Filter: (theme_id = 4)
Rows Removed by Filter: 2626
Planning Time: 1.292 ms
Execution Time: 2290.582 ms
```
In this case, PostgreSQL uses an index scan on `id` but applies the `theme_id` filter after row access. This causes many rows to be discarded after retrieval, resulting in degraded performance, over 12x slower in this case.
#### When to override
Use `scope_to` **only when the scoped column is indexed**, and ideally, the batching query avoids filtering out rows.
A strong indicator of good performance is the absence of the `Rows Removed by Filter` line in the query plan.
Let's improve performance by indexing the `theme_id` column:
```sql
CREATE INDEX idx_users_theme_id ON users (theme_id);
```
Re-running the same query produces this plan:
```plain
QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------
Bitmap Heap Scan on users (cost=691.28..706.53 rows=10 width=4) (actual time=13.532..13.578 rows=28 loops=1)
Recheck Cond: ((id >= 1) AND (id <= 3000) AND (theme_id = 4))
Heap Blocks: exact=28
Buffers: shared hit=41 read=62
I/O Timings: shared read=0.721
-> BitmapAnd (cost=691.28..691.28 rows=10 width=0) (actual time=13.509..13.511 rows=0 loops=1)
Buffers: shared hit=13 read=62
I/O Timings: shared read=0.721
-> Bitmap Index Scan on users_pkey (cost=0.00..45.95 rows=2751 width=0) (actual time=0.390..0.390 rows=2654 loops=1)
Index Cond: ((id >= 1) AND (id <= 3000))
Buffers: shared hit=10
-> Bitmap Index Scan on idx_users_theme_id (cost=0.00..645.08 rows=73352 width=0) (actual time=12.933..12.933 rows=69872 loops=1)
Index Cond: (theme_id = 4)
Buffers: shared hit=3 read=62
I/O Timings: shared read=0.721
Planning:
Buffers: shared hit=35 read=1 dirtied=2
I/O Timings: shared read=0.045
Planning Time: 0.514 ms
Execution Time: 13.634 ms
```
#### Summary
Use `scope_to` **only** when:
- The scoped column is backed by an index.
- Query plans avoid significant row filtering (`Rows Removed by Filter` is low or absent).
- Batching remains efficient under real data loads.
Otherwise, scoping can drastically reduce performance.
## Examples
### Routes use-case
The `routes` table has a `source_type` field that's used for a polymorphic relationship.
As part of a database redesign, we're removing the polymorphic relationship. One step of
the work is migrating data from the `source_id` column into a new singular foreign key.
Because we intend to delete old rows later, there's no need to update them as part of the
background migration.
1. Start by using the generator to create batched background migration files:
```shell
bundle exec rails g batched_background_migration BackfillRouteNamespaceId --table_name=routes --column_name=id --feature_category=source_code_management
```
1. Update the migration job (subclass of `BatchedMigrationJob`) to copy `source_id` values to `namespace_id`:
```ruby
class Gitlab::BackgroundMigration::BackfillRouteNamespaceId < BatchedMigrationJob
# For illustration purposes, if we were to use a local model we could
# define it like below, using an `ApplicationRecord` as the base class
# class Route < ::ApplicationRecord
# self.table_name = 'routes'
# end
operation_name :update_all
feature_category :source_code_management
def perform
each_sub_batch(
batching_scope: -> (relation) { relation.where("source_type <> 'UnusedType'") }
) do |sub_batch|
sub_batch.update_all('namespace_id = source_id')
end
end
end
```
{{< alert type="note" >}}
Job classes inherit from `BatchedMigrationJob` to ensure they are
correctly handled by the batched migration framework. Any subclass of
`BatchedMigrationJob` is initialized with the necessary arguments to
execute the batch, and a connection to the tracking database.
{{< /alert >}}
1. Create a database migration that adds a new trigger to the database. Example:
```ruby
class AddTriggerToRoutesToCopySourceIdToNamespaceId < Gitlab::Database::Migration[2.1]
FUNCTION_NAME = 'example_function'
TRIGGER_NAME = 'example_trigger'
def up
execute(<<~SQL)
CREATE OR REPLACE FUNCTION #{FUNCTION_NAME}() RETURNS trigger
LANGUAGE plpgsql
AS $$
BEGIN
NEW."namespace_id" = NEW."source_id";
RETURN NEW;
END;
$$;
CREATE TRIGGER #{TRIGGER_NAME} BEFORE INSERT OR UPDATE
ON routes
FOR EACH ROW EXECUTE FUNCTION #{FUNCTION_NAME}();
SQL
end
def down
drop_trigger(TRIGGER_NAME, :routes)
drop_function(FUNCTION_NAME)
end
end
```
1. Update the created post-deployment migration with required batch sizes:
```ruby
class QueueBackfillRoutesNamespaceId < Gitlab::Database::Migration[2.1]
MIGRATION = 'BackfillRouteNamespaceId'
BATCH_SIZE = 1000
SUB_BATCH_SIZE = 100
restrict_gitlab_migration gitlab_schema: :gitlab_main
def up
queue_batched_background_migration(
MIGRATION,
:routes,
:id,
batch_size: BATCH_SIZE,
sub_batch_size: SUB_BATCH_SIZE
)
end
def down
delete_batched_background_migration(MIGRATION, :routes, :id, [])
end
end
```
```yaml
# db/docs/batched_background_migrations/backfill_route_namespace_id.yml
---
migration_job_name: BackfillRouteNamespaceId
description: Copies source_id values from routes to namespace_id
feature_category: source_code_management
introduced_by_url: "https://mr_url"
milestone: 16.6
queued_migration_version: 20231113120650
finalized_by: # version of the migration that ensured this bbm
```
{{< alert type="note" >}}
When queuing a batched background migration, you need to restrict
the schema to the database where you make the actual changes.
In this case, we are updating `routes` records, so we set
`restrict_gitlab_migration gitlab_schema: :gitlab_main`. If, however,
you need to perform a CI data migration, you would set
`restrict_gitlab_migration gitlab_schema: :gitlab_ci`.
{{< /alert >}}
After deployment, our application:
- Continues using the data as before.
- Ensures that both existing and new data are migrated.
1. Add a new post-deployment migration that checks that the batched background migration is complete. Also update
`finalized_by` attribute in BBM dictionary with the version of this migration.
```ruby
class FinalizeBackfillRouteNamespaceId < Gitlab::Database::Migration[2.1]
disable_ddl_transaction!
restrict_gitlab_migration gitlab_schema: :gitlab_main
def up
ensure_batched_background_migration_is_finished(
job_class_name: 'BackfillRouteNamespaceId',
table_name: :routes,
column_name: :id,
job_arguments: [],
finalize: true
)
end
def down
# no-op
end
end
```
```yaml
# db/docs/batched_background_migrations/backfill_route_namespace_id.yml
---
migration_job_name: BackfillRouteNamespaceId
description: Copies source_id values from routes to namespace_id
feature_category: source_code_management
introduced_by_url: "https://mr_url"
milestone: 16.6
queued_migration_version: 20231113120650
finalized_by: 20231115120912
```
{{< alert type="note" >}}
If the batched background migration is not finished, the system will
execute the batched background migration inline. If you don't want
to see this behavior, you need to pass `finalize: false`.
{{< /alert >}}
If the application does not depend on the data being 100% migrated (for
instance, the data is advisory, and not mission-critical), then you can skip this
final step. This step confirms that the migration is completed, and all of the rows were migrated.
1. Add a database migration to remove the trigger.
```ruby
class RemoveNamespaceIdTriggerFromRoutes < Gitlab::Database::Migration[2.1]
FUNCTION_NAME = 'example_function'
TRIGGER_NAME = 'example_trigger'
def up
drop_trigger(TRIGGER_NAME, :routes)
drop_function(FUNCTION_NAME)
end
def down
# Should reverse the trigger and the function in the up method of the migration that added it
end
end
```
After the batched migration is completed, you can safely depend on the
data in `routes.namespace_id` being populated.
---
stage: Data Access
group: Database Frameworks
info: Any user with at least the Maintainer role can merge updates to this content. For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Database Dictionary
breadcrumbs:
- doc
- development
- database
---
This page documents the database schema for GitLab, so data analysts and other groups can
locate the feature categories responsible for specific database tables.
## Location
Database dictionary metadata files are stored in the `gitlab` project under `db/docs/` for the `main`, `ci`, and `sec` databases.
For the `embedding` database, the dictionary files are stored under `ee/db/embedding/docs/`.
For the `geo` database, the dictionary files are stored under `ee/db/geo/docs/`.
## Example dictionary file
```yaml
---
table_name: terraform_states
classes:
- Terraform::State
feature_categories:
- infrastructure_as_code
description: Represents a Terraform state backend
introduced_by_url: https://gitlab.com/gitlab-org/gitlab/-/merge_requests/26619
milestone: '13.0'
gitlab_schema: gitlab_main
table_size: small
```
## Adding tables
### Schema
| Attribute | Type | Required | Description |
|----------------------------|---------------|----------|-------------|
| `table_name` | String | yes | Database table name. |
| `classes` | Array(String) | no | List of classes that are associated to this table. |
| `feature_categories` | Array(String) | yes | List of feature categories using this table. |
| `description` | String | no | Text description of the information stored in the table, and its purpose. |
| `introduced_by_url` | URL | no | URL to the merge request or commit which introduced this table. |
| `milestone` | String | yes | The milestone that introduced this table. |
| `gitlab_schema` | String | yes | GitLab schema name. |
| `notes` | String | no | Use for comments, as Psych cannot parse YAML comments. |
| `table_size` | String | yes | Classification of current table size on GitLab.com[^1]. The size includes indexes. For partitioned tables, the size is the size of the largest partition. Valid options are `unknown`, `small` (< 10 GB), `medium` (< 50 GB), `large` (< 100 GB), `over_limit` (above 100 GB). |
[^1]: New tables are usually `small` by default as they contain no data. This attribute is updated automatically monthly.
### Process
When adding a table, you should:
1. Create a new file for this table in the appropriate directory:
- `gitlab_main` table: `db/docs/`
- `gitlab_ci` table: `db/docs/`
- `gitlab_sec` table: `db/docs/`
- `gitlab_shared` table: `db/docs/`
- `gitlab_embedding` table: `ee/db/embedding/docs/`
- `gitlab_geo` table: `ee/db/geo/docs/`
1. Name the file `<table_name>.yml`, and include as much information as you know about the table.
1. Include this file in the commit with the migration that creates the table.
## Dropping tables
### Schema
| Attribute | Type | Required | Description |
|----------------------------|---------------|----------|-------------|
| `table_name` | String | yes | Database table name. |
| `classes` | Array(String) | no | List of classes that are associated to this table. |
| `feature_categories` | Array(String) | yes | List of feature categories using this table. |
| `description` | String | no | Text description of the information stored in the table, and its purpose. |
| `introduced_by_url` | URL | no | URL to the merge request or commit which introduced this table. |
| `milestone` | String | no | The milestone that introduced this table. |
| `gitlab_schema` | String | yes | GitLab schema name. |
| `removed_by_url` | String | yes | URL to the merge request or commit which removed this table. |
| `removed_in_milestone` | String | yes | The milestone that removes this table. |
### Process
When dropping a table, you should:
1. Move the dictionary file for this table to the `deleted_tables` directory:
- `gitlab_main` table: `db/docs/deleted_tables/`
- `gitlab_ci` table: `db/docs/deleted_tables/`
- `gitlab_sec` table: `db/docs/deleted_tables/`
- `gitlab_shared` table: `db/docs/deleted_tables/`
- `gitlab_embedding` table: `ee/db/embedding/docs/deleted_tables/`
- `gitlab_geo` table: `ee/db/geo/docs/deleted_tables/`
1. Add the fields `removed_by_url` and `removed_in_milestone` to the dictionary file.
1. Include this change in the commit with the migration that drops the table.
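For example, reusing the `terraform_states` file above purely as an illustration (the table is not actually removed; the removal URL and milestone are placeholders), the moved dictionary file would gain the two removal fields:
```yaml
---
table_name: terraform_states
classes:
- Terraform::State
feature_categories:
- infrastructure_as_code
description: Represents a Terraform state backend
introduced_by_url: https://gitlab.com/gitlab-org/gitlab/-/merge_requests/26619
milestone: '13.0'
gitlab_schema: gitlab_main
removed_by_url: https://gitlab.com/gitlab-org/gitlab/-/merge_requests/<merge_request_id>
removed_in_milestone: '<milestone>'
```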
## Adding views
### Schema
| Attribute | Type | Required | Description |
|----------------------------|---------------|----------|-------------|
| `table_name` | String | yes | Database view name. |
| `classes` | Array(String) | no | List of classes that are associated to this view. |
| `feature_categories` | Array(String) | yes | List of feature categories using this view. |
| `description` | String | no | Text description of the information stored in the view, and its purpose. |
| `introduced_by_url` | URL | no | URL to the merge request or commit which introduced this view. |
| `milestone` | String | no | The milestone that introduced this view. |
| `gitlab_schema` | String | yes | GitLab schema name. |
### Process
When adding a new view, you should:
1. Create a new file for this view in the appropriate directory:
- `gitlab_main` view: `db/docs/views/`
- `gitlab_ci` view: `db/docs/views/`
- `gitlab_sec` view: `db/docs/views/`
- `gitlab_shared` view: `db/docs/views/`
- `gitlab_embedding` view: `ee/db/embedding/docs/views/`
- `gitlab_geo` view: `ee/db/geo/docs/views/`
1. Name the file `<view_name>.yml`, and include as much information as you know about the view.
1. Include this file in the commit with the migration that creates the view.
## Dropping views
### Schema
| Attribute | Type | Required | Description |
|----------------------------|---------------|----------|-------------|
| `view_name` | String | yes | Database view name. |
| `classes` | Array(String) | no | List of classes that are associated to this view. |
| `feature_categories` | Array(String) | yes | List of feature categories using this view. |
| `description` | String | no | Text description of the information stored in the view, and its purpose. |
| `introduced_by_url` | URL | no | URL to the merge request or commit which introduced this view. |
| `milestone` | String | no | The milestone that introduced this view. |
| `gitlab_schema` | String | yes | GitLab schema name. |
| `removed_by_url` | String | yes | URL to the merge request or commit which removed this view. |
| `removed_in_milestone` | String | yes | The milestone that removes this view. |
### Process
When dropping a view, you should:
1. Move the dictionary file for this view to the `deleted_views` directory:
- `gitlab_main` view: `db/docs/deleted_views/`
- `gitlab_ci` view: `db/docs/deleted_views/`
- `gitlab_sec` view: `db/docs/deleted_views/`
- `gitlab_shared` view: `db/docs/deleted_views/`
- `gitlab_embedding` view: `ee/db/embedding/docs/deleted_views/`
- `gitlab_geo` view: `ee/db/geo/docs/deleted_views/`
1. Add the fields `removed_by_url` and `removed_in_milestone` to the dictionary file.
1. Include this change in the commit with the migration that drops the view.
# Offset pagination optimization
In many REST API endpoints, we use [offset-based pagination](pagination_guidelines.md#offset-pagination), which uses the `page` URL parameter to paginate through the results. Offset pagination [scales linearly](pagination_guidelines.md#offset-on-a-large-dataset): the higher the page number, the slower the database query gets. This means that for large page numbers, the database query can time out. This usually happens when third-party integrations and scripts interact with the system, as users are unlikely to deliberately visit a high page number.
The ideal way of dealing with scalability problems related to offset pagination is switching to [keyset pagination](pagination_guidelines.md#keyset-pagination); however, this means a breaking API change. As a temporary, stop-gap measure, you can use the [`Gitlab::Pagination::Offset::PaginationWithIndexOnlyScan`](https://gitlab.com/gitlab-org/gitlab/-/tree/master/lib/gitlab/pagination/offset/pagination_with_index_only_scan.rb) class. The optimization can improve the performance of offset-paginated queries in certain cases when a high `OFFSET` value is present. The queries still scale linearly, but with improved timings, which means that a database timeout is reached at a much higher `page` number, if it is reached at all.
## Requirements for using the optimization
The optimization avoids selecting all columns (`SELECT *`) when determining the records based on the `ORDER BY`, `OFFSET`, and `LIMIT` clauses, and attempts to use an index-only scan to reduce database I/O. To use the optimization, the same requirements must be met as for keyset pagination:
- `ORDER BY` clause is present.
- The `ORDER BY` clause uniquely identifies each row (it includes at least one unique column).
- Good, uses the primary key: `ORDER BY id`
- Bad, `created_at` not unique: `ORDER BY created_at`
- Good, there is a [tie-breaker](pagination_performance_guidelines.md#tie-breaker-column): `ORDER BY created_at, id`
- The query is well-covered with a database index.
## How to use the optimization class
The optimization class takes an `ActiveRecord::Relation` object and returns an optimized, [kaminari-paginated](https://github.com/kaminari/kaminari) `ActiveRecord::Relation` object. If the optimization cannot be applied, the original `ActiveRecord::Relation` object is used for the pagination.
Basic usage:
```ruby
scope = Issue.where(project_id: 1).order(:id)
records = Gitlab::Pagination::Offset::PaginationWithIndexOnlyScan.new(scope: scope, page: 5, per_page: 100).paginate_with_kaminari
puts records.to_a
```
Optimizations should always be rolled out behind feature flags. You can also limit the optimization to requests where certain conditions are met:
```ruby
# - Only apply optimization for large page number lookups
# - When the label_names filter parameter is given, the optimization has no effect (complex JOIN).
if params[:page] > 100 && params[:label_names].blank? && Feature.enabled?(:my_optimized_offset_query)
Gitlab::Pagination::Offset::PaginationWithIndexOnlyScan.new(scope: scope, page: params[:page], per_page: params[:per_page]).paginate_with_kaminari
else
scope.page(params[:page]).per(params[:per_page])
end
```
## How does the optimization work
The optimization takes the passed `ActiveRecord::Relation` object and moves it into a CTE. Within the CTE, the original query is altered to
select only the `ORDER BY` columns. This makes it possible for the database to use an index-only scan.
When the query is executed, the query within the CTE is evaluated first, so the CTE contains `LIMIT` number of rows with the selected columns.
Using the `ORDER BY` values, a `LATERAL` query then locates the full rows one by one. The `LATERAL` query is used here to force
a nested loop: for each row in the CTE, look up one full row in the table.
Original query:
- Reads `OFFSET + LIMIT` number of entries from the index.
- Reads `OFFSET + LIMIT` number of rows from the table.
Optimized query:
- Reads `OFFSET + LIMIT` number of entries from the index.
- Reads `LIMIT` number of rows from the table.
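For example, with `per_page: 100` and `page: 1001` (an `OFFSET` of 100,000), both variants read 100,100 index entries, but the original query also fetches 100,100 table rows, while the optimized query fetches only the 100 rows of the requested page.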
## Determine if the optimization helps
By running `EXPLAIN (buffers, analyze)` on the database query with a high `OFFSET` value (for example, 100,000), we can clearly see whether the optimization helps.
Look for the following:
- The optimized query plan must have an index-only scan node.
- The buffer count and the timing should be lower for the optimized query.
  - Compare warm-cache numbers by executing each query two or three times.
Consider the following query:
```sql
SELECT issues.*
FROM issues
ORDER BY id
OFFSET 100000
LIMIT 100
```
It produces an execution plan which uses an index scan:
```plaintext
Limit (cost=27800.96..27828.77 rows=100 width=1491) (actual time=138.305..138.470 rows=100 loops=1)
Buffers: shared hit=73212
I/O Timings: read=0.000 write=0.000
-> Index Scan using issues_pkey on public.issues (cost=0.57..26077453.90 rows=93802448 width=1491) (actual time=0.063..133.688 rows=100100 loops=1)
Buffers: shared hit=73212
I/O Timings: read=0.000 write=0.000
Time: 143.779 ms
- planning: 5.222 ms
- execution: 138.557 ms
- I/O read: 0.000 ms
- I/O write: 0.000 ms
Shared buffers:
- hits: 73212 (~572.00 MiB) from the buffer pool
- reads: 0 from the OS file cache, including disk I/O
- dirtied: 0
- writes: 0
```
The optimized query:
```sql
WITH index_only_scan_pagination_cte AS MATERIALIZED
(SELECT id
FROM issues
ORDER BY id ASC
LIMIT 100
OFFSET 100000)
SELECT issues.*
FROM
(SELECT id
FROM index_only_scan_pagination_cte) index_only_scan_subquery,
LATERAL
(SELECT issues.*
FROM issues
WHERE issues.id = index_only_scan_subquery.id
LIMIT 1) issues
```
Execution plan:
```plaintext
Nested Loop (cost=2453.51..2815.44 rows=100 width=1491) (actual time=23.614..23.973 rows=100 loops=1)
Buffers: shared hit=56167
I/O Timings: read=0.000 write=0.000
CTE index_only_scan_pagination_cte
-> Limit (cost=2450.49..2452.94 rows=100 width=4) (actual time=23.590..23.621 rows=100 loops=1)
Buffers: shared hit=55667
I/O Timings: read=0.000 write=0.000
-> Index Only Scan using issues_pkey on public.issues issues_1 (cost=0.57..2298090.72 rows=93802448 width=4) (actual time=0.070..20.412 rows=100100 loops=1)
Heap Fetches: 1063
Buffers: shared hit=55667
I/O Timings: read=0.000 write=0.000
-> CTE Scan on index_only_scan_pagination_cte (cost=0.00..2.00 rows=100 width=4) (actual time=23.593..23.641 rows=100 loops=1)
Buffers: shared hit=55667
I/O Timings: read=0.000 write=0.000
-> Limit (cost=0.57..3.58 rows=1 width=1491) (actual time=0.003..0.003 rows=1 loops=100)
Buffers: shared hit=500
I/O Timings: read=0.000 write=0.000
-> Index Scan using issues_pkey on public.issues (cost=0.57..3.58 rows=1 width=1491) (actual time=0.003..0.003 rows=1 loops=100)
Index Cond: (issues.id = index_only_scan_pagination_cte.id)
Buffers: shared hit=500
I/O Timings: read=0.000 write=0.000
Time: 29.562 ms
- planning: 5.506 ms
- execution: 24.056 ms
- I/O read: 0.000 ms
- I/O write: 0.000 ms
Shared buffers:
- hits: 56167 (~438.80 MiB) from the buffer pool
- reads: 0 from the OS file cache, including disk I/O
- dirtied: 0
- writes: 0
```
# Serializing Data
**Summary**: don't store serialized data in the database, use separate columns
and/or tables instead. This includes storing of comma separated values as a
string.
Rails makes it possible to store serialized data in JSON, YAML or other formats.
Such a field can be defined as follows:
```ruby
class Issue < ActiveRecord::Base
serialize :custom_fields
end
```
While it may be tempting to store serialized data in the database, there are many
problems with this approach. This document outlines these problems and provides an
alternative.
## Serialized Data Is Less Powerful
When using a relational database you have the ability to query individual
fields, change the schema, index data, and so forth. When you use serialized data
all of that becomes either very difficult or downright impossible. While
PostgreSQL does offer the ability to query JSON fields, it is mostly meant for
very specialized use cases, and not for more general use. If you use YAML
instead, there's no way to query the data at all.
## Waste Of Space
Storing serialized data such as JSON or YAML ends up wasting a lot of space.
This is because these formats often include additional characters (for example, double
quotes or newlines) besides the data that you are storing.
## Difficult To Manage
There comes a time when you must add a new field to the serialized
data, or change an existing one. With serialized data this becomes difficult
and very time consuming, as the only way of doing so is to rewrite all the
stored values (a short sketch follows the list). To do so you would have to:
1. Retrieve the data
1. Parse it into a Ruby structure
1. Mutate it
1. Serialize it back to a String
1. Store it in the database
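A minimal sketch of that rewrite for the hypothetical `custom_fields` column above:
```ruby
# Each stored value is loaded, parsed, mutated, re-serialized, and written back.
Issue.find_each do |issue|
  fields = issue.custom_fields || {}   # retrieved and deserialized
  fields['new_field'] = 'default'      # mutated in Ruby
  issue.update!(custom_fields: fields) # serialized back to a string and stored
end
```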
On the other hand, if one were to use regular columns, adding a column would be as simple as:
```sql
ALTER TABLE table_name ADD COLUMN column_name type;
```
Such a query would take very little to no time and would immediately apply to
all rows, without having to re-write large JSON or YAML structures.
Finally, there comes a time when the JSON or YAML structure is no longer
sufficient and you must migrate away from it. When storing only a few rows
this may not be a problem, but when storing millions of rows such a migration
can take hours or even days to complete.
## Relational Databases Are Not Document Stores
When storing data as JSON or YAML you're essentially using your database as if
it were a document store (for example, MongoDB), except you're not using any of the
powerful features provided by a typical RDBMS nor are you using any of the
features provided by a typical document store (for example, the ability to index fields
of documents with variable fields). In other words, it's a waste.
## Consistent Fields
One argument sometimes made in favour of serialized data is having to store
widely varying fields and values. Sometimes this is truly the case, and then
perhaps it might make sense to use serialized data. However, in 99% of the cases
the fields and types stored tend to be the same for every row. Even if there is
a slight difference you can still use separate columns and just not set the ones
you don't need.
## The Solution
The solution is to use separate columns and/or separate tables.
This allows you to use all the features provided by your database, it
makes it easier to manage and migrate the data, you conserve space, you can
index the data efficiently and so forth.
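For the hypothetical `custom_fields` example above, a sketch of this approach (names are illustrative) is a separate table with one row per field:
```ruby
create_table :issue_custom_fields do |t|
  t.references :issue, null: false, index: true, foreign_key: { on_delete: :cascade }
  t.text :name, null: false
  t.text :value
end

class IssueCustomField < ActiveRecord::Base
  belongs_to :issue
end

class Issue < ActiveRecord::Base
  has_many :issue_custom_fields
end
```
Each field is now a regular row that can be queried, indexed, and migrated like any other data.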
# Foreign keys and associations
Foreign keys ensure consistency between related database tables. Starting with Rails version 4, Rails includes migration helpers to add foreign key constraints
to database tables. Before Rails 4, the only way for ensuring some level of consistency was the
[`dependent`](https://guides.rubyonrails.org/association_basics.html#options-for-belongs-to-dependent)
option in the association definition.
Ensuring data consistency on the application level could fail
in some unfortunate cases, so we might end up with inconsistent data in the table. This mostly affects
older tables, where we didn't have the framework support to ensure consistency on the database level.
These data inconsistencies can cause unexpected application behavior or bugs.
When creating tables that reference records from other tables, add a foreign key (FK) to maintain data integrity.
When adding an association to a model you must also add a foreign key, and before
adding a foreign key you must always add an [index](#indexes) first.
For example, say you have the following model:
```ruby
class User < ActiveRecord::Base
has_many :posts
end
```
Add a foreign key on the `posts.user_id` column. This ensures
that data consistency is enforced at the database level. Foreign keys also mean that
the database can remove associated data (for example, when removing a
user), instead of Rails having to do this.
## Avoiding downtime and migration failures
Adding a foreign key has two parts:
1. Adding the FK column and the constraint.
1. Validating the added constraint to maintain data integrity.
Step 1 uses `ALTER TABLE` statements, which take the strictest lock (`ACCESS EXCLUSIVE`), and validating the constraint has to
traverse the entire table, which is time-consuming for large or high-traffic tables.
So in almost all cases we have to run them in separate transactions, to avoid holding the
stricter lock and blocking other operations on the tables for a long time.
### On a new column
If the FK is added while creating the table, it is straightforward: `create_table` with
`t.references(..., foreign_key: true)` can be used.
If you have a new table (with few records) or an empty table that doesn't reference a
[high-traffic table](../migration_style_guide.md#high-traffic-tables), either of the approaches below can be used (a sketch follows the list):
1. `add_reference(..., foreign_key: true)`
1. `add_column(...)` and `add_foreign_key(...)` in the same transaction.
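A minimal sketch of these approaches, with hypothetical table names:
```ruby
# New table: the foreign key is created together with the table.
create_table :post_metrics do |t|
  t.references :post, null: false, index: true, foreign_key: { on_delete: :cascade }
end

# New or empty table referencing a low-traffic table: one transaction is acceptable.
add_reference :post_metrics, :author, index: true, foreign_key: { to_table: :users, on_delete: :cascade }
```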
For all other cases, adding the column, adding FK constraint and validating the constraint should be done in
separate transactions.
### On an existing column
Adding a foreign key to an existing database column requires database structure changes and potential data changes.
{{< alert type="note" >}}
In case the table is in use, we should always assume that there is inconsistent data.
{{< /alert >}}
Adding a FK constraint to an existing column is a multi-milestone process:
1. `N.M`: Add a `NOT VALID` FK constraint to the column. It also ensures that no inconsistent records are created or updated from then on.
1. `N.M`: Add a data migration to fix or clean up existing records.
   - This can be a regular or post-deployment migration if the migration queries stay within the [timing guidelines](query_performance.md).
   - If not, this has to be done in a [batched background migration](batched_background_migrations.md).
1. Validate the FK constraint.
   - If the data migration was a regular or a post-deployment migration, the constraint can be validated in the same milestone.
   - If it was a background migration, the FK can be validated only after the BBM is finalized.
     This is required so that the FK validation doesn't happen while the data migration is still running in the background.
{{< alert type="note" >}}
Adding a foreign-key constraint to either an existing or a new column
needs an index on the column.
If the index was added [asynchronously](adding_database_indexes.md#create-indexes-asynchronously), we should wait until
the index appears in `structure.sql`.
{{< /alert >}}
This is **required** for all foreign keys, for example, to support efficient cascading
deletes: when a lot of rows in a table get deleted, the referenced records need
to be deleted too. The database has to look for corresponding records in the
referenced table. Without an index, this results in a sequential scan on the
table, which can take a long time.
#### Example
Consider the following table structures:
`users` table:
- `id` (integer, primary key)
- `name` (string)
`emails` table:
- `id` (integer, primary key)
- `user_id` (integer)
- `email` (string)
Express the relationship in `ActiveRecord`:
```ruby
class User < ActiveRecord::Base
has_many :emails
end
class Email < ActiveRecord::Base
belongs_to :user
end
```
Problem: when the user is removed, the email records related to the removed user stay in the `emails` table:
```ruby
user = User.find(1)
user.destroy
emails = Email.where(user_id: 1) # returns emails for the deleted user
```
#### Adding the FK constraint (NOT VALID)
Add a `NOT VALID` foreign key constraint to the table, which enforces consistency on adding or updating records.
In the example above, you will still be able to update records in the `emails` table. However, when you try to update `user_id` with a non-existent value, the constraint raises an error.
Migration file for adding `NOT VALID` foreign key:
```ruby
class AddNotValidForeignKeyToEmailsUser < Gitlab::Database::Migration[2.1]
milestone '17.10'
disable_ddl_transaction!
def up
add_concurrent_foreign_key(
:emails,
:users,
column: :user_id,
on_delete: :cascade,
validate: false
)
end
def down
remove_foreign_key_if_exists :emails, column: :user_id
end
end
```
{{< alert type="note" >}}
By default, the `add_concurrent_foreign_key` method validates the foreign key, so explicitly pass `validate: false`.
{{< /alert >}}
Adding a foreign key without validating it is a fast operation. It only requires a
short lock on the table before being able to enforce the constraint on new data.
`add_concurrent_foreign_key` also adds the constraint only if it does not already exist.
{{< alert type="warning" >}}
Avoid using `add_foreign_key` or `add_concurrent_foreign_key` constraints more than
once per migration file, unless the source and target tables are identical.
{{< /alert >}}
#### Data migration to fix existing records
The approach here depends on the data volume and the cleanup strategy. If we can find "invalid"
records by doing a database query and the record count is not high, then the data migration can
be executed in a regular or post-deployment Rails migration.
If the data volume is higher (more than 1,000 records), it's better to create a background migration. If unsure, refer to our [query guidelines](query_performance.md) or contact the Database Frameworks team for advice.
Example for cleaning up records in the `emails` table in a database migration:
```ruby
class RemoveRecordsWithoutUserFromEmailsTable < Gitlab::Database::Migration[2.1]
disable_ddl_transaction!
class Email < ActiveRecord::Base
include EachBatch
end
def up
Email.each_batch do |batch|
batch.joins('LEFT JOIN users ON emails.user_id = users.id')
.where('users.id IS NULL')
.delete_all
end
end
def down
# Can be a no-op when data inconsistency is not affecting the pre and post deployment version of the application.
# In this case we might have records in the `emails` table where the associated record in the `users` table is not there anymore.
end
end
```
{{< alert type="note" >}}
The MR that adds this data migration should have the `~data-deletion` label applied.
See [Preparation when adding data migrations](../database_review.md#preparation-when-adding-data-migrations) for more information.
{{< /alert >}}
#### Validate the foreign key
Validating the foreign key scans the whole table and makes sure that each relation is correct.
Fortunately, this does not lock the source table (`users`) while running.
As mentioned earlier, when using [batched background migrations](batched_background_migrations.md), foreign key validation should happen only after the BBM is finalized.
Migration file for validating the foreign key:
```ruby
# frozen_string_literal: true
class ValidateForeignKeyOnEmailUsers < Gitlab::Database::Migration[2.1]
def up
validate_foreign_key :emails, :user_id
end
def down
# Can be safely a no-op if we don't roll back the inconsistent data.
end
end
```
#### Validate the foreign key asynchronously
For very large tables, foreign key validation can be a challenge to manage when
it runs for many hours. Necessary database operations like `autovacuum` cannot
run, and on GitLab.com, the deployment process is blocked waiting for the
migrations to finish.
To limit impact on GitLab.com, a process exists to validate them asynchronously
during weekend hours. Due to generally lower traffic and fewer deployments,
FK validation can proceed at a lower level of risk.
##### Schedule foreign key validation for a low-impact time
1. [Schedule the FK to be validated](#schedule-the-fk-to-be-validated).
1. [Verify the MR was deployed and the FK is valid in production](#verify-the-mr-was-deployed-and-the-fk-is-valid-in-production).
1. [Add a migration to validate the FK synchronously](#add-a-migration-to-validate-the-fk-synchronously).
##### Schedule the FK to be validated
1. Create a merge request containing a post-deployment migration, which prepares
the foreign key for asynchronous validation.
1. Create a follow-up issue to add a migration that validates the foreign key
synchronously.
1. In the merge request that prepares the asynchronous foreign key, add a
comment mentioning the follow-up issue.
An example of validating the foreign key using the asynchronous helpers can be
seen in the block below. This migration enters the foreign key name into the
`postgres_async_foreign_key_validations` table. The process that runs on
weekends pulls foreign keys from this table and attempts to validate them.
```ruby
# in db/post_migrate/
FK_NAME = :fk_be5624bf37
# TODO: FK to be validated synchronously in issue or merge request
def up
# `some_column` can be an array of columns, and is not mandatory if `name` is supplied.
# `name` takes precedence over other arguments.
prepare_async_foreign_key_validation :ci_builds, :some_column, name: FK_NAME
# Or in case of partitioned tables, use:
prepare_partitioned_async_foreign_key_validation :p_ci_builds, :some_column, name: FK_NAME
end
def down
unprepare_async_foreign_key_validation :ci_builds, :some_column, name: FK_NAME
# Or in case of partitioned tables, use:
unprepare_partitioned_async_foreign_key_validation :p_ci_builds, :some_column, name: FK_NAME
end
```
##### Verify the MR was deployed and the FK is valid in production
1. Verify that the post-deploy migration was executed on GitLab.com using ChatOps with
`/chatops run auto_deploy status <merge_sha>`. If the output returns `db/gprd`,
the post-deploy migration has been executed in the production database. For more information, see
[How to determine if a post-deploy migration has been executed on GitLab.com](https://gitlab.com/gitlab-org/release/docs/-/blob/master/general/post_deploy_migration/readme.md#how-to-determine-if-a-post-deploy-migration-has-been-executed-on-gitlabcom).
1. Wait until the next week so that the FK can be validated over a weekend.
1. Use [Database Lab](database_lab.md) to check if validation was successful.
Ensure the output does not indicate the foreign key is `NOT VALID`.
##### Add a migration to validate the FK synchronously
After the foreign key is valid on the production database, create a second
merge request that validates the foreign key synchronously. The schema changes
must be updated and committed to `structure.sql` in this second merge request.
The synchronous migration results in a no-op on GitLab.com, but you should still
add the migration as expected for other installations. The below block
demonstrates how to create the second migration for the previous
asynchronous example.
{{< alert type="warning" >}}
Verify that the foreign key is valid in production before merging a second
migration with `validate_foreign_key`. If the second migration is deployed
before the validation has been executed, the foreign key is validated
synchronously when the second migration executes.
{{< /alert >}}
```ruby
# in db/post_migrate/
FK_NAME = :fk_be5624bf37
def up
validate_foreign_key :ci_builds, :some_column, name: FK_NAME
end
def down
# Can be safely a no-op if we don't roll back the inconsistent data.
end
```
#### Test database FK changes locally
You must test the database foreign key changes locally before creating a merge request.
##### Verify the foreign keys validated asynchronously
Use the asynchronous helpers on your local environment to test changes for
validating a foreign key:
1. Enable the feature flag by running `Feature.enable(:database_async_foreign_key_validation)`
in the Rails console.
1. Run `bundle exec rails db:migrate` so that it creates an entry in the async validation table.
1. Run `bundle exec rails gitlab:db:validate_async_constraints:all` so that the FK is validated
asynchronously on all databases.
1. To verify the foreign key, open the PostgreSQL console using the
[GDK](https://gitlab.com/gitlab-org/gitlab-development-kit/-/blob/main/doc/howto/postgresql.md)
command `gdk psql` and run the command `\d+ table_name` to check that your
foreign key is valid. A successful validation removes `NOT VALID` from
the foreign key definition.
### Removing foreign keys
This operation does not require downtime.
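A minimal sketch of such a migration, reusing the `emails` example from above:
```ruby
class RemoveEmailsUserIdForeignKey < Gitlab::Database::Migration[2.1]
  disable_ddl_transaction!

  def up
    with_lock_retries do
      remove_foreign_key_if_exists :emails, column: :user_id
    end
  end

  def down
    add_concurrent_foreign_key :emails, :users, column: :user_id, on_delete: :cascade
  end
end
```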
## Use `bigint` for foreign keys
When adding a new foreign key, you should define it as `bigint`.
Even if the referenced table has an `integer` primary key type,
you must reference the new foreign key as `bigint`. As we are
migrating all primary keys to `bigint`, using `bigint` foreign keys
saves time, and requires fewer steps, when migrating the parent table
to `bigint` primary keys.
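For example (illustrative table names), even if `users.id` is still an `integer` column, the new reference column is created as `bigint`:
```ruby
# The referencing column is bigint regardless of the referenced primary key type.
add_column :emails, :user_id, :bigint
```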
## Consider `reverse_lock_order`
Consider using `reverse_lock_order` for [high traffic tables](../migration_style_guide.md#high-traffic-tables).
Both `add_concurrent_foreign_key` and `remove_foreign_key_if_exists` take a
boolean option `reverse_lock_order` which defaults to false.
You can read more about the context for this in
[the original merge request](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/67448).
This can be useful where we have known queries that are also acquiring locks
(usually row locks) on the same tables at a high frequency.
Consider, for example, the scenario where you want to add a foreign key like:
```sql
ALTER TABLE ONLY todos
ADD CONSTRAINT fk_91d1f47b13 FOREIGN KEY (note_id) REFERENCES notes(id) ON DELETE CASCADE;
```
And consider the following hypothetical application code:
```ruby
Todo.transaction do
note = Note.create(...)
# Observe what happens if foreign key is added here!
todo = Todo.create!(note_id: note.id)
end
```
If you try to create the foreign key between the two insert statements, both transactions can
end up in a deadlock in PostgreSQL. Here is how it happens:
1. `Note.create`: acquires a row lock on `notes`
1. `ALTER TABLE ...` acquires a table lock on `todos`
1. `ALTER TABLE ... FOREIGN KEY` attempts to acquire a table lock on `notes` but this blocks on the other transaction which has a row lock
1. `Todo.create` attempts to acquire a row lock on `todos` but this blocks on the other transaction which has a table lock on `todos`
This illustrates how both transactions can be stuck waiting for each other to
finish, and both will time out. We usually have transaction retries in our
migrations so it is usually OK, but the application code might also time out and
there might be an error for that user. If this application code runs very
frequently, it's possible that the migration will constantly time out
and users may also regularly get errors.
The deadlock case with removing a foreign key is similar because it also
acquires locks on both tables but a more common scenario, using the example
above, would be a `DELETE FROM notes WHERE id = ...`. This query will acquire a
lock on `notes` followed by a lock on `todos` and the exact same deadlock
described above can happen. For this reason it's almost always best to use
`reverse_lock_order` for removing a foreign key.
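A sketch of passing the option, using the `todos`/`notes` example above, so the migration takes locks in the same order as the application code:
```ruby
add_concurrent_foreign_key :todos, :notes, column: :note_id, on_delete: :cascade, reverse_lock_order: true

with_lock_retries do
  remove_foreign_key_if_exists :todos, column: :note_id, reverse_lock_order: true
end
```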
## Updating foreign keys in migrations
Sometimes a foreign key constraint must be changed, preserving the column
but updating the constraint condition. For example, moving from
`ON DELETE CASCADE` to `ON DELETE SET NULL` or vice-versa.
PostgreSQL does not prevent you from adding overlapping foreign keys. It
honors the most recently added constraint. This allows us to replace foreign keys without
ever losing foreign key protection on a column.
To replace a foreign key:
1. Add the new foreign key:
```ruby
class ReplaceFkOnPackagesPackagesProjectId < Gitlab::Database::Migration[2.1]
disable_ddl_transaction!
NEW_CONSTRAINT_NAME = 'fk_new'
def up
add_concurrent_foreign_key(:packages_packages, :projects, column: :project_id, on_delete: :nullify, name: NEW_CONSTRAINT_NAME)
end
def down
with_lock_retries do
remove_foreign_key_if_exists(:packages_packages, column: :project_id, on_delete: :nullify, name: NEW_CONSTRAINT_NAME)
end
end
end
```
1. Remove the old foreign key:
```ruby
class RemoveFkOld < Gitlab::Database::Migration[2.1]
disable_ddl_transaction!
OLD_CONSTRAINT_NAME = 'fk_old'
def up
with_lock_retries do
remove_foreign_key_if_exists(:packages_packages, column: :project_id, on_delete: :cascade, name: OLD_CONSTRAINT_NAME)
end
end
def down
add_concurrent_foreign_key(:packages_packages, :projects, column: :project_id, on_delete: :cascade, name: OLD_CONSTRAINT_NAME)
end
end
```
## Cascading deletes
Every foreign key must define an `ON DELETE` clause, and in 99% of the cases
this should be set to `CASCADE`.
## Indexes
When adding a foreign key in PostgreSQL the column is not indexed automatically,
thus you must also add a concurrent index. Indexes are required for all foreign
keys and they must be added before the foreign key. This can mean that they are
an earlier step in the same migration or they are added in an earlier migration
than the migration adding the foreign key. For the same reasons, foreign keys
must be removed before removing indexes supporting these foreign keys.
Without an index on the foreign key, PostgreSQL is forced to do a full table scan
every time a record is deleted from the referenced table. In the past this has
led to incidents where deleting `projects` and `namespaces` timed out.
It is also ok to have a composite index which covers this foreign key so long
as the foreign key is in the first position of the composite index. For example
if you have a foreign key `project_id` then it is OK to have a composite index
like `BTREE (project_id, user_id)` but it is not OK to have an index like
`BTREE (user_id, project_id)`. The latter does not allow efficient lookups by
`project_id` alone and therefore would not prevent the cascade deletes from
timing out. Partial indexes like `BTREE (project_id) WHERE user_id IS NULL`
can never be used for cascading deletes and are not OK for serving as an index
for the foreign key.
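A sketch of the required ordering, reusing the `emails` example (the index name is illustrative):
```ruby
INDEX_NAME = 'index_emails_on_user_id'

def up
  # The index must exist before the foreign key is added.
  add_concurrent_index :emails, :user_id, name: INDEX_NAME
  add_concurrent_foreign_key :emails, :users, column: :user_id, on_delete: :cascade
end

def down
  # The foreign key must be removed before its supporting index.
  remove_foreign_key_if_exists :emails, column: :user_id
  remove_concurrent_index_by_name :emails, INDEX_NAME
end
```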
## Naming foreign keys
By default Ruby on Rails uses the `_id` suffix for foreign keys. So we should
only use this suffix for associations between two tables. If you want to
reference an ID on a third party platform the `_xid` suffix is recommended.
The spec `spec/db/schema_spec.rb` tests if all columns with the `_id` suffix
have a foreign key constraint. If that spec fails, add the column to
`ignored_fk_columns_map` if the column fits any of the following criteria:
1. The column references another table, but the two tables belong to
[GitLab schemas](multiple_databases.md#gitlab-schema) that don't
allow foreign keys between them.
1. The foreign key is replaced by a [Loose Foreign Key](loose_foreign_keys.md) for performance reasons.
1. The column represents a [polymorphic relationship](polymorphic_associations.md). Note that polymorphic associations should not be used.
1. The column is not meant to reference another table. For example, it's common to have `partition_id`
for partitioned tables.
## Dependent removals
Don't define options such as `dependent: :destroy` or `dependent: :delete` when
defining an association. Defining these options means Rails handles the
removal of data, instead of letting the database handle this in the most
efficient way possible.
In other words, this is bad and should be avoided at all costs:
```ruby
class User < ActiveRecord::Base
has_many :posts, dependent: :destroy
end
```
Should you truly have a need for this it should be approved by a database
specialist first.
You should also not define any `before_destroy` or `after_destroy` callbacks on
your models unless absolutely required and only when approved by database
specialists. For example, if each row in a table has a corresponding file on a
file system, it may be tempting to add an `after_destroy` hook. This, however,
introduces non-database logic into a model, and means we can no longer rely on
foreign keys to remove the data, as this would result in the file system data
being left behind. In such a case you should use a service class instead that
takes care of removing the non-database data.
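A minimal sketch of that service-class approach (class and attribute names are hypothetical):
```ruby
class UploadDestroyService
  def initialize(upload)
    @upload = upload
  end

  def execute
    # Remove the non-database data explicitly, then let the database and its
    # foreign keys handle the row removal.
    File.delete(@upload.file_path) if File.exist?(@upload.file_path)
    @upload.destroy!
  end
end
```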
In cases where the relation spans multiple databases you have even
further problems using `dependent: :destroy` or the above hooks. You can
read more about alternatives at
[Avoid `dependent: :nullify` and `dependent: :destroy` across databases](multiple_databases.md#avoid-dependent-nullify-and-dependent-destroy-across-databases).
## Alternative primary keys with `has_one` associations
Sometimes a `has_one` association is used to create a one-to-one relationship:
```ruby
class User < ActiveRecord::Base
has_one :user_config
end
class UserConfig < ActiveRecord::Base
belongs_to :user
end
```
In these cases, there may be an opportunity to remove the unnecessary `id`
column on the associated table, `user_config.id` in this example. Instead,
the originating table ID can be used as the primary key for the associated
table:
```ruby
create_table :user_configs, id: false do |t|
t.references :users, primary_key: true, default: nil, index: false, foreign_key: { on_delete: :cascade }
...
end
```
Setting `default: nil` ensures a primary key sequence is not created, and because the primary key
automatically gets an index, we set `index: false` to avoid creating a duplicate.
You must also add the new primary key to the model:
```ruby
class UserConfig < ActiveRecord::Base
self.primary_key = :user_id
belongs_to :user
end
```
Using a foreign key as primary key saves space but can make
[batch counting](../internal_analytics/metrics/metrics_instrumentation.md#batch-counters-example) in [Service Ping](../internal_analytics/service_ping/_index.md) less efficient.
Consider using a regular `id` column if the table is relevant for Service Ping.
# Large tables limitations
GitLab enforces some limitations on large database tables schema changes to improve manageability for both GitLab and its customers. The list of tables subject to these limitations is defined in [`rubocop/rubocop-migrations.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/rubocop/rubocop-migrations.yml).
## Table size restrictions
The following limitations apply to table schema changes on GitLab.com:
| Limitation | Maximum size after the action (including indexes and column size) |
| ------ | ------------------------------- |
| Can not add an index | 50 GB |
| Can not add a column with foreign key | 50 GB |
| Can not add a new column | 100 GB |
These limitations align with our goal to maintain [all tables under 100 GB](https://handbook.gitlab.com/handbook/engineering/architecture/design-documents/database_size_limits/) for improved [stability and performance](https://handbook.gitlab.com/handbook/engineering/architecture/design-documents/database_size_limits/#motivation-gitlabcom-stability-and-performance).
## Exceptions
Exceptions to these size limitations should only be granted for the following cases:
- Migrate a table's columns from `int4` to `int8`
- Add a sharding key to support cells
- Modify a table to assist in partitioning or data retention efforts
- Replace an existing index to provide better query performance
### Requesting an exception
To request an exception to these limitations:
1. Create a new issue using the [Database Team Tasks template](https://gitlab.com/gitlab-org/database-team/team-tasks/-/issues/new?issuable_template=schema_change_exception)
1. Select the `schema_change_exception` template
1. Provide detailed justification for why your case requires an exception
1. Wait for review and approval from the Database team before proceeding
1. Link the approval issue when disabling the cop for your migration
## Techniques to reduce table size
Before requesting an exception, consider these approaches to manage table size:
### Archiving data
- Move old, infrequently accessed data to archive tables
- Implement archiving workers for automated data migration
- Consider using partitioning by date to facilitate archiving, see [date range partitioning](partitioning/date_range.md)
### Data retention
- Implement retention policies to remove old data (see the sketch after this list)
- Configure automated cleanup jobs for expired data, see [deleting old pipelines](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/171142)
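A minimal sketch of such a batched cleanup, with a hypothetical model, retention window, and batch size (`each_batch` comes from GitLab's `EachBatch` concern):

```ruby
# Hypothetical sketch: the `Event` model, the 90-day window, and the batch size
# are illustrative only.
class PruneOldEventsService
  BATCH_SIZE = 1_000
  RETENTION_PERIOD = 90.days

  def execute
    Event.where('created_at < ?', RETENTION_PERIOD.ago).each_batch(of: BATCH_SIZE) do |batch|
      batch.delete_all
    end
  end
end
```

In practice this would typically run as a scheduled (cron) Sidekiq worker so the table size stays bounded over time.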
### Table partitioning
- [Partition large tables by date](scalability/patterns/time_decay.md#time-decay-data-strategies), ID ranges, or other criteria
- Consider [range](partitioning/date_range.md) or [list](partitioning/list.md) partitioning based on access patterns
### Column optimization
- Use appropriate data types (for example, `smallint` instead of `integer` when possible)
- Remove unused or redundant indexes
- Consider using `NULL` instead of empty strings or zeros
- Use `text` instead of `varchar` to [avoid storage overhead](ordering_table_columns.md)
### Normalization
- Split large tables into related smaller tables
- Move rarely used columns to [separate tables](layout_and_access_patterns.md#data-model-trade-offs)
- Use junction tables for many-to-many relationships
- Consider vertical partitioning for [wide tables](layout_and_access_patterns.md#wide-tables)
### External storage
- Move large text or binary data to object storage
- Store only metadata in the database
- Use [Elasticsearch](../../user/search/advanced_search.md) for search-specific data
- Consider using Redis for temporary or cached data
## Alternatives to table modifications
Consider these alternatives when working with large tables:
1. Create a separate table for new columns, especially if the column is not present in all rows. The new table references the original table through a foreign key.
1. Work with the Global Search team to add your data to Elasticsearch for enhanced filter/search functionality.
1. Simplify filtering/sorting options (for example, use `id` instead of `created_at` for sorting).
## Benefits of table size limitations
Table size limitations provide several advantages:
- Enable separate vacuum operations with different frequencies
- Generate less Write-Ahead Log (WAL) data for column updates
- Prevent unnecessary data copying during row updates
For more information about data model trade-offs, see the [database documentation](layout_and_access_patterns.md#data-model-trade-offs).
## Using `has_one` relationships
When a table becomes too large for new columns, create a new table with a `has_one` relation. For example, in [merge request !170371](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/170371), we track the total weight count of an issue in a separate table.
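A minimal sketch of this pattern, with hypothetical table and column names (not the actual implementation from the merge request):

```ruby
# Hypothetical companion table keyed by the parent issue's ID, so the wide
# `issues` table does not grow.
create_table :issue_weight_totals, id: false do |t|
  t.references :issue, primary_key: true, default: nil, index: false,
    foreign_key: { on_delete: :cascade }
  t.integer :total_weight, null: false, default: 0
end

# Models: the companion record is loaded only by the queries that need it.
class Issue < ActiveRecord::Base
  has_one :weight_total, class_name: 'IssueWeightTotal'
end

class IssueWeightTotal < ActiveRecord::Base
  self.primary_key = :issue_id

  belongs_to :issue
end
```

Queries that do not need the weight total never touch the companion table, which keeps the hot `issues` rows narrow.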
Benefits of this approach:
1. Keeps the main table narrower, reducing data load from PostgreSQL
1. Creates an efficient narrow table for specific queries
1. Allows selective population of the new table as needed
This approach is particularly effective when:
- The new column applies to a subset of the main table
- Only specific queries need the new data
Disadvantages of this approach:
1. More tables may result in more joins, which complicates queries.
1. Queries with multiple joins can be harder to optimize.
## Related links
- [Database size limits](https://handbook.gitlab.com/handbook/engineering/architecture/design-documents/database_size_limits/#solutions)
- [Adding database indexes](adding_database_indexes.md)
- [Database layout and access patterns](layout_and_access_patterns.md#data-model-trade-offs)
- [Data retention guidelines for feature development](../data_retention_policies.md)
# Update multiple database objects
You can update multiple database objects with new values for one or more columns.
One method is to use `Relation#update_all`:
```ruby
user.issues.open.update_all(due_date: 7.days.from_now) # (1)
user.issues.update_all('relative_position = relative_position + 1') # (2)
```
If you cannot express the update as either a static value (1) or as a calculation (2),
use `UPDATE FROM` to update multiple rows with distinct values
in a single query. Create a temporary table, or a Common Table Expression (CTE),
and use it as the source of the updates:
```sql
with updates(obj_id, new_title, new_weight) as (
values (1 :: integer, 'Very difficult issue' :: text, 8 :: integer),
(2, 'Very easy issue', 1)
)
update issues
set title = new_title, weight = new_weight
from updates
where id = obj_id
```
You can't express this in ActiveRecord, or by dropping down to [Arel](https://api.rubyonrails.org/classes/Arel.html),
because the `UpdateManager` does not support `update from`. However, we supply
an abstraction to help you generate these kinds of updates: `Gitlab::Database::BulkUpdate`.
This abstraction constructs queries like the previous example, and uses
binding parameters to avoid SQL injection.
## Usage
To use `Gitlab::Database::BulkUpdate`, we need:
- The list of columns to update.
- A mapping from the object (or ID) to the new values to set for that object.
- A way to determine the table for each object.
For example, we can express the example query in a way that determines the
table by calling `object.class.table_name`:
```ruby
issue_a = Issue.find(..)
issue_b = Issue.find(..)
# Issues a single query:
::Gitlab::Database::BulkUpdate.execute(%i[title weight], {
issue_a => { title: 'Very difficult issue', weight: 8 },
issue_b => { title: 'Very easy issue', weight: 1 }
})
```
You can even pass heterogeneous sets of objects, if the updates all make sense
for them:
```ruby
issue_a = Issue.find(..)
issue_b = Issue.find(..)
merge_request = MergeRequest.find(..)
# Issues two queries
::Gitlab::Database::BulkUpdate.execute(%i[title], {
issue_a => { title: 'A' },
issue_b => { title: 'B' },
merge_request => { title: 'B' }
})
```
If your objects do not return the correct model class, such as if they are part
of a union, then specify the model class explicitly in a block:
```ruby
bazzes = params
objects = Foo.from_union([
Foo.select("id, 'foo' as object_type").where(quux: true),
Bar.select("id, 'bar' as object_type").where(wibble: true)
])
# At this point, all the objects are instances of Foo, even the ones from the
# Bar table
mapping = objects.to_h { |obj| [obj, bazzes[obj.id]] }
# Issues at most 2 queries
::Gitlab::Database::BulkUpdate.execute(%i[baz], mapping) do |obj|
obj.object_type.constantize
end
```
## Caveats
This tool is **very low level**, and operates directly on the raw column
values. Consider these issues when you use it:
- Enumerations and state fields must be translated into their underlying representations (see the sketch after this list).
- Nested associations are not supported.
- No validations or hooks are called.
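For the enumeration caveat, pass the integer that backs the enum rather than its symbolic name. A sketch that assumes `severity` is a plain ActiveRecord enum on `Issue`:

```ruby
# Hypothetical: `severity` is assumed to be an ActiveRecord enum, so its
# underlying integer representation must be passed, not the symbol.
::Gitlab::Database::BulkUpdate.execute(%i[severity], {
  issue_a => { severity: Issue.severities[:high] },
  issue_b => { severity: Issue.severities[:low] }
})
```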
# Client-side connection-pool
Ruby processes accessing the database through ActiveRecord automatically
calculate the connection-pool size for the process based on its concurrency.
Because of the way [Ruby on Rails manages database connections](#connection-lifecycle),
it is important that we have at
least as many connections as we have threads. While there is a 'pool'
setting in [`database.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/config/database.yml.postgresql), it is not very practical because you need to
maintain it in tandem with the number of application threads. For this
reason, we override the number of allowed connections in the database
connection-pool based on the configured number of application threads.
`Gitlab::Runtime.max_threads` is the number of user-facing
application threads the process has been configured with. We also have
auxiliary threads that use database connections. As it isn't
straightforward to keep an accurate count of the number of auxiliary threads as
the application evolves over time, we just add a fixed headroom to the
number of user-facing threads. It is OK if this number is too large
because connections are instantiated lazily.
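Conceptually, the override amounts to something like the following simplified sketch (not the actual implementation; the headroom default of 10 can be overridden with `DB_POOL_HEADROOM`):

```ruby
# Simplified sketch of the pool-size calculation described above.
headroom = ENV.fetch('DB_POOL_HEADROOM', 10).to_i
pool_size = Gitlab::Runtime.max_threads + headroom
```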
## Troubleshooting connection-pool issues
The connection-pool usage can be seen per environment in the
[connection-pool saturation dashboard](https://dashboards.gitlab.net/d/alerts-sat_rails_db_connection_pool/alerts-rails_db_connection_pool-saturation-detail?orgId=1).
If the connection-pool is too small, this would manifest in
`ActiveRecord::ConnectionTimeoutError`s from the application. Because we alert
when almost all connections are used, we should know this before
timeouts occur. If this happens, we can remediate by setting the
`DB_POOL_HEADROOM` environment variable to something bigger than the
hardcoded value (10).
At this point, we need to investigate what is using more connections
than we anticipated. To do that, we can use the
`gitlab_ruby_threads_running_threads` metric. For example,
[this graph](https://dashboards.gitlab.net/explore?schemaVersion=1&panes=%7B%22m95%22:%7B%22datasource%22:%22e58c2f51-20f8-4f4b-ad48-2968782ca7d6%22,%22queries%22:%5B%7B%22refId%22:%22A%22,%22expr%22:%22sum%20by%20%28thread_name%29%20%28%20gitlab_ruby_threads_running_threads%7Buses_db_connection%3D%5C%22yes%5C%22%7D%20%29%22,%22range%22:true,%22instant%22:true,%22datasource%22:%7B%22type%22:%22prometheus%22,%22uid%22:%22e58c2f51-20f8-4f4b-ad48-2968782ca7d6%22%7D,%22editorMode%22:%22code%22,%22legendFormat%22:%22__auto%22%7D%5D,%22range%22:%7B%22from%22:%22now-1h%22,%22to%22:%22now%22%7D%7D%7D&orgId=1)
shows all running threads that connect to the database by their
name. Threads labeled `puma worker` or `sidekiq_worker_thread` are
the threads that define `Gitlab::Runtime.max_threads` so those are
accounted for. If there are more than 10 other threads running, we could
consider raising the default headroom.
## Connection lifecycle
For web requests, a connection is obtained from the pool at the first
time a database query is made. The connection is returned to the pool
after the request completes.
For background jobs, the behavior is very similar. The thread obtains
a connection for the first query, and returns it after the job is
finished.
This is managed by Rails internally.
# QueryRecorder
QueryRecorder is a tool for detecting the [N+1 queries problem](https://guides.rubyonrails.org/active_record_querying.html#eager-loading-associations) from tests.
> Implemented in [spec/support/query_recorder.rb](https://gitlab.com/gitlab-org/gitlab/-/blob/master/spec/support/helpers/query_recorder.rb) via [9c623e3e](https://gitlab.com/gitlab-org/gitlab-foss/commit/9c623e3e5d7434f2e30f7c389d13e5af4ede770a)
As a rule, merge requests [should not increase query counts](../merge_request_concepts/performance.md#query-counts). If you find yourself adding something like `.includes(:author, :assignee)` to avoid having `N+1` queries, consider using QueryRecorder to enforce this with a test. Without this, a new feature which causes an additional model to be accessed can silently reintroduce the problem.
## How a QueryRecorder works
This style of test works by counting the number of SQL queries executed by ActiveRecord. First a control count is taken, then you add new records to the database and rerun the count. If the number of queries has significantly increased then an `N+1` queries problem exists.
```ruby
it "avoids N+1 database queries", :use_sql_query_cache do
control = ActiveRecord::QueryRecorder.new(skip_cached: false) { visit_some_page }
create_list(:issue, 5)
expect { visit_some_page }.to issue_same_number_of_queries_as(control)
end
```
You can, if you wish, have both the expectation and the control as
`QueryRecorder` instances:
```ruby
it "avoids N+1 database queries" do
control = ActiveRecord::QueryRecorder.new { visit_some_page }
create_list(:issue, 5)
action = ActiveRecord::QueryRecorder.new { visit_some_page }
expect(action).to issue_same_number_of_queries_as(control)
end
```
As an example, you might create 5 issues in between counts, which would cause the query count to increase by 5 if an N+1 problem exists.
In some cases, the query count might change slightly between runs for unrelated reasons.
In this case, you might need to test `issue_same_number_of_queries_as(control_count + acceptable_change)`,
but this should be avoided if possible.
If this test fails, and the control was passed as a `QueryRecorder`, then the
failure message indicates where the extra queries are by matching queries on
the longest common prefix, grouping similar queries together.
In some cases, N+1 specs have been written to include three requests: first one to
warm the cache, second one to establish a control, third one to validate that
there are no N+1 queries. Rather than make an extra request to warm the cache, prefer two requests
(control and test) and configure your test to ignore [cached queries](#cached-queries) in N+1 specs.
```ruby
it "avoids N+1 database queries" do
# warm up
visit_some_page
control = ActiveRecord::QueryRecorder.new(skip_cached: true) { visit_some_page }
create_list(:issue, 5)
expect { visit_some_page }.to issue_same_number_of_queries_as(control)
end
```
## Cached queries
By default, QueryRecorder ignores [cached queries](../merge_request_concepts/performance.md#cached-queries) in the count.
However, it may be better to count all queries to avoid introducing an N+1 query that may be masked by the statement cache.
To do this, set the `:use_sql_query_cache` flag, pass `skip_cached: false` to `QueryRecorder`, and use the `issue_same_number_of_queries_as` matcher:
```ruby
it "avoids N+1 database queries", :use_sql_query_cache do
control = ActiveRecord::QueryRecorder.new(skip_cached: false) { visit_some_page }
create_list(:issue, 5)
expect { visit_some_page }.to issue_same_number_of_queries_as(control)
end
```
## Using RequestStore
[`RequestStore` / `Gitlab::SafeRequestStore`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/gems/gitlab-safe_request_store/README.md)
helps us to avoid N+1 queries by caching data in memory for the duration of a request. However, it is disabled by default in tests
and can lead to false negatives when testing for N+1 queries.
To enable `RequestStore` in tests, use the `request_store` helper when needed:
```ruby
it "avoids N+1 database queries", :request_store do
control = ActiveRecord::QueryRecorder.new(skip_cached: true) { visit_some_page }
create_list(:issue, 5)
expect { visit_some_page }.to issue_same_number_of_queries_as(control)
end
```
## Use request specs instead of controller specs
Use a [request spec](https://gitlab.com/gitlab-org/gitlab/-/tree/master/spec/requests) when writing a N+1 test on the controller level.
Controller specs should not be used to write N+1 tests as the controller is only initialized once per example.
This could lead to false successes where subsequent "requests" could have queries reduced (for example, because of memoization).
## Never trust a test you haven't seen fail
Before you add a test for N+1 queries, you should first verify that the test fails without your change.
This is because the test may be broken, or the test may be passing for the wrong reasons.
## Finding the source of the query
There are multiple ways to find the source of queries.
- Inspect the `QueryRecorder` `data` attribute. It stores queries by `file_name:line_number:method_name`.
Each entry is a `hash` with the following fields:
- `count`: the number of times a query from this `file_name:line_number:method_name` was called
- `occurrences`: the actual `SQL` of each call
- `backtrace`: the stack trace of each call (if either of the two following options were enabled)
`QueryRecorder#find_query` allows filtering queries by their `file_name:line_number:method_name` and
`count` attributes. For example:
```ruby
control = ActiveRecord::QueryRecorder.new(skip_cached: false) { visit_some_page }
control.find_query(/.*note.rb.*/, 0, first_only: true)
```
`QueryRecorder#occurrences_by_line_method` returns a sorted array based on `data`, sorted by `count`.
- View the call backtrace for the specific `QueryRecorder` instance you want
by using `ActiveRecord::QueryRecorder.new(query_recorder_debug: true)`. The output
is stored in file `test.log`.
- Enable the call backtrace for all tests using the `QUERY_RECORDER_DEBUG` environment variable.
To enable this, run the specs with the `QUERY_RECORDER_DEBUG` environment variable set. For example:
```shell
QUERY_RECORDER_DEBUG=1 bundle exec rspec spec/requests/api/projects_spec.rb
```
This logs calls to QueryRecorder into the `test.log` file. For example:
```sql
QueryRecorder SQL: SELECT COUNT(*) FROM "issues" WHERE "issues"."deleted_at" IS NULL AND "issues"."project_id" = $1 AND ("issues"."state" IN ('opened')) AND "issues"."confidential" = $2
--> /home/user/gitlab/gdk/gitlab/spec/support/query_recorder.rb:19:in `callback'
--> /home/user/.rbenv/versions/2.3.5/lib/ruby/gems/2.3.0/gems/activesupport-4.2.8/lib/active_support/notifications/fanout.rb:127:in `finish'
--> /home/user/.rbenv/versions/2.3.5/lib/ruby/gems/2.3.0/gems/activesupport-4.2.8/lib/active_support/notifications/fanout.rb:46:in `block in finish'
--> /home/user/.rbenv/versions/2.3.5/lib/ruby/gems/2.3.0/gems/activesupport-4.2.8/lib/active_support/notifications/fanout.rb:46:in `each'
--> /home/user/.rbenv/versions/2.3.5/lib/ruby/gems/2.3.0/gems/activesupport-4.2.8/lib/active_support/notifications/fanout.rb:46:in `finish'
--> /home/user/.rbenv/versions/2.3.5/lib/ruby/gems/2.3.0/gems/activesupport-4.2.8/lib/active_support/notifications/instrumenter.rb:36:in `finish'
--> /home/user/.rbenv/versions/2.3.5/lib/ruby/gems/2.3.0/gems/activesupport-4.2.8/lib/active_support/notifications/instrumenter.rb:25:in `instrument'
--> /home/user/.rbenv/versions/2.3.5/lib/ruby/gems/2.3.0/gems/activerecord-4.2.8/lib/active_record/connection_adapters/abstract_adapter.rb:478:in `log'
--> /home/user/.rbenv/versions/2.3.5/lib/ruby/gems/2.3.0/gems/activerecord-4.2.8/lib/active_record/connection_adapters/postgresql_adapter.rb:601:in `exec_cache'
--> /home/user/.rbenv/versions/2.3.5/lib/ruby/gems/2.3.0/gems/activerecord-4.2.8/lib/active_record/connection_adapters/postgresql_adapter.rb:585:in `execute_and_clear'
--> /home/user/.rbenv/versions/2.3.5/lib/ruby/gems/2.3.0/gems/activerecord-4.2.8/lib/active_record/connection_adapters/postgresql/database_statements.rb:160:in `exec_query'
--> /home/user/.rbenv/versions/2.3.5/lib/ruby/gems/2.3.0/gems/activerecord-4.2.8/lib/active_record/connection_adapters/abstract/database_statements.rb:356:in `select'
--> /home/user/.rbenv/versions/2.3.5/lib/ruby/gems/2.3.0/gems/activerecord-4.2.8/lib/active_record/connection_adapters/abstract/database_statements.rb:32:in `select_all'
--> /home/user/.rbenv/versions/2.3.5/lib/ruby/gems/2.3.0/gems/activerecord-4.2.8/lib/active_record/connection_adapters/abstract/query_cache.rb:68:in `block in select_all'
--> /home/user/.rbenv/versions/2.3.5/lib/ruby/gems/2.3.0/gems/activerecord-4.2.8/lib/active_record/connection_adapters/abstract/query_cache.rb:83:in `cache_sql'
--> /home/user/.rbenv/versions/2.3.5/lib/ruby/gems/2.3.0/gems/activerecord-4.2.8/lib/active_record/connection_adapters/abstract/query_cache.rb:68:in `select_all'
--> /home/user/.rbenv/versions/2.3.5/lib/ruby/gems/2.3.0/gems/activerecord-4.2.8/lib/active_record/relation/calculations.rb:270:in `execute_simple_calculation'
--> /home/user/.rbenv/versions/2.3.5/lib/ruby/gems/2.3.0/gems/activerecord-4.2.8/lib/active_record/relation/calculations.rb:227:in `perform_calculation'
--> /home/user/.rbenv/versions/2.3.5/lib/ruby/gems/2.3.0/gems/activerecord-4.2.8/lib/active_record/relation/calculations.rb:133:in `calculate'
--> /home/user/.rbenv/versions/2.3.5/lib/ruby/gems/2.3.0/gems/activerecord-4.2.8/lib/active_record/relation/calculations.rb:48:in `count'
--> /home/user/gitlab/gdk/gitlab/app/services/base_count_service.rb:20:in `uncached_count'
--> /home/user/gitlab/gdk/gitlab/app/services/base_count_service.rb:12:in `block in count'
--> /home/user/.rbenv/versions/2.3.5/lib/ruby/gems/2.3.0/gems/activesupport-4.2.8/lib/active_support/cache.rb:299:in `block in fetch'
--> /home/user/.rbenv/versions/2.3.5/lib/ruby/gems/2.3.0/gems/activesupport-4.2.8/lib/active_support/cache.rb:585:in `block in save_block_result_to_cache'
--> /home/user/.rbenv/versions/2.3.5/lib/ruby/gems/2.3.0/gems/activesupport-4.2.8/lib/active_support/cache.rb:547:in `block in instrument'
--> /home/user/.rbenv/versions/2.3.5/lib/ruby/gems/2.3.0/gems/activesupport-4.2.8/lib/active_support/notifications.rb:166:in `instrument'
--> /home/user/.rbenv/versions/2.3.5/lib/ruby/gems/2.3.0/gems/activesupport-4.2.8/lib/active_support/cache.rb:547:in `instrument'
--> /home/user/.rbenv/versions/2.3.5/lib/ruby/gems/2.3.0/gems/activesupport-4.2.8/lib/active_support/cache.rb:584:in `save_block_result_to_cache'
--> /home/user/.rbenv/versions/2.3.5/lib/ruby/gems/2.3.0/gems/activesupport-4.2.8/lib/active_support/cache.rb:299:in `fetch'
--> /home/user/gitlab/gdk/gitlab/app/services/base_count_service.rb:12:in `count'
--> /home/user/gitlab/gdk/gitlab/app/models/project.rb:1296:in `open_issues_count'
```
## See also
- [Bullet](../profiling.md#bullet) For finding `N+1` query problems
- [Performance guidelines](../performance.md)
- [Merge request performance guidelines - Query counts](../merge_request_concepts/performance.md#query-counts)
- [Merge request performance guidelines - Cached queries](../merge_request_concepts/performance.md#cached-queries)
- [RedisCommands::Recorder](../redis.md#n1-calls-problem) For testing `N+1` calls in Redis
---
stage: Data Access
group: Database Frameworks
info: Any user with at least the Maintainer role can merge updates to this content. For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Avoiding downtime in migrations
---
When working with a database certain operations may require downtime. As we
cannot have downtime in migrations we need to use a set of steps to get the
same end result without downtime. This guide describes various operations that
may appear to need downtime, their impact, and how to perform them without
requiring downtime.
## Dropping columns
Removing columns is tricky because running GitLab processes expect these columns to exist.
ActiveRecord caches the table schema when it boots, even for columns that are not referenced,
unless the columns are explicitly marked as ignored.
In addition, any database view that references such columns needs to be considered as well.
To work around this safely, you need three steps in three releases:
1. [Ignoring the column](#ignoring-the-column-release-m) (release M)
1. [Dropping the column](#dropping-the-column-release-m1) (release M+1)
1. [Removing the ignore rule](#removing-the-ignore-rule-release-m2) (release M+2)
The reason we spread this out across three releases is that dropping a column is
a destructive operation that can't be rolled back easily.
Following this procedure helps us to make sure there are no deployments to GitLab.com
and upgrade processes for GitLab Self-Managed instances that lump together any of these steps.
### Ignoring the column (release M)
The first step is to ignore the column in the application code and remove all code references to it including
model validations.
This step is necessary because Rails caches the columns and re-uses this cache in various
places. This can be done by defining the columns to ignore. For example, in release `12.5`, to ignore
`updated_at` in the User model you'd use the following:
```ruby
class User < ApplicationRecord
ignore_column :updated_at, remove_with: '12.7', remove_after: '2019-12-22'
end
```
Multiple columns can be ignored, too:
```ruby
ignore_columns %i[updated_at created_at], remove_with: '12.7', remove_after: '2019-12-22'
```
If the model exists in CE and EE, the column has to be ignored in the CE model. If the
model only exists in EE, then it has to be added there.
We require an indication of when it is safe to remove the column ignore rule, using:
- `remove_with`: set to a GitLab release typically two releases (M+2) (`12.7` in our example) after adding the
column ignore.
- `remove_after`: set to a date after which we consider it safe to remove the column
ignore, typically after the M+1 release date, during the M+2 development cycle. For example, since the development cycle of `12.7` is between `2019-12-18` and `2020-01-17`, and `12.6` is the release to [drop the column](#dropping-the-column-release-m1), it's safe to set the date to the release date of `12.6` as `2019-12-22`.
This information allows us to reason better about column ignores and makes sure we
don't remove column ignores too early for both regular releases and deployments to GitLab.com. For
example, this avoids a situation where we deploy a bulk of changes that include both changes
to ignore the column and subsequently remove the column ignore (which would result in a downtime).
In this example, the change to ignore the column went into release `12.5`.
{{< alert type="note" >}}
Ignoring and dropping columns should not occur in the same release. Dropping a column before properly ignoring it in the model can cause problems with zero-downtime migrations,
where the running instances can fail when looking up the removed column until the Rails schema cache expires. This can be an issue for self-managed customers who attempt to follow zero-downtime upgrades,
forcing them to explicitly restart all running GitLab instances to reload the updated schema. To avoid this scenario, first ignore the column (release M), then drop it in the next release (release M+1).
{{< /alert >}}
#### Ignoring columns referenced by database views
When the column is also referenced by a database view, as in the following example:
```sql
CREATE VIEW recently_updated_users_view(id, username, updated_at) AS
SELECT id, username, updated_at
FROM users
WHERE updated_at > now() - interval '30 day'
```
The `ignore_columns` instruction should also be included on the corresponding model class:
```ruby
class RecentlyUpdatedUsersView < ApplicationRecord
self.table_name = 'recently_updated_users_view'
ignore_columns :updated_at
end
```
### Dropping the column (release M+1)
Continuing our example, dropping the column goes into a _post-deployment_ migration in release `12.6`:
Start by creating the **post-deployment migration**:
```shell
bundle exec rails g post_deployment_migration remove_users_updated_at_column
```
You must consider these scenarios when you write a migration that removes a column:
- [The removed column has no indexes or constraints that belong to it](#the-removed-column-has-no-indexes-or-constraints-that-belong-to-it)
- [The removed column has an index or constraint that belongs to it](#the-removed-column-has-an-index-or-constraint-that-belongs-to-it)
#### The removed column has no indexes or constraints that belong to it
In this case, a **transactional migration** can be used:
```ruby
class RemoveUsersUpdatedAtColumn < Gitlab::Database::Migration[2.1]
def up
remove_column :users, :updated_at
end
def down
add_column :users, :updated_at, :datetime
end
end
```
#### The removed column has an index or constraint that belongs to it
If the `down` method requires adding back any dropped indexes or constraints, that cannot
be done in a transactional migration. The migration would look like this:
```ruby
class RemoveUsersUpdatedAtColumn < Gitlab::Database::Migration[2.1]
disable_ddl_transaction!
def up
remove_column :users, :updated_at
end
def down
add_column(:users, :updated_at, :datetime, if_not_exists: true)
# Make sure to add back any indexes or constraints,
# that were dropped in the `up` method. For example:
add_concurrent_index(:users, :updated_at)
end
end
```
In the `down` method, we check to see if the column already exists before adding it again.
We do this because the migration is non-transactional and might have failed while it was running.
The [`disable_ddl_transaction!`](../migration_style_guide.md#usage-with-non-transactional-migrations)
is used to disable the transaction that wraps the whole migration.
You can refer to the page [Migration Style Guide](../migration_style_guide.md)
for more information about database migrations.
#### The removed column is referenced by a database view
When a column is referenced by a database view, it behaves as if the column had a constraint attached to it
so the view needs to be updated first before dropping the column:
1. Recreate the view excluding the column
1. Drop the column from the original table
The `down` method should perform the operation in reverse order as the column must exist before it is referenced
by the view:
1. Reintroduce the column to the original table
1. Recreate the view including the column again
The migration would look like this:
```ruby
class RemoveUsersUpdatedAtColumn < Gitlab::Database::Migration[2.1]
disable_ddl_transaction!
def up
execute <<-SQL
DROP VIEW IF EXISTS recently_updated_users_view;
CREATE VIEW recently_updated_users_view(id, username) AS
SELECT id, username
FROM users;
SQL
remove_column :users, :updated_at
end
def down
add_column :users, :updated_at, :datetime
execute <<-SQL
DROP VIEW IF EXISTS recently_updated_users_view;
CREATE VIEW recently_updated_users_view(id, username, updated_at) AS
SELECT id, username, updated_at
FROM users
WHERE updated_at > now() - interval '30 day'
SQL
end
end
```
### Removing the ignore rule (release M+2)
With the next release, in this example `12.7`, we set up another merge request to remove the ignore rule.
This removes the `ignore_column` line and, if it is no longer needed, also the inclusion of `IgnorableColumns`.
This should only get merged with the release indicated with `remove_with` and once
the `remove_after` date has passed.
## Renaming columns
{{< alert type="note" >}}
The below procedure is only appropriate for small tables. The procedure copies
all the data from one column to the other in a regular migration which may take
too long for large tables. For large tables you should look at using
[Batched Background Migrations](batched_background_migrations.md) to copy
the data over and perform the rename over multiple milestones.
{{< /alert >}}
Renaming columns the standard way requires downtime as an application may continue
to use the old column names during or after a database migration. To rename a column
without requiring downtime, we need two migrations: a regular migration and a
post-deployment migration. Both these migrations can go in the same release.
The steps:
1. [Add the regular migration](#add-the-regular-migration-release-m) (release M)
1. [Ignore the column](#ignore-the-column-release-m) (release M)
1. [Add a post-deployment migration](#add-a-post-deployment-migration-release-m) (release M)
1. [Remove the ignore rule](#remove-the-ignore-rule-release-m1) (release M+1)
When renaming a column that is referenced by a database view in the regular way, no additional step is required because
the view is updated to track the new column name while its `SELECT` portion stays intact.
For the no-downtime procedure, there are additional considerations, which are covered in the steps listed above.
### Add the regular migration (release M)
First we need to create the regular migration. This migration should use
`Gitlab::Database::MigrationHelpers#rename_column_concurrently` to perform the
renaming. For example:
```ruby
# A regular migration in db/migrate
class RenameUsersUpdatedAtToUpdatedAtTimestamp < Gitlab::Database::Migration[2.1]
disable_ddl_transaction!
def up
rename_column_concurrently :users, :updated_at, :updated_at_timestamp
end
def down
undo_rename_column_concurrently :users, :updated_at, :updated_at_timestamp
end
end
```
This takes care of renaming the column, ensuring data stays in sync, and
copying over indexes and foreign keys.
If the column is part of one or more indexes whose names don't contain the name of the
original column, the previously described procedure fails. In that case,
you need to rename these indexes first.
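One way to do this beforehand, sketched with the standard Rails `rename_index` helper and purely illustrative index names:

```ruby
# Index names here are illustrative. Renaming the index so that its name
# includes the column name allows the rename helper to derive a name for the
# copied index.
class RenameUsersRecentActivityIndex < Gitlab::Database::Migration[2.1]
  def up
    rename_index :users, 'index_recent_activity', 'index_users_on_updated_at'
  end

  def down
    rename_index :users, 'index_users_on_updated_at', 'index_recent_activity'
  end
end
```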
When the column is referenced by a database view, the view needs to be recreated
and pointed to the new column. The `down` operation needs to restore it back
before executing the `undo_rename_column_concurrently`:
```ruby
# A regular migration in db/migrate including database view recreation
class RenameUsersUpdatedAtToUpdatedAtTimestamp < Gitlab::Database::Migration[2.1]
disable_ddl_transaction!
def up
rename_column_concurrently :users, :updated_at, :updated_at_timestamp
execute <<-SQL
DROP VIEW IF EXISTS recently_updated_users_view;
CREATE VIEW recently_updated_users_view(id, username, updated_at) AS
SELECT id, username, updated_at_timestamp
FROM users
WHERE updated_at > now() - interval '30 day'
SQL
end
def down
execute <<-SQL
DROP VIEW IF EXISTS recently_updated_users_view;
CREATE VIEW recently_updated_users_view(id, username, updated_at) AS
SELECT id, username, updated_at
FROM users
WHERE updated_at > now() - interval '30 day'
SQL
undo_rename_column_concurrently :users, :updated_at, :updated_at_timestamp
end
end
```
### Ignore the column (release M)
The next step is to ignore the column in the application code, and make sure it is not used. This step is
necessary because Rails caches the columns and re-uses this cache in various places.
This step is similar to [the first step when column is dropped](#ignoring-the-column-release-m), and the same requirements apply.
```ruby
class User < ApplicationRecord
ignore_column :updated_at, remove_with: '12.7', remove_after: '2019-12-22'
end
```
### Add a post-deployment migration (release M)
The renaming procedure requires some cleaning up in a post-deployment migration.
We can perform this cleanup using
`Gitlab::Database::MigrationHelpers#cleanup_concurrent_column_rename`:
```ruby
# A post-deployment migration in db/post_migrate
class CleanupUsersUpdatedAtRename < Gitlab::Database::Migration[2.1]
disable_ddl_transaction!
def up
cleanup_concurrent_column_rename :users, :updated_at, :updated_at_timestamp
end
def down
undo_cleanup_concurrent_column_rename :users, :updated_at, :updated_at_timestamp
end
end
```
If you're renaming a [large table](https://gitlab.com/gitlab-org/gitlab/-/blob/master/rubocop/rubocop-migrations.yml#L3), carefully consider the state when the first migration has run but the second cleanup migration hasn't been run yet.
With [Canary](https://gitlab.com/gitlab-com/gl-infra/readiness/-/tree/master/library/canary/) it is possible that the system runs in this state for a significant amount of time.
### Remove the ignore rule (release M+1)
Same as when column is dropped, after the rename is completed, we need to [remove the ignore rule](#removing-the-ignore-rule-release-m2) in a subsequent release.
## Changing column constraints
Adding or removing a `NOT NULL` clause (or another constraint) can typically be
done without requiring downtime. Adding a `NOT NULL` constraint requires that any application
changes are deployed _first_, so it should happen in a post-deployment migration.
By contrast, removing a `NOT NULL` constraint should be done in a regular migration.
This way any code which inserts `NULL` values can safely run for the column.
Avoid using `change_column` as it produces an inefficient query because it re-defines
the whole column type.
You can check the following guides for each specific use case:
- [Adding foreign-key constraints](foreign_keys.md)
- [Adding `NOT NULL` constraints](not_null_constraints.md)
- [Adding limits to text columns](strings_and_the_text_data_type.md)
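As a rough sketch of the ordering described above (the column is illustrative, and the linked guides cover the full procedure, including validating existing rows), adding the constraint in a post-deployment migration could look like:

```ruby
# A sketch only; the column is illustrative. The constraint is added without
# validation here, and validating existing rows is covered in the linked
# NOT NULL constraints guide.
class AddNotNullConstraintToUsersUsername < Gitlab::Database::Migration[2.1]
  disable_ddl_transaction!

  def up
    add_not_null_constraint :users, :username, validate: false
  end

  def down
    remove_not_null_constraint :users, :username
  end
end
```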
## Changing column types
Changing the type of a column can be done using
`Gitlab::Database::MigrationHelpers#change_column_type_concurrently`. This
method works similarly to `rename_column_concurrently`. For example, if
we want to change the type of `users.username` from `string` to `text`:
1. [Create a regular migration](#create-a-regular-migration)
1. [Create a post-deployment migration](#create-a-post-deployment-migration)
1. [Casting data to a new type](#casting-data-to-a-new-type)
When changing the type of a column that is referenced by a database view, the view needs to be recreated as part of the process.
### Create a regular migration
A regular migration is used to create a new column with a temporary name along
with setting up some triggers to keep data in sync. Such a migration would look
as follows:
```ruby
# A regular migration in db/migrate
class ChangeUsersUsernameStringToText < Gitlab::Database::Migration[2.1]
disable_ddl_transaction!
def up
change_column_type_concurrently :users, :username, :text
end
def down
undo_change_column_type_concurrently :users, :username
end
end
```
When the column is referenced by a database view, the view needs to be recreated
and pointed to the new temporary column.
When in the later step the temporary column is renamed back to the original name, the view updates
itself internally and doesn't require any other change:
```ruby
# A regular migration in db/migrate including database view recreation
class ChangeUsersUsernameStringToText < Gitlab::Database::Migration[2.1]
disable_ddl_transaction!
def up
change_column_type_concurrently :users, :username, :text
# temporary column name follows this pattern: `"#{column}_for_type_change"`
# so the column named `username` becomes `username_for_type_change`
execute <<-SQL
DROP VIEW IF EXISTS recently_updated_users_view;
CREATE VIEW recently_updated_users_view(id, username, updated_at) AS
SELECT id, username_for_type_change, updated_at
FROM users
WHERE updated_at > now() - interval '30 day'
SQL
end
def down
execute <<-SQL
DROP VIEW IF EXISTS recently_updated_users_view;
CREATE VIEW recently_updated_users_view(id, username, updated_at) AS
SELECT id, username, updated_at
FROM users
WHERE updated_at > now() - interval '30 day'
SQL
undo_change_column_type_concurrently :users, :username
end
end
```
### Create a post-deployment migration
Next we need to clean up our changes using a post-deployment migration:
```ruby
# A post-deployment migration in db/post_migrate
class ChangeUsersUsernameStringToTextCleanup < Gitlab::Database::Migration[2.1]
disable_ddl_transaction!
def up
cleanup_concurrent_column_type_change :users, :username
end
def down
undo_cleanup_concurrent_column_type_change :users, :username, :string
end
end
```
And that's it, we're done!
### Casting data to a new type
Some type changes require casting data to a new type. For example when changing from `text` to `jsonb`.
In this case, use the `type_cast_function` option.
Make sure there is no bad data and the cast always succeeds. You can also provide a custom function that handles
casting errors.
Example migration:
```ruby
def up
change_column_type_concurrently :users, :settings, :jsonb, type_cast_function: 'jsonb'
end
```
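As a hypothetical sketch of the custom-function variant, where `safe_cast_to_jsonb` is an illustrative SQL function you would need to create beforehand:

```ruby
def up
  # `safe_cast_to_jsonb` is a hypothetical SQL function created in a previous
  # migration that returns a fallback value instead of raising on bad input.
  change_column_type_concurrently :users, :settings, :jsonb, type_cast_function: 'safe_cast_to_jsonb'
end
```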
## Changing column defaults
Changing column defaults is difficult because of how Rails handles values
that are equal to the default.
{{< alert type="note" >}}
Rails does not send the default values to PostgreSQL when inserting records if the [partial_inserts](https://gitlab.com/gitlab-org/gitlab/-/blob/55ac06c9083434e6c18e0a2aaf8be5f189ef34eb/config/application.rb#L40) config has been enabled. It leaves this task to
the database. When migrations change the default values of columns, the running application is unaware
of this change due to the schema cache. The application is then at risk of accidentally writing
wrong data to the database, especially when deploying the new version of the code
long after we run database migrations.
{{< /alert >}}
If running code ever explicitly writes the old default value of a column, you must follow a multi-step
process to prevent Rails replacing the old default with the new default in INSERT queries that explicitly
specify the old default.
Doing this requires steps in two minor releases:
1. [Add the `SafelyChangeColumnDefault` concern to the model](#add-the-safelychangecolumndefault-concern-to-the-model-and-change-the-default-in-a-post-migration) and change the default in a post-migration.
1. [Clean up the `SafelyChangeColumnDefault` concern](#clean-up-the-safelychangecolumndefault-concern-in-the-next-minor-release) in the next minor release.
We must wait a minor release before cleaning up the `SafelyChangeColumnDefault` because self-managed
releases bundle an entire minor release into a single zero-downtime deployment.
### Add the `SafelyChangeColumnDefault` concern to the model and change the default in a post-migration
The first step is to mark the column as safe to change in application code.
```ruby
class Ci::Build < ApplicationRecord
include SafelyChangeColumnDefault
columns_changing_default :partition_id
end
```
Then create a **post-deployment migration** to change the default:
```shell
bundle exec rails g post_deployment_migration change_ci_builds_default
```
```ruby
class ChangeCiBuildsDefault < Gitlab::Database::Migration[2.1]
def change
change_column_default('ci_builds', 'partition_id', from: 100, to: 101)
end
end
```
### Clean up the `SafelyChangeColumnDefault` concern in the next minor release
In the next minor release, create a new merge request to remove the `columns_changing_default` call. Also remove the `SafelyChangeColumnDefault` include
if it is not needed for a different column.
## Changing the schema for large tables
While `change_column_type_concurrently` and `rename_column_concurrently` can be
used for changing the schema of a table without downtime, they don't work very
well for large tables. Because all of the work happens in sequence, the migration
can take a very long time to complete, preventing a deployment from proceeding.
They can also put a lot of pressure on the database because they rapidly
update many rows in sequence.
To reduce database pressure you should instead use a background migration
when migrating a column in a large table (for example, `issues`). Background
migrations spread the work / load over a longer time period, without slowing
down deployments.
For more information, see [the documentation on cleaning up batched background migrations](batched_background_migrations.md#cleaning-up-a-batched-background-migration).
## Adding indexes
Adding indexes does not require downtime when `add_concurrent_index`
is used.
See also [Migration Style Guide](../migration_style_guide.md#adding-indexes)
for more information.
## Dropping indexes
Dropping an index does not require downtime.
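A minimal sketch of such a migration, using the concurrent index helpers and an illustrative index name:

```ruby
# A minimal sketch; the index and column names are illustrative.
class RemoveIndexUsersOnUpdatedAt < Gitlab::Database::Migration[2.1]
  disable_ddl_transaction!

  INDEX_NAME = 'index_users_on_updated_at'

  def up
    remove_concurrent_index_by_name :users, INDEX_NAME
  end

  def down
    add_concurrent_index :users, :updated_at, name: INDEX_NAME
  end
end
```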
## Adding tables
This operation is safe as there's no code using the table just yet.
## Dropping tables
Dropping tables can be done safely using a post-deployment migration, but only
if the application no longer uses the table.
Add the table to [`db/docs/deleted_tables`](https://gitlab.com/gitlab-org/gitlab/-/tree/master/db/docs/deleted_tables) using the process described in [database dictionary](database_dictionary.md#dropping-tables).
Even though the table is deleted, it is still referenced in database migrations.
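A minimal sketch of the post-deployment migration described above (the table and its columns are illustrative; the `down` method recreates the table so the migration stays reversible):

```ruby
# The table and its columns are illustrative only.
class DropLegacyWidgetsTable < Gitlab::Database::Migration[2.1]
  def up
    drop_table :legacy_widgets
  end

  def down
    # Recreate the table so the migration stays reversible; the definition
    # must match the table that was dropped.
    create_table :legacy_widgets do |t|
      t.text :name, null: false
    end
  end
end
```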
## Renaming tables
Renaming tables requires downtime as an application may continue
using the old table name during/after a database migration.
If the table and the ActiveRecord model are not in use yet, removing the old
table and creating a new one is the preferred way to "rename" the table.
Renaming a table is possible without downtime by following our multi-release
[rename table process](rename_database_tables.md).
## Adding foreign keys
Adding foreign keys can potentially cause downtime, please refer [FK: Avoiding downtime and migration failures](foreign_keys.md#avoiding-downtime-and-migration-failures) docs for details.
## Migrating `integer` primary keys to `bigint`
To [prevent the overflow risk](https://gitlab.com/groups/gitlab-org/-/epics/4785) for some tables
with an `integer` primary key (PK), we have to migrate their PK to `bigint`. The process to do this
without downtime and without causing too much load on the database is described below.
### Initialize the conversion and start migrating existing data (release N)
To start the process, add a regular migration to create the new `bigint` columns. Use the provided
`initialize_conversion_of_integer_to_bigint` helper. The helper also creates a database trigger
to keep both columns in sync for any new records ([code](https://gitlab.com/gitlab-org/gitlab/-/blob/97aee76c4bfc2043dc0a1ef9ffbb71c58e0e2857/db/migrate/20230127093353_initialize_conversion_of_merge_request_metrics_to_bigint.rb)):
```ruby
# frozen_string_literal: true
class InitializeConversionOfMergeRequestMetricsToBigint < Gitlab::Database::Migration[2.1]
disable_ddl_transaction!
TABLE = :merge_request_metrics
COLUMNS = %i[id]
def up
initialize_conversion_of_integer_to_bigint(TABLE, COLUMNS)
end
def down
revert_initialize_conversion_of_integer_to_bigint(TABLE, COLUMNS)
end
end
```
Ignore the new `bigint` columns:
```ruby
# frozen_string_literal: true
class MergeRequest::Metrics < ApplicationRecord
ignore_column :id_convert_to_bigint, remove_with: '16.0', remove_after: '2023-05-22'
end
```
Enqueue batched background migration ([code](https://gitlab.com/gitlab-org/gitlab/-/blob/97aee76c4bfc2043dc0a1ef9ffbb71c58e0e2857/db/post_migrate/20230127101834_backfill_merge_request_metrics_for_bigint_conversion.rb))
to migrate the existing data:
```ruby
# frozen_string_literal: true
class BackfillMergeRequestMetricsForBigintConversion < Gitlab::Database::Migration[2.1]
restrict_gitlab_migration gitlab_schema: :gitlab_main
TABLE = :merge_request_metrics
COLUMNS = %i[id]
def up
backfill_conversion_of_integer_to_bigint(TABLE, COLUMNS, sub_batch_size: 200)
end
def down
revert_backfill_conversion_of_integer_to_bigint(TABLE, COLUMNS)
end
end
```
{{< alert type="note" >}}
- With [Issue#438124](https://gitlab.com/gitlab-org/gitlab/-/issues/438124) new instances have all ID columns in bigint.
The list of IDs yet to be converted to bigint in old instances (includes `Gitlab.com` SaaS) is maintained in `db/integer_ids_not_yet_initialized_to_bigint.yml`. **Do not edit this file manually** - it gets automatically updated during the [cleanup process](https://gitlab.com/gitlab-org/gitlab/-/blob/c6f4ea1bf1d693f1a0379964dd83a4bfec3e2f8d/lib/gitlab/database/migrations/conversions/bigint_converter.rb#L17-23).
- Since the schema file already has all IDs in `bigint`, don't push any changes to `db/structure.sql`.
{{< /alert >}}
### Monitor the background migration
Check how the migration is performing while it's running. Multiple ways to do this are described below.
#### High-level status of batched background migrations
See how to [check the status of batched background migrations](../../update/background_migrations.md).
#### Query the database
We can query the related database tables directly. This requires access to a read-only replica.
Example queries:
```sql
-- Get details for batched background migration for given table
SELECT * FROM batched_background_migrations WHERE table_name = 'namespaces'\gx
-- Get count of batched background migration jobs by status for given table
SELECT
batched_background_migrations.id, batched_background_migration_jobs.status, COUNT(*)
FROM
batched_background_migrations
JOIN batched_background_migration_jobs ON batched_background_migrations.id = batched_background_migration_jobs.batched_background_migration_id
WHERE
table_name = 'namespaces'
GROUP BY
batched_background_migrations.id, batched_background_migration_jobs.status;
-- Batched background migration progress for given table (based on estimated total number of tuples)
SELECT
m.table_name,
LEAST(100 * sum(j.batch_size) / pg_class.reltuples, 100) AS percentage_complete
FROM
batched_background_migrations m
JOIN batched_background_migration_jobs j ON j.batched_background_migration_id = m.id
JOIN pg_class ON pg_class.relname = m.table_name
WHERE
j.status = 3 AND m.table_name = 'namespaces'
GROUP BY m.id, pg_class.reltuples;
```
#### Sidekiq logs
We can also use the Sidekiq logs to monitor the worker that executes the batched background
migrations:
1. Sign in to [Kibana](https://log.gprd.gitlab.net) with a `@gitlab.com` email address.
1. Change the index pattern to `pubsub-sidekiq-inf-gprd*`.
1. Add filter for `json.queue: cronjob:database_batched_background_migration`.
#### PostgreSQL slow queries log
The slow queries log keeps track of slow queries that take longer than 1 second to execute. To see them
for a batched background migration:
1. Sign in to [Kibana](https://log.gprd.gitlab.net) with a `@gitlab.com` email address.
1. Change the index pattern to `pubsub-postgres-inf-gprd*`.
1. Add filter for `json.endpoint_id.keyword: Database::BatchedBackgroundMigrationWorker`.
1. Optional. To see only updates, add a filter for `json.command_tag.keyword: UPDATE`.
1. Optional. To see only failed statements, add a filter for `json.error_severity.keyword: ERROR`.
1. Optional. Add a filter by table name.
#### Grafana dashboards
To monitor the health of the database, use these additional metrics:
- [PostgreSQL Tuple Statistics](https://dashboards.gitlab.net/d/000000167/postgresql-tuple-statistics?orgId=1&refresh=1m): if you see a high rate of updates for the tables being actively converted, or an increasing percentage of dead tuples for this table, it might mean that `autovacuum` cannot keep up.
- [PostgreSQL Overview](https://dashboards.gitlab.net/d/000000144/postgresql-overview?orgId=1): if you see high system usage or transactions per second (TPS) on the primary database server, it might mean that the migration is causing problems.
### Prometheus metrics
A number of [metrics](https://gitlab.com/gitlab-org/gitlab/-/blob/294a92484ce4611f660439aa48eee4dfec2230b5/lib/gitlab/database/background_migration/batched_migration_wrapper.rb#L90-128)
for each batched background migration are published to Prometheus. These metrics can be searched for and
visualized in Grafana ([see an example](https://dashboards.gitlab.net/explore?schemaVersion=1&panes=%7B%22m95%22:%7B%22datasource%22:%22e58c2f51-20f8-4f4b-ad48-2968782ca7d6%22,%22queries%22:%5B%7B%22refId%22:%22A%22,%22expr%22:%22sum%20%28rate%28batched_migration_job_updated_tuples_total%7Benv%3D%5C%22gprd%5C%22%7D%5B5m%5D%29%29%20by%20%28migration_id%29%20%22,%22range%22:true,%22instant%22:true,%22datasource%22:%7B%22type%22:%22prometheus%22,%22uid%22:%22e58c2f51-20f8-4f4b-ad48-2968782ca7d6%22%7D,%22editorMode%22:%22code%22,%22legendFormat%22:%22__auto%22%7D%5D,%22range%22:%7B%22from%22:%22now-3d%22,%22to%22:%22now%22%7D%7D%7D&orgId=1)).
### Swap the columns (release N + 1)
After the background migration is complete and the new `bigint` columns are populated for all records, we can
swap the columns. Swapping is done with a post-deployment migration. The exact process depends on the
table being converted, but in general it's done in the following steps:
1. Using the provided `ensure_backfill_conversion_of_integer_to_bigint_is_finished` helper, make sure the batched
migration has finished.
If the migration has not completed, the subsequent steps fail anyway. By checking in advance we
aim to provide a more helpful error message.
```ruby
disable_ddl_transaction!
restrict_gitlab_migration gitlab_schema: :gitlab_ci
def up
ensure_backfill_conversion_of_integer_to_bigint_is_finished(
:ci_builds,
%i[
project_id
runner_id
user_id
],
# optional. Only needed when there is no primary key, for example, like schema_migrations.
primary_key: :id
)
end
def down; end
```
1. Use the `add_bigint_column_indexes` helper method from `Gitlab::Database::MigrationHelpers::ConvertToBigint` module
to create indexes with the `bigint` columns that match the existing indexes using the `integer` column.
- The helper method is expected to create all required `bigint` indexes, but it's advised to recheck to make sure
we are not missing any of the existing indexes. More information about the helper can be
found in merge request [135781](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/135781).
1. Create foreign keys (FK) using the `bigint` columns that match the existing FK using the
`integer` column. Do this both for FK referencing other tables, and FK that reference the table
that is being migrated ([see an example](https://gitlab.com/gitlab-org/gitlab/-/blob/41fbe34a4725a4e357a83fda66afb382828767b2/db/post_migrate/20210707210916_finalize_ci_stages_bigint_conversion.rb#L36-43)).
1. Inside a transaction, swap the columns:
1. Lock the tables involved. To reduce the chance of hitting a deadlock, we recommend doing this in parent-to-child order ([see an example](https://gitlab.com/gitlab-org/gitlab/-/blob/41fbe34a4725a4e357a83fda66afb382828767b2/db/post_migrate/20210707210916_finalize_ci_stages_bigint_conversion.rb#L47)).
1. Rename the columns to swap names ([see an example](https://gitlab.com/gitlab-org/gitlab/-/blob/41fbe34a4725a4e357a83fda66afb382828767b2/db/post_migrate/20210707210916_finalize_ci_stages_bigint_conversion.rb#L49-54))
1. Reset the trigger function ([see an example](https://gitlab.com/gitlab-org/gitlab/-/blob/41fbe34a4725a4e357a83fda66afb382828767b2/db/post_migrate/20210707210916_finalize_ci_stages_bigint_conversion.rb#L56-57)).
1. Swap the defaults ([see an example](https://gitlab.com/gitlab-org/gitlab/-/blob/41fbe34a4725a4e357a83fda66afb382828767b2/db/post_migrate/20210707210916_finalize_ci_stages_bigint_conversion.rb#L59-62)).
1. Swap the PK constraint (if any) ([see an example](https://gitlab.com/gitlab-org/gitlab/-/blob/41fbe34a4725a4e357a83fda66afb382828767b2/db/post_migrate/20210707210916_finalize_ci_stages_bigint_conversion.rb#L64-68)).
1. Remove old indexes and rename new ones ([see an example](https://gitlab.com/gitlab-org/gitlab/-/blob/41fbe34a4725a4e357a83fda66afb382828767b2/db/post_migrate/20210707210916_finalize_ci_stages_bigint_conversion.rb#L70-72)).
- Names of the `bigint` indexes created using `add_bigint_column_indexes` helper can be retrieved by calling
`bigint_index_name` from `Gitlab::Database::MigrationHelpers::ConvertToBigint` module.
1. Remove old foreign keys (if still present) and rename new ones ([see an example](https://gitlab.com/gitlab-org/gitlab/-/blob/41fbe34a4725a4e357a83fda66afb382828767b2/db/post_migrate/20210707210916_finalize_ci_stages_bigint_conversion.rb#L74)).
See example [merge request](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/66088), and [migration](https://gitlab.com/gitlab-org/gitlab/-/blob/41fbe34a4725a4e357a83fda66afb382828767b2/db/post_migrate/20210707210916_finalize_ci_stages_bigint_conversion.rb).
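As a heavily simplified, hypothetical sketch of only the rename-to-swap part of this step (the linked migration is the complete, authoritative example; table and column names are illustrative):

```ruby
# Simplified sketch of the rename-to-swap step only; defaults, the trigger
# function, the primary key constraint, indexes, and foreign keys still need
# to be handled as described in the list above.
def swap_columns
  with_lock_retries(raise_on_exhaustion: true) do
    execute 'LOCK TABLE ci_stages IN ACCESS EXCLUSIVE MODE'

    # Swap `id` and `id_convert_to_bigint` by renaming them.
    execute 'ALTER TABLE ci_stages RENAME COLUMN id TO id_tmp'
    execute 'ALTER TABLE ci_stages RENAME COLUMN id_convert_to_bigint TO id'
    execute 'ALTER TABLE ci_stages RENAME COLUMN id_tmp TO id_convert_to_bigint'
  end
end
```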
### Remove the trigger and old `integer` columns (release N + 2)
Using post-deployment migration and the provided `cleanup_conversion_of_integer_to_bigint` helper,
drop the database trigger and the old `integer` columns ([see an example](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/70351)).
### Remove ignore rules (release N + 3)
In the next release after the columns were dropped, remove the ignore rules as we do not need them
anymore ([see an example](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/71161)).
## Database Views
GitLab makes light usage of database views, as they may introduce additional complexity when handling
migrations.
There are currently two situations where views are used:
- To [expose Postgres internal metrics](virtual_tables.md)
- To expose limited read-only data for the Unified Backup CLI
### Postgres internal metrics
Postgres internal metrics are accessible via `Gitlab::Database::Postgres*` models (in `lib/gitlab/database`),
and rely on the `Gitlab::Database::SharedModel` class.
### Unified Backup CLI
The Unified Backup CLI relies on a couple of views to retrieve a limited amount of information necessary
to trigger `gitaly-backup` for the many repository types. The views are accessible via `Gitlab::Backup::Cli::Models::*`
(in `gems/gitlab-backup-cli/lib/gitlab/backup/cli/models`) and rely on the `Gitlab::Backup::Cli::Models::Base`
class to handle the connection.
As the Unified Backup CLI code is in a separate gem, the main codebase also contains specs to ensure the required views
return the information needed by the tool. This ensures a "contract" between the two codebases.
If any of the columns needed by these views need to change, follow these steps:
- To drop a column
- Coordinate with Durability team (responsible for the Unified Backup) and Gitaly (responsible for `gitaly-backup`)
- To rename a column
- Follow [Renaming Columns](#renaming-columns) including the view specific considerations
- To change a column type
- Follow [Changing column types](#changing-column-types) including the view specific considerations
## Data migrations
Data migrations can be tricky. The usual approach to migrate data is to take a 3
step approach:
1. Migrate the initial batch of data
1. Deploy the application code
1. Migrate any remaining data
Usually this works, but not always. For example, if a field's format is to be
changed from JSON to something else we have a bit of a problem. If we were to
change existing data before deploying application code we would most likely run
into errors. On the other hand, if we were to migrate after deploying the
application code we could run into the same problems.
If you merely need to correct some invalid data, then a post-deployment
migration is usually enough. If you need to change the format of data (for example, from
JSON to something else) it's typically best to add a new column for the new data
format, and have the application use that. In such a case the procedure would
be:
1. Add a new column in the new format
1. Copy over existing data to this new column
1. Deploy the application code
1. In a post-deployment migration, copy over any remaining data
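As a minimal sketch of the final step, assuming a hypothetical `settings` text column being migrated to a new `settings_json` column (table, columns, cast, and batch size are all illustrative):

```ruby
# A sketch only: table, columns, cast, and batch size are illustrative.
class CopyRemainingUserSettings < Gitlab::Database::Migration[2.1]
  disable_ddl_transaction!
  restrict_gitlab_migration gitlab_schema: :gitlab_main

  BATCH_SIZE = 1_000

  class User < MigrationRecord
    include EachBatch

    self.table_name = 'users'
  end

  def up
    # Copy any rows that were written between the initial batch copy and the
    # deployment of the new application code.
    User.where(settings_json: nil).each_batch(of: BATCH_SIZE) do |batch|
      batch.update_all('settings_json = settings::jsonb')
    end
  end

  def down
    # No-op: the original `settings` column still contains the data.
  end
end
```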
In general there is no one-size-fits-all solution, therefore it's best to
discuss these kinds of migrations in a merge request to make sure they are
implemented in the best way possible.
|
---
stage: Data Access
group: Database Frameworks
info: Any user with at least the Maintainer role can merge updates to this content.
For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Avoiding downtime in migrations
breadcrumbs:
- doc
- development
- database
---
When working with a database certain operations may require downtime. As we
cannot have downtime in migrations we need to use a set of steps to get the
same end result without downtime. This guide describes various operations that
may appear to need downtime, their impact, and how to perform them without
requiring downtime.
## Dropping columns
Removing columns is tricky because running GitLab processes expect these columns to exist.
ActiveRecord caches the tables schema when it boots even if the columns are not referenced.
This happens if the columns are not explicitly marked as ignored.
In addition, any database view that references such columns needs to be considered as well.
To work around this safely, you need three steps in three releases:
1. [Ignoring the column](#ignoring-the-column-release-m) (release M)
1. [Dropping the column](#dropping-the-column-release-m1) (release M+1)
1. [Removing the ignore rule](#removing-the-ignore-rule-release-m2) (release M+2)
The reason we spread this out across three releases is that dropping a column is
a destructive operation that can't be rolled back easily.
Following this procedure helps us to make sure there are no deployments to GitLab.com
and upgrade processes for GitLab Self-Managed instances that lump together any of these steps.
### Ignoring the column (release M)
The first step is to ignore the column in the application code and remove all code references to it including
model validations.
This step is necessary because Rails caches the columns and re-uses this cache in various
places. This can be done by defining the columns to ignore. For example, in release `12.5`, to ignore
`updated_at` in the User model you'd use the following:
```ruby
class User < ApplicationRecord
ignore_column :updated_at, remove_with: '12.7', remove_after: '2019-12-22'
end
```
Multiple columns can be ignored, too:
```ruby
ignore_columns %i[updated_at created_at], remove_with: '12.7', remove_after: '2019-12-22'
```
If the model exists in CE and EE, the column has to be ignored in the CE model. If the
model only exists in EE, then it has to be added there.
We require indication of when it is safe to remove the column ignore rule with:
- `remove_with`: set to a GitLab release typically two releases (M+2) (`12.7` in our example) after adding the
column ignore.
- `remove_after`: set to a date after which we consider it safe to remove the column
ignore, typically after the M+1 release date, during the M+2 development cycle. For example, since the development cycle of `12.7` is between `2019-12-18` and `2020-01-17`, and `12.6` is the release to [drop the column](#dropping-the-column-release-m1), it's safe to set the date to the release date of `12.6` as `2019-12-22`.
This information allows us to reason better about column ignores and makes sure we
don't remove column ignores too early for both regular releases and deployments to GitLab.com. For
example, this avoids a situation where we deploy a bulk of changes that include both changes
to ignore the column and subsequently remove the column ignore (which would result in a downtime).
In this example, the change to ignore the column went into release `12.5`.
{{< alert type="note" >}}
Ignoring and dropping columns should not occur simultaneously in the same release. Dropping a column before proper ignoring it in the model can cause problems with zero-downtime migrations,
where the running instances can fail trying to look up for the removed column until the Rails schema cache expires. This can be an issue for self-managed customers whom attempt to follow zero-downtime upgrades,
forcing them to explicit restart all running GitLab instances to re-load the updated schema. To avoid this scenario, first, ignore the column (release M), then, drop it in the next release (release M+1).
{{< /alert >}}
#### Ignoring columns referenced by database views
When the column is also referenced by a database view, as in the follow example:
```sql
CREATE VIEW recently_updated_users_view(id, username, updated_at) AS
SELECT id, username, updated_at
FROM users
WHERE updated_at > now() - interval '30 day'
```
The `ignore_columns` instruction should also be included on the corresponding model class:
```ruby
class RecentlyUpdatedUsersView < ApplicationRecord
self.table_name = 'recently_updated_users_view'
ignore_columns :updated_at
end
```
### Dropping the column (release M+1)
Continuing our example, dropping the column goes into a _post-deployment_ migration in release `12.6`:
Start by creating the **post-deployment migration**:
```shell
bundle exec rails g post_deployment_migration remove_users_updated_at_column
```
You must consider these scenarios when you write a migration that removes a column:
- [The removed column has no indexes or constraints that belong to it](#the-removed-column-has-no-indexes-or-constraints-that-belong-to-it)
- [The removed column has an index or constraint that belongs to it](#the-removed-column-has-an-index-or-constraint-that-belongs-to-it)
#### The removed column has no indexes or constraints that belong to it
In this case, a **transactional migration** can be used:
```ruby
class RemoveUsersUpdatedAtColumn < Gitlab::Database::Migration[2.1]
def up
remove_column :users, :updated_at
end
def down
add_column :users, :updated_at, :datetime
end
end
```
#### The removed column has an index or constraint that belongs to it
If the `down` method requires adding back any dropped indexes or constraints, that cannot
be done in a transactional migration. The migration would look like this:
```ruby
class RemoveUsersUpdatedAtColumn < Gitlab::Database::Migration[2.1]
disable_ddl_transaction!
def up
remove_column :users, :updated_at
end
def down
add_column(:users, :updated_at, :datetime, if_not_exists: true)
# Make sure to add back any indexes or constraints,
# that were dropped in the `up` method. For example:
add_concurrent_index(:users, :updated_at)
end
end
```
In the `down` method, we check to see if the column already exists before adding it again.
We do this because the migration is non-transactional and might have failed while it was running.
The [`disable_ddl_transaction!`](../migration_style_guide.md#usage-with-non-transactional-migrations)
is used to disable the transaction that wraps the whole migration.
You can refer to the page [Migration Style Guide](../migration_style_guide.md)
for more information about database migrations.
#### The removed column is referenced by a database view
When a column is referenced by a database view, it behaves as if the column had a constraint attached to it
so the view needs to be updated first before dropping the column:
1. Recreate the view excluding the column
1. Drop the column from the original table
The `down` method should perform the operation in reverse order as the column must exist before it is referenced
by the view:
1. Reintroduce the column to the original table
1. Recreate the view including the column again
The migration would look like this:
```ruby
class RemoveUsersUpdatedAtColumn < Gitlab::Database::Migration[2.1]
disable_ddl_transaction!
def up
execute <<-SQL
DROP VIEW IF EXISTS recently_updated_users_view;
CREATE VIEW recently_updated_users_view(id, username) AS
SELECT id, username
FROM users;
SQL
remove_column :users, :updated_at
end
def down
add_column :users, :updated_at, :datetime
execute <<-SQL
DROP VIEW IF EXISTS recently_updated_users_view;
CREATE VIEW recently_updated_users_view(id, username, updated_at) AS
SELECT id, username, updated_at
FROM users
WHERE updated_at > now() - interval '30 day'
SQL
end
end
```
### Removing the ignore rule (release M+2)
With the next release, in this example `12.7`, we set up another merge request to remove the ignore rule.
This removes the `ignore_column` line and - if not needed anymore - also the inclusion of `IgnorableColumns`.
This should only get merged with the release indicated with `remove_with` and once
the `remove_after` date has passed.
## Renaming columns
{{< alert type="note" >}}
The below procedure is only appropriate for small tables. The procedure copies
all the data from one column to the other in a regular migration which may take
too long for large tables. For large tables you should look at using
[Batched Background Migrations](batched_background_migrations.md) to copy
the data over and perform the rename over multiple milestones.
{{< /alert >}}
Renaming columns the standard way requires downtime as an application may continue
to use the old column names during or after a database migration. To rename a column
without requiring downtime, we need two migrations: a regular migration and a
post-deployment migration. Both these migrations can go in the same release.
The steps:
1. [Add the regular migration](#add-the-regular-migration-release-m) (release M)
1. [Ignore the column](#ignore-the-column-release-m) (release M)
1. [Add a post-deployment migration](#add-a-post-deployment-migration-release-m) (release M)
1. [Remove the ignore rule](#remove-the-ignore-rule-release-m1) (release M+1)
When renaming a column that is referenced by a database view the regular way, no additional step is required:
the view is automatically updated to the new column name and its `SELECT` definition stays intact.
For the no-downtime approach, there are additional considerations, which are mentioned in the steps above.
### Add the regular migration (release M)
First we need to create the regular migration. This migration should use
`Gitlab::Database::MigrationHelpers#rename_column_concurrently` to perform the
renaming. For example:
```ruby
# A regular migration in db/migrate
class RenameUsersUpdatedAtToUpdatedAtTimestamp < Gitlab::Database::Migration[2.1]
disable_ddl_transaction!
def up
rename_column_concurrently :users, :updated_at, :updated_at_timestamp
end
def down
undo_rename_column_concurrently :users, :updated_at, :updated_at_timestamp
end
end
```
This takes care of renaming the column, ensuring data stays in sync, and
copying over indexes and foreign keys.
If a column contains one or more indexes that don't contain the name of the
original column, the previously described procedure fails. In that case,
you need to rename these indexes.
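As a sketch, assuming a hypothetical index named `idx_users_by_activity`, the rename could be done in a regular migration before the column rename runs:

```ruby
# A sketch only: the index name is hypothetical. `rename_index` issues
# `ALTER INDEX ... RENAME TO`, a fast metadata-only operation, so it can run
# inside a regular transactional migration.
class RenameUsersActivityIndex < Gitlab::Database::Migration[2.1]
  def up
    rename_index :users, 'idx_users_by_activity', 'index_users_on_updated_at'
  end

  def down
    rename_index :users, 'index_users_on_updated_at', 'idx_users_by_activity'
  end
end
```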
When the column is referenced by a database view, the view needs to be recreated
and pointed to the new column. The `down` operation needs to restore it back
before executing the `undo_rename_column_concurrently`:
```ruby
# A regular migration in db/migrate including database view recreation
class RenameUsersUpdatedAtToUpdatedAtTimestamp < Gitlab::Database::Migration[2.1]
disable_ddl_transaction!
def up
rename_column_concurrently :users, :updated_at, :updated_at_timestamp
execute <<-SQL
DROP VIEW IF EXISTS recently_updated_users_view;
CREATE VIEW recently_updated_users_view(id, username, updated_at) AS
SELECT id, username, updated_at_timestamp
FROM users
WHERE updated_at > now() - interval '30 day'
SQL
end
def down
execute <<-SQL
DROP VIEW IF EXISTS recently_updated_users_view;
CREATE VIEW recently_updated_users_view(id, username, updated_at) AS
SELECT id, username, updated_at
FROM users
WHERE updated_at > now() - interval '30 day'
SQL
undo_rename_column_concurrently :users, :updated_at, :updated_at_timestamp
end
end
```
### Ignore the column (release M)
The next step is to ignore the column in the application code, and make sure it is not used. This step is
necessary because Rails caches the columns and re-uses this cache in various places.
This step is similar to [the first step when column is dropped](#ignoring-the-column-release-m), and the same requirements apply.
```ruby
class User < ApplicationRecord
ignore_column :updated_at, remove_with: '12.7', remove_after: '2019-12-22'
end
```
### Add a post-deployment migration (release M)
The renaming procedure requires some cleaning up in a post-deployment migration.
We can perform this cleanup using
`Gitlab::Database::MigrationHelpers#cleanup_concurrent_column_rename`:
```ruby
# A post-deployment migration in db/post_migrate
class CleanupUsersUpdatedAtRename < Gitlab::Database::Migration[2.1]
disable_ddl_transaction!
def up
cleanup_concurrent_column_rename :users, :updated_at, :updated_at_timestamp
end
def down
undo_cleanup_concurrent_column_rename :users, :updated_at, :updated_at_timestamp
end
end
```
If you're renaming a [large table](https://gitlab.com/gitlab-org/gitlab/-/blob/master/rubocop/rubocop-migrations.yml#L3), carefully consider the state when the first migration has run but the second cleanup migration hasn't been run yet.
With [Canary](https://gitlab.com/gitlab-com/gl-infra/readiness/-/tree/master/library/canary/) it is possible that the system runs in this state for a significant amount of time.
### Remove the ignore rule (release M+1)
Same as when column is dropped, after the rename is completed, we need to [remove the ignore rule](#removing-the-ignore-rule-release-m2) in a subsequent release.
## Changing column constraints
Adding or removing a `NOT NULL` clause (or another constraint) can typically be
done without requiring downtime. Adding a `NOT NULL` constraint requires that any application
changes are deployed _first_, so it should happen in a post-deployment migration.
By contrast, removing a `NOT NULL` constraint should be done in a regular migration.
This way any code which inserts `NULL` values can safely run for the column.
Avoid using `change_column` as it produces an inefficient query because it re-defines
the whole column type.
You can check the following guides for each specific use case:
- [Adding foreign-key constraints](foreign_keys.md)
- [Adding `NOT NULL` constraints](not_null_constraints.md)
- [Adding limits to text columns](strings_and_the_text_data_type.md)
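As an illustration only, adding a `NOT NULL` constraint without downtime might look like the following sketch; the table and column are hypothetical, and the [`NOT NULL` constraints guide](not_null_constraints.md) remains the authoritative reference:

```ruby
# A sketch only, assuming the application already stopped writing NULLs to the
# column. The constraint is added as NOT VALID first, then validated separately
# to avoid holding a long lock while the table is scanned.
class AddNotNullConstraintToUsersUsername < Gitlab::Database::Migration[2.1]
  disable_ddl_transaction!

  def up
    add_not_null_constraint :users, :username, validate: false
    validate_not_null_constraint :users, :username
  end

  def down
    remove_not_null_constraint :users, :username
  end
end
```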
## Changing column types
Changing the type of a column can be done using
`Gitlab::Database::MigrationHelpers#change_column_type_concurrently`. This
method works similarly to `rename_column_concurrently`. For example, if
we want to change the type of `users.username` from `string` to `text`:
1. [Create a regular migration](#create-a-regular-migration)
1. [Create a post-deployment migration](#create-a-post-deployment-migration)
1. [Casting data to a new type](#casting-data-to-a-new-type)
When changing the type of columns that are referenced by a database view, the view needs to be recreated as part of the process.
### Create a regular migration
A regular migration is used to create a new column with a temporary name along
with setting up some triggers to keep data in sync. Such a migration would look
as follows:
```ruby
# A regular migration in db/migrate
class ChangeUsersUsernameStringToText < Gitlab::Database::Migration[2.1]
disable_ddl_transaction!
def up
change_column_type_concurrently :users, :username, :text
end
def down
undo_change_column_type_concurrently :users, :username
end
end
```
When the column is referenced by a database view, the view needs to be recreated
and pointed to the new temporary column.
When, in a later step, the temporary column is renamed back to the original name, the view updates
itself internally and doesn't require any other change:
```ruby
# A regular migration in db/migrate including database view recreation
class ChangeUsersUsernameStringToText < Gitlab::Database::Migration[2.1]
disable_ddl_transaction!
def up
change_column_type_concurrently :users, :username, :text
# temporary column name follows this pattern: `"#{column}_for_type_change"`
# so the column named `username` becomes `username_for_type_change`
execute <<-SQL
DROP VIEW IF EXISTS recently_updated_users_view;
CREATE VIEW recently_updated_users_view(id, username, updated_at) AS
SELECT id, username_for_type_change, updated_at
FROM users
WHERE updated_at > now() - interval '30 day'
SQL
end
def down
execute <<-SQL
DROP VIEW IF EXISTS recently_updated_users_view;
CREATE VIEW recently_updated_users_view(id, username, updated_at) AS
SELECT id, username, updated_at
FROM users
WHERE updated_at > now() - interval '30 day'
SQL
undo_change_column_type_concurrently :users, :username
end
end
```
### Create a post-deployment migration
Next we need to clean up our changes using a post-deployment migration:
```ruby
# A post-deployment migration in db/post_migrate
class ChangeUsersUsernameStringToTextCleanup < Gitlab::Database::Migration[2.1]
disable_ddl_transaction!
def up
cleanup_concurrent_column_type_change :users, :username
end
def down
undo_cleanup_concurrent_column_type_change :users, :username, :string
end
end
```
And that's it, we're done!
### Casting data to a new type
Some type changes require casting data to a new type. For example when changing from `text` to `jsonb`.
In this case, use the `type_cast_function` option.
Make sure there is no bad data and the cast always succeeds. You can also provide a custom function that handles
casting errors.
Example migration:
```ruby
def up
change_column_type_concurrently :users, :settings, :jsonb, type_cast_function: 'jsonb'
end
```
## Changing column defaults
Changing column defaults is difficult because of how Rails handles values
that are equal to the default.
{{< alert type="note" >}}
If the [partial_inserts](https://gitlab.com/gitlab-org/gitlab/-/blob/55ac06c9083434e6c18e0a2aaf8be5f189ef34eb/config/application.rb#L40) config is enabled, Rails does not send default values to PostgreSQL when inserting records and leaves this task to
the database. When migrations change the default values of columns, the running application is unaware
of this change due to the schema cache. The application is then at risk of accidentally writing
wrong data to the database, especially when the new version of the code is deployed
long after the database migrations run.
{{< /alert >}}
If running code ever explicitly writes the old default value of a column, you must follow a multi-step
process to prevent Rails from replacing the old default with the new default in `INSERT` queries that explicitly
specify the old default.
Doing this requires steps in two minor releases:
1. [Add the `SafelyChangeColumnDefault` concern to the model](#add-the-safelychangecolumndefault-concern-to-the-model-and-change-the-default-in-a-post-migration) and change the default in a post-migration.
1. [Clean up the `SafelyChangeColumnDefault` concern](#clean-up-the-safelychangecolumndefault-concern-in-the-next-minor-release) in the next minor release.
We must wait a minor release before cleaning up the `SafelyChangeColumnDefault` because self-managed
releases bundle an entire minor release into a single zero-downtime deployment.
### Add the `SafelyChangeColumnDefault` concern to the model and change the default in a post-migration
The first step is to mark the column as safe to change in application code.
```ruby
class Ci::Build < ApplicationRecord
include SafelyChangeColumnDefault
columns_changing_default :partition_id
end
```
Then create a **post-deployment migration** to change the default:
```shell
bundle exec rails g post_deployment_migration change_ci_builds_default
```
```ruby
class ChangeCiBuildsDefault < Gitlab::Database::Migration[2.1]
def change
change_column_default('ci_builds', 'partition_id', from: 100, to: 101)
end
end
```
### Clean up the `SafelyChangeColumnDefault` concern in the next minor release
In the next minor release, create a new merge request to remove the `columns_changing_default` call. Also remove the `SafelyChangeColumnDefault` include
if it is not needed for a different column.
## Changing the schema for large tables
While `change_column_type_concurrently` and `rename_column_concurrently` can be
used for changing the schema of a table without downtime, they don't work very
well for large tables. Because all of the work happens in sequence, the migration
can take a very long time to complete, preventing a deployment from proceeding.
They can also put a lot of pressure on the database because many rows are
rapidly updated in sequence.
To reduce database pressure you should instead use a background migration
when migrating a column in a large table (for example, `issues`). Background
migrations spread the work / load over a longer time period, without slowing
down deployments.
For more information, see [the documentation on cleaning up batched background migrations](batched_background_migrations.md#cleaning-up-a-batched-background-migration).
## Adding indexes
Adding indexes does not require downtime when `add_concurrent_index`
is used.
See also [Migration Style Guide](../migration_style_guide.md#adding-indexes)
for more information.
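For illustration, such a migration might look like this sketch (the table, column, and index name are hypothetical):

```ruby
# A sketch only: adds an index concurrently in a non-transactional migration.
class AddIndexUsersOnUpdatedAt < Gitlab::Database::Migration[2.1]
  disable_ddl_transaction!

  INDEX_NAME = 'index_users_on_updated_at'

  def up
    add_concurrent_index :users, :updated_at, name: INDEX_NAME
  end

  def down
    remove_concurrent_index_by_name :users, INDEX_NAME
  end
end
```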
## Dropping indexes
Dropping an index does not require downtime.
## Adding tables
This operation is safe as there's no code using the table just yet.
## Dropping tables
Dropping tables can be done safely using a post-deployment migration, but only
if the application no longer uses the table.
Add the table to [`db/docs/deleted_tables`](https://gitlab.com/gitlab-org/gitlab/-/tree/master/db/docs/deleted_tables) using the process described in [database dictionary](database_dictionary.md#dropping-tables).
Even though the table is deleted, it is still referenced in database migrations.
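For illustration, such a post-deployment migration might look like this sketch (the table name and columns are hypothetical; the `down` method recreates the original schema so the migration stays reversible):

```ruby
# A sketch only: drops a table that is no longer used by the application.
class DropLegacyWidgetsTable < Gitlab::Database::Migration[2.1]
  def up
    drop_table :legacy_widgets
  end

  def down
    # Recreate the original schema to keep the migration reversible.
    create_table :legacy_widgets do |t|
      t.text :name, null: false
    end
  end
end
```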
## Renaming tables
Renaming tables requires downtime as an application may continue
using the old table name during/after a database migration.
If the table and the ActiveRecord model are not in use yet, removing the old
table and creating a new one is the preferred way to "rename" the table.
Renaming a table is possible without downtime by following our multi-release
[rename table process](rename_database_tables.md).
## Adding foreign keys
Adding foreign keys can potentially cause downtime, please refer [FK: Avoiding downtime and migration failures](foreign_keys.md#avoiding-downtime-and-migration-failures) docs for details.
## Migrating `integer` primary keys to `bigint`
To [prevent the overflow risk](https://gitlab.com/groups/gitlab-org/-/epics/4785) for some tables
with `integer` primary key (PK), we have to migrate their PK to `bigint`. The process to do this
without downtime and causing too much load on the database is described below.
### Initialize the conversion and start migrating existing data (release N)
To start the process, add a regular migration to create the new `bigint` columns. Use the provided
`initialize_conversion_of_integer_to_bigint` helper. The helper also creates a database trigger
to keep in sync both columns for any new records ([code](https://gitlab.com/gitlab-org/gitlab/-/blob/97aee76c4bfc2043dc0a1ef9ffbb71c58e0e2857/db/migrate/20230127093353_initialize_conversion_of_merge_request_metrics_to_bigint.rb)):
```ruby
# frozen_string_literal: true
class InitializeConversionOfMergeRequestMetricsToBigint < Gitlab::Database::Migration[2.1]
disable_ddl_transaction!
TABLE = :merge_request_metrics
COLUMNS = %i[id]
def up
initialize_conversion_of_integer_to_bigint(TABLE, COLUMNS)
end
def down
revert_initialize_conversion_of_integer_to_bigint(TABLE, COLUMNS)
end
end
```
Ignore the new `bigint` columns:
```ruby
# frozen_string_literal: true
class MergeRequest::Metrics < ApplicationRecord
ignore_column :id_convert_to_bigint, remove_with: '16.0', remove_after: '2023-05-22'
end
```
Enqueue batched background migration ([code](https://gitlab.com/gitlab-org/gitlab/-/blob/97aee76c4bfc2043dc0a1ef9ffbb71c58e0e2857/db/post_migrate/20230127101834_backfill_merge_request_metrics_for_bigint_conversion.rb))
to migrate the existing data:
```ruby
# frozen_string_literal: true
class BackfillMergeRequestMetricsForBigintConversion < Gitlab::Database::Migration[2.1]
restrict_gitlab_migration gitlab_schema: :gitlab_main
TABLE = :merge_request_metrics
COLUMNS = %i[id]
def up
backfill_conversion_of_integer_to_bigint(TABLE, COLUMNS, sub_batch_size: 200)
end
def down
revert_backfill_conversion_of_integer_to_bigint(TABLE, COLUMNS)
end
end
```
{{< alert type="note" >}}
- With [Issue#438124](https://gitlab.com/gitlab-org/gitlab/-/issues/438124) new instances have all ID columns in bigint.
The list of IDs yet to be converted to bigint in old instances (includes `Gitlab.com` SaaS) is maintained in `db/integer_ids_not_yet_initialized_to_bigint.yml`. **Do not edit this file manually** - it gets automatically updated during the [cleanup process](https://gitlab.com/gitlab-org/gitlab/-/blob/c6f4ea1bf1d693f1a0379964dd83a4bfec3e2f8d/lib/gitlab/database/migrations/conversions/bigint_converter.rb#L17-23).
- Since the schema file already has all IDs in `bigint`, don't push any changes to `db/structure.sql`.
{{< /alert >}}
### Monitor the background migration
Check how the migration is performing while it's running. Multiple ways to do this are described below.
#### High-level status of batched background migrations
See how to [check the status of batched background migrations](../../update/background_migrations.md).
#### Query the database
We can query the related database tables directly. This requires access to a read-only replica.
Example queries:
```sql
-- Get details for batched background migration for given table
SELECT * FROM batched_background_migrations WHERE table_name = 'namespaces'\gx
-- Get count of batched background migration jobs by status for given table
SELECT
batched_background_migrations.id, batched_background_migration_jobs.status, COUNT(*)
FROM
batched_background_migrations
JOIN batched_background_migration_jobs ON batched_background_migrations.id = batched_background_migration_jobs.batched_background_migration_id
WHERE
table_name = 'namespaces'
GROUP BY
batched_background_migrations.id, batched_background_migration_jobs.status;
-- Batched background migration progress for given table (based on estimated total number of tuples)
SELECT
m.table_name,
LEAST(100 * sum(j.batch_size) / pg_class.reltuples, 100) AS percentage_complete
FROM
batched_background_migrations m
JOIN batched_background_migration_jobs j ON j.batched_background_migration_id = m.id
JOIN pg_class ON pg_class.relname = m.table_name
WHERE
j.status = 3 AND m.table_name = 'namespaces'
GROUP BY m.id, pg_class.reltuples;
```
#### Sidekiq logs
We can also use the Sidekiq logs to monitor the worker that executes the batched background
migrations:
1. Sign in to [Kibana](https://log.gprd.gitlab.net) with a `@gitlab.com` email address.
1. Change the index pattern to `pubsub-sidekiq-inf-gprd*`.
1. Add filter for `json.queue: cronjob:database_batched_background_migration`.
#### PostgreSQL slow queries log
The slow queries log keeps track of queries that take more than 1 second to execute. To see them
for a batched background migration:
1. Sign in to [Kibana](https://log.gprd.gitlab.net) with a `@gitlab.com` email address.
1. Change the index pattern to `pubsub-postgres-inf-gprd*`.
1. Add filter for `json.endpoint_id.keyword: Database::BatchedBackgroundMigrationWorker`.
1. Optional. To see only updates, add a filter for `json.command_tag.keyword: UPDATE`.
1. Optional. To see only failed statements, add a filter for `json.error_severity.keyword: ERROR`.
1. Optional. Add a filter by table name.
#### Grafana dashboards
To monitor the health of the database, use these additional metrics:
- [PostgreSQL Tuple Statistics](https://dashboards.gitlab.net/d/000000167/postgresql-tuple-statistics?orgId=1&refresh=1m): if you see a high rate of updates for the tables being actively converted, or an increasing percentage of dead tuples for this table, it might mean that `autovacuum` cannot keep up.
- [PostgreSQL Overview](https://dashboards.gitlab.net/d/000000144/postgresql-overview?orgId=1): if you see high system usage or transactions per second (TPS) on the primary database server, it might mean that the migration is causing problems.
### Prometheus metrics
A number of [metrics](https://gitlab.com/gitlab-org/gitlab/-/blob/294a92484ce4611f660439aa48eee4dfec2230b5/lib/gitlab/database/background_migration/batched_migration_wrapper.rb#L90-128)
are published to Prometheus for each batched background migration. These metrics can be searched for and
visualized in Grafana ([see an example](https://dashboards.gitlab.net/explore?schemaVersion=1&panes=%7B%22m95%22:%7B%22datasource%22:%22e58c2f51-20f8-4f4b-ad48-2968782ca7d6%22,%22queries%22:%5B%7B%22refId%22:%22A%22,%22expr%22:%22sum%20%28rate%28batched_migration_job_updated_tuples_total%7Benv%3D%5C%22gprd%5C%22%7D%5B5m%5D%29%29%20by%20%28migration_id%29%20%22,%22range%22:true,%22instant%22:true,%22datasource%22:%7B%22type%22:%22prometheus%22,%22uid%22:%22e58c2f51-20f8-4f4b-ad48-2968782ca7d6%22%7D,%22editorMode%22:%22code%22,%22legendFormat%22:%22__auto%22%7D%5D,%22range%22:%7B%22from%22:%22now-3d%22,%22to%22:%22now%22%7D%7D%7D&orgId=1)).
### Swap the columns (release N + 1)
After the background migration is complete and the new `bigint` columns are populated for all records, we can
swap the columns. Swapping is done with a post-deployment migration. The exact process depends on the
table being converted, but in general it's done in the following steps:
1. Using the provided `ensure_backfill_conversion_of_integer_to_bigint_is_finished` helper, make sure the batched
migration has finished.
If the migration has not completed, the subsequent steps fail anyway. By checking in advance we
aim to provide a more helpful error message.
```ruby
disable_ddl_transaction!
restrict_gitlab_migration gitlab_schema: :gitlab_ci
def up
ensure_backfill_conversion_of_integer_to_bigint_is_finished(
:ci_builds,
%i[
project_id
runner_id
user_id
],
# Optional. Only needed when the table's primary key is not `id`, for example `schema_migrations`.
primary_key: :id
)
end
def down; end
```
1. Use the `add_bigint_column_indexes` helper method from `Gitlab::Database::MigrationHelpers::ConvertToBigint` module
to create indexes with the `bigint` columns that match the existing indexes using the `integer` column.
- The helper method is expected to create all required `bigint` indexes, but it's advised to recheck to make sure
we are not missing any of the existing indexes. More information about the helper can be
found in merge request [135781](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/135781).
1. Create foreign keys (FK) using the `bigint` columns that match the existing FK using the
`integer` column. Do this both for FK referencing other tables, and FK that reference the table
that is being migrated ([see an example](https://gitlab.com/gitlab-org/gitlab/-/blob/41fbe34a4725a4e357a83fda66afb382828767b2/db/post_migrate/20210707210916_finalize_ci_stages_bigint_conversion.rb#L36-43)).
1. Inside a transaction, swap the columns:
1. Lock the tables involved. To reduce the chance of hitting a deadlock, we recommend doing this in parent to child order ([see an example](https://gitlab.com/gitlab-org/gitlab/-/blob/41fbe34a4725a4e357a83fda66afb382828767b2/db/post_migrate/20210707210916_finalize_ci_stages_bigint_conversion.rb#L47)).
1. Rename the columns to swap names ([see an example](https://gitlab.com/gitlab-org/gitlab/-/blob/41fbe34a4725a4e357a83fda66afb382828767b2/db/post_migrate/20210707210916_finalize_ci_stages_bigint_conversion.rb#L49-54))
1. Reset the trigger function ([see an example](https://gitlab.com/gitlab-org/gitlab/-/blob/41fbe34a4725a4e357a83fda66afb382828767b2/db/post_migrate/20210707210916_finalize_ci_stages_bigint_conversion.rb#L56-57)).
1. Swap the defaults ([see an example](https://gitlab.com/gitlab-org/gitlab/-/blob/41fbe34a4725a4e357a83fda66afb382828767b2/db/post_migrate/20210707210916_finalize_ci_stages_bigint_conversion.rb#L59-62)).
1. Swap the PK constraint (if any) ([see an example](https://gitlab.com/gitlab-org/gitlab/-/blob/41fbe34a4725a4e357a83fda66afb382828767b2/db/post_migrate/20210707210916_finalize_ci_stages_bigint_conversion.rb#L64-68)).
1. Remove old indexes and rename new ones ([see an example](https://gitlab.com/gitlab-org/gitlab/-/blob/41fbe34a4725a4e357a83fda66afb382828767b2/db/post_migrate/20210707210916_finalize_ci_stages_bigint_conversion.rb#L70-72)).
- Names of the `bigint` indexes created using `add_bigint_column_indexes` helper can be retrieved by calling
`bigint_index_name` from `Gitlab::Database::MigrationHelpers::ConvertToBigint` module.
1. Remove old foreign keys (if still present) and rename new ones ([see an example](https://gitlab.com/gitlab-org/gitlab/-/blob/41fbe34a4725a4e357a83fda66afb382828767b2/db/post_migrate/20210707210916_finalize_ci_stages_bigint_conversion.rb#L74)).
See example [merge request](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/66088), and [migration](https://gitlab.com/gitlab-org/gitlab/-/blob/41fbe34a4725a4e357a83fda66afb382828767b2/db/post_migrate/20210707210916_finalize_ci_stages_bigint_conversion.rb).
### Remove the trigger and old `integer` columns (release N + 2)
Using post-deployment migration and the provided `cleanup_conversion_of_integer_to_bigint` helper,
drop the database trigger and the old `integer` columns ([see an example](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/70351)).
### Remove ignore rules (release N + 3)
In the next release after the columns were dropped, remove the ignore rules as we do not need them
anymore ([see an example](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/71161)).
## Database Views
GitLab makes light usage of database views, as they may introduce additional complexity when handling
migrations.
There are currently two situations where views are used:
- To [expose Postgres internal metrics](virtual_tables.md)
- To expose limited read-only data for the Unified Backup CLI
### Postgres internal metrics
Postgres internal metrics are accessible via `Gitlab::Database::Postgres*` models (in `lib/gitlab/database`),
and rely on the `Gitlab::Database::SharedModel` class.
### Unified Backup CLI
The Unified Backup CLI relies on a couple of views to retrieve a limited amount of information necessary
to trigger `gitaly-backup` for the many repository types. The views are accessible via `Gitlab::Backup::Cli::Models::*`
(in `gems/gitlab-backup-cli/lib/gitlab/backup/cli/models`) and rely on the `Gitlab::Backup::Cli::Models::Base`
class to handle the connection.
As the Unified Backup CLI code is in a separate gem, the main codebase also contains specs to ensure the required views
return the information needed by the tool. This ensures a "contract" between the two codebases.
If any of the columns needed by these views need to change, follow these steps:
- To drop a column
- Coordinate with Durability team (responsible for the Unified Backup) and Gitaly (responsible for `gitaly-backup`)
- To rename a column
- Follow [Renaming Columns](#renaming-columns) including the view specific considerations
- To change a column type
- Follow [Changing column types](#changing-column-types) including the view specific considerations
## Data migrations
Data migrations can be tricky. The usual way to migrate data is to take a three-step
approach:
1. Migrate the initial batch of data
1. Deploy the application code
1. Migrate any remaining data
Usually this works, but not always. For example, if a field's format is to be
changed from JSON to something else we have a bit of a problem. If we were to
change existing data before deploying application code we would most likely run
into errors. On the other hand, if we were to migrate after deploying the
application code we could run into the same problems.
If you merely need to correct some invalid data, then a post-deployment
migration is usually enough. If you need to change the format of data (for example, from
JSON to something else) it's typically best to add a new column for the new data
format, and have the application use that. In such a case the procedure would
be:
1. Add a new column in the new format
1. Copy over existing data to this new column
1. Deploy the application code
1. In a post-deployment migration, copy over any remaining data
In general there is no one-size-fits-all solution, therefore it's best to
discuss these kind of migrations in a merge request to make sure they are
implemented in the best way possible.
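For the simpler case of correcting invalid data, a batched post-deployment data migration might look like this sketch; the table, column, and the notion of "invalid" here are purely hypothetical:

```ruby
# A sketch only: fixes hypothetical invalid data in batches from a
# post-deployment migration.
class CleanupUsersEmptyLocale < Gitlab::Database::Migration[2.1]
  restrict_gitlab_migration gitlab_schema: :gitlab_main
  disable_ddl_transaction!

  BATCH_SIZE = 1_000

  def up
    users = define_batchable_model('users', connection: connection)

    users.where(locale: '').each_batch(of: BATCH_SIZE) do |batch|
      batch.update_all(locale: nil)
    end
  end

  def down
    # no-op: the previous invalid values cannot be restored
  end
end
```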
# Loose foreign keys
## Problem statement
In relational databases (including PostgreSQL), foreign keys provide a way to link
two database tables together, and ensure data-consistency between them. In GitLab,
[foreign keys](foreign_keys.md) are vital part of the database design process.
Most of our database tables have foreign keys.
With the ongoing database [decomposition work](https://gitlab.com/groups/gitlab-org/-/epics/6168),
linked records might be present on two different database servers. Ensuring data consistency
between two databases is not possible with standard PostgreSQL foreign keys. PostgreSQL
does not support foreign keys operating across multiple database servers.
Example:
- Database "Main": `projects` table
- Database "CI": `ci_pipelines` table
A project can have many pipelines. When a project is deleted, the associated `ci_pipeline` (via the
`project_id` column) records must be also deleted.
With a multi-database setup, this cannot be achieved with foreign keys.
## Asynchronous approach
Our preferred approach to this problem is eventual consistency. With the loose foreign keys
feature, we can configure delayed association cleanup without negatively affecting the
application performance.
### How eventual consistency is implemented
In the previous example, a record in the `projects` table can have multiple `ci_pipeline`
records. To keep the cleanup process separate from the actual parent record deletion,
we can:
1. Create a `DELETE` trigger on the `projects` table.
Record the deletions in a separate table (`deleted_records`).
1. A job checks the `deleted_records` table every minute or two.
1. For each record in the table, delete the associated `ci_pipelines` records
using the `project_id` column.
{{< alert type="note" >}}
For this procedure to work, we must register which tables to clean up asynchronously.
{{< /alert >}}
## The `scripts/decomposition/generate-loose-foreign-key` script
We built an automation tool to aid migration of foreign keys into loose foreign keys as part of
decomposition effort. It presents existing keys and allows chosen foreign keys to be automatically
converted into loose foreign keys. This ensures consistency between foreign key and loose foreign
key definitions, and ensures that they are properly tested.
{{< alert type="warning" >}}
We strongly advise you to use the automation script for swapping any foreign key to a loose foreign key.
{{< /alert >}}
The tool ensures that all aspects of swapping a foreign key are covered. This includes:
- Creating a migration to remove a foreign key.
- Updating `db/structure.sql` with the new migration.
- Updating `config/gitlab_loose_foreign_keys.yml` to add the new loose foreign key.
- Creating or updating a model's specs to ensure that the loose foreign key is properly supported.
The tool is located at `scripts/decomposition/generate-loose-foreign-key`:
```shell
$ scripts/decomposition/generate-loose-foreign-key -h
Usage: scripts/decomposition/generate-loose-foreign-key [options] <filters...>
-c, --cross-schema Show only cross-schema foreign keys
-n, --dry-run Do not execute any commands (dry run)
-r, --[no-]rspec Create or not a rspecs automatically
-h, --help Prints this help
```
For the migration of cross-schema foreign keys, we use the `-c` modifier to show the foreign keys
yet to migrate:
```shell
$ scripts/decomposition/generate-loose-foreign-key -c
Re-creating current test database
Dropped database 'gitlabhq_test_ee'
Dropped database 'gitlabhq_geo_test_ee'
Created database 'gitlabhq_test_ee'
Created database 'gitlabhq_geo_test_ee'
Showing cross-schema foreign keys (20):
ID | HAS_LFK | FROM | TO | COLUMN | ON_DELETE
0 | N | ci_builds | projects | project_id | cascade
1 | N | ci_job_artifacts | projects | project_id | cascade
2 | N | ci_pipelines | projects | project_id | cascade
3 | Y | ci_pipelines | merge_requests | merge_request_id | cascade
4 | N | external_pull_requests | projects | project_id | cascade
5 | N | ci_sources_pipelines | projects | project_id | cascade
6 | N | ci_stages | projects | project_id | cascade
7 | N | ci_pipeline_schedules | projects | project_id | cascade
8 | N | ci_runner_projects | projects | project_id | cascade
9 | Y | dast_site_profiles_pipelines | ci_pipelines | ci_pipeline_id | cascade
10 | Y | vulnerability_feedback | ci_pipelines | pipeline_id | nullify
11 | N | ci_variables | projects | project_id | cascade
12 | N | ci_refs | projects | project_id | cascade
13 | N | ci_builds_metadata | projects | project_id | cascade
14 | N | ci_subscriptions_projects | projects | downstream_project_id | cascade
15 | N | ci_subscriptions_projects | projects | upstream_project_id | cascade
16 | N | ci_sources_projects | projects | source_project_id | cascade
17 | N | ci_job_token_project_scope_links | projects | source_project_id | cascade
18 | N | ci_job_token_project_scope_links | projects | target_project_id | cascade
19 | N | ci_project_monthly_usages | projects | project_id | cascade
To match foreign key (FK), write one or many filters to match against FROM/TO/COLUMN:
- scripts/decomposition/generate-loose-foreign-key (filters...)
- scripts/decomposition/generate-loose-foreign-key ci_job_artifacts project_id
- scripts/decomposition/generate-loose-foreign-key dast_site_profiles_pipelines
```
The command accepts a list of regular expressions to match from, to, or column
for the purpose of the foreign key generation. For example, run this to swap
all foreign keys for `ci_job_token_project_scope_links` for the decomposed database:
```shell
scripts/decomposition/generate-loose-foreign-key -c ci_job_token_project_scope_links
```
To swap only the `source_project_id` of `ci_job_token_project_scope_links` for the decomposed database, run:
```shell
scripts/decomposition/generate-loose-foreign-key -c ci_job_token_project_scope_links source_project_id
```
To match the exact name of a table or column, you can use the regular expression
position anchors `^` and `$`. For example, this command matches only the
foreign keys on the `events` table, but not on the
`incident_management_timeline_events` table.
```shell
scripts/decomposition/generate-loose-foreign-key -n ^events$
```
To swap all the foreign keys (all having `_id` appended), but not create a new branch (only commit
the changes) and not create RSpec tests, run:
```shell
scripts/decomposition/generate-loose-foreign-key -c --no-branch --no-rspec _id
```
To swap all foreign keys referencing `projects`, but not create a new branch (only commit the
changes), run:
```shell
scripts/decomposition/generate-loose-foreign-key -c --no-branch projects
```
## Example migration and configuration
### Configure the loose foreign key
Loose foreign keys are defined in a YAML file. The configuration requires the
following information:
- Parent table name (`projects`)
- Child table name (`ci_pipelines`)
- The data cleanup method (`async_delete` or `async_nullify`)
The YAML file is located at `config/gitlab_loose_foreign_keys.yml`. The file groups
foreign key definitions by the name of the child table. The child table can have multiple loose
foreign key definitions, therefore we store them as an array.
Example definition:
```yaml
ci_pipelines:
- table: projects
column: project_id
on_delete: async_delete
```
If the `ci_pipelines` key is already present in the YAML file, then a new entry can be added
to the array:
```yaml
ci_pipelines:
- table: projects
column: project_id
on_delete: async_delete
- table: another_table
column: another_id
on_delete: async_nullify
```
### Assign specific tables to custom workers
By default, all loose foreign key cleanup is handled by the `LooseForeignKeys::CleanupWorker`. However,
you can specify a custom worker class to handle cleanup for specific tables. This allows for better
load distribution and specialized handling of different table types.
To assign a table to a custom worker, add the `worker_class` attribute to the configuration:
```yaml
ci_pipelines:
- table: projects
column: project_id
on_delete: async_delete
worker_class: 'CustomLooseForeignKeysWorker'
```
If the `worker_class` attribute is not specified, the table will default to using
`::LooseForeignKeys::CleanupWorker`.
**Important considerations:**
- The `worker_class` must be a valid Ruby class name as a string
- The custom worker should follow the same pattern as `LooseForeignKeys::CleanupWorker`
- Each worker processes only the tables specifically assigned to it through the `worker_class` attribute
- Tables without a `worker_class` specified are processed by the default `CleanupWorker`
- When adding a new custom worker, you must also add it to the `ALLOWED_WORKER_CLASSES` constant in `lib/gitlab/database/loose_foreign_keys.rb`
- When adding a new custom worker, you must also add its cron job configuration to `config/initializers/1_settings.rb`
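For that cron registration, the entry in `config/initializers/1_settings.rb` might look like the following sketch; the job key, schedule, and worker class are assumptions:

```ruby
# A sketch only: hypothetical cron entry for a custom loose foreign key
# cleanup worker in config/initializers/1_settings.rb.
Settings.cron_jobs['custom_ci_cleanup_worker'] ||= {}
Settings.cron_jobs['custom_ci_cleanup_worker']['cron'] ||= '*/1 * * * *'
Settings.cron_jobs['custom_ci_cleanup_worker']['job_class'] = 'CustomCiCleanupWorker'
```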
Example with mixed worker assignments:
```yaml
ci_pipelines:
- table: projects
column: project_id
on_delete: async_delete
worker_class: 'CustomCiCleanupWorker' # Processed by CustomCiCleanupWorker
- table: users
column: user_id
on_delete: async_nullify
# No worker_class = processed by default CleanupWorker
ci_builds:
- table: projects
column: project_id
on_delete: async_delete # No worker_class = processed by default CleanupWorker
```
### Track record changes
#### On normal non-partitioned tables
To know about deletions in the `projects` table, configure a `DELETE` trigger
using a [post-deployment migration](post_deployment_migrations.md). The
trigger needs to be configured only once. If the model already has at least one
`loose_foreign_key` definition, then this step can be skipped:
```ruby
class TrackProjectRecordChanges < Gitlab::Database::Migration[2.3]
include Gitlab::Database::MigrationHelpers::LooseForeignKeyHelpers
def up
track_record_deletions(:projects)
end
def down
untrack_record_deletions(:projects)
end
end
```
#### On partitioned tables
To track deletions on partitioned tables, we need to use the `track_record_deletions_override_table_name`
helper instead, because we need to make sure that when `DELETE` statements run against the partitioned
table or its partitions, we always register the parent (partitioned) table name instead of the partition
(child) table name.
Here is an example:
```ruby
class TrackWorkloadDeletions < Gitlab::Database::Migration[2.3]
include Gitlab::Database::MigrationHelpers::LooseForeignKeyHelpers
def up
track_record_deletions_override_table_name(:p_ci_workloads)
end
def down
untrack_record_deletions(:p_ci_workloads)
end
end
```
### Remove the foreign key
If there is an existing foreign key, then it can be removed from the database. This foreign key describes the link between the `projects` and `ci_pipelines` tables:
```sql
ALTER TABLE ONLY ci_pipelines
ADD CONSTRAINT fk_86635dbd80
FOREIGN KEY (project_id)
REFERENCES projects(id)
ON DELETE CASCADE;
```
The migration must run after the `DELETE` trigger is installed and the loose
foreign key definition is deployed. As such, it must be a
[post-deployment migration](post_deployment_migrations.md) dated after the migration for the
trigger. If the foreign key is deleted earlier, there is a good chance of
introducing data inconsistency which needs manual cleanup:
```ruby
class RemoveProjectsCiPipelineFk < Gitlab::Database::Migration[2.3]
disable_ddl_transaction!
def up
with_lock_retries do
remove_foreign_key_if_exists(:ci_pipelines, :projects, name: "fk_86635dbd80")
end
end
def down
add_concurrent_foreign_key(:ci_pipelines, :projects, name: "fk_86635dbd80", column: :project_id, target_column: :id, on_delete: "cascade")
end
end
```
At this point, the setup phase is concluded. The deleted `projects` records should be automatically
picked up by the scheduled cleanup worker job.
### Remove the loose foreign key
When the loose foreign key definition is no longer needed (parent table is removed, or FK is restored),
we need to remove the definition from the YAML file and ensure that we don't leave pending deleted
records in the database.
1. Remove the loose foreign key definition from the configuration (`config/gitlab_loose_foreign_keys.yml`).
The deletion tracking trigger needs to be removed only when the parent table no longer uses loose foreign keys.
If the model still has at least one `loose_foreign_key` definition remaining, then these steps can be skipped:
1. Remove the trigger from the parent table (if the parent table is still there).
1. Remove leftover deleted records from the `loose_foreign_keys_deleted_records` table.
Migration for removing the trigger:
```ruby
class UnTrackProjectRecordChanges < Gitlab::Database::Migration[2.3]
include Gitlab::Database::MigrationHelpers::LooseForeignKeyHelpers
def up
untrack_record_deletions(:projects)
end
def down
track_record_deletions(:projects)
end
end
```
With the trigger removal, we prevent further records from being inserted in the `loose_foreign_keys_deleted_records`
table. However, there is still a chance of leftover pending records in the table. These records
must be removed with an inline data migration.
```ruby
class RemoveLeftoverProjectDeletions < Gitlab::Database::Migration[2.3]
disable_ddl_transaction!
def up
loop do
result = execute <<~SQL
DELETE FROM "loose_foreign_keys_deleted_records"
WHERE
("loose_foreign_keys_deleted_records"."partition", "loose_foreign_keys_deleted_records"."id") IN (
SELECT "loose_foreign_keys_deleted_records"."partition", "loose_foreign_keys_deleted_records"."id"
FROM "loose_foreign_keys_deleted_records"
WHERE
"loose_foreign_keys_deleted_records"."fully_qualified_table_name" = 'public.projects' AND
"loose_foreign_keys_deleted_records"."status" = 1
LIMIT 100
)
SQL
break if result.cmd_tuples == 0
end
end
def down
# no-op
end
end
```
## Testing
The "`it has loose foreign keys`" shared example can be used to test the presence of the `ON DELETE` trigger and the
loose foreign key definitions.
Add to the model test file:
```ruby
it_behaves_like 'it has loose foreign keys' do
let(:factory_name) { :project }
end
```
**After** [removing a foreign key](#remove-the-foreign-key),
use the "`cleanup by a loose foreign key`" shared example to test a child record's deletion or nullification
via the added loose foreign key:
```ruby
it_behaves_like 'cleanup by a loose foreign key' do
let!(:model) { create(:ci_pipeline, user: create(:user)) }
let!(:parent) { model.user }
end
```
## Caveats of loose foreign keys
### Record creation
The feature provides an efficient way of cleaning up associated records after the parent record is
deleted. Without foreign keys, it's the application's responsibility to validate if the parent record
exists when a new associated record is created.
A bad example: record creation with the given ID (`project_id` comes from user input).
In this example, nothing prevents us from passing a random project ID:
```ruby
Ci::Pipeline.create!(project_id: params[:project_id])
```
A good example: record creation with extra check:
```ruby
project = Project.find(params[:project_id])
Ci::Pipeline.create!(project_id: project.id)
```
### Association lookup
Consider the following HTTP request:
```plaintext
GET /projects/5/pipelines/100
```
The controller action ignores the `project_id` parameter and finds the pipeline using the ID:
```ruby
def show
# bad, avoid it
pipeline = Ci::Pipeline.find(params[:id]) # 100
end
```
This endpoint still works when the parent `Project` model is deleted. This can be considered a
data leak which should not happen under typical circumstances:
```ruby
def show
# good
project = Project.find(params[:project_id])
pipeline = project.pipelines.find(params[:pipeline_id]) # 100
end
```
{{< alert type="note" >}}
This example is unlikely in GitLab, because we usually look up the parent models to perform
permission checks.
{{< /alert >}}
## A note on `dependent: :destroy` and `dependent: :nullify`
We considered using these Rails features as an alternative to foreign keys but there are several problems which include:
1. These run on a different connection in the context of a transaction [which we do not allow](multiple_databases.md#removing-cross-database-transactions).
1. These can lead to severe performance degradation as we load all records from PostgreSQL, loop over them in Ruby, and call individual `DELETE` queries.
1. These can miss data as they only cover the case when the `destroy` method is called directly on the model. There are other cases including `delete_all` and cascading deletes from another parent table that could mean these are missed.
For non-trivial objects that need to clean up data outside the
database (for example, object storage) where you might wish to use `dependent: :destroy`,
see alternatives in
[Avoid `dependent: :nullify` and `dependent: :destroy` across databases](multiple_databases.md#avoid-dependent-nullify-and-dependent-destroy-across-databases).
## Update target column to a value
A loose foreign key might be used to update a target column to a value when an
entry in parent table is deleted.
It's important to add an index (if it doesn't exist yet) on
(`column`, `target_column`) to avoid any performance issues.
Any index starting with these two columns will work.
The configuration requires additional information:
- Column to be updated (`target_column`)
- Value to be set in the target column (`target_value`)
Example definition:
```yaml
packages:
- table: projects
column: project_id
on_delete: update_column_to
target_column: status
target_value: 4
```
## Risks of loose foreign keys and possible mitigations
In general, the loose foreign keys architecture is eventually consistent and
the cleanup latency might lead to problems visible to GitLab users or
operators. We consider the tradeoff as acceptable, but there might be
cases where the problems are too frequent or too severe, and we must
implement a mitigation strategy. A general mitigation strategy might be to have
an "urgent" queue for cleaning up records whose delayed cleanup has a higher impact.
Below are some more specific examples of problems that might occur and how we
might mitigate them. In all the listed cases we might still consider the problem
described to be low risk and low impact, and in that case we would choose to not
implement any mitigation.
### The record should be deleted but it shows up in a view
This hypothetical example might happen with a foreign key like:
```sql
ALTER TABLE ONLY vulnerability_occurrence_pipelines
ADD CONSTRAINT fk_rails_6421e35d7d FOREIGN KEY (pipeline_id) REFERENCES ci_pipelines(id) ON DELETE CASCADE;
```
In this example we expect to delete all associated `vulnerability_occurrence_pipelines` records
whenever we delete the `ci_pipelines` record associated with them. In this case
you might end up with some vulnerability page in GitLab which shows an occurrence
of a vulnerability. However, when you try to select a link to the pipeline, you get
a 404, because the pipeline is deleted. Then, when you navigate back you might find the
occurrence has disappeared too.
**Mitigation**
When rendering the vulnerability occurrences on the vulnerability page we could
try to load the corresponding pipeline and choose to skip displaying that
occurrence if pipeline is not found.
### The deleted parent record is needed to render a view and causes a `500` error
This hypothetical example might happen with a foreign key like:
```sql
ALTER TABLE ONLY vulnerability_occurrence_pipelines
ADD CONSTRAINT fk_rails_6421e35d7d FOREIGN KEY (pipeline_id) REFERENCES ci_pipelines(id) ON DELETE CASCADE;
```
In this example we expect to delete all associated `vulnerability_occurrence_pipelines` records
whenever we delete the `ci_pipelines` record associated with them. In this case
you might end up with a vulnerability page in GitLab which shows an "occurrence"
of a vulnerability. However, when rendering the occurrence we try to load, for example,
`occurrence.pipeline.created_at`, which causes a 500 for the user.
**Mitigation**
When rendering the vulnerability occurrences on the vulnerability page we could
try to load the corresponding pipeline and choose to skip displaying that
occurrence if pipeline is not found.
### The deleted parent record is accessed in a Sidekiq worker and causes a failed job
This hypothetical example might happen with a foreign key like:
```sql
ALTER TABLE ONLY vulnerability_occurrence_pipelines
ADD CONSTRAINT fk_rails_6421e35d7d FOREIGN KEY (pipeline_id) REFERENCES ci_pipelines(id) ON DELETE CASCADE;
```
In this example we expect to delete all associated `vulnerability_occurrence_pipelines` records
whenever we delete the `ci_pipelines` record associated with them. In this case
you might end up with a Sidekiq worker that is responsible for processing a
vulnerability and looping over all occurrences causing a Sidekiq job to fail if
it executes `occurrence.pipeline.created_at`.
**Mitigation**
When looping through the vulnerability occurrences in the Sidekiq worker, we
could try to load the corresponding pipeline and choose to skip processing that
occurrence if pipeline is not found.
## Architecture
The loose foreign keys feature is implemented within the `LooseForeignKeys` Ruby namespace. The
code is isolated from the core application code and theoretically, it could be a standalone library.
The feature is invoked by worker classes, primarily the [`LooseForeignKeys::CleanupWorker`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/workers/loose_foreign_keys/cleanup_worker.rb). Custom workers can be assigned to specific tables through the `worker_class` configuration option. Workers are scheduled via cron jobs where the schedule depends on the configuration of the GitLab instance.
- Non-decomposed GitLab (1 database): invoked every minute.
- Decomposed GitLab (2 databases, CI and Main): invoked every minute, cleaning up one database
at a time. For example, the cleanup worker for the main database runs every two minutes.
To avoid lock contention and the processing of the same database rows, the worker does not run
in parallel. This behavior is ensured with a Redis lock.
**Record cleanup procedure**:
1. Acquire the Redis lock.
1. Determine which database to clean up.
1. Collect all database tables where the deletions are tracked (parent tables).
- This is achieved by reading the `config/gitlab_loose_foreign_keys.yml` file.
- A table is considered "tracked" when a loose foreign key definition exists for the table and
the `DELETE` trigger is installed.
- When using custom workers via the `worker_class` attribute, each worker only processes tables
specifically assigned to it, filtering out tables assigned to other workers.
1. Cycle through the tables with an infinite loop.
1. For each table, load a batch of deleted parent records to clean up.
1. Depending on the YAML configuration, build `DELETE` or `UPDATE` (nullify) queries for the
referenced child tables.
1. Invoke the queries.
1. Repeat until all child records are cleaned up or the maximum limit is reached.
1. Remove the deleted parent records when all child records are cleaned up.
### Database structure
The feature relies on triggers installed on the parent tables. When a parent record is deleted,
the trigger automatically inserts a new record into the `loose_foreign_keys_deleted_records`
database table.
The inserted record stores the following information about the deleted record:
- `fully_qualified_table_name`: name of the database table where the record was located.
- `primary_key_value`: the ID of the record; the value is present in the child tables as
the foreign key value. At the moment, composite primary keys are not supported: the parent table
must have an `id` column.
- `status`: defaults to pending, represents the status of the cleanup process.
- `consume_after`: defaults to the current time.
- `cleanup_attempts`: defaults to 0. The number of times the worker tried to clean up this record.
A non-zero number would mean that this record has many child records and cleaning it up requires
several runs.
#### Database decomposition
The `loose_foreign_keys_deleted_records` table exists on both database servers (`ci` and `main`)
after the [database decomposition](https://gitlab.com/groups/gitlab-org/-/epics/6168). The worker
will determine which parent tables belong to which database by reading the
`lib/gitlab/database/gitlab_schemas.yml` YAML file.
Example:
- Main database tables
- `projects`
- `namespaces`
- `merge_requests`
- Ci database tables
- `ci_builds`
- `ci_pipelines`
When the worker is invoked for the `ci` database, the worker loads deleted records only from the
`ci_builds` and `ci_pipelines` tables. During the cleanup process, `DELETE` and `UPDATE` queries
mostly run on tables located in the Main database. In this example, one `UPDATE` query
nullifies the `merge_requests.head_pipeline_id` column.
#### Database partitioning
Due to the large volume of inserts the database table receives daily, a special partitioning
strategy was implemented to address data bloat concerns. Originally, the
[time-decay](https://handbook.gitlab.com/handbook/company/working-groups/database-scalability/time-decay/)
strategy was considered for the feature but due to the large data volume we decided to implement a
new strategy.
A deleted record is considered fully processed when all its direct children records have been
cleaned up. When this happens, the loose foreign key worker updates the `status` column of
the deleted record. After this step, the record is no longer needed.
The sliding partitioning strategy provides an efficient way of cleaning up old, unused data by
adding a new database partition and removing the old one when certain conditions are met.
The `loose_foreign_keys_deleted_records` database table is list partitioned where most of the
time there is only one partition attached to the table.
```sql
Partitioned table "public.loose_foreign_keys_deleted_records"
Column | Type | Collation | Nullable | Default | Storage | Stats target | Description
----------------------------+--------------------------+-----------+----------+----------------------------------------------------------------+----------+--------------+-------------
id | bigint | | not null | nextval('loose_foreign_keys_deleted_records_id_seq'::regclass) | plain | |
partition | bigint | | not null | 84 | plain | |
primary_key_value | bigint | | not null | | plain | |
status | smallint | | not null | 1 | plain | |
created_at | timestamp with time zone | | not null | now() | plain | |
fully_qualified_table_name | text | | not null | | extended | |
consume_after | timestamp with time zone | | | now() | plain | |
cleanup_attempts | smallint | | | 0 | plain | |
Partition key: LIST (partition)
Indexes:
"loose_foreign_keys_deleted_records_pkey" PRIMARY KEY, btree (partition, id)
"index_loose_foreign_keys_deleted_records_for_partitioned_query" btree (partition, fully_qualified_table_name, consume_after, id) WHERE status = 1
Check constraints:
"check_1a541f3235" CHECK (char_length(fully_qualified_table_name) <= 150)
Partitions: gitlab_partitions_dynamic.loose_foreign_keys_deleted_records_84 FOR VALUES IN ('84')
```
The `partition` column controls the insert direction: its value determines which
partition receives the deleted rows inserted via the trigger. Notice that the default value of
the `partition` column matches the value of the list partition (84). In the `INSERT` query
within the trigger, the value of `partition` is omitted; the trigger always relies on the
default value of the column.
Example `INSERT` query for the trigger:
```sql
INSERT INTO loose_foreign_keys_deleted_records
(fully_qualified_table_name, primary_key_value)
SELECT TG_TABLE_SCHEMA || '.' || TG_TABLE_NAME, old_table.id FROM old_table;
```
The partition "sliding" process is controlled by two, regularly executed callbacks. These
callbacks are defined within the `LooseForeignKeys::DeletedRecord` model.
The `next_partition_if` callback controls when to create a new partition. A new partition is
created when the current partition has at least one record older than 24 hours. A new partition
is added by the [`PartitionManager`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/database/partitioning/partition_manager.rb)
using the following steps:
1. Create a new partition, where the `VALUE` for the partition is `CURRENT_PARTITION + 1`.
1. Update the default value of the `partition` column to `CURRENT_PARTITION + 1`.
With these steps, all new `INSERT` queries via the triggers end up in the new partition. At this point,
the database table has two partitions.
The `detach_partition_if` callback determines if the old partitions can be detached from the table.
A partition is detachable if it contains no pending (unprocessed) records
(`status = 1`). Detached partitions remain available for some time; you can see the list of
detached partitions in the `detached_partitions` table:
```sql
select * from detached_partitions;
```
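Putting the two callbacks together, the partitioning declaration in the `LooseForeignKeys::DeletedRecord` model looks roughly like the sketch below. The helper lambdas are simplified placeholders for the real conditions:
```ruby
# Simplified sketch of the sliding-list partitioning declaration; the
# condition bodies are illustrative, not the production implementation.
partitioned_by :partition, strategy: :sliding_list,
  next_partition_if: ->(active_partition) {
    # Create a new partition when the oldest record in the active
    # partition is older than 24 hours.
    oldest_record_created_at(active_partition) < 24.hours.ago
  },
  detach_partition_if: ->(partition) {
    # Detach a partition when it contains no pending (status = 1) records.
    no_pending_records?(partition)
  }
```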
#### Cleanup queries
The `LooseForeignKeys::CleanupWorker` has its own database query builder, which depends on `Arel`.
The feature doesn't reference any application-specific `ActiveRecord` models to avoid unexpected
side effects. The database queries are batched, which means that several parent records are being
cleaned up at the same time.
Example `DELETE` query:
```sql
DELETE
FROM "merge_request_metrics"
WHERE ("merge_request_metrics"."id") IN
(SELECT "merge_request_metrics"."id"
FROM "merge_request_metrics"
WHERE "merge_request_metrics"."pipeline_id" IN (1, 2, 10, 20)
LIMIT 1000 FOR UPDATE SKIP LOCKED)
```
The primary key values of the parent records are 1, 2, 10, and 20.
Example `UPDATE` (nullify) query:
```sql
UPDATE "merge_requests"
SET "head_pipeline_id" = NULL
WHERE ("merge_requests"."id") IN
(SELECT "merge_requests"."id"
FROM "merge_requests"
WHERE "merge_requests"."head_pipeline_id" IN (3, 4, 30, 40)
LIMIT 500 FOR UPDATE SKIP LOCKED)
```
These queries are batched, which means that in many cases, several invocations are needed to clean
up all associated child records.
The batching is implemented with loops; the processing stops when all associated child records
are cleaned up or the limit is reached.
```ruby
loop do
modification_count = process_batch_with_skip_locked
break if modification_count == 0 || over_limit?
end
loop do
modification_count = process_batch
break if modification_count == 0 || over_limit?
end
```
The loop-based batch processing is preferred over `EachBatch` for the following reasons:
- The records in the batch are modified, so the next batch contains different records.
- There is always an index on the foreign key column; however, the column is usually not unique.
`EachBatch` requires a unique column for the iteration.
- The record order doesn't matter for the cleanup.
Notice that we have two loops. The initial loop processes records with the `SKIP LOCKED` clause.
The query skips rows that are locked by other application processes. This ensures that the
cleanup worker is less likely to become blocked. The second loop executes the database
queries without `SKIP LOCKED` to ensure that all records have been processed.
#### Processing limits
A constant, large volume of record updates or deletions can cause incidents and affect the
availability of GitLab:
- Increased table bloat.
- Increased number of pending WAL files.
- Busy tables, difficulty when acquiring locks.
To mitigate these issues, several limits are applied when the worker runs:
- Each query has a `LIMIT`, so a query cannot process an unbounded number of rows.
- The maximum number of record deletions and record updates is limited.
- The total runtime of the database queries is limited to 30 seconds.
The limit rules are implemented in the `LooseForeignKeys::ModificationTracker` class. When one of
the limits (record modification count, time limit) is reached, the processing stops
immediately. After some time, the next scheduled worker continues the cleanup process.
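A minimal sketch of such a tracker is shown below; the constants and method names are illustrative, and the real `LooseForeignKeys::ModificationTracker` may use different values and interfaces:
```ruby
# Illustrative modification tracker: the cleanup stops as soon as the
# modification counters or the runtime budget is exceeded.
class ModificationTrackerSketch
  MAX_RUNTIME_SECONDS = 30 # illustrative values
  MAX_DELETES = 100_000
  MAX_UPDATES = 50_000

  def initialize
    @started_at = monotonic_now
    @deletes = 0
    @updates = 0
  end

  def add_deletions(count)
    @deletes += count
  end

  def add_updates(count)
    @updates += count
  end

  def over_limit?
    @deletes >= MAX_DELETES ||
      @updates >= MAX_UPDATES ||
      (monotonic_now - @started_at) >= MAX_RUNTIME_SECONDS
  end

  private

  def monotonic_now
    Process.clock_gettime(Process::CLOCK_MONOTONIC)
  end
end
```
The cleanup worker checks `over_limit?` between batches, which is why a single run can stop before every child record is removed.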
#### Performance characteristics
The database trigger on the parent tables **decreases** the record deletion speed. Each
statement that removes rows from the parent table invokes the trigger to insert records
into the `loose_foreign_keys_deleted_records` table.
The queries within the cleanup worker are fairly efficient index scans; with the limits in place,
they're unlikely to affect other parts of the application.
The database queries do not run in a transaction; when an error happens, for example a statement
timeout or a worker crash, the next job continues the processing.
## Troubleshooting
### Accumulation of deleted records
There can be cases where the workers need to process an unusually large amount of data. This can
happen under typical usage, for example when a large project or group is deleted. In this scenario,
there can be several million rows to be deleted or nullified. Due to the limits enforced by the
worker, processing this data takes some time.
When cleaning up "heavy-hitters", the feature ensures fair processing by rescheduling larger
batches for later. This gives time for other deleted records to be processed.
For example, a project with millions of `ci_builds` records is deleted. The `ci_builds` records
are deleted by the loose foreign keys feature.
1. The cleanup worker is scheduled and picks up a batch of deleted `projects` records. The large
project is part of the batch.
1. Deletion of the orphaned `ci_builds` rows has started.
1. The time limit is reached, but the cleanup is not complete.
1. The `cleanup_attempts` column is incremented for the deleted records.
1. Go to step 1. The next cleanup worker continues the cleanup.
1. When the `cleanup_attempts` reaches 3, the batch is re-scheduled 10 minutes later by updating
the `consume_after` column.
1. The next cleanup worker processes a different batch.
We have Prometheus metrics in place to monitor the deleted record cleanup:
- `loose_foreign_key_processed_deleted_records`: Number of processed deleted records. When large
cleanup happens, this number would decrease.
- `loose_foreign_key_incremented_deleted_records`: Number of deleted records which were not
finished processing. The `cleanup_attempts` column was incremented.
- `loose_foreign_key_rescheduled_deleted_records`: Number of deleted records that had to be
rescheduled at a later time after 3 cleanup attempts.
Example PromQL query:
```plaintext
loose_foreign_key_rescheduled_deleted_records{env="gprd", table="ci_runners"}
```
Another way to look at the situation is by running a database query. This query gives the exact
counts of the unprocessed records:
```sql
SELECT partition, fully_qualified_table_name, count(*)
FROM loose_foreign_keys_deleted_records
WHERE
status = 1
GROUP BY 1, 2;
```
Example output:
```sql
partition | fully_qualified_table_name | count
-----------+----------------------------+-------
87 | public.ci_builds | 874
87 | public.ci_job_artifacts | 6658
87 | public.ci_pipelines | 102
87 | public.ci_runners | 111
87 | public.merge_requests | 255
87 | public.namespaces | 25
87 | public.projects | 6
```
The query includes the partition number, which can be useful to detect if the cleanup process is
significantly lagging behind. When multiple different partition values are present in the list,
it means that the cleanup of some deleted records didn't finish in several days (1 new partition
is added every day).
Steps to diagnose the problem:
- Check which records are accumulating.
- Try to get an estimate of the number of remaining records.
- Look into the worker performance stats (Kibana or Grafana).
Possible solutions:
- Short-term: increase the batch sizes.
- Long-term: invoke the worker more frequently, and parallelize the worker.
For a one-time fix, we can run the cleanup worker several times from the Rails console. The worker
can run in parallel; however, this can introduce lock contention and could increase the worker
runtime.
```ruby
LooseForeignKeys::CleanupWorker.new.perform
```
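To run a few sequential iterations (each run still respects the modification and time limits described above):
```ruby
# Run several cleanup iterations one after the other.
5.times { LooseForeignKeys::CleanupWorker.new.perform }
```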
When the cleanup is done, the older partitions are automatically detached by the
`PartitionManager`.
### PartitionManager bug
{{< alert type="note" >}}
This issue happened in the past on Staging and it has been mitigated.
{{< /alert >}}
When adding a new partition, the default value of the `partition` column is also updated. This is
a schema change that is executed in the same transaction as the new partition creation. It's highly
unlikely that the default value of the `partition` column becomes outdated.
However, if this happens, it can cause application-wide incidents because the `partition`
default value points to a partition that doesn't exist. Symptom: deleting records from tables where
the `DELETE` trigger is installed fails.
```sql
\d+ loose_foreign_keys_deleted_records;
Column | Type | Collation | Nullable | Default | Storage | Stats target | Description
----------------------------+--------------------------+-----------+----------+----------------------------------------------------------------+----------+--------------+-------------
id | bigint | | not null | nextval('loose_foreign_keys_deleted_records_id_seq'::regclass) | plain | |
partition | bigint | | not null | 4 | plain | |
primary_key_value | bigint | | not null | | plain | |
status | smallint | | not null | 1 | plain | |
created_at | timestamp with time zone | | not null | now() | plain | |
fully_qualified_table_name | text | | not null | | extended | |
consume_after | timestamp with time zone | | | now() | plain | |
cleanup_attempts | smallint | | | 0 | plain | |
Partition key: LIST (partition)
Indexes:
"loose_foreign_keys_deleted_records_pkey" PRIMARY KEY, btree (partition, id)
"index_loose_foreign_keys_deleted_records_for_partitioned_query" btree (partition, fully_qualified_table_name, consume_after, id) WHERE status = 1
Check constraints:
"check_1a541f3235" CHECK (char_length(fully_qualified_table_name) <= 150)
Partitions: gitlab_partitions_dynamic.loose_foreign_keys_deleted_records_3 FOR VALUES IN ('3')
```
Check the default value of the `partition` column and compare it with the available partitions
(4 vs 3). The partition with the value of 4 does not exist. To mitigate the problem, an emergency
schema change is required:
```sql
ALTER TABLE loose_foreign_keys_deleted_records ALTER COLUMN partition SET DEFAULT 3;
```
---
stage: Data Access
group: Database Frameworks
info: Any user with at least the Maintainer role can merge updates to this content. For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Delete existing migrations
---
When removing existing migrations from the GitLab project, you have to take into account
the possibility that the migration has already been included in past releases or in the current release, and thus has already been executed on GitLab.com and/or on GitLab Self-Managed instances.
Because of this, it's not possible to delete existing migrations, as that could lead to:
- Schema inconsistency, as changes introduced into the database were not rolled back properly.
- Leaving a record in the `schema_migrations` table that points to a migration that no longer exists in the codebase.
Instead of deleting, we can opt for disabling the migration.
## Pre-requisites to disable a migration
Migrations can be disabled if:
- They caused a timeout or general issue on GitLab.com.
- They are obsolete; for example, the changes are no longer necessary due to a feature change.
- The migration is a data migration only; that is, it does not change the database schema.
## How to disable a data migration?
To disable a migration, the following steps apply to all types of migrations:
1. Turn the migration into a no-op by removing the code inside the `#up`, `#down`,
or `#perform` methods, and adding a `# no-op` comment instead.
1. Add a comment explaining why the code is gone.
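A disabled migration then looks roughly like the following sketch. The class name is hypothetical; keep the original class name and timestamp of the migration you are disabling:
```ruby
# Hypothetical example of a data migration turned into a no-op.
class BackfillSomeColumnData < Gitlab::Database::Migration[2.3]
  def up
    # no-op
    # Removed because the backfill caused statement timeouts on GitLab.com.
    # See <link to the relevant issue or MR> for details.
  end

  def down
    # no-op
  end
end
```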
Disabling migrations requires explicit approval from a Database Maintainer.
## Examples
- [Disable scheduling of productivity analytics](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/17253)
---
stage: Data Access
group: Database Frameworks
info: Any user with at least the Maintainer role can merge updates to this content. For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Pagination guidelines
---
This document gives an overview of the current capabilities and provides best practices for paginating over data in GitLab, and in particular for PostgreSQL.
## Why do we need pagination?
Pagination is a popular technique to avoid loading too much data in one web request. This usually happens when we render a list of records. A common scenario is visualizing parent-child relations (has many) on the UI.
Example: listing issues within a project
As the number of issues grows within the project, the list gets longer. To render the list, the backend does the following:
1. Loads the records from the database, usually in a particular order.
1. Serializes the records in Ruby: builds Ruby (ActiveRecord) objects and then builds a JSON or HTML string.
1. Sends the response back to the browser.
1. The browser renders the content.
We have two options for rendering the content:
- HTML: backend deals with the rendering (HAML template).
- JSON: the client (client-side JavaScript) transforms the payload into HTML.
Rendering long lists can significantly affect both the frontend and backend performance:
- The database reads a lot of data from the disk.
- The result of the query (records) is eventually transformed to Ruby objects which increases memory allocation.
- Large responses take more time to send over the wire to the user's browser.
- Rendering long lists might freeze the browser (bad user experience).
With pagination, the data is split into equal pieces (pages). On the first visit, the user receives only a limited number of items (page size). The user can see more items by paginating forward which results in a new HTTP request and a new database query.

## General guidelines for paginating
### Pick the right approach
Let the database handle the pagination, filtering, and data retrieval. Implementing in-memory pagination on the backend (`paginate_array` from Kaminari) or on the frontend (JavaScript) might work for a few hundred records. If application limits are not defined, things can get out of control quickly.
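To illustrate the difference, a sketch assuming a Kaminari-backed `Issue` model:

```ruby
# In-memory pagination: every matching record is loaded and instantiated before slicing.
Kaminari.paginate_array(Issue.where(project_id: 1).to_a).page(1).per(20)

# Database pagination: only one page of records is read from the database and instantiated.
Issue.where(project_id: 1).page(1).per(20)
```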
### Reduce complexity
When we list records on the page we often provide additional filters and different sort options. This can complicate things on the backend side significantly.
For the MVC version, consider the following:
- Reduce the number of sort options to the minimum.
- Reduce the number of filters (dropdown list, search bar) to the minimum.
To make sorting and pagination efficient, for each sort option we need at least two database indexes (ascending, descending order). If we add filter options (by state or by author), we might need more indexes to maintain good performance. Indexes are not free, they can significantly affect the `UPDATE` query timings.
It's not possible to make all filter and sort combinations performant, so we should optimize performance based on usage patterns.
### Prepare for scaling
Offset-based pagination is the easiest way to paginate over records, however, it does not scale well for large database tables. As a long-term solution, [keyset pagination](keyset_pagination.md) is preferred. Switching between offset and keyset pagination is generally straightforward and can be done without affecting the end-user if the following conditions are met:
- Avoid presenting total counts, prefer limit counts.
- Example: count a maximum of 1001 records, then on the UI show "1000+" if the count is 1001, and show the actual number otherwise (see the sketch below).
- See the [badge counters approach](../merge_request_concepts/performance.md#badge-counters) for more information.
- Avoid using page numbers, use next and previous page buttons.
- Keyset pagination doesn't support page numbers.
- For APIs, advise against building URLs for the next page by "hand".
- Promote the usage of the [`Link` header](../../api/rest/_index.md#pagination-link-header) where the URLs for the next and previous page are provided by the backend.
- This way changing the URL structure is possible without breaking backward compatibility.
{{< alert type="note" >}}
Infinite scroll can use keyset pagination without affecting the user experience since there are no exposed page numbers.
{{< /alert >}}
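For the limited count mentioned above, the badge counter logic could look roughly like this (the model and threshold are illustrative):

```ruby
# Count at most 1001 rows; beyond the threshold the exact total doesn't matter.
count = Issue.where(project_id: 1).limit(1001).count

badge_text = count > 1000 ? '1,000+' : count.to_s
```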
## Options for pagination
### Offset pagination
The most common way to paginate lists is using offset-based pagination (UI and REST API). It's backed by the popular [Kaminari](https://github.com/kaminari/kaminari) Ruby gem, which provides convenient helper methods to implement pagination on ActiveRecord queries.
Offset-based pagination leverages the `LIMIT` and `OFFSET` SQL clauses to take a specific slice from the table.
Example database query when looking for the second page of the issues within our project:
```sql
SELECT issues.* FROM issues WHERE project_id = 1 ORDER BY id LIMIT 20 OFFSET 20
```
1. Move an imaginary pointer over the table rows and skip 20 rows.
1. Take the next 20 rows.
Notice that the query also orders the rows by the primary key (`id`). When paginating data, specifying the order is very important. Without it, the returned rows are non-deterministic and can confuse the end-user.
#### Page numbers
Example pagination bar:

The Kaminari gem renders a nice pagination bar on the UI with page numbers and, optionally, quick shortcuts: the next, previous, first, and last page buttons. To render these buttons, Kaminari needs to know the number of rows, and for that, a count query is executed.
```sql
SELECT COUNT(*) FROM issues WHERE project_id = 1
```
#### Performance
##### Index coverage
To achieve good performance, the `ORDER BY` clause needs to be covered by an index.
Assuming that we have the following index:
```sql
CREATE INDEX index_on_issues_project_id ON issues (project_id);
```
Let's try to request the first page:
```sql
SELECT issues.* FROM issues WHERE project_id = 1 ORDER BY id LIMIT 20;
```
We can produce the same query in Rails:
```ruby
Issue.where(project_id: 1).page(1).per(20)
```
The SQL query returns a maximum of 20 rows from the database. However, it doesn't mean that the database only reads 20 rows from the disk to produce the result.
This is what happens:
1. The database tries to plan the execution in the most efficient way possible based on the table statistics and the available indexes.
1. The planner knows that we have an index covering the `project_id` column.
1. The database reads all rows using the index on `project_id`.
1. The rows at this point are not sorted, so the database sorts the rows.
1. The database returns the first 20 rows.
If the project has 10,000 issues, the database reads 10,000 rows and sorts them in memory (or on disk). This does not scale well in the long term.
To fix this, we need the following index instead:
```sql
CREATE INDEX index_on_issues_project_id_and_id ON issues (project_id, id);
```
By making the `id` column part of the index, the previous query reads at most 20 rows. The query performs well regardless of the number of issues within a project. With this change, we've also improved the initial page load (when the user loads the issue page).
{{< alert type="note" >}}
Here we're leveraging the ordered property of the b-tree database index. Values in the index are sorted so reading 20 rows does not require further sorting.
{{< /alert >}}
#### Known issues
##### `COUNT(*)` on a large dataset
Kaminari by default executes a count query to determine the number of pages for rendering the page links. Count queries can be quite expensive for a large table. In an unfortunate scenario the queries time out.
To work around this, we can run Kaminari without invoking the count SQL query.
```ruby
Issue.where(project_id: 1).page(1).per(20).without_count
```
In this case, the count query is not executed and the pagination no longer renders the page numbers. We see only the next and previous links.
##### `OFFSET` on a large dataset
When we paginate over a large dataset, we might notice that the response time gets slower and slower. This is due to the `OFFSET` clause that seeks through the rows and skips N rows.
From the user's point of view, this might not always be noticeable. As the user paginates forward, the previous rows might still be in the buffer cache of the database. If the user shares the link with someone else and it's opened after a few minutes or hours, the response time might be significantly higher, or the request might even time out.
When requesting a large page number, the database needs to read `PAGE * PAGE_SIZE` rows. This makes offset pagination **unsuitable for large database tables**. However, with an [optimization technique](offset_pagination_optimization.md), the overall performance of the database queries can be slightly improved.
Example: listing users in the Admin area
Listing users with a very simple SQL query:
```sql
SELECT "users".* FROM "users" ORDER BY "users"."id" DESC LIMIT 20 OFFSET 0
```
The query execution plan shows that this query is efficient, the database only reads 20 rows (`rows=20`):
```plaintext
Limit (cost=0.43..3.19 rows=20 width=1309) (actual time=0.098..2.093 rows=20 loops=1)
Buffers: shared hit=103
-> Index Scan Backward using users_pkey on users (cost=0.43..X rows=X width=1309) (actual time=0.097..2.087 rows=20 loops=1)
Buffers: shared hit=103
Planning Time: 0.333 ms
Execution Time: 2.145 ms
(6 rows)
```
See [Understanding EXPLAIN plans](understanding_explain_plans.md) for more information about reading execution plans.
Let's visit the 50_000th page:
```sql
SELECT "users".* FROM "users" ORDER BY "users"."id" DESC LIMIT 20 OFFSET 999980;
```
The plan shows that the database reads 1_000_000 rows to return 20 rows, with a very high execution time (5.5 seconds):
```plaintext
Limit (cost=137878.89..137881.65 rows=20 width=1309) (actual time=5523.588..5523.667 rows=20 loops=1)
Buffers: shared hit=1007901 read=14774 written=609
I/O Timings: read=420.591 write=57.344
-> Index Scan Backward using users_pkey on users (cost=0.43..X rows=X width=1309) (actual time=0.060..5459.353 rows=1000000 loops=1)
Buffers: shared hit=1007901 read=14774 written=609
I/O Timings: read=420.591 write=57.344
Planning Time: 0.821 ms
Execution Time: 5523.745 ms
(8 rows)
```
We can argue that a typical user does not visit these pages. However, API users could go to very high page numbers (scraping, collecting data).
### Keyset pagination
Keyset pagination addresses the performance concerns of "skipping" previous rows when requesting a large page; however, it's not a drop-in replacement for offset-based pagination. When moving an API endpoint from offset-based pagination to keyset-based pagination, both must be supported. Removing one type of pagination entirely is a [breaking change](../../update/terminology.md#breaking-change).
Keyset pagination is used in both the [GraphQL API](../graphql_guide/pagination.md#keyset-pagination) and the [REST API](../../api/rest/_index.md#keyset-based-pagination).
Consider the following `issues` table:
| `id` | `project_id` |
|------|--------------|
| 1 | 1 |
| 2 | 1 |
| 3 | 2 |
| 4 | 1 |
| 5 | 1 |
| 6 | 2 |
| 7 | 2 |
| 8 | 1 |
| 9 | 1 |
| 10 | 2 |
Let's paginate over the whole table ordered by the primary key (`id`). The query for the first page is the same as the offset pagination query; for simplicity, we use 5 as the page size:
```sql
SELECT "issues".* FROM "issues" ORDER BY "issues"."id" ASC LIMIT 5
```
Notice that we didn't add the `OFFSET` clause.
To get to the next page, we need to extract values that are part of the `ORDER BY` clause from the last row. In this case, we just need the `id`, which is 5. Now we construct the query for the next page:
```sql
SELECT "issues".* FROM "issues" WHERE "issues"."id" > 5 ORDER BY "issues"."id" ASC LIMIT 5
```
Looking at the query execution plan, we can see that this query reads only 5 rows (offset-based pagination would read 10 rows):
```plaintext
Limit (cost=0.56..2.08 rows=5 width=1301) (actual time=0.093..0.137 rows=5 loops=1)
-> Index Scan using issues_pkey on issues (cost=0.56..X rows=X width=1301) (actual time=0.092..0.136 rows=5 loops=1)
Index Cond: (id > 5)
Planning Time: 7.710 ms
Execution Time: 0.224 ms
(5 rows)
```
#### Known issues
##### No page numbers
Offset pagination provides an easy way to request a specific page. We can edit the URL and modify the `page=` URL parameter. Keyset pagination cannot provide page numbers because the paging logic might depend on different columns.
In the previous example, the column is the `id`, so we might see something like this in the URL:
```plaintext
id_after=5
```
In GraphQL, the parameters are serialized to JSON and then encoded:
```plaintext
eyJpZCI6Ijk0NzMzNTk0IiwidXBkYXRlZF9hdCI6IjIwMjEtMDQtMDkgMDg6NTA6MDUuODA1ODg0MDAwIFVUQyJ9
```
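The cursor is a Base64-encoded JSON document; decoding the value above (a quick sketch) reveals the ordering columns and their values:

```ruby
require 'base64'
require 'json'

cursor = 'eyJpZCI6Ijk0NzMzNTk0IiwidXBkYXRlZF9hdCI6IjIwMjEtMDQtMDkgMDg6NTA6MDUuODA1ODg0MDAwIFVUQyJ9'

JSON.parse(Base64.decode64(cursor))
# => {"id"=>"94733594", "updated_at"=>"2021-04-09 08:50:05.805884000 UTC"}
```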
{{< alert type="note" >}}
Pagination parameters are visible to the user, so be careful about which columns we order by.
{{< /alert >}}
Keyset pagination can only provide the next, previous, first, and last pages.
##### Complexity
Building queries when we order by a single column is very easy; however, things get more complex if a tie-breaker or multi-column ordering is used. The complexity increases further if the columns are nullable.
Example: ordering by `id` and `created_at` where `created_at` is nullable; the query for getting the second page:
```sql
SELECT "issues".*
FROM "issues"
WHERE (("issues"."id" > 99
AND "issues"."created_at" = '2021-02-16 11:26:17.408466')
OR ("issues"."created_at" > '2021-02-16 11:26:17.408466')
OR ("issues"."created_at" IS NULL))
ORDER BY "issues"."created_at" DESC NULLS LAST, "issues"."id" DESC
LIMIT 20
```
##### Tooling
A generic keyset pagination library is available within the GitLab project. In most cases, it can easily replace the existing Kaminari-based pagination, with significant performance improvements when dealing with large datasets.
Example:
```ruby
# first page
paginator = Project.order(:created_at, :id).keyset_paginate(per_page: 20)
puts paginator.to_a # records
# next page
cursor = paginator.cursor_for_next_page
paginator = Project.order(:created_at, :id).keyset_paginate(cursor: cursor, per_page: 20)
puts paginator.to_a # records
```
For a comprehensive overview, take a look at the [keyset pagination guide](keyset_pagination.md) page.
#### Performance
Keyset pagination provides stable performance regardless of the number of pages we moved forward. To achieve this performance, the paginated query needs an index that covers all the columns in the `ORDER BY` clause, similar to offset pagination.
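For the `keyset_paginate` example above, a suitable covering index could look like the following (the index name is illustrative):

```sql
CREATE INDEX index_projects_on_created_at_and_id ON projects (created_at, id);
```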
### General performance guidelines
See the [pagination general performance guidelines page](pagination_performance_guidelines.md).
---
stage: Data Access
group: Database
info: Any user with at least the Maintainer role can merge updates to this content. For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Deduplicate database records in a database table
---
This guide describes a strategy for introducing a database-level uniqueness constraint (unique index) to existing database tables with data.
Requirements:
- Attribute modifications (`INSERT`, `UPDATE`) related to the columns happen only via ActiveRecord (the technique depends on AR callbacks).
- Duplications are rare and mostly happen due to concurrent record creation. This can be verified by checking the production database table via teleport (reach out to a database maintainer for help).
The total runtime mainly depends on the number of records in the database table. The migration will require scanning all records; to fit into the
post-deployment migration runtime limit (about 10 minutes), a database table with fewer than 10 million rows can be considered a small table.
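To verify how many duplicates actually exist before choosing a strategy, a query along these lines can be used (based on the `issues` example from the next section):

```sql
SELECT project_id, title, COUNT(*)
FROM issues
GROUP BY project_id, title
HAVING COUNT(*) > 1;
```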
## Deduplication strategy for small tables
The strategy requires 3 milestones. As an example, we're going to deduplicate the `issues` table based on the `title` column where the `title` must be unique for a given `project_id` column.
Milestone 1:
1. Add a new database index (not unique) to the table via post-migration (if not present already).
1. Add model-level uniqueness validation to reduce the likelihood of duplicates (if not present already).
1. Add a transaction-level [advisory lock](https://www.postgresql.org/docs/16/explicit-locking.html#ADVISORY-LOCKS) to prevent creating duplicate records.
The second step on its own will not prevent duplicate records; see the [Rails guides](https://guides.rubyonrails.org/active_record_validations.html#uniqueness) for more information.
Post-migration for creating the index:
```ruby
def up
add_concurrent_index :issues, [:project_id, :title], name: INDEX_NAME
end
def down
remove_concurrent_index_by_name :issues, INDEX_NAME
end
```
The `Issue` model validation and the advisory lock:
```ruby
class Issue < ApplicationRecord
validates :title, uniqueness: { scope: :project_id }
before_validation :prevent_concurrent_inserts
private
# This method will block while another database transaction attempts to insert the same data.
# After the lock is released by the other transaction, the uniqueness validation may fail
# with record not unique validation error.
# Without this block the uniqueness validation wouldn't be able to detect duplicated
# records as transactions can't see each other's changes.
def prevent_concurrent_inserts
return if project_id.nil? || title.nil?
lock_key = ['issues', project_id, title].join('-')
lock_expression = "hashtext(#{connection.quote(lock_key)})"
connection.execute("SELECT pg_advisory_xact_lock(#{lock_expression})")
end
end
```
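For reference, the callback above boils down to a statement like the following (the key is illustrative). `pg_advisory_xact_lock` blocks until the lock is available and releases it automatically when the transaction commits or rolls back:

```sql
SELECT pg_advisory_xact_lock(hashtext('issues-1-Some issue title'));
```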
Milestone 2:
1. Implement the deduplication logic in a post deployment migration.
1. Replace the existing index with a unique index.
How to resolve duplicates (for example, merge attributes, keep the most recent record) depends on the features built on top of the database table. In this example, we keep the most recent record.
```ruby
def up
model = define_batchable_model('issues')
# Single pass over the table
model.each_batch do |batch|
# find duplicated (project_id, title) pairs
duplicates = model
.where("(project_id, title) IN (#{batch.select(:project_id, :title).to_sql})")
.group(:project_id, :title)
.having('COUNT(*) > 1')
.pluck(:project_id, :title)
next if duplicates.empty?
value_list = Arel::Nodes::ValuesList.new(duplicates).to_sql
# Locate all records by (project_id, title) pairs and keep the most recent record.
# The lookup should be fast enough if duplications are rare.
cleanup_query = <<~SQL
WITH duplicated_records AS MATERIALIZED (
SELECT
id,
ROW_NUMBER() OVER (PARTITION BY project_id, title ORDER BY project_id, title, id DESC) AS row_number
FROM issues
WHERE (project_id, title) IN (#{value_list})
ORDER BY project_id, title
)
DELETE FROM issues
WHERE id IN (
SELECT id FROM duplicated_records WHERE row_number > 1
)
SQL
model.connection.execute(cleanup_query)
end
end
def down
# no-op
end
```
{{< alert type="note" >}}
This is a destructive operation with no possibility of rolling back. Make sure that the deduplication logic is tested thoroughly.
{{< /alert >}}
Replacing the old index with a unique index:
```ruby
def up
add_concurrent_index :issues, [:project_id, :title], name: UNIQUE_INDEX_NAME, unique: true
remove_concurrent_index_by_name :issues, INDEX_NAME
end
def down
add_concurrent_index :issues, [:project_id, :title], name: INDEX_NAME
remove_concurrent_index_by_name :issues, UNIQUE_INDEX_NAME
end
```
Milestone 3:
1. Remove the advisory lock by removing the `prevent_concurrent_inserts` ActiveRecord callback method.
{{< alert type="note" >}}
This milestone must be after a [required stop](required_stops.md).
{{< /alert >}}
## Deduplicate strategy for large tables
When deduplicating a large table we can move the batching and the deduplication logic into a [batched background migration](batched_background_migrations.md).
Milestone 1:
1. Add a new database index (not unique) to the table via post migration.
1. Add model-level uniqueness validation to reduce the likelihood of duplicates (if not present already).
1. Add a transaction-level [advisory lock](https://www.postgresql.org/docs/16/explicit-locking.html#ADVISORY-LOCKS) to prevent creating duplicate records.
Milestone 2:
1. Implement the deduplication logic in a batched background migration and enqueue it in a post deployment migration.
Milestone 3:
1. Finalize the batched background migration.
1. Replace the existing index with a unique index.
1. Remove the advisory lock by removing the `prevent_concurrent_inserts` ActiveRecord callback method.
{{< alert type="note" >}}
This milestone must be after a [required stop](required_stops.md).
{{< /alert >}}
---
stage: Data Access
group: Database Frameworks
info: Any user with at least the Maintainer role can merge updates to this content. For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Ordering Table Columns in PostgreSQL
---
For GitLab we require that columns of new tables are ordered to use the
least amount of space. An easy way of doing this is to order them based on the
type size in descending order with variable sizes (`text`, `varchar`, arrays,
`json`, `jsonb`, and so on) at the end.
Similar to C structures, the space of a table is influenced by the order of its
columns. This is because the size of columns is aligned depending on the type of
the following column. Let's consider an example:
- `id` (integer, 4 bytes)
- `name` (text, variable)
- `user_id` (integer, 4 bytes)
The first column is a 4-byte integer. The next is text of variable length. The
`text` data type requires 1-word alignment, and on a 64-bit platform, 1 word is 8
bytes. To meet the alignment requirements, 4 bytes of padding are added right
after the first column, so `id` occupies 4 bytes, followed by 4 bytes of alignment
padding, and only then is `name` stored. Therefore, in this case, 8 bytes
are spent storing a 4-byte integer.
The space between rows is also subject to alignment padding. The `user_id`
column takes only 4 bytes, and on a 64-bit platform, 4 bytes of padding are added
so that the next row can start at a word boundary.
As a result, the actual size of each column would be (omitting variable length
data and 24-byte tuple header): 8 bytes, variable, 8 bytes. This means that
each row requires at least 16 bytes for the two 4-byte integers. If a table
has a few rows this is not an issue. However, once you start storing millions of
rows you can save space by using a different order. For the above example, the
ideal column order would be the following:
- `id` (integer, 4 bytes)
- `user_id` (integer, 4 bytes)
- `name` (text, variable)
or
- `name` (text, variable)
- `id` (integer, 4 bytes)
- `user_id` (integer, 4 bytes)
In these examples, the `id` and `user_id` columns are packed together, which
means we only need 8 bytes to store both of them. This in turn means each row
requires 8 bytes less space.
Since Ruby on Rails 5.1, the default data type for IDs is `bigint`, which uses 8 bytes.
We are using `integer` in the examples to showcase a more realistic reordering scenario.
## Type Sizes
While the [PostgreSQL documentation](https://www.postgresql.org/docs/16/datatype.html) contains plenty
of information, we list the sizes of common types here so they're easier to
look up. Here, "word" refers to the word size, which is 4 bytes on a 32-bit
platform and 8 bytes on a 64-bit platform.
| Type | Size | Alignment needed |
|:-----------------|:-------------------------------------|:-----------|
| `smallint` | 2 bytes | 1 word |
| `integer` | 4 bytes | 1 word |
| `bigint` | 8 bytes | 8 bytes |
| `real` | 4 bytes | 1 word |
| `double precision` | 8 bytes | 8 bytes |
| `boolean` | 1 byte | not needed |
| `text` / `string` | variable, 1 byte plus the data | 1 word |
| `bytea` | variable, 1 or 4 bytes plus the data | 1 word |
| `timestamp` | 8 bytes | 8 bytes |
| `timestamptz` | 8 bytes | 8 bytes |
| `date` | 4 bytes | 1 word |
A "variable" size means the actual size depends on the value being stored. If
PostgreSQL determines this can be embedded directly into a row it may do so, but
for very large values it stores the data externally and store a pointer (of
1 word in size) in the column. Because of this variable sized columns should
always be at the end of a table.
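To inspect the size and alignment of each column of an existing table, the PostgreSQL catalogs can be queried directly; a sketch using the `events` table from the next section:

```sql
SELECT a.attname  AS column_name,
       t.typname  AS type,
       t.typlen   AS size,      -- -1 means variable length
       t.typalign AS alignment  -- c = 1 byte, s = 2 bytes, i = 4 bytes, d = 8 bytes
FROM pg_attribute a
JOIN pg_type t ON t.oid = a.atttypid
WHERE a.attrelid = 'events'::regclass
  AND a.attnum > 0
  AND NOT a.attisdropped
ORDER BY a.attnum;
```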
## Real Example
Let's use the `events` table as an example, which currently has the following
layout:
| Column | Type | Size |
|:--------------|:----------------------------|:---------|
| `id` | integer | 4 bytes |
| `target_type` | character varying | variable |
| `target_id` | integer | 4 bytes |
| `title` | character varying | variable |
| `data` | text | variable |
| `project_id` | integer | 4 bytes |
| `created_at` | timestamp without time zone | 8 bytes |
| `updated_at` | timestamp without time zone | 8 bytes |
| `action` | integer | 4 bytes |
| `author_id` | integer | 4 bytes |
After adding padding to align the columns this would translate to columns being
divided into fixed size chunks as follows:
| Chunk Size | Columns |
|:-----------|:----------------------|
| 8 bytes | `id` |
| variable | `target_type` |
| 8 bytes | `target_id` |
| variable | `title` |
| variable | `data` |
| 8 bytes | `project_id` |
| 8 bytes | `created_at` |
| 8 bytes | `updated_at` |
| 8 bytes | `action`, `author_id` |
This means that excluding the variable sized data and tuple header, we need at
least 8 * 6 = 48 bytes per row.
We can optimize this by using the following column order instead:
| Column | Type | Size |
|:--------------|:----------------------------|:---------|
| `created_at` | timestamp without time zone | 8 bytes |
| `updated_at` | timestamp without time zone | 8 bytes |
| `id` | integer | 4 bytes |
| `target_id` | integer | 4 bytes |
| `project_id` | integer | 4 bytes |
| `action` | integer | 4 bytes |
| `author_id` | integer | 4 bytes |
| `target_type` | character varying | variable |
| `title` | character varying | variable |
| `data` | text | variable |
This would produce the following chunks:
| Chunk Size | Columns |
|:-----------|:-----------------------|
| 8 bytes | `created_at` |
| 8 bytes | `updated_at` |
| 8 bytes | `id`, `target_id` |
| 8 bytes | `project_id`, `action` |
| 8 bytes | `author_id` |
| variable | `target_type` |
| variable | `title` |
| variable | `data` |
Here we only need 40 bytes per row, excluding the variable sized data and 24-byte
tuple header. Saving 8 bytes per row may not sound like much, but for tables as
large as the `events` table it does begin to matter. For example, when storing
80 000 000 rows this translates to a space saving of at least 610 MB, all by
just changing the order of a few columns.
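In a Rails migration, the physical column order follows the declaration order, so the optimized layout can be expressed directly when creating a new table. A simplified sketch (in a real migration the `*_id` columns would typically be `bigint` references):

```ruby
create_table :events do |t| # the bigint id column is created first by default
  t.timestamps null: false  # created_at, updated_at: 8 bytes each
  t.integer :target_id      # 4-byte integers packed together
  t.integer :project_id
  t.integer :action
  t.integer :author_id
  t.text :target_type       # variable-length columns last
  t.text :title
  t.text :data
end
```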
---
stage: Data Access
group: Database
info: Any user with at least the Maintainer role can merge updates to this content. For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Query Count Limits
---
Each controller, API endpoint and Sidekiq worker is allowed to execute up to
100 SQL queries.
If more than 100 SQL queries are executed, this is a
[performance problem](../performance.md) that should be fixed.
## Solving Failing Tests
In test environments, we raise an error when this threshold is exceeded.
When a test fails because it executes more than 100 SQL queries there are two
solutions to this problem:
- Reduce the number of SQL queries that are executed.
- Temporarily disable query limiting for the controller or API endpoint.
You should only resort to disabling query limits when an existing controller or endpoint
is to blame, as in that case reducing the number of SQL queries can take a lot of
effort. Newly added controllers and endpoints are not allowed to execute more
than 100 SQL queries and no exceptions are made for this rule.
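To find out which queries a failing spec actually executes, one option (plain Rails instrumentation rather than a GitLab-specific helper) is to temporarily subscribe to ActiveRecord notifications around the offending call; the request in this sketch is illustrative:

```ruby
it 'stays under the query limit' do
  queries = []

  callback = ->(_name, _start, _finish, _id, payload) do
    queries << payload[:sql] unless payload[:name] == 'SCHEMA'
  end

  ActiveSupport::Notifications.subscribed(callback, 'sql.active_record') do
    get :show, params: { id: project.id }
  end

  puts queries.size
  puts queries.tally.sort_by(&:last).reverse.first(10) # most frequent queries
end
```

The GitLab test suite also ships a `QueryRecorder` spec helper that serves the same purpose.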
## Pipeline Stability
If specs start getting a query limit error in default branch pipelines, follow the [instructions](#disable-query-limiting) to disable the query limit.
Disabling the limit should always be associated with a prioritized issue, so the excessive number of queries can be investigated.
## Disable query limiting
In the event that you have to disable query limits for a controller, you must first
create an issue. This issue should (preferably in the title) mention the
controller or endpoint and include the appropriate labels (`database`,
`performance`, and at least a team specific label such as `Discussion`).
Since [GitLab 17.2](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/157016),
`QueryLimiting.disable` must set a new threshold (not unlimited).
After the issue has been created, you can disable query limits on the code in question. For
Rails controllers it's best to create a `before_action` hook that runs as early
as possible. The called method in turn should call
`Gitlab::QueryLimiting.disable!('issue URL here')`. For example:
```ruby
class MyController < ApplicationController
before_action :disable_query_limiting, only: [:show]
def index
# ...
end
def show
# ...
end
def disable_query_limiting
Gitlab::QueryLimiting.disable!('https://gitlab.com/gitlab-org/...', new_threshold: 200)
end
end
```
By using a `before_action` you don't have to modify the controller method in
question, reducing the likelihood of merge conflicts.
For Grape API endpoints, there unfortunately is no reliable way of running a
hook before a specific endpoint. This means that you have to add the allowlist
call directly into the endpoint like so:
```ruby
get '/projects/:id/foo' do
Gitlab::QueryLimiting.disable!('...', new_threshold: 200)
# ...
end
```
For Sidekiq workers, you will need to add the allowlist directly as well:
```ruby
def perform(args)
Gitlab::QueryLimiting.disable!('...', new_threshold: 200)
# ...
end
```
---
stage: Data Access
group: Database Frameworks
info: Any user with at least the Maintainer role can merge updates to this content.
For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Constraints naming conventions
---
The most common option is to let Rails pick the name for database constraints and indexes or let
PostgreSQL use the defaults (when applicable). However, when defining custom names in Rails, or
working in Go applications where no ORM is used, it is important to follow strict naming conventions
to improve consistency and discoverability.
The table below describes the naming conventions for custom PostgreSQL constraints.
The intent is not to retroactively change names in existing databases but rather ensure consistency of future changes.
| Type | Syntax | Notes | Examples |
|--------------------------|---------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------|
| **Primary Key** | `pk_<table name>` | | `pk_projects` |
| **Foreign Key** | `fk_<table name>_<column name>[_and_<column name>]*_<foreign table name>` | | `fk_projects_group_id_groups` |
| **Index** | `index_<table name>_on_<column name>[_and_<column name>]*[_and_<column name in partial clause>]*` | Index names must be all lowercase. | `index_repositories_on_group_id` |
| **Unique Constraint** | `unique_<table name>_<column name>[_and_<column name>]*` | | `unique_projects_group_id_and_name` |
| **Check Constraint** | `check_<table name>_<column name>[_and_<column name>]*[_<suffix>]?` | The optional suffix should denote the type of validation, such as `length` and `enum`. It can also be used to disambiguate multiple `CHECK` constraints on the same column. | `check_projects_name_length`<br />`check_projects_type_enum`<br />`check_projects_admin1_id_and_admin2_id_differ` |
| **Exclusion Constraint** | `excl_<table name>_<column name>[_and_<column name>]*_[_<suffix>]?` | The optional suffix should denote the type of exclusion being performed. | `excl_reservations_start_at_end_at_no_overlap` |
## Observations
- Check `db/structure.sql` for conflicts.
- Prefixes are preferred over suffixes because they make it easier to quickly identify the type of a given constraint, and to group constraints alphabetically.
- The `_and_` that joins column names can be omitted to keep identifiers under the 63-character length limit defined by PostgreSQL. Additionally, the notation may be abbreviated as needed to stay under this limit.
- For indexes added to solve a very specific problem, it may make sense for the name to reflect their use.
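As an illustration, here is a minimal migration sketch (the `reservations` table and its columns are hypothetical) showing how the conventions above translate into explicitly named indexes and constraints when using GitLab's Rails migration helpers:

```ruby
# Hypothetical migration showing explicitly named constraints and indexes
# that follow the conventions above. Table and column names are illustrative.
class AddConstraintsToReservations < Gitlab::Database::Migration[2.1]
  disable_ddl_transaction!

  def up
    add_concurrent_index :reservations, :group_id,
      name: 'index_reservations_on_group_id'

    add_check_constraint :reservations, 'char_length(name) <= 255',
      'check_reservations_name_length'

    add_concurrent_foreign_key :reservations, :groups,
      column: :group_id, name: 'fk_reservations_group_id_groups'
  end

  def down
    remove_foreign_key_if_exists :reservations, name: 'fk_reservations_group_id_groups'
    remove_check_constraint :reservations, 'check_reservations_name_length'
    remove_concurrent_index_by_name :reservations, 'index_reservations_on_group_id'
  end
end
```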
---
stage: Data Access
group: Database Frameworks
info: Any user with at least the Maintainer role can merge updates to this content.
For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Best practices for data layout and access patterns
---
Certain patterns of data access, and especially data updates, can exacerbate strain
on the database. Avoid them if possible.
This document lists some patterns to avoid, with recommendations for alternatives.
## High-frequency updates, especially to the same row
Avoid single database rows that are updated by many transactions at the same time.
- If many processes attempt to update the same row simultaneously, they queue up
as each transaction locks the row for writing. As this can significantly increase
transaction timings, the Rails connection pools can saturate, leading to
application-wide downtime.
- For each row update, PostgreSQL inserts a new row version and deletes the old one.
In high-traffic scenarios, this approach can cause vacuum and WAL (write-ahead log)
pressure, reducing database performance.
This pattern often happens when an aggregate is too expensive to compute for each
request, so a running tally is kept in the database. If you need such an aggregate,
consider keeping a running total in a single row, plus a small working set of
recently added data, such as individual increments:
- When introducing new data, add it to the working set. These inserts do not
cause lock contention.
- When calculating the aggregate, combine the running total with a live aggregate
from the working set, providing an up-to-date result.
- Add a periodic job that incorporates the working set into the running total and
clears it in a transaction, bounding the amount of work needed by a reader.
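As an illustration, a minimal sketch of this pattern might look like the following. The `CounterTotal` and `CounterIncrement` models are hypothetical and stand in for the running total row and the working set:

```ruby
# 1. Record new data as inserts only, so there is no contention on a shared row.
CounterIncrement.create!(counter_key: 'storage_size', value: 42)

# 2. Read the aggregate by combining the stored total with the live working set.
def current_total(key)
  CounterTotal.find_by(counter_key: key)&.value.to_i +
    CounterIncrement.where(counter_key: key).sum(:value)
end

# 3. A periodic job folds a bounded slice of the working set into the total,
#    deleting only the rows it has summed.
ids = CounterIncrement.where(counter_key: 'storage_size').limit(1000).pluck(:id)
delta = CounterIncrement.where(id: ids).sum(:value)

CounterTotal.transaction do
  CounterTotal.where(counter_key: 'storage_size').update_all(['value = value + ?', delta])
  CounterIncrement.where(id: ids).delete_all
end
```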
## Wide tables
PostgreSQL organizes rows into 8 KB pages, and operates on one page at a time.
By minimizing the width of rows in a table, we improve the following:
- Sequential and bitmap index scan performance, because fewer pages must be
scanned if each contains more rows.
- Vacuum performance, because vacuum can process more rows in each page.
- Update performance, because during a (non-HOT) update, each index must be
updated for every row update.
Mitigating wide tables is one part of the database team's
[100 GB table initiative](../../architecture/blueprints/database_scaling/size-limits.md),
as wider tables can fit fewer rows in 100 GB.
When adding columns to a table, consider if you intend to access the data in the
new columns by itself, in a one-to-one relationship with the other columns of the
table. If so, the new columns could be a good candidate for splitting to a new table.
Several tables have already been split in this way. For example:
- `search_data` is split from `issues`.
- `project_pages_metadata` is split from `projects`.
- `merge_request_diff_details` is split from `merge_request_diffs`.
## Data model trade-offs
Certain tables, like `users`, `namespaces`, and `projects`, can get very wide.
These tables are usually central to the application, and used very often.
Why is this a problem?
- Many of these columns are included in indexes, which leads to index write amplification.
When the number of indexes on the table is more than 16, it affects query planning,
and may lead to [light-weight lock (LWLock) contention](https://gitlab.com/groups/gitlab-org/-/epics/11543).
- Updates in PostgreSQL are implemented as a combination of delete and insert. This means that each column,
even if rarely used, is copied over and over again, on each update. This affects the amount of generated
write ahead log (WAL).
- When there is a column that is frequently updated, each update results in all table columns
being copied. Again, this results in an increase in generated WAL, and creates more work for
auto-vacuum.
- PostgreSQL stores data as rows, or tuples in a page. Wide rows reduce the number of tuples per page,
and this affects read performance.
A possible solution to this problem is to keep only the most important columns on the main table,
and extract the rest into different tables, having one-to-one relationship with the main table.
Good candidates are columns that are either very frequently updated, for example `last_activity_at`,
or columns that are rarely updated and/or used, like activation tokens.
The trade-off that comes with such extraction is that index-only scans are no longer possible.
Instead, the application must either join to the new table or execute an additional query. The performance impacts
of this should be weighed against the benefits of the vertical table split.
There is a very good episode on this topic on the [PostgresFM](https://postgres.fm) podcast,
where @NikolayS of [PostgresAI](https://postgres.ai/) and @michristofides of [PgMustard](https://www.pgmustard.com/)
discuss this topic in more depth - [https://postgres.fm/episodes/data-model-trade-offs](https://postgres.fm/episodes/data-model-trade-offs).
### Example
Let's look at the `users` table, which at the time of writing has 75 columns.
We can see a few groups of columns that match the above criteria, and are good candidates
for extraction:
- OTP-related columns, like `encrypted_otp_secret`, `otp_secret_expires_at`, and so on.
There are a few of these columns, and once populated they should not be updated often (if at all).
- Columns related to email confirmation - `confirmation_token`, `confirmation_sent_at`,
and `confirmed_at`. Once populated these are most likely never updated.
- Timestamps like `password_expires_at`, `last_credential_check_at`, and `admin_email_unsubscribed_at`.
Such columns are either updated very often or not at all; either way, they would be better placed in a separate table.
- Various tokens (and columns related to them), like `unlock_token`, `incoming_email_token`, and `feed_token`.
Let's focus on `users.incoming_email_token` - every user on GitLab.com has one set, and this token is rarely updated.
In order to extract it from `users` into a new table, we'll have to do the following:
1. Release M [example](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/141561)
- Create the new table.
- Update the application to read from the new table, and fall back to the original column when there is no data yet.
- Start to back-fill the new table.
1. Release N [example](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/141833)
- Finalize the background migration doing the back-fill. This should be done in the next release after a [required stop](../../update/upgrade_paths.md).
1. Release N + 1 [example](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/141835)
- Update the application to read and write from the new table only.
- Ignore the original column. This starts the process of safely removing database columns, as described in our [guides](avoiding_downtime_in_migrations.md#dropping-columns).
1. Release N + 2 [example](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/142086)
- Drop the original column.
1. Release N + 3 [example](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/142087)
- Remove the ignore rule for the original column.
While this is a lengthy process, it's needed in order to do the extraction
without disrupting the application. Once completed, the original column and the related index
no longer exist on the `users` table, which results in improved performance.
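For example, the fallback read introduced in release M could look roughly like this sketch. The `user_detail` association is an assumption for illustration; the merge requests linked above contain the actual implementation:

```ruby
# Hypothetical sketch of the release M fallback read: prefer the new table,
# fall back to the legacy `users` column while the backfill is still running.
class User < ApplicationRecord
  has_one :user_detail

  def incoming_email_token
    user_detail&.incoming_email_token || super
  end
end
```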
---
stage: Data Access
group: Database
info: Any user with at least the Maintainer role can merge updates to this content.
For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Migrations for Multiple databases
---
This document describes how to properly write database migrations
for [the decomposed GitLab application using multiple databases](https://gitlab.com/groups/gitlab-org/-/epics/6168).
For more information, see [Multiple databases](multiple_databases.md).
The design for multiple databases (except for the Geo database) assumes
that all decomposed databases have **the same structure** (for example, schema), but **the data is different** in each database. This means that some tables do not contain data on each database.
## Operations
Depending on the used constructs, we can classify migrations to be either:
1. Modifying structure ([DDL - Data Definition Language](https://www.postgresql.org/docs/16/ddl.html)) (for example, `ALTER TABLE`).
1. Modifying data ([DML - Data Manipulation Language](https://www.postgresql.org/docs/16/dml.html)) (for example, `UPDATE`).
1. Performing [other queries](https://www.postgresql.org/docs/16/queries.html) (for example, `SELECT`) that are treated as **DML** for the purposes of our migrations.
**The usage of `Gitlab::Database::Migration[2.0]` requires migrations to always be of a single purpose**.
Migrations cannot mix **DDL** and **DML** changes as the application requires the structure
(as described by `db/structure.sql`) to be exactly the same across all decomposed databases.
### Data Definition Language (DDL)
The DDL migrations are all migrations that:
1. Create or drop a table (for example, `create_table`).
1. Add or remove an index (for example, `add_index`, `add_concurrent_index`).
1. Add or remove a foreign key (for example `add_foreign_key`, `add_concurrent_foreign_key`).
1. Add or remove a column with or without a default value (for example, `add_column`).
1. Create or drop trigger functions (for example, `create_trigger_function`).
1. Attach or detach triggers from tables (for example, `track_record_deletions`, `untrack_record_deletions`).
1. Prepare or unprepare asynchronous indexes (for example, `prepare_async_index`, `unprepare_async_index_by_name`).
1. Truncate a table (for example using the `truncate_tables!` helper method).
As such DDL migrations **CANNOT**:
1. Read or modify data in any form, via SQL statements or ActiveRecord models.
1. Update column values (for example, `update_column_in_batches`).
1. Schedule background migrations (for example, `queue_background_migration_jobs_by_range_at_intervals`).
1. Read the state of feature flags, since they are stored in `main:` (the `features` and `feature_gates` tables).
1. Read application settings (as settings are stored in `main:`).
As the majority of migrations in the GitLab codebase are of the DDL-type,
this is also the default mode of operation and requires no further changes
to the migrations files.
#### Example: perform DDL on all databases
Example migration adding a concurrent index that is treated as change of the structure (DDL)
that is executed on all configured databases.
```ruby
class AddUserIdAndStateIndexToMergeRequestReviewers < Gitlab::Database::Migration[2.1]
disable_ddl_transaction!
INDEX_NAME = 'index_on_merge_request_reviewers_user_id_and_state'
def up
add_concurrent_index :merge_request_reviewers, [:user_id, :state], where: 'state = 2', name: INDEX_NAME
end
def down
remove_concurrent_index_by_name :merge_request_reviewers, INDEX_NAME
end
end
```
#### Example: Add a new table to store in a single database
1. Add the table to the [database dictionary](database_dictionary.md) in [`db/docs/`](https://gitlab.com/gitlab-org/gitlab/-/tree/master/db/docs):
```yaml
table_name: ssh_signatures
description: Description example
introduced_by_url: Merge request link
milestone: Milestone example
feature_categories:
- Feature category example
classes:
- Class example
gitlab_schema: gitlab_main
```
1. Create the table in a schema migration:
```ruby
class CreateSshSignatures < Gitlab::Database::Migration[2.1]
def change
create_table :ssh_signatures do |t|
t.timestamps_with_timezone null: false
t.bigint :project_id, null: false, index: true
t.bigint :key_id, null: false, index: true
t.integer :verification_status, default: 0, null: false, limit: 2
t.binary :commit_sha, null: false, index: { unique: true }
end
end
end
```
### Data Manipulation Language (DML)
The DML migrations are all migrations that:
1. Read data via SQL statements (for example, `SELECT * FROM projects WHERE id=1`).
1. Read data via ActiveRecord models (for example, `User < MigrationRecord`).
1. Create, update or delete data via ActiveRecord models (for example, `User.create!(...)`).
1. Create, update or delete data via SQL statements (for example, `DELETE FROM projects WHERE id=1`).
1. Update columns in batches (for example, `update_column_in_batches(:projects, :archived, true)`).
1. Schedule background migrations (for example, `queue_background_migration_jobs_by_range_at_intervals`).
1. Access application settings (for example, `ApplicationSetting.last` if run for `main:` database).
1. Read and modify feature flags if run for the `main:` database.
The DML migrations **CANNOT**:
1. Make any changes to DDL since this breaks the rule of keeping `structure.sql` coherent across
all decomposed databases.
1. **Read data from another database**.
To indicate the `DML` migration type, a migration must use the `restrict_gitlab_migration gitlab_schema:`
syntax in a migration class. This marks the given migration as DML and restricts it to the database containing the given `gitlab_schema`.
#### Example: perform DML only in context of the database containing the given `gitlab_schema`
Example migration updating `archived` column of `projects` that is executed
only for the database containing `gitlab_main` schema.
```ruby
class UpdateProjectsArchivedState < Gitlab::Database::Migration[2.1]
disable_ddl_transaction!
restrict_gitlab_migration gitlab_schema: :gitlab_main
def up
update_column_in_batches(:projects, :archived, true) do |table, query|
query.where(table[:archived].eq(false)) # rubocop:disable CodeReuse/ActiveRecord
end
end
def down
# no-op
end
end
```
#### Example: usage of `ActiveRecord` classes
A migration using `ActiveRecord` class to perform data manipulation
must use the `MigrationRecord` class. This class is guaranteed to provide
a correct connection in a context of a given migration.
Under the hood, `MigrationRecord == ActiveRecord::Base`, because once `db:migrate`
runs, it switches the active connection with `ActiveRecord::Base.establish_connection :ci`.
To avoid confusion with using `ActiveRecord::Base` directly, `MigrationRecord` is required.
This implies that DML migrations are forbidden from reading data from other
databases. For example, a migration running in the context of `ci:` cannot read feature flags
from `main:`, as no established connection to the other database is present.
```ruby
class UpdateProjectsArchivedState < Gitlab::Database::Migration[2.1]
disable_ddl_transaction!
restrict_gitlab_migration gitlab_schema: :gitlab_main
class Project < MigrationRecord
end
def up
Project.where(archived: false).each_batch do |batch|
batch.update_all(archived: true)
end
end
def down
end
end
```
### The special purpose of `gitlab_shared`
As described in [`gitlab_schema`](multiple_databases.md#the-special-purpose-of-gitlab_shared),
the `gitlab_shared` tables are allowed to contain data across all databases. This implies
that such migrations should run across all databases to modify structure (DDL) or modify data (DML).
As such, migrations accessing `gitlab_shared` do not need to use `restrict_gitlab_migration gitlab_schema:`;
migrations without this restriction run across all databases and are allowed to modify data on each of them.
If the `restrict_gitlab_migration gitlab_schema:` is specified, the `DML` migration
runs only in a context of a database containing the given `gitlab_schema`.
#### Example: run DML `gitlab_shared` migration on all databases
Example migration updating `loose_foreign_keys_deleted_records` table
that is marked in `lib/gitlab/database/gitlab_schemas.yml` as `gitlab_shared`.
This migration is executed across all configured databases.
```ruby
class DeleteAllLooseForeignKeyRecords < Gitlab::Database::Migration[2.1]
disable_ddl_transaction!
def up
execute("DELETE FROM loose_foreign_keys_deleted_records")
end
def down
# no-op
end
end
```
#### Example: run DML `gitlab_shared` only on the database containing the given `gitlab_schema`
Example migration updating `loose_foreign_keys_deleted_records` table
that is marked in `db/docs/loose_foreign_keys_deleted_records.yml` as `gitlab_shared`.
Because this migration restricts itself to `gitlab_ci`, it is executed only
in the context of the database containing the `gitlab_ci` schema.
```ruby
class DeleteCiBuildsLooseForeignKeyRecords < Gitlab::Database::Migration[2.1]
disable_ddl_transaction!
restrict_gitlab_migration gitlab_schema: :gitlab_ci
def up
execute("DELETE FROM loose_foreign_keys_deleted_records WHERE fully_qualified_table_name='ci_builds'")
end
def down
# no-op
end
end
```
### The behavior of skipping migrations
The only migrations that are skipped are the ones performing **DML** changes.
The **DDL** migrations are **always and unconditionally** executed.
The implemented [solution](https://gitlab.com/gitlab-org/gitlab/-/issues/355014#solution-2-use-database_tasks)
uses the `database_tasks:` as a way to indicate which additional database configurations
(in `config/database.yml`) share the same primary database. The database configurations
marked with `database_tasks: false` are exempt from executing `db:migrate` for those
database configurations.
If database configurations do not share databases (that is, all have `database_tasks: true`),
each migration runs for every database configuration:
1. The DDL migration applies all structure changes on all databases.
1. The DML migration runs only in the context of a database containing the given `gitlab_schema:`.
1. If the DML migration is not eligible to run, it is skipped. It's still
marked as executed in `schema_migrations`. While running `db:migrate`, the skipped
migration outputs `Current migration is skipped since it modifies 'gitlab_ci' which is outside of 'gitlab_main, gitlab_shared`.
To prevent loss of migrations when `database_tasks: false` is configured, a dedicated
Rake task, [`gitlab:db:validate_config`](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/83118), is used.
It validates the correctness of `database_tasks:` by checking the database identifiers
of each underlying database configuration. Configurations that share a database are required to have
`database_tasks: false` set. `gitlab:db:validate_config` always runs before `db:migrate`.
## Validation
Validation in a nutshell uses [`pg_query`](https://github.com/pganalyze/pg_query) to analyze
each query and classify tables with information from [`db/docs/`](database_dictionary.md).
The migration is skipped if the specified `gitlab_schema` is outside of a list of schemas
managed by a given database connection (`Gitlab::Database::gitlab_schemas_for_connection`).
The `Gitlab::Database::Migration[2.0]` includes `Gitlab::Database::MigrationHelpers::RestrictGitlabSchema`
which extends the `#migrate` method. For the duration of a migration a dedicated query analyzer
is installed `Gitlab::Database::QueryAnalyzers::RestrictAllowedSchemas` that accepts
a list of allowed schemas as defined by `restrict_gitlab_migration:`. If the executed query
is outside of allowed schemas, it raises an exception.
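As a rough illustration of this classification step, the `pg_query` gem can extract the tables referenced by a statement, which are then mapped to their `gitlab_schema` using the database dictionary. The sketch below simplifies the dictionary lookup to a hash; it is not the actual analyzer code:

```ruby
require 'pg_query'

# Stand-in for the information stored in db/docs/.
GITLAB_SCHEMA_FOR_TABLE = { 'projects' => :gitlab_main, 'ci_builds' => :gitlab_ci }.freeze

# Return the gitlab_schemas touched by a SQL statement.
def schemas_for(sql)
  PgQuery.parse(sql).tables.map { |table| GITLAB_SCHEMA_FOR_TABLE[table] }.compact.uniq
end

schemas_for('UPDATE projects SET archived = true WHERE id = 1')
# => [:gitlab_main]
```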
## Exceptions
Depending on misuse or lack of `restrict_gitlab_migration` various exceptions can be raised
as part of the migration run and prevent the migration from being completed.
### Exception 1: migration running in DDL mode does DML select
```ruby
class UpdateProjectsArchivedState < Gitlab::Database::Migration[2.1]
disable_ddl_transaction!
# Missing:
# restrict_gitlab_migration gitlab_schema: :gitlab_main
def up
update_column_in_batches(:projects, :archived, true) do |table, query|
query.where(table[:archived].eq(false)) # rubocop:disable CodeReuse/ActiveRecord
end
end
def down
# no-op
end
end
```
```plaintext
Select/DML queries (SELECT/UPDATE/DELETE) are disallowed in the DDL (structure) mode
Modifying of 'projects' (gitlab_main) with 'SELECT * FROM projects...
```
The migration does not use `restrict_gitlab_migration`. Its absence indicates a migration
running in **DDL** mode, but the executed payload appears to read data from `projects`.
**The solution** is to add `restrict_gitlab_migration gitlab_schema: :gitlab_main`.
### Exception 2: migration running in DML mode changes the structure
```ruby
class AddUserIdAndStateIndexToMergeRequestReviewers < Gitlab::Database::Migration[2.1]
disable_ddl_transaction!
# restrict_gitlab_migration if defined indicates DML, it should be removed
restrict_gitlab_migration gitlab_schema: :gitlab_main
INDEX_NAME = 'index_on_merge_request_reviewers_user_id_and_state'
def up
add_concurrent_index :merge_request_reviewers, [:user_id, :state], where: 'state = 2', name: INDEX_NAME
end
def down
remove_concurrent_index_by_name :merge_request_reviewers, INDEX_NAME
end
end
```
```plaintext
DDL queries (structure) are disallowed in the Select/DML (SELECT/UPDATE/DELETE) mode.
Modifying of 'merge_request_reviewers' with 'CREATE INDEX...
```
The migration does use `restrict_gitlab_migration`. Its presence indicates **DML** mode,
but the executed payload appears to make structure changes (DDL).
**The solution** is to remove `restrict_gitlab_migration gitlab_schema: :gitlab_main`.
### Exception 3: migration running in DML mode accesses data from a table in another schema
```ruby
class UpdateProjectsArchivedState < Gitlab::Database::Migration[2.1]
disable_ddl_transaction!
# Since it modifies `projects` it should use `gitlab_main`
restrict_gitlab_migration gitlab_schema: :gitlab_ci
def up
update_column_in_batches(:projects, :archived, true) do |table, query|
query.where(table[:archived].eq(false)) # rubocop:disable CodeReuse/ActiveRecord
end
end
def down
# no-op
end
end
```
```plaintext
Select/DML queries (SELECT/UPDATE/DELETE) do access 'projects' (gitlab_main) " \
which is outside of list of allowed schemas: 'gitlab_ci'
```
The migration restricts itself to `gitlab_ci`, but appears to modify
data in `gitlab_main`.
**The solution** is to change `restrict_gitlab_migration gitlab_schema: :gitlab_ci` to `restrict_gitlab_migration gitlab_schema: :gitlab_main`.
### Exception 4: mixing DDL and DML mode
```ruby
class UpdateProjectsArchivedState < Gitlab::Database::Migration[2.1]
disable_ddl_transaction!
# This migration is invalid regardless of specification
# as it cannot modify structure and data at the same time
restrict_gitlab_migration gitlab_schema: :gitlab_ci
def up
add_concurrent_index :merge_request_reviewers, [:user_id, :state], where: 'state = 2', name: 'index_on_merge_request_reviewers'
update_column_in_batches(:projects, :archived, true) do |table, query|
query.where(table[:archived].eq(false)) # rubocop:disable CodeReuse/ActiveRecord
end
end
def down
# no-op
end
end
```
Migrations that mix **DDL** and **DML** raise one of the prior exceptions,
depending on the ordering of operations.
## Upcoming changes on multiple database migrations
The `restrict_gitlab_migration` using `gitlab_schema:` is considered a first iteration
of this feature for running migrations selectively depending on context. It is possible
to add additional restrictions to DML-only migrations (as the structure coherency is likely
to stay as-is until further notice) to restrict when they run.
A potential extension is to limit running DML migrations to specific environments only:
```ruby
restrict_gitlab_migration gitlab_schema: :gitlab_main, gitlab_env: :gitlab_com
```
## Background migrations
When you use:
- Background migrations with `track_jobs` set to `true` or
- Batched background migrations
The migration has to write to a jobs table. All of the
jobs tables used by background migrations are marked as `gitlab_shared`.
You can use these migrations when migrating tables in any database.
However, when queuing the batches, you must set `restrict_gitlab_migration` based on the
table you are iterating over. If you are updating all `projects`, for example, then you would set
`restrict_gitlab_migration gitlab_schema: :gitlab_main`. If, however, you are
updating all `ci_pipelines`, you would set
`restrict_gitlab_migration gitlab_schema: :gitlab_ci`.
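For example, a post-deployment migration that queues a batched background migration iterating over `projects` might look like this sketch; the job class name and batch sizes are assumptions for illustration:

```ruby
# Hypothetical migration queueing a batched background migration over `projects`,
# so it is restricted to the database containing the gitlab_main schema.
class QueueBackfillProjectsExample < Gitlab::Database::Migration[2.1]
  restrict_gitlab_migration gitlab_schema: :gitlab_main

  MIGRATION = 'BackfillProjectsExample'

  def up
    queue_batched_background_migration(
      MIGRATION,
      :projects,
      :id,
      batch_size: 1_000,
      sub_batch_size: 100
    )
  end

  def down
    delete_batched_background_migration(MIGRATION, :projects, :id, [])
  end
end
```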
As with all DML migrations, you cannot query another database outside of
`restrict_gitlab_migration` or `gitlab_shared`. If you need to query another database,
separate the migrations.
Because the actual migration logic (not the queueing step) for background
migrations runs in a Sidekiq worker, the logic can perform DML queries on
tables in any database, just like any ordinary Sidekiq worker can.
## How to determine `gitlab_schema` for a given table
See [database dictionary](database_dictionary.md).
|
---
stage: Data Access
group: Database
info: Any user with at least the Maintainer role can merge updates to this content.
For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Migrations for Multiple databases
breadcrumbs:
- doc
- development
- database
---
This document describes how to properly write database migrations
for [the decomposed GitLab application using multiple databases](https://gitlab.com/groups/gitlab-org/-/epics/6168).
For more information, see [Multiple databases](multiple_databases.md).
The design for multiple databases (except for the Geo database) assumes
that all decomposed databases have **the same structure** (for example, schema), but **the data is different** in each database. This means that some tables do not contain data on each database.
## Operations
Depending on the used constructs, we can classify migrations to be either:
1. Modifying structure ([DDL - Data Definition Language](https://www.postgresql.org/docs/16/ddl.html)) (for example, `ALTER TABLE`).
1. Modifying data ([DML - Data Manipulation Language](https://www.postgresql.org/docs/16/dml.html)) (for example, `UPDATE`).
1. Performing [other queries](https://www.postgresql.org/docs/16/queries.html) (for example, `SELECT`) that are treated as **DML** for the purposes of our migrations.
**The usage of `Gitlab::Database::Migration[2.0]` requires migrations to always be of a single purpose**.
Migrations cannot mix **DDL** and **DML** changes as the application requires the structure
(as described by `db/structure.sql`) to be exactly the same across all decomposed databases.
### Data Definition Language (DDL)
The DDL migrations are all migrations that:
1. Create or drop a table (for example, `create_table`).
1. Add or remove an index (for example, `add_index`, `add_concurrent_index`).
1. Add or remove a foreign key (for example `add_foreign_key`, `add_concurrent_foreign_key`).
1. Add or remove a column with or without a default value (for example, `add_column`).
1. Create or drop trigger functions (for example, `create_trigger_function`).
1. Attach or detach triggers from tables (for example, `track_record_deletions`, `untrack_record_deletions`).
1. Prepare or not asynchronous indexes (for example, `prepare_async_index`, `unprepare_async_index_by_name`).
1. Truncate a table (for example using the `truncate_tables!` helper method).
As such DDL migrations **CANNOT**:
1. Read or modify data in any form, via SQL statements or ActiveRecord models.
1. Update column values (for example, `update_column_in_batches`).
1. Schedule background migrations (for example, `queue_background_migration_jobs_by_range_at_intervals`).
1. Read the state of feature flags since they are stored in `main:` (a `features` and `feature_gates`).
1. Read application settings (as settings are stored in `main:`).
As the majority of migrations in the GitLab codebase are of the DDL-type,
this is also the default mode of operation and requires no further changes
to the migrations files.
#### Example: perform DDL on all databases
Example migration adding a concurrent index that is treated as change of the structure (DDL)
that is executed on all configured databases.
```ruby
class AddUserIdAndStateIndexToMergeRequestReviewers < Gitlab::Database::Migration[2.1]
disable_ddl_transaction!
INDEX_NAME = 'index_on_merge_request_reviewers_user_id_and_state'
def up
add_concurrent_index :merge_request_reviewers, [:user_id, :state], where: 'state = 2', name: INDEX_NAME
end
def down
remove_concurrent_index_by_name :merge_request_reviewers, INDEX_NAME
end
end
```
#### Example: Add a new table to store in a single database
1. Add the table to the [database dictionary](database_dictionary.md) in [`db/docs/`](https://gitlab.com/gitlab-org/gitlab/-/tree/master/db/docs):
```yaml
table_name: ssh_signatures
description: Description example
introduced_by_url: Merge request link
milestone: Milestone example
feature_categories:
- Feature category example
classes:
- Class example
gitlab_schema: gitlab_main
```
1. Create the table in a schema migration:
```ruby
class CreateSshSignatures < Gitlab::Database::Migration[2.1]
def change
create_table :ssh_signatures do |t|
t.timestamps_with_timezone null: false
t.bigint :project_id, null: false, index: true
t.bigint :key_id, null: false, index: true
t.integer :verification_status, default: 0, null: false, limit: 2
t.binary :commit_sha, null: false, index: { unique: true }
end
end
end
```
### Data Manipulation Language (DML)
The DML migrations are all migrations that:
1. Read data via SQL statements (for example, `SELECT * FROM projects WHERE id=1`).
1. Read data via ActiveRecord models (for example, `User < MigrationRecord`).
1. Create, update or delete data via ActiveRecord models (for example, `User.create!(...)`).
1. Create, update or delete data via SQL statements (for example, `DELETE FROM projects WHERE id=1`).
1. Update columns in batches (for example, `update_column_in_batches(:projects, :archived, true)`).
1. Schedule background migrations (for example, `queue_background_migration_jobs_by_range_at_intervals`).
1. Access application settings (for example, `ApplicationSetting.last` if run for `main:` database).
1. Read and modify feature flags if run for the `main:` database.
The DML migrations **CANNOT**:
1. Make any changes to DDL since this breaks the rule of keeping `structure.sql` coherent across
all decomposed databases.
1. **Read data from another database**.
To indicate the `DML` migration type, a migration must use the `restrict_gitlab_migration gitlab_schema:`
syntax in a migration class. This marks the given migration as DML and restricts access to it.
#### Example: perform DML only in context of the database containing the given `gitlab_schema`
Example migration updating `archived` column of `projects` that is executed
only for the database containing `gitlab_main` schema.
```ruby
class UpdateProjectsArchivedState < Gitlab::Database::Migration[2.1]
disable_ddl_transaction!
restrict_gitlab_migration gitlab_schema: :gitlab_main
def up
update_column_in_batches(:projects, :archived, true) do |table, query|
query.where(table[:archived].eq(false)) # rubocop:disable CodeReuse/ActiveRecord
end
end
def down
# no-op
end
end
```
#### Example: usage of `ActiveRecord` classes
A migration using `ActiveRecord` class to perform data manipulation
must use the `MigrationRecord` class. This class is guaranteed to provide
a correct connection in a context of a given migration.
Underneath the `MigrationRecord == ActiveRecord::Base`, as once the `db:migrate`
runs, it switches the active connection of `ActiveRecord::Base.establish_connection :ci`.
To avoid confusion to using the `ActiveRecord::Base`, `MigrationRecord` is required.
This implies that DML migrations are forbidden to read data from other
databases. For example, running migration in context of `ci:` and reading feature flags
from `main:`, as no established connection to another database is present.
```ruby
class UpdateProjectsArchivedState < Gitlab::Database::Migration[2.1]
disable_ddl_transaction!
restrict_gitlab_migration gitlab_schema: :gitlab_main
class Project < MigrationRecord
end
def up
Project.where(archived: false).each_batch of |batch|
batch.update_all(archived: true)
end
end
def down
end
end
```
### The special purpose of `gitlab_shared`
As described in [`gitlab_schema`](multiple_databases.md#the-special-purpose-of-gitlab_shared),
the `gitlab_shared` tables are allowed to contain data across all databases. This implies
that such migrations should run across all databases to modify structure (DDL) or modify data (DML).
As such migrations accessing `gitlab_shared` do not need to use `restrict_gitlab_migration gitlab_schema:`,
migrations without restriction run across all databases and are allowed to modify data on each of them.
If the `restrict_gitlab_migration gitlab_schema:` is specified, the `DML` migration
runs only in a context of a database containing the given `gitlab_schema`.
#### Example: run DML `gitlab_shared` migration on all databases
Example migration updating `loose_foreign_keys_deleted_records` table
that is marked in `lib/gitlab/database/gitlab_schemas.yml` as `gitlab_shared`.
This migration is executed across all configured databases.
```ruby
class DeleteAllLooseForeignKeyRecords < Gitlab::Database::Migration[2.1]
disable_ddl_transaction!
def up
execute("DELETE FROM loose_foreign_keys_deleted_records")
end
def down
# no-op
end
end
```
#### Example: run DML `gitlab_shared` only on the database containing the given `gitlab_schema`
Example migration updating `loose_foreign_keys_deleted_records` table
that is marked in `db/docs/loose_foreign_keys_deleted_records.yml` as `gitlab_shared`.
This migration since it configures restriction on `gitlab_ci` is executed only
in context of database containing `gitlab_ci` schema.
```ruby
class DeleteCiBuildsLooseForeignKeyRecords < Gitlab::Database::Migration[2.1]
disable_ddl_transaction!
restrict_gitlab_migration gitlab_schema: :gitlab_ci
def up
execute("DELETE FROM loose_foreign_keys_deleted_records WHERE fully_qualified_table_name='ci_builds'")
end
def down
# no-op
end
end
```
### The behavior of skipping migrations
The only migrations that are skipped are the ones performing **DML** changes.
The **DDL** migrations are **always and unconditionally** executed.
The implemented [solution](https://gitlab.com/gitlab-org/gitlab/-/issues/355014#solution-2-use-database_tasks)
uses the `database_tasks:` as a way to indicate which additional database configurations
(in `config/database.yml`) share the same primary database. The database configurations
marked with `database_tasks: false` are exempt from executing `db:migrate` for those
database configurations.
If database configurations do not share databases (all do have `database_tasks: true`),
each migration runs for every database configuration:
1. The DDL migration applies all structure changes on all databases.
1. The DML migration runs only in the context of a database containing the given `gitlab_schema:`.
1. If the DML migration is not eligible to run, it is skipped. It's still
marked as executed in `schema_migrations`. While running `db:migrate`, the skipped
migration outputs `Current migration is skipped since it modifies 'gitlab_ci' which is outside of 'gitlab_main, gitlab_shared`.
To prevent loss of migrations if the `database_tasks: false` is configured, a dedicated
Rake task is used [`gitlab:db:validate_config`](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/83118).
The `gitlab:db:validate_config` validates the correctness of `database_tasks:` by checking database identifiers
of each underlying database configuration. The ones that share the database are required to have
the `database_tasks: false` set. `gitlab:db:validate_config` always runs before `db:migrate`.
## Validation
Validation in a nutshell uses [`pg_query`](https://github.com/pganalyze/pg_query) to analyze
each query and classify tables with information from [`db/docs/`](database_dictionary.md).
The migration is skipped if the specified `gitlab_schema` is outside of a list of schemas
managed by a given database connection (`Gitlab::Database::gitlab_schemas_for_connection`).
The `Gitlab::Database::Migration[2.0]` includes `Gitlab::Database::MigrationHelpers::RestrictGitlabSchema`
which extends the `#migrate` method. For the duration of a migration a dedicated query analyzer
is installed `Gitlab::Database::QueryAnalyzers::RestrictAllowedSchemas` that accepts
a list of allowed schemas as defined by `restrict_gitlab_migration:`. If the executed query
is outside of allowed schemas, it raises an exception.
## Exceptions
Depending on misuse or lack of `restrict_gitlab_migration` various exceptions can be raised
as part of the migration run and prevent the migration from being completed.
### Exception 1: migration running in DDL mode does DML select
```ruby
class UpdateProjectsArchivedState < Gitlab::Database::Migration[2.1]
disable_ddl_transaction!
# Missing:
# restrict_gitlab_migration gitlab_schema: :gitlab_main
def up
update_column_in_batches(:projects, :archived, true) do |table, query|
query.where(table[:archived].eq(false)) # rubocop:disable CodeReuse/ActiveRecord
end
end
def down
# no-op
end
end
```
```plaintext
Select/DML queries (SELECT/UPDATE/DELETE) are disallowed in the DDL (structure) mode
Modifying of 'projects' (gitlab_main) with 'SELECT * FROM projects...
```
The current migration do not use `restrict_gitlab_migration`. The lack indicates a migration
running in **DDL** mode, but the executed payload appears to be reading data from `projects`.
**The solution** is to add `restrict_gitlab_migration gitlab_schema: :gitlab_main`.
### Exception 2: migration running in DML mode changes the structure
```ruby
class AddUserIdAndStateIndexToMergeRequestReviewers < Gitlab::Database::Migration[2.1]
disable_ddl_transaction!
# restrict_gitlab_migration if defined indicates DML, it should be removed
restrict_gitlab_migration gitlab_schema: :gitlab_main
INDEX_NAME = 'index_on_merge_request_reviewers_user_id_and_state'
def up
add_concurrent_index :merge_request_reviewers, [:user_id, :state], where: 'state = 2', name: INDEX_NAME
end
def down
remove_concurrent_index_by_name :merge_request_reviewers, INDEX_NAME
end
end
```
```plaintext
DDL queries (structure) are disallowed in the Select/DML (SELECT/UPDATE/DELETE) mode.
Modifying of 'merge_request_reviewers' with 'CREATE INDEX...
```
The current migration do use `restrict_gitlab_migration`. The presence indicates **DML** mode,
but the executed payload appears to be doing structure changes (DDL).
**The solution** is to remove `restrict_gitlab_migration gitlab_schema: :gitlab_main`.
### Exception 3: migration running in DML mode accesses data from a table in another schema
```ruby
class UpdateProjectsArchivedState < Gitlab::Database::Migration[2.1]
disable_ddl_transaction!
# Since it modifies `projects` it should use `gitlab_main`
restrict_gitlab_migration gitlab_schema: :gitlab_ci
def up
update_column_in_batches(:projects, :archived, true) do |table, query|
query.where(table[:archived].eq(false)) # rubocop:disable CodeReuse/ActiveRecord
end
end
def down
# no-op
end
end
```
```plaintext
Select/DML queries (SELECT/UPDATE/DELETE) do access 'projects' (gitlab_main) " \
which is outside of list of allowed schemas: 'gitlab_ci'
```
The current migration do restrict the migration to `gitlab_ci`, but appears to modify
data in `gitlab_main`.
**The solution** is to change `restrict_gitlab_migration gitlab_schema: :gitlab_ci`.
### Exception 4: mixing DDL and DML mode
```ruby
class UpdateProjectsArchivedState < Gitlab::Database::Migration[2.1]
disable_ddl_transaction!
# This migration is invalid regardless of specification
# as it cannot modify structure and data at the same time
restrict_gitlab_migration gitlab_schema: :gitlab_ci
def up
add_concurrent_index :merge_request_reviewers, [:user_id, :state], where: 'state = 2', name: 'index_on_merge_request_reviewers'
update_column_in_batches(:projects, :archived, true) do |table, query|
query.where(table[:archived].eq(false)) # rubocop:disable CodeReuse/ActiveRecord
end
end
def down
# no-op
end
end
```
The migrations mixing **DDL** and **DML** depending on ordering of operations raises
one of the prior exceptions.
## Upcoming changes on multiple database migrations
The `restrict_gitlab_migration` using `gitlab_schema:` is considered as a first iteration
of this feature for running migrations selectively depending on a context. It is possible
to add additional restrictions to DML-only migrations (as the structure coherency is likely
to stay as-is until further notice) to restrict when they run.
A Potential extension is to limit running DML migration only to specific environments:
```ruby
restrict_gitlab_migration gitlab_schema: :gitlab_main, gitlab_env: :gitlab_com
```
## Background migrations
When you use:
- Background migrations with `track_jobs` set to `true` or
- Batched background migrations
The migration has to write to a jobs table. All of the
jobs tables used by background migrations are marked as `gitlab_shared`.
You can use these migrations when migrating tables in any database.
However, when queuing the batches, you must set `restrict_gitlab_migration` based on the
table you are iterating over. If you are updating all `projects`, for example, then you would set
`restrict_gitlab_migration gitlab_schema: :gitlab_main`. If, however, you are
updating all `ci_pipelines`, you would set
`restrict_gitlab_migration gitlab_schema: :gitlab_ci`.
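For example, a queueing migration that iterates over `projects` might look like the following
minimal sketch (the class and background migration names are hypothetical):

```ruby
class QueueBackfillProjectsExample < Gitlab::Database::Migration[2.1]
  MIGRATION = 'BackfillProjectsExample'

  disable_ddl_transaction!

  # Batches iterate over `projects`, so the queueing step is a DML migration
  # restricted to the `gitlab_main` schema.
  restrict_gitlab_migration gitlab_schema: :gitlab_main

  def up
    queue_batched_background_migration(MIGRATION, :projects, :id)
  end

  def down
    delete_batched_background_migration(MIGRATION, :projects, :id, [])
  end
end
```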
As with all DML migrations, you cannot query another database outside of
`restrict_gitlab_migration` or `gitlab_shared`. If you need to query another database,
separate the migrations.
Because the actual migration logic (not the queueing step) for background
migrations runs in a Sidekiq worker, the logic can perform DML queries on
tables in any database, just like any ordinary Sidekiq worker can.
## How to determine `gitlab_schema` for a given table
See [database dictionary](database_dictionary.md).
|
https://docs.gitlab.com/development/database_debugging
|
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/development/database_debugging.md
|
2025-08-13
|
doc/development/database
|
[
"doc",
"development",
"database"
] |
database_debugging.md
|
Data Access
|
Database Frameworks
|
Any user with at least the Maintainer role can merge updates to this content. For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
|
Troubleshooting and debugging the database
| null |
This section gives you some copy-pasta to use as a reference when you
run into some head-banging database problems.
A first step is to search for your error in Slack, or search for `GitLab <my error>` with Google.
Available `RAILS_ENV`:
- `production` (generally not for your main GDK database, but you might need this for other installations such as Omnibus).
- `development` (this is your main GDK db).
- `test` (used for tests like RSpec).
## Delete everything and start over
If you just want to delete everything and start over with an empty DB (approximately 1 minute):
```shell
bundle exec rake db:reset RAILS_ENV=development
```
If you want to seed the empty DB with sample data (approximately 4 minutes):
```shell
bundle exec rake dev:setup
```
If you just want to delete everything and start over with sample data (approximately 4 minutes), use the following command. It
also does `db:reset` and runs DB-specific migrations:
```shell
bundle exec rake db:setup RAILS_ENV=development
```
If your test DB is giving you problems, it is safe to delete everything because it doesn't contain important
data:
```shell
bundle exec rake db:reset RAILS_ENV=test
```
## Migration wrangling
- `bundle exec rake db:migrate RAILS_ENV=development`: Execute any pending migrations that you might have picked up from a MR
- `bundle exec rake db:migrate:status RAILS_ENV=development`: Check if all migrations are `up` or `down`
- `bundle exec rake db:migrate:down:main VERSION=20170926203418 RAILS_ENV=development`: Tear down a migration
- `bundle exec rake db:migrate:up:main VERSION=20170926203418 RAILS_ENV=development`: Set up a migration
- `bundle exec rake db:migrate:redo:main VERSION=20170926203418 RAILS_ENV=development`: Re-run a specific migration
Replace `main` with `ci` in the above commands to execute them against the `ci` database instead of `main`.
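For example, to roll back the same migration on the `ci` database:

```shell
bundle exec rake db:migrate:down:ci VERSION=20170926203418 RAILS_ENV=development
```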
## Manually access the database
Access the database with one of these commands. They all get you to the same place.
```shell
gdk psql -d gitlabhq_development
bundle exec rails dbconsole -e development
bundle exec rails db -e development
```
- `\q`: Quit/exit
- `\dt`: List all tables
- `\d+ issues`: List columns for `issues` table
- `CREATE TABLE board_labels();`: Create a table called `board_labels`
- `SELECT * FROM schema_migrations WHERE version = '20170926203418';`: Check if a migration was run
- `DELETE FROM schema_migrations WHERE version = '20170926203418';`: Manually remove a migration
## Access the database with a GUI
Most GUIs (DataGrip, RubyMine, DBeaver) require a TCP connection to the database, but by default
the database runs on a UNIX socket. To be able to access the database from these tools, some steps
are needed:
1. On the GDK root directory, run:
```shell
gdk config set postgresql.host localhost
```
1. Open your `gdk.yml`, and confirm that it has the following lines:
```yaml
postgresql:
host: localhost
```
1. Reconfigure GDK:
```shell
gdk reconfigure
```
1. On your database GUI, select `localhost` as host, `5432` as port and `gitlabhq_development` as database.
You can also use the connection string `postgresql://localhost:5432/gitlabhq_development`.
The new connection should be working now.
## Access the GDK database with Visual Studio Code
Create a database connection using the PostgreSQL extension in Visual Studio Code to access and
explore the GDK database.
Prerequisites:
- [Visual Studio (VS) Code](https://code.visualstudio.com/download).
- [PostgreSQL](https://marketplace.visualstudio.com/items?itemName=ckolkman.vscode-postgres) VS Code extension.
To create a database connection:
1. In the activity bar, select the **PostgreSQL Explorer** icon.
1. From the opened pane, select **+** to add a new database connection:
1. Enter the **hostname** of the database. Use the path to the PostgreSQL folder in your GDK directory.
- Example: `/dev/gitlab-development-kit/postgresql`
1. Enter a **PostgreSQL user to authenticate as**.
Use your local username unless otherwise specified during PostgreSQL installation.
To verify your PostgreSQL username:
1. Ensure you are in the `gitlab` directory.
1. Access the PostgreSQL database. Run `rails db`. The output should look like:
```shell
psql (14.9)
Type "help" for help.
gitlabhq_development=#
```
1. In the returned PostgreSQL prompt, run `\conninfo` to display the connected user and
the port used to establish the connection. For example:
```shell
You are connected to database "gitlabhq_development" as user "root" on host "localhost" (address "127.0.0.1") at port "5432".
```
1. When prompted to enter the **password of the PostgreSQL user**, enter the password you set or leave the field blank.
- As you are logged in to the same machine that the Postgres server is running on, a password is not required.
1. Enter **Port number to connect to**. The default port number is `5432`.
1. In the **use an SSL connection?** field, select the appropriate connection for your
installation. The options are:
- **Use Secure Connection**
- **Standard Connection** (default)
1. In the optional **database to connect to** field, enter `gitlabhq_development`.
1. In the **display name for the database connection** field, enter `gitlabhq_development`.
Your `gitlabhq_development` database connection is now displayed in the **PostgreSQL Explorer** pane.
Use the arrows to expand and explore the contents of the GDK database.
If you cannot connect, first ensure that GDK is running and try again. For further instructions on how
to use the PostgreSQL Explorer extension for VS Code, see
the [usage section](https://marketplace.visualstudio.com/items?itemName=ckolkman.vscode-postgres#usage)
of the extension's documentation.
## FAQ
### `ActiveRecord::PendingMigrationError` with Spring
When running specs with the [Spring pre-loader](../rake_tasks.md#speed-up-tests-rake-tasks-and-migrations),
the test database can get into a corrupted state. Trying to run the migration or
dropping/resetting the test database has no effect.
```shell
$ bundle exec spring rspec some_spec.rb
...
Failure/Error: ActiveRecord::Migration.maintain_test_schema!
ActiveRecord::PendingMigrationError:
Migrations are pending. To resolve this issue, run:
bin/rake db:migrate RAILS_ENV=test
# ~/.rvm/gems/ruby-2.3.3/gems/activerecord-4.2.10/lib/active_record/migration.rb:392:in `check_pending!'
...
0 examples, 0 failures, 1 error occurred outside of examples
```
To resolve, you can kill the Spring server and app processes that live between spec runs.
```shell
$ ps aux | grep spring
eric 87304 1.3 2.9 3080836 482596 ?? Ss 10:12AM 4:08.36 spring app | gitlab | started 6 hours ago | test mode
eric 37709 0.0 0.0 2518640 7524 s006 S Wed11AM 0:00.79 spring server | gitlab | started 29 hours ago
$ kill 87304
$ kill 37709
```
### db:migrate `database version is too old to be migrated` error
Users receive this error when `db:migrate` detects that the current schema version
is older than the `MIN_SCHEMA_VERSION` defined in the `Gitlab::Database` library
module.
Over time we clean up or combine old migrations in the codebase, so it is not always
possible to migrate GitLab from every previous version.
In some cases you might want to bypass this check. For example, if you were on a version
of the GitLab schema later than the `MIN_SCHEMA_VERSION`, and then rolled back to
an older migration from before that version. In this case, to migrate forward again,
set the `SKIP_SCHEMA_VERSION_CHECK` environment variable.
```shell
bundle exec rake db:migrate SKIP_SCHEMA_VERSION_CHECK=true
```
## Performance issues
### Reduce connection overhead with connection pooling
Creating new database connections is not free, and in PostgreSQL specifically, it requires
forking an entire process to handle each new one. In case a connection lives for a very long time,
this is no problem. However, forking a process for several small queries can turn out to be costly.
If left unattended, peaks of new database connections can cause performance degradation,
or even lead to a complete outage.
A proven solution for instances that deal with surges of small, short-lived database connections
is to implement [PgBouncer](../../administration/postgresql/pgbouncer.md#pgbouncer-as-part-of-a-fault-tolerant-gitlab-installation) as a connection pooler.
This pool can hold thousands of connections with almost no overhead. The drawback is the addition of
a small amount of latency, in exchange for performance improvements that can exceed 90%, depending on the usage patterns.
PgBouncer can be fine-tuned to fit different installations. See our documentation on
[fine-tuning PgBouncer](../../administration/postgresql/pgbouncer.md#fine-tuning) for more information.
### Run ANALYZE to regenerate database statistics
The `ANALYZE` command is a good first approach for solving many performance issues.
By regenerating table statistics, the query planner creates more efficient query execution paths.
Up-to-date statistics never hurt!
- For Linux packages, run:
```shell
gitlab-psql -c 'SET statement_timeout = 0; ANALYZE VERBOSE;'
```
- On the SQL prompt, run:
```sql
-- needed because this is likely to run longer than the default statement_timeout
SET statement_timeout = 0;
ANALYZE VERBOSE;
```
### Collect data on ACTIVE workload
Active queries are the only ones actually consuming significant resources from the database.
This query gathers meta information from all existing **active** queries, along with:
- their age
- originating service
- `wait_event` (if it's in the waiting state)
- other possibly relevant information:
```sql
-- long queries are usually easier to read with the fields arranged vertically
\x
SELECT
pid
,datname
,usename
,application_name
,client_hostname
,backend_start
,query_start
,query
,age(now(), query_start) AS "age"
,state
,wait_event
,wait_event_type
,backend_type
FROM pg_stat_activity
WHERE state = 'active';
```
This query captures a single snapshot, so consider running the query 3-5 times
in a few minutes while the environment is unresponsive:
```sql
-- redirect output to a file
-- this location must be writable by `gitlab-psql`
\o /tmp/active1304.out
--
-- now execute the query above
--
-- all output goes to the file - if the prompt is = then it ran
-- cancel writing output
\o
```
[This Python script](https://gitlab.com/-/snippets/3680015) can help you parse the
output of `pg_stat_activity` into numbers that are easier to understand and correlate to performance issues.
### Investigate queries that seem slow
When you identify a query is taking too long to finish, or hogging too much database resources,
check how the query planner is executing it with `EXPLAIN`:
```sql
EXPLAIN (ANALYZE, BUFFERS) SELECT ... FROM ...
```
`BUFFERS` also shows approximately how much memory is involved. I/O might be causing
the problem, so make sure to add `BUFFERS` when running `EXPLAIN`.
If the database is sometimes performant, and sometimes slow, capture this output
for the same queries while the environment is in either state.
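For example, a hypothetical slow query against the `issues` table could be analyzed like this
(the table and filter are placeholders; substitute your own query):

```sql
EXPLAIN (ANALYZE, BUFFERS)
SELECT id, title
FROM issues
WHERE project_id = 1234
ORDER BY created_at DESC
LIMIT 20;
```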
### Investigate index bloat
Index bloat shouldn't typically cause noticeable performance problems, but it can lead to high disk usage, particularly if there are [autovacuum issues](https://gitlab.com/gitlab-org/gitlab/-/issues/412672#note_1401807864).
The query below calculates the bloat percentage from the `postgres_index_bloat_estimates`
table, and orders the results by percentage value. PostgreSQL needs some amount of
bloat to run correctly, so around 25% still represents standard behavior.
```sql
select a.identifier, a.bloat_size_bytes, b.tablename, b.ondisk_size_bytes,
(a.bloat_size_bytes/b.ondisk_size_bytes::float)*100 as percentage
from postgres_index_bloat_estimates a
join postgres_indexes b on a.identifier=b.identifier
where
-- to ensure the percentage calculation doesn't encounter zeroes
a.bloat_size_bytes>0 and
b.ondisk_size_bytes>1000000000
order by percentage desc;
```
### Rebuild indexes
If you identify a bloated table, you can rebuild its indexes using the query below.
You should also re-run [ANALYZE](#run-analyze-to-regenerate-database-statistics)
afterward, as statistics can be reset after indexes are rebuilt.
```sql
SET statement_timeout = 0;
REINDEX TABLE CONCURRENTLY <table_name>;
```
Monitor the index rebuild process by running the query below with `\watch 30` added after the semicolon:
```sql
SELECT
t.tablename, indexname, c.reltuples AS num_rows,
pg_size_pretty(pg_relation_size(quote_ident(t.tablename)::text)) AS table_size,
pg_size_pretty(pg_relation_size(quote_ident(indexrelname)::text)) AS index_size,
CASE WHEN indisvalid THEN 'Y'
ELSE 'N'
END AS VALID
FROM pg_tables t
LEFT OUTER JOIN pg_class c ON t.tablename=c.relname
LEFT OUTER JOIN
( SELECT c.relname AS ctablename, ipg.relname AS indexname, x.indnatts AS
number_of_columns, indexrelname, indisvalid FROM pg_index x
JOIN pg_class c ON c.oid = x.indrelid
JOIN pg_class ipg ON ipg.oid = x.indexrelid
JOIN pg_stat_all_indexes psai ON x.indexrelid = psai.indexrelid )
AS foo
ON t.tablename = foo.ctablename
WHERE
t.tablename in ('<comma_separated_table_names>')
ORDER BY 1,2; \watch 30
```
|
https://docs.gitlab.com/development/hash_indexes
|
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/development/hash_indexes.md
|
2025-08-13
|
doc/development/database
|
[
"doc",
"development",
"database"
] |
hash_indexes.md
|
Data Access
|
Database Frameworks
|
Any user with at least the Maintainer role can merge updates to this content. For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
|
Hash Indexes
| null |
PostgreSQL supports hash indexes besides the regular B-tree
indexes. Hash indexes, however, are to be avoided at all costs. While they may
_sometimes_ provide better performance, the cost of rehashing can be very high.
More importantly, at least until PostgreSQL 10.0 hash indexes are not
WAL-logged, meaning they are not replicated to any replicas. From the PostgreSQL
documentation:
> Hash index operations are not presently WAL-logged, so hash indexes might need
> to be rebuilt with REINDEX after a database crash if there were unwritten
> changes. Also, changes to hash indexes are not replicated over streaming or
> file-based replication after the initial base backup, so they give wrong
> answers to queries that subsequently use them. For these reasons, hash index
> use is presently discouraged.
RuboCop is configured to register an offense when it detects the use of a hash
index.
Instead of using hash indexes you should use regular B-tree indexes.
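For example, a minimal sketch of a migration that adds a regular B-tree index (the table,
column, and index names are hypothetical; omitting `using: :hash` gives you B-tree by default):

```ruby
class AddIndexOnUsersUsernameExample < Gitlab::Database::Migration[2.1]
  disable_ddl_transaction!

  INDEX_NAME = 'index_users_on_username_example'

  def up
    # B-tree is the default index type, so no `using:` option is needed.
    add_concurrent_index :users, :username, name: INDEX_NAME
  end

  def down
    remove_concurrent_index_by_name :users, INDEX_NAME
  end
end
```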
|
https://docs.gitlab.com/development/verifying_database_capabilities
|
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/development/verifying_database_capabilities.md
|
2025-08-13
|
doc/development/database
|
[
"doc",
"development",
"database"
] |
verifying_database_capabilities.md
|
Data Access
|
Database Frameworks
|
Any user with at least the Maintainer role can merge updates to this content. For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
|
Verifying Database Capabilities
| null |
Sometimes certain bits of code may only work on a certain database
version. While we try to avoid such code as much as possible sometimes it is
necessary to add database (version) specific behavior.
To facilitate this we have the following methods that you can use:
- `ApplicationRecord.database.version`: returns the PostgreSQL version number as a string
in the format `X.Y.Z`.
This allows you to write code such as:
```ruby
if ApplicationRecord.database.version.to_f >= 11.7
  run_really_fast_query
else
  run_fast_query
end
```
## Read-only database
The database can be used in read-only mode. In this case we have to
make sure all GET requests don't attempt any write operations to the
database. If one of those requests wants to write to the database, it needs
to be wrapped in a `Gitlab::Database.read_only?` or `Gitlab::Database.read_write?`
guard, to make sure the write doesn't happen when the database is read-only.
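For example, a minimal sketch of such a guard (the model and attribute are hypothetical):

```ruby
# Only attempt the write when the database accepts writes.
if Gitlab::Database.read_write?
  user.update(last_activity_on: Date.today)
end
```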
We have a Rails middleware that filters any potentially writing
operations (the `CUD` operations of CRUD) and prevents the user from trying
to update the database and getting a 500 error (see `Gitlab::Middleware::ReadOnly`).
|
https://docs.gitlab.com/development/sha1_as_binary
|
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/development/sha1_as_binary.md
|
2025-08-13
|
doc/development/database
|
[
"doc",
"development",
"database"
] |
sha1_as_binary.md
|
Data Access
|
Database Frameworks
|
Any user with at least the Maintainer role can merge updates to this content. For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
|
Storing SHA1 Hashes As Binary
| null |
Storing SHA1 hashes as strings is not very space efficient. A SHA1 as a string
requires at least 40 bytes, an additional byte to store the encoding, and
perhaps more space depending on the internals of PostgreSQL.
On the other hand, if one were to store a SHA1 as binary one would only need 20
bytes for the actual SHA1, and 1 or 4 bytes of additional space (again depending
on database internals). This means that in the best case scenario we can reduce
the space usage by 50%.
To make this easier to work with you can include the concern `ShaAttribute` into
a model and define a SHA attribute using the `sha_attribute` class method. For
example:
```ruby
class Commit < ActiveRecord::Base
  include ShaAttribute

  sha_attribute :sha
end
```
This allows you to use the value of the `sha` attribute as if it were a string,
while storing it as binary. This means that you can do something like this,
without having to worry about converting data to the right binary format:
```ruby
commit = Commit.find_by(sha: '88c60307bd1f215095834f09a1a5cb18701ac8ad')
commit.sha = '971604de4cfa324d91c41650fabc129420c8d1cc'
commit.save
```
There is, however, one requirement: the column used to store the SHA hash must be
a binary type. For Rails this means you need to use the `:binary` type instead
of `:text` or `:string`.
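For example, a minimal sketch of a migration that adds such a column (the table and
column names are hypothetical):

```ruby
class AddShaToCommitsExample < Gitlab::Database::Migration[2.1]
  def change
    # Stored as binary (20 bytes per SHA1) instead of a 40-character string.
    add_column :commits_example, :sha, :binary
  end
end
```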
|
https://docs.gitlab.com/development/swapping_tables
|
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/development/swapping_tables.md
|
2025-08-13
|
doc/development/database
|
[
"doc",
"development",
"database"
] |
swapping_tables.md
|
Data Access
|
Database Frameworks
|
Any user with at least the Maintainer role can merge updates to this content. For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
|
Swapping Tables
| null |
Sometimes you need to replace one table with another. For example, when
migrating data in a very large table it's often better to create a copy of the
table and insert & migrate the data into this new table in the background.
For example, to swap a table called `events` with another table called `events_for_migration`, you would need to:
1. Rename `events` to `events_temporary`
1. Rename `events_for_migration` to `events`
1. Rename `events_temporary` to `events_for_migration`
Rails allows you to do this using the `rename_table` method:
```ruby
rename_table :events, :events_temporary
rename_table :events_for_migration, :events
rename_table :events_temporary, :events_for_migration
```
This does not require any downtime as long as the 3 `rename_table` calls are
executed in the same database transaction. Rails by default uses database
transactions for migrations, but if it doesn't you need to start one
manually:
```ruby
Event.transaction do
  rename_table :events, :events_temporary
  rename_table :events_for_migration, :events
  rename_table :events_temporary, :events_for_migration
end
```
Once swapped you _have to_ reset the primary key of the new table. For
PostgreSQL you can use the `reset_pk_sequence!` method like so:
```ruby
reset_pk_sequence!('events')
```
Failure to reset the primary key results in newly created rows starting
with an ID value of 1. Depending on the existing data, this can then lead to
duplicate key errors popping up, preventing users from creating new
data.
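Putting these steps together, a minimal sketch of a migration that performs the swap could look
like this (the class name is hypothetical; the renames run inside the migration's transaction):

```ruby
class SwapEventsWithEventsForMigration < Gitlab::Database::Migration[2.1]
  def up
    swap
  end

  def down
    # Swapping again restores the original table arrangement.
    swap
  end

  private

  def swap
    rename_table :events, :events_temporary
    rename_table :events_for_migration, :events
    rename_table :events_temporary, :events_for_migration

    reset_pk_sequence!('events')
  end
end
```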
|
https://docs.gitlab.com/development/strings_and_the_text_data_type
|
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/development/strings_and_the_text_data_type.md
|
2025-08-13
|
doc/development/database
|
[
"doc",
"development",
"database"
] |
strings_and_the_text_data_type.md
|
Data Access
|
Database Frameworks
|
Any user with at least the Maintainer role can merge updates to this content. For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
|
Strings and the Text data type
| null |
When adding new columns to store strings or other textual information:
1. We always use the `text` data type instead of the `string` data type.
1. `text` columns should always have a limit set, either by using the `create_table` with
the `#text ... limit: 100` helper (see below) when creating a table, or by using the `add_text_limit`
when altering an existing table. Without a limit, the longest possible [character string is about 1 GB](https://www.postgresql.org/docs/16/datatype-character.html).
The standard Rails `text` column type cannot be defined with a limit, but we extend `create_table` to
add a `limit: 255` option. Outside of `create_table`, `add_text_limit` can be used to add a [check constraint](https://www.postgresql.org/docs/16/ddl-constraints.html)
to an already existing column.
## Background information
The reason we always want to use `text` instead of `string` is that `string` columns have the
disadvantage that if you want to update their limit, you have to run an `ALTER TABLE ...` command.
While the limit is being added, the `ALTER TABLE ...` command requires an `EXCLUSIVE LOCK` on the table, which
is held throughout the process of updating the column and while validating all existing records, a
process that can take a while for large tables.
On the other hand, texts are [more or less equivalent to strings](https://www.depesz.com/2010/03/02/charx-vs-varcharx-vs-varchar-vs-text/) in PostgreSQL,
while having the additional advantage that adding a limit on an existing column or updating their
limit does not require the very costly `EXCLUSIVE LOCK` to be held throughout the validation phase.
We can start by adding the constraint with validation turned off (`NOT VALID`), which requires an `EXCLUSIVE LOCK`
but only for updating the declaration of the column. We can then validate it at a later step using
`VALIDATE CONSTRAINT`, which requires only a `SHARE UPDATE EXCLUSIVE LOCK` (only conflicts with other
validations and index creation while it allows reads and writes).
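Roughly, the two steps correspond to the following SQL (a simplified sketch with a hypothetical
table, column, and constraint name; the `add_text_limit` and `validate_text_limit` helpers generate
this for you):

```sql
-- Step 1: fast, only records the constraint; existing rows are not checked yet.
ALTER TABLE my_table
  ADD CONSTRAINT check_my_column_max_length
  CHECK (char_length(my_column) <= 255) NOT VALID;

-- Step 2: scans the table, but only needs a SHARE UPDATE EXCLUSIVE lock.
ALTER TABLE my_table VALIDATE CONSTRAINT check_my_column_max_length;
```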
{{< alert type="note" >}}
Don't use text columns for `encrypts` attributes. Use a
[`:jsonb` column](../migration_style_guide.md#encrypted-attributes) instead
{{< /alert >}}
## Create a new table with text columns
When adding a new table, the limits for all text columns should be added in the same migration as
the table creation. We add a `limit:` attribute to Rails' `#text` method, which allows adding a limit
for this column.
For example, consider a migration that creates a table with two text columns,
`db/migrate/20200401000001_create_db_guides.rb`:
```ruby
class CreateDbGuides < Gitlab::Database::Migration[2.1]
  def change
    create_table :db_guides do |t|
      t.bigint :stars, default: 0, null: false
      t.text :title, limit: 128
      t.text :notes, limit: 1024
    end
  end
end
```
## Add a text column to an existing table
Adding a column to an existing table requires an exclusive lock for that table. Even though that lock
is held for a brief amount of time, the time `add_column` needs to complete its execution can vary
depending on how frequently the table is accessed. For example, acquiring an exclusive lock for a very
frequently accessed table may take minutes on GitLab.com and requires the use of `with_lock_retries`.
When adding a text limit, transactions must be disabled with `disable_ddl_transaction!`. This means adding the column is not rolled back
in case the migration fails afterwards. An attempt to re-run the migration will raise an error because of the already existing column.
For these reasons, adding a text column to an existing table can be done by either:
- [Add the column and limit in separate migrations.](#add-the-column-and-limit-in-separate-migrations)
- [Add the column and limit in one migration with checking if the column already exists.](#add-the-column-and-limit-in-one-migration-with-checking-if-the-column-already-exists)
### Add the column and limit in separate migrations
Consider a migration that adds a new text column `extended_title` to table `sprints`,
`db/migrate/20200501000001_add_extended_title_to_sprints.rb`:
```ruby
class AddExtendedTitleToSprints < Gitlab::Database::Migration[2.1]
  # rubocop:disable Migration/AddLimitToTextColumns
  # limit is added in 20200501000002_add_text_limit_to_sprints_extended_title
  def change
    add_column :sprints, :extended_title, :text
  end
  # rubocop:enable Migration/AddLimitToTextColumns
end
```
A second migration should follow the first one with a limit added to `extended_title`,
`db/migrate/20200501000002_add_text_limit_to_sprints_extended_title.rb`:
```ruby
class AddTextLimitToSprintsExtendedTitle < Gitlab::Database::Migration[2.1]
  disable_ddl_transaction!

  def up
    add_text_limit :sprints, :extended_title, 512
  end

  def down
    # Down is required as `add_text_limit` is not reversible
    remove_text_limit :sprints, :extended_title
  end
end
```
### Add the column and limit in one migration with checking if the column already exists
Consider a migration that adds a new text column `extended_title` to table `sprints`,
`db/migrate/20200501000001_add_extended_title_to_sprints.rb`:
```ruby
class AddExtendedTitleToSprints < Gitlab::Database::Migration[2.1]
  disable_ddl_transaction!

  def up
    with_lock_retries do
      add_column :sprints, :extended_title, :text, if_not_exists: true
    end

    add_text_limit :sprints, :extended_title, 512
  end

  def down
    with_lock_retries do
      remove_column :sprints, :extended_title, if_exists: true
    end
  end
end
```
## Add a text limit constraint to an existing column
Adding text limits to existing database columns requires multiple steps split into at least two different releases:
1. Release `N.M` (current release)
- Add a post-deployment migration to add the limit to the text column with `validate: false`.
- Add a post-deployment migration to fix the existing records.
{{< alert type="note" >}}
Depending on the size of the table, a background migration for cleanup could be required in the next release.
See [text limit constraints on large tables](strings_and_the_text_data_type.md#text-limit-constraints-on-large-tables) for more information.
{{< /alert >}}
- Create an issue for the next milestone to validate the text limit.
1. Release `N.M+1` (next release)
- Validate the text limit using a post-deployment migration.
### Example
Let's assume we want to add a `1024` limit to `issues.title_html` for a given release milestone,
such as 13.0.
Issues is a pretty busy and large table with more than 25 million rows, so we don't want to lock all
other processes that try to access it while running the update.
Also, after checking our production database, we know that there are `issues` with more characters in
their title than the 1024 character limit, so we cannot add and validate the constraint in one step.
{{< alert type="note" >}}
Even if we did not have any record with a title larger than the provided limit, another
instance of GitLab could have such records, so we would follow the same process either way.
{{< /alert >}}
#### Prevent new invalid records (current release)
We first add the limit as a `NOT VALID` check constraint to the table, which enforces consistency when
new records are inserted or current records are updated.
In the example above, the existing issues with more than 1024 characters in their title are not
affected, and you are still able to update records in the `issues` table. However, when you'd try
to update the `title_html` with a title that has more than 1024 characters, the constraint causes
a database error.
Adding or removing a constraint on an existing attribute requires that any application changes are
deployed first,
otherwise servers still running the old version of the application
[may try to update the attribute with invalid values](../multi_version_compatibility.md#ci-artifact-uploads-were-failing).
For these reasons, `add_text_limit` should run in a post-deployment migration.
Still in our example, for the 13.0 milestone (current), consider that the following validation
has been added to model `Issue`:
```ruby
validates :title_html, length: { maximum: 1024 }
```
We can also update the database in the same milestone by adding the text limit with `validate: false`
in a post-deployment migration,
`db/post_migrate/20200501000001_add_text_limit_migration.rb`:
```ruby
class AddTextLimitMigration < Gitlab::Database::Migration[2.1]
  disable_ddl_transaction!

  def up
    # This will add the constraint WITHOUT validating it
    add_text_limit :issues, :title_html, 1024, validate: false
  end

  def down
    # Down is required as `add_text_limit` is not reversible
    remove_text_limit :issues, :title_html
  end
end
```
#### Data migration to fix existing records (current release)
The approach here depends on the data volume and the cleanup strategy. The number of records that must
be fixed on GitLab.com is a nice indicator that helps us decide whether to use a post-deployment
migration or a background data migration:
- If the data volume is less than `1,000` records, then the data migration can be executed within the post-migration.
- If the data volume is higher than `1,000` records, it's advised to create a background migration.
When unsure about which option to use, contact the Database team for advice.
Back to our example, the issues table is considerably large and frequently accessed, so we are going
to add a background migration for the 13.0 milestone (current),
`db/post_migrate/20200501000002_schedule_cap_title_length_on_issues.rb`:
```ruby
class ScheduleCapTitleLengthOnIssues < Gitlab::Database::Migration[2.1]
  # Info on how many records will be affected on GitLab.com
  # time each batch needs to run on average, etc ...
  BATCH_SIZE = 5000
  DELAY_INTERVAL = 2.minutes.to_i

  # Background migration will update issues whose title is longer than 1024 limit
  ISSUES_BACKGROUND_MIGRATION = 'CapTitleLengthOnIssues'.freeze

  disable_ddl_transaction!

  def up
    queue_batched_background_migration(
      ISSUES_BACKGROUND_MIGRATION,
      :issues,
      :id,
      batch_size: BATCH_SIZE
    )
  end

  def down
    delete_batched_background_migration(ISSUES_BACKGROUND_MIGRATION, :issues, :id, [])
  end
end
```
To keep this guide short, we skipped the definition of the background migration and only
provided a high-level example of the post-deployment migration that is used to schedule the batches.
You can find more information in the guide about [batched background migrations](batched_background_migrations.md).
#### Validate the text limit (next release)
Validating the text limit scans the whole table, and makes sure that each record is correct.
Still in our example, for the 13.1 milestone (next), we run the `validate_text_limit` migration
helper in a final post-deployment migration,
`db/post_migrate/20200601000001_validate_text_limit_migration.rb`:
```ruby
class ValidateTextLimitMigration < Gitlab::Database::Migration[2.1]
  disable_ddl_transaction!

  def up
    validate_text_limit :issues, :title_html
  end

  def down
    # no-op
  end
end
```
## Increasing a text limit constraint on an existing column
Increasing text limits on existing database columns can be safely achieved by first adding the new limit (with a different name),
and then dropping the previous limit:
```ruby
class ChangeMaintainerNoteLimitInCiRunner < Gitlab::Database::Migration[2.1]
  disable_ddl_transaction!

  def up
    add_text_limit :ci_runners, :maintainer_note, 1024, constraint_name: check_constraint_name(:ci_runners, :maintainer_note, 'max_length_1K')
    remove_text_limit :ci_runners, :maintainer_note, constraint_name: check_constraint_name(:ci_runners, :maintainer_note, 'max_length')
  end

  def down
    # no-op: Danger of failing if there are records with length(maintainer_note) > 255
  end
end
```
## Text limit constraints on large tables
If you have to clean up a text column for a really [large table](https://gitlab.com/gitlab-org/gitlab/-/blob/master/rubocop/rubocop-migrations.yml#L3)
(for example, `artifacts` in `ci_builds`), the background migration goes on for a while and
needs an additional [batched background migration cleanup step](batched_background_migrations.md#cleaning-up-a-batched-background-migration)
in the release after adding the data migration.
In that rare case you need 3 releases end-to-end:
1. Release `N.M` - Add the text limit and the background migration to fix the existing records.
1. Release `N.M+1` - Cleanup the background migration.
1. Release `N.M+2` - Validate the text limit.
|
---
stage: Data Access
group: Database Frameworks
info: Any user with at least the Maintainer role can merge updates to this content.
For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Strings and the Text data type
breadcrumbs:
- doc
- development
- database
---
When adding new columns to store strings or other textual information:
1. We always use the `text` data type instead of the `string` data type.
1. `text` columns should always have a limit set, either by using the `create_table` with
the `#text ... limit: 100` helper (see below) when creating a table, or by using the `add_text_limit`
when altering an existing table. Without a limit, the longest possible [character string is about 1 GB](https://www.postgresql.org/docs/16/datatype-character.html).
The standard Rails `text` column type cannot be defined with a limit, but we extend `create_table` to
add a `limit: 255` option. Outside of `create_table`, `add_text_limit` can be used to add a [check constraint](https://www.postgresql.org/docs/16/ddl-constraints.html)
to an already existing column.
## Background information
The reason we always want to use `text` instead of `string` is that `string` columns have the
disadvantage that if you want to update their limit, you have to run an `ALTER TABLE ...` command.
While a limit is added, the `ALTER TABLE ...` command requires an `EXCLUSIVE LOCK` on the table, which
is held throughout the process of updating the column and while validating all existing records, a
process that can take a while for large tables.
On the other hand, texts are [more or less equivalent to strings](https://www.depesz.com/2010/03/02/charx-vs-varcharx-vs-varchar-vs-text/) in PostgreSQL,
while having the additional advantage that adding a limit on an existing column or updating their
limit does not require the very costly `EXCLUSIVE LOCK` to be held throughout the validation phase.
We can start by updating the constraint with the valid option off, which requires an `EXCLUSIVE LOCK`
but only for updating the declaration of the columns. We can then validate it at a later step using
`VALIDATE CONSTRAINT`, which requires only a `SHARE UPDATE EXCLUSIVE LOCK` (only conflicts with other
validations and index creation while it allows reads and writes).
{{< alert type="note" >}}
Don't use text columns for `encrypts` attributes. Use a
[`:jsonb` column](../migration_style_guide.md#encrypted-attributes) instead
{{< /alert >}}
## Create a new table with text columns
When adding a new table, the limits for all text columns should be added in the same migration as
the table creation. We add a `limit:` attribute to Rails' `#text` method, which allows adding a limit
for this column.
For example, consider a migration that creates a table with two text columns,
`db/migrate/20200401000001_create_db_guides.rb`:
```ruby
class CreateDbGuides < Gitlab::Database::Migration[2.1]
def change
create_table :db_guides do |t|
t.bigint :stars, default: 0, null: false
t.text :title, limit: 128
t.text :notes, limit: 1024
end
end
end
```
## Add a text column to an existing table
Adding a column to an existing table requires an exclusive lock for that table. Even though that lock
is held for a brief amount of time, the time `add_column` needs to complete its execution can vary
depending on how frequently the table is accessed. For example, acquiring an exclusive lock for a very
frequently accessed table may take minutes on GitLab.com and requires the use of `with_lock_retries`.
When adding a text limit, transactions must be disabled with `disable_ddl_transaction!`. This means adding the column is not rolled back
in case the migration fails afterwards. An attempt to re-run the migration will raise an error because of the already existing column.
For these reasons, adding a text column to an existing table can be done by either:
- [Add the column and limit in separate migrations.](#add-the-column-and-limit-in-separate-migrations)
- [Add the column and limit in one migration with checking if the column already exists.](#add-the-column-and-limit-in-one-migration-with-checking-if-the-column-already-exists)
### Add the column and limit in separate migrations
Consider a migration that adds a new text column `extended_title` to table `sprints`,
`db/migrate/20200501000001_add_extended_title_to_sprints.rb`:
```ruby
class AddExtendedTitleToSprints < Gitlab::Database::Migration[2.1]
# rubocop:disable Migration/AddLimitToTextColumns
# limit is added in 20200501000002_add_text_limit_to_sprints_extended_title
def change
add_column :sprints, :extended_title, :text
end
# rubocop:enable Migration/AddLimitToTextColumns
end
```
A second migration should follow the first one with a limit added to `extended_title`,
`db/migrate/20200501000002_add_text_limit_to_sprints_extended_title.rb`:
```ruby
class AddTextLimitToSprintsExtendedTitle < Gitlab::Database::Migration[2.1]
disable_ddl_transaction!
def up
add_text_limit :sprints, :extended_title, 512
end
def down
# Down is required as `add_text_limit` is not reversible
remove_text_limit :sprints, :extended_title
end
end
```
### Add the column and limit in one migration with checking if the column already exists
Consider a migration that adds a new text column `extended_title` to table `sprints`,
`db/migrate/20200501000001_add_extended_title_to_sprints.rb`:
```ruby
class AddExtendedTitleToSprints < Gitlab::Database::Migration[2.1]
disable_ddl_transaction!
def up
with_lock_retries do
add_column :sprints, :extended_title, :text, if_not_exists: true
end
add_text_limit :sprints, :extended_title, 512
end
def down
with_lock_retries do
remove_column :sprints, :extended_title, if_exists: true
end
end
end
```
## Add a text limit constraint to an existing column
Adding text limits to existing database columns requires multiple steps split into at least two different releases:
1. Release `N.M` (current release)
- Add a post-deployment migration to add the limit to the text column with `validate: false`.
- Add a post-deployment migration to fix the existing records.
{{< alert type="note" >}}
Depending on the size of the table, a background migration for cleanup could be required in the next release.
See [text limit constraints on large tables](strings_and_the_text_data_type.md#text-limit-constraints-on-large-tables) for more information.
{{< /alert >}}
- Create an issue for the next milestone to validate the text limit.
1. Release `N.M+1` (next release)
- Validate the text limit using a post-deployment migration.
### Example
Let's assume we want to add a `1024` limit to `issues.title_html` for a given release milestone,
such as 13.0.
Issues is a pretty busy and large table with more than 25 million rows, so we don't want to lock all
other processes that try to access it while running the update.
Also, after checking our production database, we know that there are `issues` with more characters in
their title than the 1024 character limit, so we cannot add and validate the constraint in one step.
{{< alert type="note" >}}
Even if we did not have any record with a title larger than the provided limit, another
instance of GitLab could have such records, so we would follow the same process either way.
{{< /alert >}}
#### Prevent new invalid records (current release)
We first add the limit as a `NOT VALID` check constraint to the table, which enforces consistency when
new records are inserted or current records are updated.
In the example above, the existing issues with more than 1024 characters in their title are not
affected, and you are still able to update records in the `issues` table. However, if you try
to update `title_html` with a title longer than 1024 characters, the constraint causes
a database error.
Adding or removing a constraint on an existing attribute requires that any application changes are
deployed first;
otherwise, servers still running the old version of the application
[may try to update the attribute with invalid values](../multi_version_compatibility.md#ci-artifact-uploads-were-failing).
For these reasons, `add_text_limit` should run in a post-deployment migration.
Still in our example, for the 13.0 milestone (current), consider that the following validation
has been added to model `Issue`:
```ruby
validates :title_html, length: { maximum: 1024 }
```
We can also update the database in the same milestone by adding the text limit with `validate: false`
in a post-deployment migration,
`db/post_migrate/20200501000001_add_text_limit_migration.rb`:
```ruby
class AddTextLimitMigration < Gitlab::Database::Migration[2.1]
disable_ddl_transaction!
def up
# This will add the constraint WITHOUT validating it
add_text_limit :issues, :title_html, 1024, validate: false
end
def down
# Down is required as `add_text_limit` is not reversible
remove_text_limit :issues, :title_html
end
end
```
#### Data migration to fix existing records (current release)
The approach here depends on the data volume and the cleanup strategy. The number of records that must
be fixed on GitLab.com is a nice indicator that helps us decide whether to use a post-deployment
migration or a background data migration:
- If the data volume is less than `1,000` records, then the data migration can be executed within the post-migration.
- If the data volume is higher than `1,000` records, it's advised to create a background migration.
When unsure about which option to use, contact the Database team for advice.
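A quick way to estimate the data volume is to count the rows that already exceed the intended limit. A minimal sketch for the `issues.title_html` example used in this guide:
```sql
-- Illustrative only: count the existing rows that violate the intended 1024 limit.
SELECT COUNT(*)
FROM issues
WHERE char_length(title_html) > 1024;
```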
Back to our example, the `issues` table is large and frequently accessed, so we are going
to add a background migration for the 13.0 milestone (current),
`db/post_migrate/20200501000002_schedule_cap_title_length_on_issues.rb`:
```ruby
class ScheduleCapTitleLengthOnIssues < Gitlab::Database::Migration[2.1]
# Info on how many records will be affected on GitLab.com
# time each batch needs to run on average, etc ...
BATCH_SIZE = 5000
DELAY_INTERVAL = 2.minutes.to_i
# Background migration will update issues whose title is longer than 1024 limit
ISSUES_BACKGROUND_MIGRATION = 'CapTitleLengthOnIssues'.freeze
disable_ddl_transaction!
def up
queue_batched_background_migration(
ISSUES_BACKGROUND_MIGRATION,
:issues,
:id,
batch_size: BATCH_SIZE
)
end
def down
delete_batched_background_migration(ISSUES_BACKGROUND_MIGRATION, :issues, :id, [])
end
end
```
To keep this guide short, we skipped the definition of the background migration and only
provided a high level example of the post-deployment migration that is used to schedule the batches.
You can find more information in the guide about [batched background migrations](batched_background_migrations.md).
#### Validate the text limit (next release)
Validating the text limit scans the whole table, and makes sure that each record is correct.
Still in our example, for the 13.1 milestone (next), we run the `validate_text_limit` migration
helper in a final post-deployment migration,
`db/post_migrate/20200601000001_validate_text_limit_migration.rb`:
```ruby
class ValidateTextLimitMigration < Gitlab::Database::Migration[2.1]
disable_ddl_transaction!
def up
validate_text_limit :issues, :title_html
end
def down
# no-op
end
end
```
## Increasing a text limit constraint on an existing column
Increasing text limits on existing database columns can be safely achieved by first adding the new limit (with a different name),
and then dropping the previous limit:
```ruby
class ChangeMaintainerNoteLimitInCiRunner < Gitlab::Database::Migration[2.1]
disable_ddl_transaction!
def up
add_text_limit :ci_runners, :maintainer_note, 1024, constraint_name: check_constraint_name(:ci_runners, :maintainer_note, 'max_length_1K')
remove_text_limit :ci_runners, :maintainer_note, constraint_name: check_constraint_name(:ci_runners, :maintainer_note, 'max_length')
end
def down
# no-op: Danger of failing if there are records with length(maintainer_note) > 255
end
end
```
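For illustration, the change above has roughly this SQL shape (the constraint names are hypothetical); adding the new, larger limit before dropping the old one ensures the column is never left without a limit:
```sql
-- Add and validate the new, larger limit first.
ALTER TABLE ci_runners
  ADD CONSTRAINT check_maintainer_note_max_length_1k
  CHECK (char_length(maintainer_note) <= 1024) NOT VALID;

ALTER TABLE ci_runners
  VALIDATE CONSTRAINT check_maintainer_note_max_length_1k;

-- Only then drop the previous, smaller limit.
ALTER TABLE ci_runners
  DROP CONSTRAINT check_maintainer_note_max_length;
```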
## Text limit constraints on large tables
If you have to clean up a text column for a really [large table](https://gitlab.com/gitlab-org/gitlab/-/blob/master/rubocop/rubocop-migrations.yml#L3)
(for example, `artifacts` in `ci_builds`), the background migration runs for a while and
needs an additional [batched background migration cleanup](batched_background_migrations.md#cleaning-up-a-batched-background-migration)
in the release after the one that adds the data migration.
In that rare case you need 3 releases end-to-end:
1. Release `N.M` - Add the text limit and the background migration to fix the existing records.
1. Release `N.M+1` - Cleanup the background migration.
1. Release `N.M+2` - Validate the text limit.
---
stage: Data Access
group: Database Frameworks
info: Any user with at least the Maintainer role can merge updates to this content. For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Migration ordering
breadcrumbs:
- doc
- development
- database
---
Starting with GitLab 17.1, migrations are executed using
a custom ordering scheme that conforms to the GitLab release cadence. This change
simplifies the upgrade process, and eases both maintenance and support.
## Pre 17.1 logic
Migrations are executed in an order based upon the 14-digit timestamp
given in the file name of the migration itself. This behavior is the default for a Rails application.
GitLab also features logic to extend standard migration behavior in these important ways:
1. You can load migrations from additional folders. For example, migrations are
loaded from both the `db/post_migrate` folder and the `db/migrate` folder, which
you need when using [Post-Deployment migrations](post_deployment_migrations.md).
1. If you set the environment variable `SKIP_POST_DEPLOYMENT_MIGRATIONS`, migrations
are not loaded from any `post_migrate` folder.
1. You must provide a GitLab minor version, or "milestone", on all new migrations.
## 17.1+ logic
Migrations are executed in the following order:
1. Migrations without `milestone` defined are executed first, ordered by their timestamp.
1. Migrations with `milestone` defined are executed in milestone order:
1. Regular migrations are executed before post-deployment migrations.
1. Migrations of the same type and milestone are executed in order specified by their timestamp.
Example:
1. Any migrations without `milestone` defined.
1. `17.1` regular migrations.
1. `17.1` post-deployment migrations.
1. `17.2` regular migrations.
1. `17.2` post-deployment migrations.
1. Repeat for each milestone in the upgrade.
### New behavior for post-deployment migrations
This change causes post-deployment migrations to always be sorted at the end
of a given milestone. Previously, post-deployment migrations were
interleaved with regular ones, provided `SKIP_POST_DEPLOYMENT_MIGRATIONS` was not set.
When `SKIP_POST_DEPLOYMENT_MIGRATIONS` is set, post-deployment migrations are not executed.
---
stage: Data Access
group: Database Frameworks
info: Any user with at least the Maintainer role can merge updates to this content. For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Database development guidelines
breadcrumbs:
- doc
- development
- database
---
## Database Reviews
- During the design phase of the feature you're working on, be mindful if you are adding any database-related changes. If you're adding or modifying a query, start looking at the `explain` plan early to avoid surprises late in the review phase.
- If, at any time, you need help optimizing a query or understanding an `explain` plan, ask for assistance in `#database`.
- If you're creating a database MR for review, check out our [Database review guidelines](../database_review.md).
It provides an introduction on database-related changes, migrations, and complex SQL queries.
- If you're a database reviewer or want to become one, check out our [introduction to reviewing database changes](database_reviewer_guidelines.md).
## Upgrade
- [Timeline for version upgrades](pg_upgrade_timeline.md)
## Tooling
- [Understanding EXPLAIN plans](understanding_explain_plans.md)
- [explain.depesz.com](https://explain.depesz.com/) or [explain.dalibo.com](https://explain.dalibo.com/) for visualizing the output of `EXPLAIN`
- [pgFormatter](https://sqlformat.darold.net/) a PostgreSQL SQL syntax beautifier
- [db:check-migrations job](dbcheck-migrations-job.md)
- [Database migration pipeline](database_migration_pipeline.md)
## Migrations
- [Adding required stops](required_stops.md)
- [Avoiding downtime in migrations](avoiding_downtime_in_migrations.md)
- [Batched background migrations guidelines](batched_background_migrations.md)
- [Create a regular migration](../migration_style_guide.md#create-a-regular-schema-migration), including creating new models
- [Deleting migrations](deleting_migrations.md)
- [Different types of migrations](../migration_style_guide.md#choose-an-appropriate-migration-type)
- [Migrations for multiple databases](migrations_for_multiple_databases.md)
- [Migrations style guide](../migration_style_guide.md) for creating safe SQL migrations
- [Partitioning tables](partitioning/_index.md)
- [Post-deployment migrations guidelines](post_deployment_migrations.md) and [how to create one](post_deployment_migrations.md#creating-migrations)
- [Running database migrations](database_debugging.md#migration-wrangling)
- [SQL guidelines](../sql.md) for working with SQL queries
- [Swapping tables](swapping_tables.md)
- [Testing Rails migrations](../testing_guide/testing_migrations_guide.md) guide
- [When and how to write Rails migrations tests](../testing_guide/testing_migrations_guide.md)
- [Deduplicate database records](deduplicate_database_records.md)
## Partitioning tables
- [Overview](partitioning/_index.md)
- [Date range](partitioning/date_range.md)
- [Hash](partitioning/hash.md)
- [Int range](partitioning/int_range.md)
- [List](partitioning/list.md)
## Debugging
- [Accessing the database](database_debugging.md#manually-access-the-database)
- [Resetting the database](database_debugging.md#delete-everything-and-start-over)
- [Troubleshooting and debugging the database](database_debugging.md)
- Tracing the source of an SQL query:
- In Rails console using [Verbose Query Logs](https://guides.rubyonrails.org/debugging_rails_applications.html#verbose-query-logs)
- Using query comments with [Marginalia](database_query_comments.md)
## Best practices
- [Adding database indexes](adding_database_indexes.md)
- [Adding Foreign key constraints without downtime](foreign_keys.md#avoiding-downtime-and-migration-failures)
- [Compatibility with Cells](../cells/_index.md)
- [Check for background migrations before upgrading](../../update/background_migrations.md)
- [Client-side connection-pool](client_side_connection_pool.md)
- [Constraints naming conventions](constraint_naming_convention.md)
- [Creating enums](creating_enums.md)
- [Data layout and access patterns](layout_and_access_patterns.md)
- [Efficient `IN` operator queries](efficient_in_operator_queries.md)
- [Foreign keys & associations](foreign_keys.md)
- [Hash indexes](hash_indexes.md)
- [Insert into tables in batches](insert_into_tables_in_batches.md)
- [Batching guidelines](batching_best_practices.md)
- [Iterating tables in batches](iterating_tables_in_batches.md)
- [Load balancing](load_balancing.md)
- [`NOT NULL` constraints](not_null_constraints.md)
- [Ordering table columns](ordering_table_columns.md)
- [Pagination guidelines](pagination_guidelines.md)
- [Pagination performance guidelines](pagination_performance_guidelines.md)
- [Offset pagination optimization](offset_pagination_optimization.md)
- [Polymorphic associations](polymorphic_associations.md)
- [Query count limits](query_count_limits.md)
- [Query performance guidelines](query_performance.md)
- [Serializing data](serializing_data.md)
- [Single table inheritance](single_table_inheritance.md)
- [Storing SHA1 hashes as binary](sha1_as_binary.md)
- [Strings and the Text data type](strings_and_the_text_data_type.md)
- [Updating multiple values](setting_multiple_values.md)
- [Verifying database capabilities](verifying_database_capabilities.md)
## Case studies
- [Database case study: Filtering by label](filtering_by_label.md)
- [Database case study: Namespaces storage statistics](namespaces_storage_statistics.md)
## PostgreSQL information for GitLab administrators
- [Configure GitLab using an external PostgreSQL service](../../administration/postgresql/external.md)
- [Configuring PostgreSQL for scaling](../../administration/postgresql/_index.md)
- [Database Load Balancing](../../administration/postgresql/database_load_balancing.md)
- [Moving GitLab databases to a different PostgreSQL instance](../../administration/postgresql/moving.md)
- [Replication and failover with Omnibus GitLab](../../administration/postgresql/replication_and_failover.md)
- [Standalone PostgreSQL using Omnibus GitLab](../../administration/postgresql/standalone.md)
- [Troubleshooting PostgreSQL](../../administration/troubleshooting/postgresql.md)
- [Working with the bundled PgBouncer service](../../administration/postgresql/pgbouncer.md)
## User information for scaling
For GitLab administrators, information about
[configuring PostgreSQL for scaling](../../administration/postgresql/_index.md) is available,
including the major methods:
- [Standalone PostgreSQL](../../administration/postgresql/standalone.md)
- [External PostgreSQL instances](../../administration/postgresql/external.md)
- [Replication and failover](../../administration/postgresql/replication_and_failover.md)
## ClickHouse
- [Introduction](clickhouse/_index.md)
- [ClickHouse within GitLab](clickhouse/clickhouse_within_gitlab.md)
- [Optimizing query execution](clickhouse/optimization.md)
- [Rebuild GitLab features using ClickHouse 1: Activity data](clickhouse/gitlab_activity_data.md)
- [Rebuild GitLab features using ClickHouse 2: Merge Request analytics](clickhouse/merge_request_analytics.md)
- [Tiered Storage in ClickHouse](clickhouse/tiered_storage.md)
## Miscellaneous
- [Maintenance operations](maintenance_operations.md)
- [Update multiple database objects](setting_multiple_values.md)
- [Batch iteration in a tree hierarchy proof of concept](poc_tree_iterator.md)
- [Scalability Patterns](scalability/patterns/_index.md)
- [Group hierarchy query optimization](group_hierarchy_optimization.md)
---
stage: Data Access
group: Database Frameworks
info: Any user with at least the Maintainer role can merge updates to this content. For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Group hierarchy query optimization
breadcrumbs:
- doc
- development
- database
---
This document describes the hierarchy cache optimization strategy that helps with loading all descendants (subgroups or projects) from large group hierarchies with minimal overhead. The optimization was implemented within this GitLab [epic](https://gitlab.com/groups/gitlab-org/-/epics/11469).
The optimization is enabled automatically via the [`Namespaces::EnableDescendantsCacheCronWorker`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/workers/namespaces/enable_descendants_cache_cron_worker.rb?ref_type=heads) worker for group hierarchies with descendant counts above 700 (projects and groups). Enabling the optimization manually for smaller groups will likely not have noticeable effects.
## Performance comparison
Loading all group IDs for the `gitlab-org` group, including itself and its descendants.
{{< tabs >}}
{{< tab title="Optimized cached query" >}}
**42 buffers** (~336.00 KiB) from the buffer pool
```sql
SELECT "namespaces"."id" FROM UNNEST(
COALESCE(
(
SELECT ids FROM (
SELECT "namespace_descendants"."self_and_descendant_group_ids" AS ids
FROM "namespace_descendants"
WHERE "namespace_descendants"."outdated_at" IS NULL AND
"namespace_descendants"."namespace_id" = 22
) cached_query
),
(
SELECT ids
FROM (
SELECT ARRAY_AGG("namespaces"."id") AS ids
FROM (
SELECT namespaces.traversal_ids[array_length(namespaces.traversal_ids, 1)] AS id
FROM "namespaces"
WHERE "namespaces"."type" = 'Group' AND
(traversal_ids @> ('{22}'))
) namespaces
) consistent_query
)
)
) AS namespaces(id)
```
```plaintext
Function Scan on unnest namespaces (cost=1296.82..1296.92 rows=10 width=8) (actual time=0.193..0.236 rows=GROUP_COUNT loops=1)
Buffers: shared hit=42
I/O Timings: read=0.000 write=0.000
InitPlan 1 (returns $0)
-> Index Scan using namespace_descendants_12_pkey on gitlab_partitions_static.namespace_descendants_12 namespace_descendants (cost=0.14..3.16 rows=1 width=769) (actual time=0.022..0.023 rows=1 loops=1)
Index Cond: (namespace_descendants.namespace_id = 9970)
Filter: (namespace_descendants.outdated_at IS NULL)
Rows Removed by Filter: 0
Buffers: shared hit=5
I/O Timings: read=0.000 write=0.000
InitPlan 2 (returns $1)
-> Aggregate (cost=1293.62..1293.63 rows=1 width=32) (actual time=0.000..0.000 rows=0 loops=0)
I/O Timings: read=0.000 write=0.000
-> Bitmap Heap Scan on public.namespaces namespaces_1 (cost=62.00..1289.72 rows=781 width=28) (actual time=0.000..0.000 rows=0 loops=0)
I/O Timings: read=0.000 write=0.000
-> Bitmap Index Scan using index_namespaces_on_traversal_ids_for_groups (cost=0.00..61.81 rows=781 width=0) (actual time=0.000..0.000 rows=0 loops=0)
Index Cond: (namespaces_1.traversal_ids @> '{9970}'::integer[])
I/O Timings: read=0.000 write=0.000
Settings: seq_page_cost = '4', effective_cache_size = '472585MB', jit = 'off', work_mem = '100MB', random_page_cost = '1.5'
```
{{< /tab >}}
{{< tab title="Traversal ids based lookup query" >}}
**1037 buffers** (~8.10 MiB) from the buffer pool
```sql
SELECT namespaces.traversal_ids[array_length(namespaces.traversal_ids, 1)] AS id
FROM "namespaces"
WHERE "namespaces"."type" = 'Group' AND
(traversal_ids @> ('{22}'))
```
```plaintext
Bitmap Heap Scan on public.namespaces (cost=62.00..1291.67 rows=781 width=4) (actual time=0.670..2.273 rows=GROUP_COUNT loops=1)
Buffers: shared hit=1037
I/O Timings: read=0.000 write=0.000
-> Bitmap Index Scan using index_namespaces_on_traversal_ids_for_groups (cost=0.00..61.81 rows=781 width=0) (actual time=0.561..0.561 rows=1154 loops=1)
Index Cond: (namespaces.traversal_ids @> '{9970}'::integer[])
Buffers: shared hit=34
I/O Timings: read=0.000 write=0.000
Settings: work_mem = '100MB', random_page_cost = '1.5', seq_page_cost = '4', effective_cache_size = '472585MB', jit = 'off'
```
{{< /tab >}}
{{< /tabs >}}
## How to use the optimization
The optimization will be automatically used if you use one of these ActiveRecord scopes:
```ruby
# Loading all groups:
group.self_and_descendants
# Using the IDs in subqueries:
group.self_and_descendant_ids
NamespaceSetting.where(namespace_id: group.self_and_descendant_ids)
# Loading all projects:
group.all_projects
# Using the IDs in subqueries
MergeRequest.where(target_project_id: group.all_project_ids)
```
## Cache invalidation
When the group hierarchy changes, for example when a new project or subgroup is added, the cache is invalidated within the same transaction. A periodic worker called [`Namespaces::ProcessOutdatedNamespaceDescendantsCronWorker`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/workers/namespaces/process_outdated_namespace_descendants_cron_worker.rb?ref_type=heads) will update the cache with a slight delay. The invalidation is implemented using ActiveRecord callbacks.
While the cache is invalidated, the hierarchical database queries continue returning consistent values by using the uncached (unoptimized) `traversal_ids`-based query.
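As a rough illustration of the mechanics (the real logic lives in the ActiveRecord callbacks and the worker mentioned above), invalidation boils down to flagging the cache row, which the worker later recomputes and clears:
```sql
-- Illustrative only: mark the cache row of a namespace as outdated.
UPDATE namespace_descendants
SET outdated_at = NOW()
WHERE namespace_id = 22;
```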
## Consistent queries
The lookup queries implement `||` (OR) functionality in SQL, which allows us to check for the cached values first. If those are not present, we fall back to a full lookup of all groups or projects in the hierarchy.
For simplification, this is how we would implement the lookup in Ruby:
```ruby
if cached? && cache_up_to_date?
return cached_project_ids
else
return Project.where(...).pluck(:id)
end
```
In `SQL`, we leverage the `COALESCE` function, which returns the first non-NULL expression from a list of expressions. If the first expression is not NULL, the subsequent expressions are not evaluated.
```sql
SELECT COALESCE(
(SELECT 1), -- cached query
(SELECT 2 FROM pg_sleep(5)) -- non-cached query
)
```
The query above returns immediately. However, if the first subquery returns `NULL`, the database executes the second query:
```sql
SELECT COALESCE(
(SELECT NULL), -- cached query
(SELECT 2 FROM pg_sleep(5)) -- non-cached query
)
```
## The `namespace_descendants` database table
The cached subgroup and project IDs are stored in the `namespace_descendants` database table as arrays. The most important columns (see the example row after this list):
- `namespace_id`: primary key, this can be a top-level group ID or a subgroup ID.
- `self_and_descendant_group_ids`: all group IDs as an array
- `all_project_ids`: all project IDs as an array
- `outdated_at`: signals that the cache is outdated
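A minimal sketch of what a cache row looks like; the values are made up for illustration:
```sql
SELECT namespace_id, self_and_descendant_group_ids, all_project_ids, outdated_at
FROM namespace_descendants
WHERE namespace_id = 22;

--  namespace_id | self_and_descendant_group_ids | all_project_ids | outdated_at
-- --------------+-------------------------------+-----------------+-------------
--            22 | {22,45,87}                    | {301,302,303}   | NULL
```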
## Cached database query
The query consists of three parts:
- cached query
- fallback, non-cached query
- outer query where additional filtering and data loading (`JOIN`) can be done
Cached query:
```sql
SELECT ids -- One row, array of ids
FROM (
SELECT "namespace_descendants"."self_and_descendant_group_ids" AS ids
FROM "namespace_descendants"
WHERE "namespace_descendants"."outdated_at" IS NULL AND
"namespace_descendants"."namespace_id" = 22
) cached_query
```
The query returns `NULL` when the cache is outdated or the cache record does not exist.
Fallback query, based on the `traversal_ids` lookup:
```sql
SELECT ids -- One row, array of ids
FROM (
SELECT ARRAY_AGG("namespaces"."id") AS ids
FROM (
SELECT namespaces.traversal_ids[array_length(namespaces.traversal_ids, 1)] AS id
FROM "namespaces"
WHERE "namespaces"."type" = 'Group' AND
(traversal_ids @> ('{22}'))
) namespaces
)
```
Final query, combining the queries into one:
```sql
SELECT "namespaces"."id" FROM UNNEST(
COALESCE(
(
SELECT ids FROM (
SELECT "namespace_descendants"."self_and_descendant_group_ids" AS ids
FROM "namespace_descendants"
WHERE "namespace_descendants"."outdated_at" IS NULL AND
"namespace_descendants"."namespace_id" = 22
) cached_query
),
(
SELECT ids
FROM (
SELECT ARRAY_AGG("namespaces"."id") AS ids
FROM (
SELECT namespaces.traversal_ids[array_length(namespaces.traversal_ids, 1)] AS id
FROM "namespaces"
WHERE "namespaces"."type" = 'Group' AND
(traversal_ids @> ('{22}'))
) namespaces
) consistent_query
)
)
) AS namespaces(id)
```
---
stage: none
group: unassigned
info: Any user with at least the Maintainer role can merge updates to this content. For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Tiered Storages in ClickHouse
breadcrumbs:
- doc
- development
- database
- clickhouse
---
{{< alert type="note" >}}
The MergeTree table engine in ClickHouse supports tiered storage.
See the documentation for [Using Multiple Block Devices for Data Storage](https://clickhouse.com/docs/en/engines/table-engines/mergetree-family/mergetree#table_engine-mergetree-multiple-volumes)
for details on setup and further explanation.
{{< /alert >}}
Quoting from the [MergeTree documentation](https://clickhouse.com/docs/en/engines/table-engines/mergetree-family/mergetree#table_engine-mergetree-multiple-volumes):
<!-- vale gitlab_base.Simplicity = NO -->
> MergeTree family table engines can store data on multiple block devices. For example,
> it can be useful when the data of a certain table are implicitly split into "hot" and "cold".
> The most recent data is regularly requested but requires only a small amount of space.
> On the contrary, the fat-tailed historical data is requested rarely.
<!-- vale gitlab_base.Simplicity = YES -->
When used with remote storage backends such as
[Amazon S3](https://clickhouse.com/docs/en/engines/table-engines/mergetree-family/mergetree#table_engine-mergetree-s3),
this makes a very efficient storage scheme. It allows for storage policies, which
allow data to stay on local disks for a period of time before being moved to object storage.
An [example configuration](https://clickhouse.com/docs/en/engines/table-engines/mergetree-family/mergetree#table_engine-mergetree-multiple-volumes_configure) can look like this:
```xml
<storage_configuration>
<disks>
<fast_ssd>
<path>/mnt/fast_ssd/clickhouse/</path>
</fast_ssd>
<gcs>
<support_batch_delete>false</support_batch_delete>
<type>s3</type>
<endpoint>https://storage.googleapis.com/${BUCKET_NAME}/${ROOT_FOLDER}/</endpoint>
<access_key_id>${SERVICE_ACCOUNT_HMAC_KEY}</access_key_id>
<secret_access_key>${SERVICE_ACCOUNT_HMAC_SECRET}</secret_access_key>
<metadata_path>/var/lib/clickhouse/disks/gcs/</metadata_path>
</gcs>
...
</disks>
...
<policies>
<move_from_local_disks_to_gcs> <!-- policy name -->
<volumes>
<hot> <!-- volume name -->
<disk>fast_ssd</disk> <!-- disk name -->
</hot>
<cold>
<disk>gcs</disk>
</cold>
</volumes>
<move_factor>0.2</move_factor>
<!-- The move factor determines when to move data from hot volume to cold.
See ClickHouse docs for more details. -->
</move_from_local_disks_to_gcs> <!-- closing tag must match the policy name above -->
....
</storage_configuration>
```
In this storage policy, two volumes are defined: `hot` and `cold`. After the `hot` volume is filled to an occupancy of `disk_size * move_factor`, the data is moved to Google Cloud Storage (GCS).
If this storage policy is not the default, create tables by attaching the storage policies. For example:
```sql
CREATE TABLE key_value_table (
event_date Date,
key String,
    value String
) ENGINE = MergeTree
ORDER BY (key)
PARTITION BY toYYYYMM(event_date)
SETTINGS storage_policy = 'move_from_local_disks_to_gcs'
```
{{< alert type="note" >}}
In this storage policy, the move happens implicitly. It is also possible to keep
_hot_ data on local disks for a fixed period of time and then move it to the _cold_ volume.
{{< /alert >}}
This approach is possible with
[Table TTLs](https://clickhouse.com/docs/en/engines/table-engines/mergetree-family/mergetree#mergetree-table-ttl),
which are also available with the MergeTree table engine.
The ClickHouse documentation shows this feature in detail, in the example of
[implementing a hot - warm - cold architecture](https://clickhouse.com/docs/en/guides/developer/ttl#implementing-a-hotwarmcold-architecture).
You can take a similar approach for the example shown above. First, adjust the storage policy:
```xml
<storage_configuration>
...
<policies>
<local_disk_and_gcs> <!-- policy name -->
<volumes>
<hot> <!-- volume name -->
<disk>fast_ssd</disk> <!-- disk name -->
</hot>
<cold>
<disk>gcs</disk>
</cold>
</volumes>
</local_disk_and_gcs>
....
</storage_configuration>
```
Then create the table as:
```sql
CREATE TABLE another_key_value_table (
event_date Date,
key String,
    value String
) ENGINE = MergeTree
ORDER BY (key)
PARTITION BY toYYYYMM(event_date)
TTL
event_date TO VOLUME 'hot',
event_date + INTERVAL 1 YEAR TO VOLUME 'cold'
SETTINGS storage_policy = 'local_disk_and_gcs';
```
This creates the table so that data older than 1 year (evaluated against the
`event_date` column) is moved to GCS. Such a storage policy can be helpful for append-only
tables (like audit events) where only the most recent data is accessed frequently.
You can drop the data altogether, which can be a regulatory requirement.
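A minimal sketch of that variant, assuming a table shaped like the example above: a TTL clause with the `DELETE` action removes expired rows instead of moving them to another volume.
```sql
CREATE TABLE expiring_key_value_table (
    event_date Date,
    key String,
    value String
) ENGINE = MergeTree
ORDER BY (key)
PARTITION BY toYYYYMM(event_date)
TTL event_date + INTERVAL 3 YEAR DELETE;
```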
We don't mention modifying TTLs in this guide, but that is possible as well.
See ClickHouse documentation for
[modifying TTL](https://clickhouse.com/docs/en/sql-reference/statements/alter/ttl#modify-ttl)
for details.
---
stage: none
group: unassigned
info: Any user with at least the Maintainer role can merge updates to this content. For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Optimizing query execution
breadcrumbs:
- doc
- development
- database
- clickhouse
---
ClickHouse Inc has listed a [variety of optimization strategies](https://clickhouse.com/blog/clickhouse-faster-queries-with-projections-and-primary-indexes).
ClickHouse relies heavily on the structure of the primary index. However, in some cases, it's possible that queries rely on a column that's part of the primary index, but isn't the first column. See [Using multiple primary indexes](https://clickhouse.com/docs/en/guides/improving-query-performance/sparse-primary-indexes/sparse-primary-indexes-multiple) which offers several options in such cases. For example: using a data skipping index as a secondary index.
In cases of compound primary indexes, it's helpful to understand the data characteristics of the key columns, because they can make the index more efficient. [Ordering key columns efficiently](https://clickhouse.com/docs/en/guides/improving-query-performance/sparse-primary-indexes/sparse-primary-indexes-cardinality) goes into detail on these concepts.
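A minimal sketch of both ideas with a hypothetical `events` table: low-cardinality, frequently filtered columns come first in the sorting key, and a data skipping index covers a column that is not part of the primary index.
```sql
CREATE TABLE events (
    tenant LowCardinality(String),
    service String,
    trace_id String,
    event_time DateTime
) ENGINE = MergeTree
ORDER BY (tenant, service, event_time);

-- A secondary (data skipping) index for lookups on a non-key column.
ALTER TABLE events
  ADD INDEX idx_trace_id trace_id TYPE bloom_filter GRANULARITY 4;
```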
ClickHouse blog also has a very good post, [Super charging your ClickHouse queries](https://clickhouse.com/blog/clickhouse-faster-queries-with-projections-and-primary-indexes), that outlines almost all of the approaches listed above.
It is possible to use [`EXPLAIN`](https://clickhouse.com/docs/en/sql-reference/statements/explain) statements with queries to get visible steps of the query pipeline. Note the different [types](https://clickhouse.com/docs/en/sql-reference/statements/explain#explain-types) of `EXPLAIN`.
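For example, assuming the `jaeger_index` table used later on this page, the following statements sketch how two of the `EXPLAIN` types can be used:

```sql
-- Logical plan, including which indexes and parts would be used:
EXPLAIN indexes = 1
SELECT count(traceID) FROM jaeger_index WHERE tenant = '12';

-- Physical execution pipeline (processors and streams):
EXPLAIN PIPELINE
SELECT count(traceID) FROM jaeger_index WHERE tenant = '12';
```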
Also, to get a detailed query execution pipeline, you can set the log level to `trace` in `clickhouse-client` and then execute the query.
For example:
```plaintext
$ clickhouse-client :) SET send_logs_level = 'trace'
$ clickhouse-client :) select count(traceID) from jaeger_index WHERE tenant = '12' AND service != 'jaeger-query' FORMAT Vertical ;
SELECT count(traceID)
FROM jaeger_index
WHERE (tenant = '12') AND (service != 'jaeger-query')
FORMAT Vertical
Query id: 6ce40daf-e1b1-4714-ab02-268246f3c5c9
[cluster-0-0-0] 2023.01.30 06:31:32.240819 [ 4991 ] {6ce40daf-e1b1-4714-ab02-268246f3c5c9} <Debug> executeQuery: (from 127.0.0.1:53654) select count(traceID) from jaeger_index WHERE tenant = '12' AND service != 'jaeger-query' FORMAT Vertical ; (stage: Complete)
....
[cluster-0-0-0] 2023.01.30 06:31:32.244071 [ 4991 ] {6ce40daf-e1b1-4714-ab02-268246f3c5c9} <Debug> InterpreterSelectQuery: MergeTreeWhereOptimizer: condition "service != 'jaeger-query'" moved to PREWHERE
[cluster-0-0-0] 2023.01.30 06:31:32.244420 [ 4991 ] {6ce40daf-e1b1-4714-ab02-268246f3c5c9} <Debug> InterpreterSelectQuery: MergeTreeWhereOptimizer: condition "service != 'jaeger-query'" moved to PREWHERE
....
[cluster-0-0-0] 2023.01.30 06:31:32.245153 [ 4991 ] {6ce40daf-e1b1-4714-ab02-268246f3c5c9} <Trace> InterpreterSelectQuery: FetchColumns -> Complete
[cluster-0-0-0] 2023.01.30 06:31:32.245255 [ 4991 ] {6ce40daf-e1b1-4714-ab02-268246f3c5c9} <Trace> InterpreterSelectQuery: Complete -> Complete
[cluster-0-0-0] 2023.01.30 06:31:32.245590 [ 4991 ] {6ce40daf-e1b1-4714-ab02-268246f3c5c9} <Debug> tracing_gcs.jaeger_index_local (66c6ca81-e20d-44dc-8101-92678fc24d99) (SelectExecutor): Key condition: (column 1 not in ['jaeger-query', 'jaeger-query']), unknown, (column 0 in ['12', '12']), and, and
[cluster-0-0-0] 2023.01.30 06:31:32.245784 [ 4991 ] {6ce40daf-e1b1-4714-ab02-268246f3c5c9} <Debug> tracing_gcs.jaeger_index_local (66c6ca81-e20d-44dc-8101-92678fc24d99) (SelectExecutor): MinMax index condition: unknown, unknown, and, unknown, and
[cluster-0-0-0] 2023.01.30 06:31:32.246239 [ 1503 ] {6ce40daf-e1b1-4714-ab02-268246f3c5c9} <Trace> tracing_gcs.jaeger_index_local (66c6ca81-e20d-44dc-8101-92678fc24d99) (SelectExecutor): Used generic exclusion search over index for part 202301_1512_21497_9164 with 4 steps
[cluster-0-0-0] 2023.01.30 06:31:32.246293 [ 1503 ] {6ce40daf-e1b1-4714-ab02-268246f3c5c9} <Trace> tracing_gcs.jaeger_index_local (66c6ca81-e20d-44dc-8101-92678fc24d99) (SelectExecutor): Used generic exclusion search over index for part 202301_21498_24220_677 with 1 steps
[cluster-0-0-0] 2023.01.30 06:31:32.246488 [ 4991 ] {6ce40daf-e1b1-4714-ab02-268246f3c5c9} <Debug> tracing_gcs.jaeger_index_local (66c6ca81-e20d-44dc-8101-92678fc24d99) (SelectExecutor): Selected 2/2 parts by partition key, 1 parts by primary key, 2/4 marks by primary key, 2 marks to read from 1 ranges
[cluster-0-0-0] 2023.01.30 06:31:32.246591 [ 4991 ] {6ce40daf-e1b1-4714-ab02-268246f3c5c9} <Trace> MergeTreeInOrderSelectProcessor: Reading 1 ranges in order from part 202301_1512_21497_9164, approx. 16384 rows starting from 0
[cluster-0-0-0] 2023.01.30 06:31:32.642095 [ 348 ] {6ce40daf-e1b1-4714-ab02-268246f3c5c9} <Trace> AggregatingTransform: Aggregating
[cluster-0-0-0] 2023.01.30 06:31:32.642193 [ 348 ] {6ce40daf-e1b1-4714-ab02-268246f3c5c9} <Trace> Aggregator: An entry for key=16426982211452591884 found in cache: sum_of_sizes=2, median_size=1
[cluster-0-0-0] 2023.01.30 06:31:32.642210 [ 348 ] {6ce40daf-e1b1-4714-ab02-268246f3c5c9} <Trace> Aggregator: Aggregation method: without_key
[cluster-0-0-0] 2023.01.30 06:31:32.642330 [ 348 ] {6ce40daf-e1b1-4714-ab02-268246f3c5c9} <Debug> AggregatingTransform: Aggregated. 3211 to 1 rows (from 50.18 KiB) in 0.395452983 sec. (8119.802 rows/sec., 126.89 KiB/sec.)
[cluster-0-0-0] 2023.01.30 06:31:32.642343 [ 348 ] {6ce40daf-e1b1-4714-ab02-268246f3c5c9} <Trace> Aggregator: Merging aggregated data
Row 1:
──────
count(traceID): 3211
[cluster-0-0-0] 2023.01.30 06:31:32.642887 [ 4991 ] {6ce40daf-e1b1-4714-ab02-268246f3c5c9} <Information> executeQuery: Read 16384 rows, 620.52 KiB in 0.401978272 sec., 40758 rows/sec., 1.51 MiB/sec.
[cluster-0-0-0] 2023.01.30 06:31:32.645232 [ 4991 ] {6ce40daf-e1b1-4714-ab02-268246f3c5c9} <Debug> MemoryTracker: Peak memory usage (for query): 831.98 KiB.
[cluster-0-0-0] 2023.01.30 06:31:32.645251 [ 4991 ] {6ce40daf-e1b1-4714-ab02-268246f3c5c9} <Debug> TCPHandler: Processed in 0.404908496 sec.
1 row in set. Elapsed: 0.402 sec. Processed 16.38 thousand rows, 635.41 KB (40.71 thousand rows/s., 1.58 MB/s.)
```
# Store GitLab activity data in ClickHouse
## Overview of the existing implementation
### What is GitLab activity data
GitLab records activity data during its operation as users interact with the application. Most of these interactions revolve around the projects, issues, and merge requests domain objects. Users can perform several different actions and some of these actions are recorded in a separate PostgreSQL database table called `events`.
Example events:
- Issue opened
- Issue reopened
- User joined a project
- Merge Request merged
- Repository pushed
- Snippet created
### Where is the activity data used
Several features use activity data:
- The user's [contribution calendar](../../../user/profile/contributions_calendar.md) on the profile page.
- Paginated list of the user's contributions.
- Paginated list of user activity for a Project and a Group.
- [Contribution analytics](../../../user/group/contribution_analytics/_index.md).
### How is the activity data created
The activity data is usually generated on the service layer when a specific operation is executed by the user. The persistence characteristics of an `events` record depend on the implementation of the service. Two main approaches exist:
1. In the database transaction where the actual event occurs.
1. After the database transaction (which could be delayed).
The above-mentioned mechanics provide a "mostly" consistent stream of `events`.
For example, consistently recording an `events` record:
```ruby
ApplicationRecord.transaction do
issue.closed!
Event.create!(action: :closed, target: issue)
end
```
Example, unsafe recording of an `events` record:
```ruby
ApplicationRecord.transaction do
issue.closed!
end
# If a crash happens here, the event will not be recorded.
Event.create!(action: :closed, target: issue)
```
### Database table structure
The `events` table uses [polymorphic association](https://guides.rubyonrails.org/association_basics.html#polymorphic-associations) to allow associating different database tables (issues, merge requests, etc.) with a record. A simplified database structure:
```sql
Column | Type | Nullable | Default | Storage |
-------------+--------------------------+-----------+----------+------------------------------------+
project_id | integer | | | plain |
author_id | integer | not null | | plain |
target_id | integer | | | plain |
created_at | timestamp with time zone | not null | | plain |
updated_at | timestamp with time zone | not null | | plain |
action | smallint | not null | | plain |
target_type | character varying | | | extended |
group_id | bigint | | | plain |
fingerprint | bytea | | | extended |
id | bigint | not null | nextval('events_id_seq'::regclass) | plain |
```
Some unexpected characteristics due to the evolving database design:
- The `project_id` and the `group_id` columns are mutually exclusive; internally, we call this the resource parent.
- Example 1: for an issue opened event, the `project_id` field is populated.
- Example 2: for an epic-related event, the `group_id` field is populated (epic is always part of a group).
- The `target_id` and `target_type` column pair identifies the target record.
- Example: `target_id=1` and `target_type=Issue`.
- When the columns are `null`, we refer to an event which has no representation in the database. For example a repository `push` action.
- Fingerprint is used in some cases to later alter the event based on some metadata change. This approach is mostly used for Wiki pages.
### Database record modifications
Most of the data is written once; however, we cannot say that the table is append-only. A few use cases where actual row updates and deletions happen:
- Fingerprint-based update for certain Wiki page records.
- When a user or an associated resource is deleted, the event rows are also deleted.
- The deletion of the associated `events` records happens in batches.
### Current performance problems
- The table uses significant disk space.
- Adding new events may significantly increase the database record count.
- Implementing data pruning logic is difficult.
- Time-range-based aggregations are not performant enough; some features may break due to slow database queries.
### Example queries
{{< alert type="note" >}}
These queries have been significantly simplified from the actual queries from production.
{{< /alert >}}
Database query for the user's contribution graph:
```sql
SELECT DATE(events.created_at), COUNT(*)
FROM events
WHERE events.author_id = 1
AND events.created_at BETWEEN '2022-01-17 23:00:00' AND '2023-01-18 22:59:59.999999'
AND (
(
events.action = 5
) OR
(
events.action IN (1, 3) -- Enum values are documented in the Event model, see the ACTIONS constant in app/models/event.rb
AND events.target_type IN ('Issue', 'WorkItem')
) OR
(
events.action IN (7, 1, 3)
AND events.target_type = 'MergeRequest'
) OR
(
events.action = 6
)
)
GROUP BY DATE(events.created_at)
```
Query for group contributions for each user:
```sql
SELECT events.author_id, events.target_type, events.action, COUNT(*)
FROM events
WHERE events.created_at BETWEEN '2022-01-17 23:00:00' AND '2023-03-18 22:59:59.999999'
AND events.project_id IN (1, 2, 3) -- list of project ids in the group
GROUP BY events.author_id, events.target_type, events.action
```
## Storing activity data in ClickHouse
### Data persistence
At the moment, there is no consensus about the way we would replicate data from the PostgreSQL database to ClickHouse. A few ideas that might work for the `events` table:
#### Record data immediately
This approach provides a simple way to keep the existing `events` table working while we're also sending data to the ClickHouse database. When an event record is created, ensure that it's created outside of the transaction. After persisting the data in PostgreSQL, persist it in ClickHouse.
```ruby
ApplicationRecord.transaction do
issue.update!(state: :closed)
end
# could be a method to hide complexity
Event.create!(action: :closed, target: issue)
ClickHouse::Event.create(action: :closed, target: issue)
```
What's behind the implementation of `ClickHouse::Event` is not decided yet; it could be one of the following (a rough sketch of the first option follows the list):
- ActiveRecord model directly connecting the ClickHouse database.
- REST API call to an intermediate service.
<!-- vale gitlab_base.Spelling = NO -->
- Enqueueing an event to an event-streaming tool (like Kafka).
<!-- vale gitlab_base.Spelling = YES -->
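A minimal sketch of the first option, assuming a ClickHouse server is reachable over its HTTP interface on `localhost:8123` and that the `events` table described later in this document exists. This is not the actual GitLab implementation:

```ruby
require 'net/http'
require 'json'
require 'uri'

module ClickHouse
  class Event
    # INSERT through the ClickHouse HTTP interface using the JSONEachRow format.
    INSERT_URL = URI('http://localhost:8123/?query=INSERT%20INTO%20events%20FORMAT%20JSONEachRow')

    # attributes: a hash of column => value pairs for one event row.
    def self.create(attributes)
      response = Net::HTTP.post(INSERT_URL, attributes.to_json, 'Content-Type' => 'application/json')
      raise "ClickHouse insert failed: #{response.body}" unless response.is_a?(Net::HTTPSuccess)

      attributes
    end
  end
end

# Example usage with made-up attribute values:
# ClickHouse::Event.create(id: 1, author_id: 4, project_id: 2, target_type: 'Issue', action: 1)
```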
#### Replication of `events` rows
Assuming that the creation of the `events` record is an integral part of the system, introducing another storage call might cause performance degradation in various code paths, or it could introduce significant complexity.
Rather than sending data to ClickHouse at event creation time, we would move this processing to the background by iterating over the `events` table and sending the newly created database rows.
By keeping track of which records have been sent over to ClickHouse, we could incrementally send data.
```ruby
last_updated_at = SyncProcess.last_updated_at
last_row = nil

# oversimplified loop, we would probably batch this...
Event.where('updated_at > ?', last_updated_at).each do |row|
  last_row = ClickHouse::Event.create(row)
end

SyncProcess.last_updated_at = last_row.updated_at if last_row
```
### ClickHouse database table structure
When coming up with the initial database structure, we must look at the way the data is queried.
We have two main use cases:
- Query data for a certain user, within a time range.
- `WHERE author_id = 1 AND created_at BETWEEN '2021-01-01' AND '2021-12-31'`
- Additionally, there might be extra `project_id` condition due to the access control check.
- Query data for a project or group, within a time range.
- `WHERE project_id IN (1, 2) AND created_at BETWEEN '2021-01-01' AND '2021-12-31'`
The `author_id` and `project_id` columns are considered high-selectivity columns: optimizing how we filter on them is essential for performant database queries.
The most recent activity data is queried more often. At some point, we might just drop or relocate older data. Most of the features look back only a year.
For these reasons, we could start with a database table storing low-level `events` data:
```plantuml
hide circle
entity "events" as events {
id : UInt64 ("primary key")
--
project_id : UInt64
group_id : UInt64
author_id : UInt64
target_id : UInt64
target_type : String
action : UInt8
fingerprint : UInt64
created_at : DateTime
updated_at : DateTime
}
```
The SQL statement for creating the table:
```sql
CREATE TABLE events
(
`id` UInt64,
`project_id` UInt64 DEFAULT 0 NOT NULL,
`group_id` UInt64 DEFAULT 0 NOT NULL,
`author_id` UInt64 DEFAULT 0 NOT NULL,
`target_id` UInt64 DEFAULT 0 NOT NULL,
`target_type` LowCardinality(String) DEFAULT '' NOT NULL,
`action` UInt8 DEFAULT 0 NOT NULL,
`fingerprint` UInt64 DEFAULT 0 NOT NULL,
`created_at` DateTime64(6, 'UTC') DEFAULT now() NOT NULL,
`updated_at` DateTime64(6, 'UTC') DEFAULT now() NOT NULL
)
ENGINE = ReplacingMergeTree(updated_at)
ORDER BY id;
```
A few changes compared to the PostgreSQL version:
- `target_type` uses [an optimization](https://clickhouse.com/docs/en/sql-reference/data-types/lowcardinality) for low-cardinality column values.
- `fingerprint` becomes an integer and leverages a performant integer-based hashing function such as `xxHash64` (see the example after this list).
- All columns get a default value; the `0` default value for the integer columns means no value. See the related [best practices](https://clickhouse.com/docs/en/cloud/bestpractices/avoid-nullable-columns).
- `NOT NULL` ensures that we always use the default values when data is missing (different behavior compared to PostgreSQL).
- The "primary" key automatically becomes the `id` column due to the `ORDER BY` clause.
Let's insert the same primary key value twice:
```sql
INSERT INTO events (id, project_id, target_id, author_id, target_type, action) VALUES (1, 2, 3, 4, 'Issue', null);
INSERT INTO events (id, project_id, target_id, author_id, target_type, action) VALUES (1, 20, 30, 5, 'Issue', null);
```
Let's inspect the results:
```sql
SELECT * FROM events
```
- We have two rows with the same `id` value (primary key).
- The `null` `action` becomes `0`.
- The non-specified fingerprint column becomes `0`.
- The `DateTime` columns have the insert timestamp.
ClickHouse eventually "replaces" the rows with the same primary key in the background. When running this operation, the row with the higher `updated_at` value takes precedence. The same behavior can be simulated with the `FINAL` keyword:
```sql
SELECT * FROM events FINAL
```
Adding `FINAL` to a query can have significant performance consequences, some of the issues are documented in the [ClickHouse documentation](https://clickhouse.com/docs/en/sql-reference/statements/select/from#final-modifier).
We should always expect duplicated values in the table, so we must take care of the deduplication at query time.
### ClickHouse database queries
ClickHouse uses SQL for querying the data. In some cases, a PostgreSQL query can be used in ClickHouse without major modifications, assuming that the underlying database structure is very similar.
Query for group contributions for each user (PostgreSQL):
```sql
SELECT events.author_id, events.target_type, events.action, COUNT(*)
FROM events
WHERE events.created_at BETWEEN '2022-01-17 23:00:00' AND '2023-03-18 22:59:59.999999'
AND events.project_id IN (1, 2, 3) -- list of project ids in the group
GROUP BY events.author_id, events.target_type, events.action
```
The same query would also work in ClickHouse; however, we might see duplicated values due to the way the table engine works. Deduplication can be achieved by using a nested `FROM` statement.
```sql
SELECT author_id, target_type, action, count(*)
FROM (
SELECT
id,
argMax(events.project_id, events.updated_at) AS project_id,
argMax(events.group_id, events.updated_at) AS group_id,
argMax(events.author_id, events.updated_at) AS author_id,
argMax(events.target_type, events.updated_at) AS target_type,
argMax(events.target_id, events.updated_at) AS target_id,
argMax(events.action, events.updated_at) AS action,
argMax(events.fingerprint, events.updated_at) AS fingerprint,
FIRST_VALUE(events.created_at) AS created_at,
MAX(events.updated_at) AS updated_at
FROM events
WHERE events.created_at BETWEEN '2022-01-17 23:00:00' AND '2023-03-18 22:59:59.999999'
AND events.project_id IN (1, 2, 3) -- list of project ids in the group
GROUP BY id
) AS events
GROUP BY author_id, target_type, action
```
- Take the most recent column values based on the `updated_at` column.
- Take the first value for `created_at`, assuming that the first `INSERT` contains the correct value. This is only an issue when we don't sync `created_at` at all and the default value (`now()`) is used.
- Take the most recent `updated_at` value.
The query looks more complicated now because of the deduplication logic. The complexity can be hidden behind a database view.
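For example, a view along the following lines could hide the `argMax` boilerplate (a sketch; the view name is made up). Filters that cannot be pushed down past the `GROUP BY` may make the view slower than inlining the subquery, so this is mainly a readability trade-off:

```sql
CREATE VIEW events_deduplicated AS
SELECT
    id,
    argMax(project_id, updated_at) AS project_id,
    argMax(group_id, updated_at) AS group_id,
    argMax(author_id, updated_at) AS author_id,
    argMax(target_type, updated_at) AS target_type,
    argMax(target_id, updated_at) AS target_id,
    argMax(action, updated_at) AS action,
    argMax(fingerprint, updated_at) AS fingerprint,
    min(created_at) AS created_at,
    max(updated_at) AS updated_at
FROM events
GROUP BY id;
```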
### Optimizing the performance
The aggregation query in the previous section might not be performant enough for production use due to the large volume of data.
Let's add 1 million extra rows to the `events` table:
```sql
INSERT INTO events (id, project_id, author_id, target_id, target_type, action) SELECT id, project_id, author_id, target_id, 'Issue' AS target_type, action FROM generateRandom('id UInt64, project_id UInt64, author_id UInt64, target_id UInt64, action UInt64') LIMIT 1000000;
```
Running the previous aggregation query in the console prints out some performance data:
```plaintext
1 row in set. Elapsed: 0.122 sec. Processed 1.00 million rows, 42.00 MB (8.21 million rows/s., 344.96 MB/s.)
```
The query correctly returned one row; however, it had to process 1 million rows (a full table scan). We can optimize the query with an index on the `project_id` column:
```sql
ALTER TABLE events ADD INDEX project_id_index project_id TYPE minmax GRANULARITY 10;
ALTER TABLE events MATERIALIZE INDEX project_id_index;
```
Executing the query returns much better figures:
```plaintext
Read 2 rows, 107.00 B in 0.005616811 sec., 356 rows/sec., 18.60 KiB/sec.
```
To optimize the date range filter on the `created_at` column, we could try adding another index on the `created_at` column.
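Following the same pattern as the `project_id` index above, that could look like the following sketch (whether it actually helps depends on how well `created_at` correlates with the insert order of the parts):

```sql
ALTER TABLE events ADD INDEX created_at_index created_at TYPE minmax GRANULARITY 10;
ALTER TABLE events MATERIALIZE INDEX created_at_index;
```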
#### Query for the contribution graph
Just to recap, this is the PostgreSQL query:
```sql
SELECT DATE(events.created_at), COUNT(*)
FROM events
WHERE events.author_id = 1
AND events.created_at BETWEEN '2022-01-17 23:00:00' AND '2023-01-18 22:59:59.999999'
AND (
(
events.action = 5
) OR
(
events.action IN (1, 3) -- Enum values are documented in the Event model, see the ACTIONS constant in app/models/event.rb
AND events.target_type IN ('Issue', 'WorkItem')
) OR
(
events.action IN (7, 1, 3)
AND events.target_type = 'MergeRequest'
) OR
(
events.action = 6
)
)
GROUP BY DATE(events.created_at)
```
The filtering and the count aggregation are mainly done on the `author_id` and the `created_at` columns. Grouping the data by these two columns would probably give adequate performance.
We could attempt adding an index on the `author_id` column; however, we would still need an additional index on the `created_at` column to properly cover this query. Besides, under the contribution graph, GitLab shows the ordered list of the user's contributions, which we would like to fetch efficiently with a different query using an `ORDER BY` clause.
For these reasons, it's probably better to use a ClickHouse projection, which stores the event rows redundantly but lets us specify a different sort order.
The ClickHouse query would be the following (with a slightly adjusted date range):
```sql
SELECT DATE(events.created_at) AS date, COUNT(*) AS count
FROM (
SELECT
id,
argMax(events.created_at, events.updated_at) AS created_at
FROM events
WHERE events.author_id = 4
AND events.created_at BETWEEN '2023-01-01 23:00:00' AND '2024-01-01 22:59:59.999999'
AND (
(
events.action = 5
) OR
(
events.action IN (1, 3) -- Enum values are documented in the Event model, see the ACTIONS constant in app/models/event.rb
AND events.target_type IN ('Issue', 'WorkItem')
) OR
(
events.action IN (7, 1, 3)
AND events.target_type = 'MergeRequest'
) OR
(
events.action = 6
)
)
GROUP BY id
) AS events
GROUP BY DATE(events.created_at)
```
The query does a full table scan; let's optimize it with a projection:
```sql
ALTER TABLE events ADD PROJECTION events_by_authors (
SELECT * ORDER BY author_id, created_at -- different sort order for the table
);
ALTER TABLE events MATERIALIZE PROJECTION events_by_authors;
```
#### Pagination of contributions
The contributions of a user can be listed with the following query:
```sql
SELECT events.*
FROM (
SELECT
id,
argMax(events.project_id, events.updated_at) AS project_id,
argMax(events.group_id, events.updated_at) AS group_id,
argMax(events.author_id, events.updated_at) AS author_id,
argMax(events.target_type, events.updated_at) AS target_type,
argMax(events.target_id, events.updated_at) AS target_id,
argMax(events.action, events.updated_at) AS action,
argMax(events.fingerprint, events.updated_at) AS fingerprint,
FIRST_VALUE(events.created_at) AS created_at,
MAX(events.updated_at) AS updated_at
FROM events
WHERE events.author_id = 4
GROUP BY id
ORDER BY created_at DESC, id DESC
) AS events
LIMIT 20
```
ClickHouse supports the standard `LIMIT N OFFSET M` clauses, so we can request the next page:
```sql
SELECT events.*
FROM (
SELECT
id,
argMax(events.project_id, events.updated_at) AS project_id,
argMax(events.group_id, events.updated_at) AS group_id,
argMax(events.author_id, events.updated_at) AS author_id,
argMax(events.target_type, events.updated_at) AS target_type,
argMax(events.target_id, events.updated_at) AS target_id,
argMax(events.action, events.updated_at) AS action,
argMax(events.fingerprint, events.updated_at) AS fingerprint,
FIRST_VALUE(events.created_at) AS created_at,
MAX(events.updated_at) AS updated_at
FROM events
WHERE events.author_id = 4
GROUP BY id
ORDER BY created_at DESC, id DESC
) AS events
LIMIT 20 OFFSET 20
```
# Merge request analytics with ClickHouse
The [merge request analytics feature](../../../user/analytics/merge_request_analytics.md)
shows statistics about the merged merge requests in the project and also exposes record-level metadata.
Aggregations include:
- **Average time to merge**: The duration between the creation time and the merge time.
- **Monthly aggregations**: A chart of 12 months of the merged merge requests.
Under the chart, the user can see the paginated list of merge requests, 12 months per page.
You can filter by:
- Author
- Assignee
- Labels
- Milestone
- Source branch
- Target branch
## Current performance problems
- The aggregation queries require specialized indexes, which cost additional
disk space (index-only scans).
- Querying the whole 12 months is slow (statement timeout). Instead, the frontend
requests data per month (12 database queries).
- Even with specialized indexes, making the feature available on the group level
would not be feasible due to the large volume of merge requests.
## Example queries
Get the number of merge requests merged in a given month:
```sql
SELECT COUNT(*)
FROM "merge_requests"
INNER JOIN "merge_request_metrics" ON "merge_request_metrics"."merge_request_id" = "merge_requests"."id"
WHERE (NOT EXISTS
(SELECT 1
FROM "banned_users"
WHERE (merge_requests.author_id = banned_users.user_id)))
AND "merge_request_metrics"."target_project_id" = 278964
AND "merge_request_metrics"."merged_at" >= '2022-12-01 00:00:00'
AND "merge_request_metrics"."merged_at" <= '2023-01-01 00:00:00'
```
The `merge_request_metrics` table was de-normalized (by adding `target_project_id`)
to improve the first-page load time. The query itself works well for smaller date ranges;
however, it can time out as the date range increases.
After an extra filter is added, the query becomes more complex because it must also
filter the `merge_requests` table:
```sql
SELECT COUNT(*)
FROM "merge_requests"
INNER JOIN "merge_request_metrics" ON "merge_request_metrics"."merge_request_id" = "merge_requests"."id"
WHERE (NOT EXISTS
(SELECT 1
FROM "banned_users"
WHERE (merge_requests.author_id = banned_users.user_id)))
AND "merge_requests"."author_id" IN
(SELECT "users"."id"
FROM "users"
WHERE (LOWER("users"."username") IN (LOWER('ahegyi'))))
AND "merge_request_metrics"."target_project_id" = 278964
AND "merge_request_metrics"."merged_at" >= '2022-12-01 00:00:00'
AND "merge_request_metrics"."merged_at" <= '2023-01-01 00:00:00'
```
To calculate mean time to merge, we also query the total time between the
merge request creation time and merge time.
```sql
SELECT EXTRACT(epoch
FROM SUM(AGE(merge_request_metrics.merged_at, merge_request_metrics.created_at)))
FROM "merge_requests"
INNER JOIN "merge_request_metrics" ON "merge_request_metrics"."merge_request_id" = "merge_requests"."id"
WHERE (NOT EXISTS
(SELECT 1
FROM "banned_users"
WHERE (merge_requests.author_id = banned_users.user_id)))
AND "merge_requests"."author_id" IN
(SELECT "users"."id"
FROM "users"
WHERE (LOWER("users"."username") IN (LOWER('ahegyi'))))
AND "merge_request_metrics"."target_project_id" = 278964
AND "merge_request_metrics"."merged_at" >= '2022-08-01 00:00:00'
AND "merge_request_metrics"."merged_at" <= '2022-09-01 00:00:00'
AND "merge_request_metrics"."merged_at" > "merge_request_metrics"."created_at"
LIMIT 1
```
## Store merge request data in ClickHouse
Several other use cases exist for storing and querying merge request data in
[ClickHouse](../../../integration/clickhouse.md). In this document, we focus on this particular feature.
The core data exists in the `merge_request_metrics` and in the `merge_requests`
database tables. Some filters require extra tables to be joined:
- `banned_users`: Filter out merge requests created by banned users.
- `labels`: A merge request can have one or more assigned labels.
- `assignees`: A merge request can have one or more assignees.
- `merged_at`: The `merged_at` column is located in the `merge_request_metrics` table.
The `merge_requests` table contains data that can be filtered directly:
- **Author**: via the `author_id` column.
- **Milestone**: via the `milestone_id` column.
- **Source branch**.
- **Target branch**.
- **Project**: via the `project_id` column.
### Keep ClickHouse data up to date
Replicating or syncing the `merge_requests` table is unfortunately not enough.
Separate queries to associated tables are required to insert one de-normalized
`merge_requests` row into the ClickHouse database.
Change detection is non-trivial to implement. A few corners we could cut:
- The feature is available for GitLab Premium and GitLab Ultimate customers.
We don't have to sync all the data, but instead sync only the `merge_requests` records
which are part of licensed groups.
- Data changes (often) happen via the `MergeRequest` services, where bumping the
`updated_at` timestamp column is mostly consistent. Some sort of incremental
synchronization process could be implemented.
- We only need to query the merged merge requests. After the merge, the record rarely changes.
### Database table structure
The database table structure uses de-normalization to make all required columns
available in one database table. This eliminates the need for `JOINs`.
```sql
CREATE TABLE merge_requests
(
`id` UInt64,
`project_id` UInt64 DEFAULT 0 NOT NULL,
`author_id` UInt64 DEFAULT 0 NOT NULL,
`milestone_id` UInt64 DEFAULT 0 NOT NULL,
`label_ids` Array(UInt64) DEFAULT [] NOT NULL,
`assignee_ids` Array(UInt64) DEFAULT [] NOT NULL,
`source_branch` String DEFAULT '' NOT NULL,
`target_branch` String DEFAULT '' NOT NULL,
`merged_at` DateTime64(6, 'UTC') NOT NULL,
`created_at` DateTime64(6, 'UTC') DEFAULT now() NOT NULL,
`updated_at` DateTime64(6, 'UTC') DEFAULT now() NOT NULL
)
ENGINE = ReplacingMergeTree(updated_at)
ORDER BY (project_id, merged_at, id);
```
Similarly to the [activity data example](gitlab_activity_data.md), we use the
`ReplacingMergeTree` engine. Several columns of the merge request record may change,
so keeping the table up-to-date is important.
The database table is ordered by the `project_id, merged_at, id` columns. This ordering
optimizes the table data for our use case: querying the `merged_at` column in a project.
## Rewrite the count query
First, let's generate some data for the table.
```sql
INSERT INTO merge_requests (id, project_id, author_id, milestone_id, label_ids, merged_at, created_at)
SELECT id, project_id, author_id, milestone_id, label_ids, merged_at, created_at
FROM generateRandom('id UInt64, project_id UInt8, author_id UInt8, milestone_id UInt8, label_ids Array(UInt8), merged_at DateTime64(6, \'UTC\'), created_at DateTime64(6, \'UTC\')')
LIMIT 1000000;
```
{{< alert type="note" >}}
Some integer data types were cast as `UInt8`, so it is highly probable that they
have the same values across different rows.
{{< /alert >}}
The original count query only aggregated data for one month. With ClickHouse, we can
attempt aggregating the data for the whole year.
PostgreSQL-based count query:
```sql
SELECT COUNT(*)
FROM "merge_requests"
INNER JOIN "merge_request_metrics" ON "merge_request_metrics"."merge_request_id" = "merge_requests"."id"
WHERE (NOT EXISTS
(SELECT 1
FROM "banned_users"
WHERE (merge_requests.author_id = banned_users.user_id)))
AND "merge_request_metrics"."target_project_id" = 278964
AND "merge_request_metrics"."merged_at" >= '2022-12-01 00:00:00'
AND "merge_request_metrics"."merged_at" <= '2023-01-01 00:00:00'
```
ClickHouse query:
```sql
SELECT
toYear(merged_at) AS year,
toMonth(merged_at) AS month,
COUNT(*)
FROM merge_requests
WHERE
project_id = 200
AND merged_at BETWEEN '2022-01-01 00:00:00'
AND '2023-01-01 00:00:00'
GROUP BY year, month
```
The query processed significantly fewer rows than the total amount of generated data.
The `ORDER BY` clause (the primary key) helps the query execution:
```plaintext
11 rows in set. Elapsed: 0.010 sec.
Processed 8.19 thousand rows, 131.07 KB (783.45 thousand rows/s., 12.54 MB/s.)
```
## Rewrite the mean time to merge query
The query calculates the mean time to merge as:
`duration(created_at, merged_at) / merge_request_count`. The calculation is done in
several steps:
1. Request the monthly counts and the monthly duration values.
1. Sum the counts to get the yearly count.
1. Sum the durations to get the yearly duration.
1. Divide the durations by the count.
In ClickHouse, we can calculate the mean time to merge with one query:
```sql
SELECT
SUM(
    dateDiff('second', created_at, merged_at) / 3600 / 24
) / COUNT(*) AS mean_time_to_merge -- mean_time_to_merge is in days
FROM merge_requests
WHERE
project_id = 200
AND merged_at BETWEEN '2022-01-01 00:00:00'
AND '2023-01-01 00:00:00'
```
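An equivalent formulation (a sketch, not part of the original comparison) uses ClickHouse's `avg` aggregate function directly; the result is the same because `SUM(x) / COUNT(*)` equals `avg(x)`:
```sql
SELECT
  avg(dateDiff('second', created_at, merged_at)) / 3600 / 24 AS mean_time_to_merge -- in days
FROM merge_requests
WHERE
  project_id = 200
  AND merged_at BETWEEN '2022-01-01 00:00:00'
  AND '2023-01-01 00:00:00'
```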
## Filtering
The database queries above can be used as base queries. You can add more filters.
For example, filtering for a label and a milestone:
```sql
SELECT
toYear(merged_at) AS year,
toMonth(merged_at) AS month,
COUNT(*)
FROM merge_requests
WHERE
project_id = 200
AND milestone_id = 15
  AND has(label_ids, 118) -- array includes 118
  AND merged_at BETWEEN '2022-01-01 00:00:00'
AND '2023-01-01 00:00:00'
GROUP BY year, month
```
Optimizing a particular filter is usually done with a database index. This particular
query reads about 8000 rows:
```plaintext
1 row in set. Elapsed: 0.016 sec.
Processed 8.19 thousand rows, 589.99 KB (505.38 thousand rows/s., 36.40 MB/s.)
```
Adding an index on `milestone_id`:
```sql
ALTER TABLE merge_requests
  ADD INDEX milestone_id_index milestone_id TYPE minmax GRANULARITY 10;

ALTER TABLE merge_requests
  MATERIALIZE INDEX milestone_id_index;
```
On the generated data, adding the index didn't improve the performance.
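To check whether the skipping index is used at all for a given query, you can inspect the query plan. This is a quick diagnostic, not part of the original measurement:
```sql
EXPLAIN indexes = 1
SELECT COUNT(*)
FROM merge_requests
WHERE project_id = 200 AND milestone_id = 15;
```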
### Banned users filter
A recently added feature in GitLab filters out merge requests where the author is
banned by the admins. The banned users are tracked on the instance level in the
`banned_users` database table.
#### Idea 1: Enumerate the banned user IDs
This would require no structural changes to the ClickHouse database schema.
We could query the banned user IDs from PostgreSQL and filter them out at query time.
Get the banned users (in PostgreSQL):
```sql
SELECT user_id FROM banned_users
```
In ClickHouse:
```sql
SELECT
toYear(merged_at) AS year,
toMonth(merged_at) AS month,
COUNT(*)
FROM merge_requests
WHERE
  author_id NOT IN (1, 2, 3, 4) -- banned users
  AND project_id = 200
  AND milestone_id = 15
  AND has(label_ids, 118) -- array includes 118
  AND merged_at BETWEEN '2022-01-01 00:00:00'
  AND '2023-01-01 00:00:00'
GROUP BY year, month
```
The problem with this approach is that the number of banned users could increase significantly, which would make the query bigger and slower.
#### Idea 2: Replicate the `banned_users` table
Assuming that the `banned_users` table doesn't grow to millions of rows, we could
attempt to periodically sync the whole table to ClickHouse. With this approach,
a mostly consistent `banned_users` table could be used in the ClickHouse database query:
```sql
SELECT
toYear(merged_at) AS year,
toMonth(merged_at) AS month,
COUNT(*)
FROM merge_requests
WHERE
author_id NOT IN (SELECT user_id FROM banned_users) AND
project_id = 200 AND
milestone_id = 15 AND
has(label_ids, 118) AND -- array includes 118
merged_at BETWEEN '2022-01-01 00:00:00' AND '2023-01-01 00:00:00'
GROUP BY year, month
```
Alternatively, the `banned_users` table could be stored as a
[dictionary](https://clickhouse.com/docs/en/sql-reference/dictionaries/external-dictionaries/external-dicts)
to further improve the query performance.
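A rough sketch of what that could look like. The dictionary name, attributes, layout, and refresh interval are illustrative choices rather than a settled design, and they assume the replicated `banned_users` table contains `user_id` and `created_at` columns:
```sql
CREATE DICTIONARY banned_users_dict
(
  user_id UInt64,
  created_at DateTime
)
PRIMARY KEY user_id
SOURCE(CLICKHOUSE(TABLE 'banned_users'))
LAYOUT(FLAT())
LIFETIME(MIN 300 MAX 600);

-- The NOT IN subquery can then be replaced with a dictionary lookup:
SELECT COUNT(*)
FROM merge_requests
WHERE NOT dictHas('banned_users_dict', author_id)
  AND project_id = 200;
```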
#### Idea 3: Alter the feature
For analytical calculations, it might be acceptable to drop this particular filter.
This approach assumes that including the merge requests of banned users doesn't skew the statistics significantly.
This document gives a high-level overview of how to develop features using ClickHouse in the GitLab Rails application.
{{< alert type="note" >}}
Most of the tooling and APIs are considered unstable.
{{< /alert >}}
## GDK setup
### Set up the ClickHouse server
1. Install ClickHouse locally as described in the [ClickHouse installation documentation](https://clickhouse.com/docs/en/install). If you use QuickInstall, it is installed in the current directory; if you use Homebrew, it is installed to `/opt/homebrew/bin/clickhouse`.
1. Add a ClickHouse section to your `gdk.yml`. See [`gdk.example.yml`](https://gitlab.com/gitlab-org/gitlab-development-kit/-/blob/main/gdk.example.yml).
1. Adjust the ClickHouse configuration in `gdk.yml` to point to your local ClickHouse installation and local data storage. For example:
```yaml
clickhouse:
bin: "/opt/homebrew/bin/clickhouse"
enabled: true
  # these are optional if we have more than one GDK:
# http_port: 8123
# interserver_http_port: 9009
# tcp_port: 9001
```
1. Run `gdk reconfigure`.
1. Start ClickHouse with `gdk start clickhouse`.
### Configure your Rails application
1. Copy the example file and configure the credentials:
```shell
cp config/click_house.yml.example config/click_house.yml
```
1. Create the database using the bundled `clickhouse client`:
```shell
gdk clickhouse
```
```sql
create database gitlab_clickhouse_development;
create database gitlab_clickhouse_test;
```
### Validate your setup
Run the Rails console and invoke a simple query:
```ruby
ClickHouse::Client.select('SELECT 1', :main)
# => [{"1"=>1}]
```
## Database schema and migrations
To generate a ClickHouse database migration, execute:
``` shell
bundle exec rails generate gitlab:click_house:migration MIGRATION_CLASS_NAME
```
To run database migrations, execute:
```shell
bundle exec rake gitlab:clickhouse:migrate
```
To roll back the last N migrations, execute:
```shell
bundle exec rake gitlab:clickhouse:rollback:main STEP=N
```
Or use the following command to roll back all migrations:
```shell
bundle exec rake gitlab:clickhouse:rollback:main VERSION=0
```
You can create a migration by creating a Ruby migration file in the `db/click_house/migrate` folder. The filename should be prefixed with a timestamp in the format `YYYYMMDDHHMMSS_description_of_migration.rb`:
```ruby
# 20230811124511_create_issues.rb
# frozen_string_literal: true
class CreateIssues < ClickHouse::Migration
def up
execute <<~SQL
CREATE TABLE issues
(
id UInt64 DEFAULT 0,
title String DEFAULT ''
)
ENGINE = MergeTree
PRIMARY KEY (id)
SQL
end
def down
execute <<~SQL
      DROP TABLE issues
SQL
end
end
```
## Post deployment migrations
To generate a ClickHouse database post deployment migration execute:
``` shell
bundle exec rails generate gitlab:click_house:post_deployment_migration MIGRATION_CLASS_NAME
```
By default, these migrations run together with regular migrations. They can be skipped,
for example before deploying to production, by setting the `SKIP_POST_DEPLOYMENT_MIGRATIONS` environment variable:
``` shell
export SKIP_POST_DEPLOYMENT_MIGRATIONS=true
bundle exec rake gitlab:clickhouse:migrate
```
## Writing database queries
For the ClickHouse database we don't use an ORM (Object Relational Mapping). The main reason is that the GitLab application has many customizations for the `ActiveRecord` PostgreSQL adapter, and the application generally assumes that all databases use PostgreSQL. Because ClickHouse-related features are still at a very early stage of development, we decided to implement a simple HTTP client to avoid hard-to-discover bugs and long debugging times when dealing with multiple `ActiveRecord` adapters.
Additionally, ClickHouse might not be used the same way as other adapters for `ActiveRecord`. The access patterns differ from traditional transactional databases, in that ClickHouse:
- Uses nested aggregation `SELECT` queries with `GROUP BY` clauses.
- Doesn't use single `INSERT` statements. Data is inserted in batches via background jobs.
- Has different consistency characteristics, no transactions.
- Has very few database-level validations.
Database queries are written and executed with the help of the `ClickHouse::Client` gem.
A simple query from the `events` table:
```ruby
rows = ClickHouse::Client.select('SELECT * FROM events', :main)
```
When working with queries that have placeholders, you can use the `ClickHouse::Client::Query` object, where you specify the placeholder name and its data type. The actual variable replacement, quoting, and escaping are done by the ClickHouse server.
```ruby
raw_query = 'SELECT * FROM events WHERE id > {min_id:UInt64}'
placeholders = { min_id: Integer(100) }
query = ClickHouse::Client::Query.new(raw_query: raw_query, placeholders: placeholders)
rows = ClickHouse::Client.select(query, :main)
```
When using placeholders, the client can provide a version of the query with redacted placeholder values, which can be ingested by our logging system. You can see the redacted version of your query by calling the `to_redacted_sql` method:
```ruby
puts query.to_redacted_sql
```
ClickHouse allows only one statement per request. This means that the common SQL injection vulnerability where the statement is closed with a `;` character and then another query is "injected" cannot be exploited:
```ruby
ClickHouse::Client.select('SELECT 1; SELECT 2', :main)
# ClickHouse::Client::DatabaseError: Code: 62. DB::Exception: Syntax error (Multi-statements are not allowed): failed at position 9 (end of query): ; SELECT 2. . (SYNTAX_ERROR) (version 23.4.2.11 (official build))
```
### Subqueries
You can compose complex queries with the `ClickHouse::Client::Query` class by specifying the query placeholder with the special `Subquery` type. The library will make sure to correctly merge the queries and the placeholders:
```ruby
subquery = ClickHouse::Client::Query.new(raw_query: 'SELECT id FROM events WHERE id = {id:UInt64}', placeholders: { id: Integer(10) })
raw_query = 'SELECT * FROM events WHERE id > {id:UInt64} AND id IN ({q:Subquery})'
placeholders = { id: Integer(10), q: subquery }
query = ClickHouse::Client::Query.new(raw_query: raw_query, placeholders: placeholders)
rows = ClickHouse::Client.select(query, :main)
# ClickHouse will replace the placeholders
puts query.to_sql # SELECT * FROM events WHERE id > {id:UInt64} AND id IN (SELECT id FROM events WHERE id = {id:UInt64})
puts query.to_redacted_sql # SELECT * FROM events WHERE id > $1 AND id IN (SELECT id FROM events WHERE id = $2)
puts query.placeholders # { id: 10 }
```
If placeholders with the same name have different values, building the query raises an error.
### Writing query conditions
When working with complex forms where multiple filter conditions are present, building queries by concatenating query fragments as strings can get out of hand very quickly. For queries with several conditions, you can use the `ClickHouse::Client::QueryBuilder` class. The class uses the `Arel` gem to generate queries and provides a query interface similar to `ActiveRecord`.
```ruby
builder = ClickHouse::Client::QueryBuilder.new('events')
query = builder
.where(builder.table[:created_at].lteq(Date.today))
.where(id: [1,2,3])
rows = ClickHouse::Client.select(query, :main)
```
## Inserting data
The ClickHouse client supports inserting data through the standard query interface:
```ruby
raw_query = 'INSERT INTO events (id, target_type) VALUES ({id:UInt64}, {target_type:String})'
placeholders = { id: 1, target_type: 'Issue' }
query = ClickHouse::Client::Query.new(raw_query: raw_query, placeholders: placeholders)
rows = ClickHouse::Client.execute(query, :main)
```
Inserting data this way is acceptable if:
- The table contains settings or configuration data where we need to add one row.
- For testing, test data has to be prepared in the database.
When inserting data, we should always try to use batch processing where multiple rows are inserted at once. Building large `INSERT` queries in memory is discouraged because of the increased memory usage. Additionally, values specified within such queries cannot be redacted automatically by the client.
To compress data and reduce memory usage, insert CSV data. You can do this with the internal [`CsvBuilder`](https://gitlab.com/gitlab-org/gitlab/-/tree/master/gems/csv_builder) gem:
```ruby
iterator = Event.find_each
# insert from events table using only the id and the target_type columns
column_mapping = {
id: :id,
target_type: :target_type
}
CsvBuilder::Gzip.new(iterator, column_mapping).render do |tempfile|
query = 'INSERT INTO events (id, target_type) FORMAT CSV'
ClickHouse::Client.insert_csv(query, File.open(tempfile.path), :main)
end
```
{{< alert type="note" >}}
It's important to test and verify efficient batching of database records from PostgreSQL. Consider using the techniques described in [Iterating tables in batches](../iterating_tables_in_batches.md).
{{< /alert >}}
## Iterating over tables
You can use the `ClickHouse::Iterator` class for batching over large volumes of data in ClickHouse. The iterator works a bit differently than the existing tooling for the PostgreSQL database (see the [iterating tables in batches docs](../iterating_tables_in_batches.md)), as the tool does not rely on database indexes and uses fixed-size numeric ranges.
Prerequisites:
- A single integer column to iterate over.
- No huge gaps between the column values; the ideal column is an auto-incrementing PostgreSQL primary key.
- Duplicated values are not a problem if the data duplication is minimal.
Usage:
```ruby
connection = ClickHouse::Connection.new(:main)
builder = ClickHouse::Client::QueryBuilder.new('events')
iterator = ClickHouse::Iterator.new(query_builder: builder, connection: connection)
iterator.each_batch(column: :id, of: 100_000) do |scope|
records = connection.select(scope.to_sql)
end
```
If you want to iterate over specific rows, you can add filters to the query builder object. Be aware that efficient filtering and iteration might require a different database table schema optimized for the use case. When introducing such iteration, always ensure that the database queries are not scanning the whole database table.
```ruby
connection = ClickHouse::Connection.new(:main)
builder = ClickHouse::Client::QueryBuilder.new('events')
# filtering by target type and stringified traversal ids/path
builder = builder.where(target_type: 'Issue')
builder = builder.where(path: '96/97/') # points to a specific project
iterator = ClickHouse::Iterator.new(query_builder: builder, connection: connection)
iterator.each_batch(column: :id, of: 10) do |scope, min, max|
puts "processing range: #{min} - #{max}"
puts scope.to_sql
records = connection.select(scope.to_sql)
end
```
### Min-max strategies
As the first step, the iterator determines the data range which is used as a condition in the iteration queries. The data range is
determined using `MIN(column)` and `MAX(column)` aggregations. For some database tables, this strategy causes inefficient database queries (full table scans); one example is partitioned database tables.
Example query:
```sql
SELECT MIN(id) AS min, MAX(id) AS max FROM events;
```
Alternatively, a different min-max strategy can be used, which uses `ORDER BY` + `LIMIT` to determine the data range.
```ruby
iterator = ClickHouse::Iterator.new(query_builder: builder, connection: connection, min_max_strategy: :order_limit)
```
Example query:
```sql
SELECT (SELECT id FROM events ORDER BY id ASC LIMIT 1) AS min, (SELECT id FROM events ORDER BY id DESC LIMIT 1) AS max;
```
## Implementing Sidekiq workers
Sidekiq workers leveraging ClickHouse databases should include the `ClickHouseWorker` module.
This ensures that the worker is paused while database migrations are running,
and that migrations do not run while the worker is active.
```ruby
# events_sync_worker.rb
# frozen_string_literal: true
module ClickHouse
class EventsSyncWorker
include ApplicationWorker
include ClickHouseWorker
...
end
end
```
## Best practices
When building features that require data from ClickHouse, you should first replicate raw data from PostgreSQL tables (such as events or issues) using [Sidekiq workers](#implementing-sidekiq-workers) or another strategy. Then, build separate aggregations on top of that data. By avoiding direct aggregation from PostgreSQL, you can improve maintainability and enable data reprocessing.
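As a sketch of this pattern, a periodic job could build a daily aggregate from the replicated raw rows. The table and column names below are illustrative, not an existing schema:
```sql
-- `siphon_events` stands for raw rows replicated from PostgreSQL;
-- `events_daily_counts` is a separate, pre-aggregated table.
INSERT INTO events_daily_counts (project_id, day, events_count)
SELECT
    project_id,
    toDate(created_at) AS day,
    count() AS events_count
FROM siphon_events
WHERE created_at >= today() - 1
GROUP BY project_id, day;
```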
## Testing
ClickHouse is enabled in CI/CD, but to avoid significantly affecting the pipeline runtime, the ClickHouse server runs only for test cases tagged with `:click_house`.
The `:click_house` tag ensures that the database schema is properly set up before every test case.
```ruby
RSpec.describe MyClickHouseFeature, :click_house do
it 'returns rows' do
rows = ClickHouse::Client.select('SELECT 1', :main)
expect(rows.size).to eq(1)
end
end
```
## Multiple databases
By design, the `ClickHouse::Client` library supports configuring multiple databases. Because we're still at a very early stage of development, we only have one database called `main`.
Multi database configuration example:
```yaml
development:
main:
database: gitlab_clickhouse_main_development
url: 'http://localhost:8123'
username: clickhouse
password: clickhouse
user_analytics: # made up database
database: gitlab_clickhouse_user_analytics_development
url: 'http://localhost:8123'
username: clickhouse
password: clickhouse
```
## Observability
All queries executed via the `ClickHouse::Client` library expose the query with performance metrics (timings, read bytes) via `ActiveSupport::Notifications`.
```ruby
ActiveSupport::Notifications.subscribe('sql.click_house') do |_, _, _, _, data|
puts data.inspect
end
```
Additionally, to view the ClickHouse queries executed during web interactions, open the performance bar and select the count next to the `ch` label.
## Handling Siphon Errors in Tests
GitLab uses a tool called [Siphon](https://gitlab.com/gitlab-org/analytics-section/siphon) to constantly synchronise data from specified tables in PostgreSQL to ClickHouse.
This process requires that, for each specified table, the ClickHouse schema contains a copy of the PostgreSQL schema.
During GitLab development, if you add a new column to a PostgreSQL table without adding a matching column in ClickHouse, CI fails with an error:
```plaintext
This table is synchronised to ClickHouse and you've added a new column!
```
To resolve this, you should add a migration to add the column to ClickHouse too.
### Example
1. Add a new column `new_int` of type `int4` to a table that is being synchronised to ClickHouse, such as `milestones`.
1. Note that CI will fail with the error:
```plaintext
This table is synchronised to ClickHouse and you've added a new column!
```
1. Generate a new ClickHouse migration to add the new column. Note that the ClickHouse table is prefixed with `siphon_`:
```shell
bundle exec rails generate gitlab:click_house:migration add_new_int_to_siphon_milestones
```
1. In the generated file, define `up`/`down` methods to add/remove the new column. ClickHouse data types map approximately to PostgreSQL types.
   Check `Gitlab::ClickHouse::SiphonGenerator::PG_TYPE_MAP` for the appropriate mapping for the new column; using the wrong type triggers a different error.
   Additionally, consider using [`LowCardinality`](https://clickhouse.com/docs/sql-reference/data-types/lowcardinality) where appropriate, and use [`Nullable`](https://clickhouse.com/docs/sql-reference/data-types/nullable) sparingly, opting for default values instead where possible.
```ruby
class AddNewIntToSiphonMilestones < ClickHouse::Migration
def up
execute <<~SQL
ALTER TABLE siphon_milestones ADD COLUMN new_int Int64 DEFAULT 42;
SQL
end
def down
execute <<~SQL
ALTER TABLE siphon_milestones DROP COLUMN new_int;
SQL
end
end
```
If you need further assistance, reach out to `#f_siphon` internally.
## Troubleshooting
If you experience `MEMORY_LIMIT_EXCEEDED` errors when executing queries, increase the `clickhouse.max_memory_usage` and `clickhouse.max_server_memory_usage` settings
in your `gdk.yml` file.
Consult the `gdk.example.yml` file for the default settings. You must reconfigure GDK for changes to take effect.
## Getting help
For additional information or specific questions, reach out to the ClickHouse Datastore working group in the `#f_clickhouse` Slack channel, or mention `@gitlab-org/maintainers/clickhouse` in a comment on GitLab.com.
https://docs.gitlab.com/development/database/clickhouse
|
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/development/database/_index.md
|
2025-08-13
|
doc/development/database/clickhouse
|
[
"doc",
"development",
"database",
"clickhouse"
] |
_index.md
|
none
|
unassigned
|
Any user with at least the Maintainer role can merge updates to this content. For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
|
Introduction to ClickHouse use and table design
| null |
## How it differs from PostgreSQL
The [intro](https://clickhouse.com/docs/en/intro) page gives a good overview of ClickHouse.
ClickHouse differs in many ways from traditional OLTP (online transaction processing) databases like PostgreSQL. The underlying architecture is different, and query processing is much more CPU-bound than in traditional databases.
ClickHouse is a log-centric database where immutability is a key component. The advantages of such approaches are well documented; for more information, see [The rise of immutable data stores](https://www.odbms.org/2015/10/the-rise-of-immutable-data-stores/). However, this also makes updates much harder. See the ClickHouse [documentation](https://clickhouse.com/docs/en/guides/developer/mutations) for operations that provide UPDATE/DELETE support. Notably, these operations are expected to be infrequent.
This distinction is important when designing tables. Either:
- Updates are not required (the best case), or
- If they are needed, they are not run during query execution.
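For illustration, occasional mutations on a hypothetical `events` table would look like the following; mutations run asynchronously and rewrite whole data parts, which is why they should stay rare:
```sql
-- Mutations are asynchronous and rewrite whole data parts, so keep them rare.
ALTER TABLE events DELETE WHERE author_id = 42;
ALTER TABLE events UPDATE target_type = 'Issue' WHERE id = 100;
```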
## ACID compatibility
ClickHouse takes a slightly different view of transactional support: the guarantees apply only to a single block of data inserted into a specific table. See the [Transactional (ACID) support](https://clickhouse.com/docs/en/guides/developer/transactional) documentation for details.
Multiple insertions in a single write should be avoided, as transactional support across multiple tables is only covered by materialized views.
ClickHouse is heavily geared towards providing best-in-class support for analytical queries. Operations like aggregation are very fast, and there are several features that augment these capabilities.
There are some good blog posts covering the [details of aggregations](https://altinity.com/blog/clickhouse-aggregation-fun-part-1-internals-and-handy-tools).
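As a small, illustrative example (reusing a `merge_requests` table like the one defined earlier in this document), a single query can compute several aggregates per group:
```sql
SELECT
    project_id,
    count() AS merged_count,
    countIf(merged_at >= now() - INTERVAL 30 DAY) AS merged_last_30_days,
    avg(dateDiff('second', created_at, merged_at)) / 86400 AS mean_days_to_merge
FROM merge_requests
GROUP BY project_id
ORDER BY merged_count DESC
LIMIT 10;
```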
## Primary indexes, sorting index and dictionaries
It is highly recommended to read ["A practical introduction to primary indexes in ClickHouse"](https://clickhouse.com/docs/en/guides/improving-query-performance/sparse-primary-indexes/sparse-primary-indexes-intro) to get an understanding of indexes in ClickHouse,
in particular how database index design in ClickHouse [differs](https://clickhouse.com/docs/en/guides/improving-query-performance/sparse-primary-indexes/sparse-primary-indexes-design#an-index-design-for-massive-data-scales) from that in transactional databases like PostgreSQL.
Primary index design plays a very important role in query performance and should be chosen carefully. Almost all queries should rely on the primary index, as full data scans are bound to take longer.
Read the documentation for [primary keys and indexes in queries](https://clickhouse.com/docs/en/engines/table-engines/mergetree-family/mergetree#primary-keys-and-indexes-in-queries) to learn how indexes can affect query performance in MergeTree Table engines (default table engine in ClickHouse).
Secondary indexes in ClickHouse are different from what is available in other systems. They are also called data-skipping indexes as they are used to skip over a block of data. See the documentation for [data-skipping indexes](https://clickhouse.com/docs/en/engines/table-engines/mergetree-family/mergetree#table_engine-mergetree-data_skipping-indexes).
ClickHouse also offers ["Dictionaries"](https://clickhouse.com/docs/en/sql-reference/dictionaries/external-dictionaries/external-dicts), which can be used as external indexes. Dictionaries are held in memory and can be used to look up values at query runtime.
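A minimal sketch that ties these concepts together; the table, column, and index names are illustrative:
```sql
CREATE TABLE issue_events
(
    project_id UInt64,
    author_id  UInt64,
    action     LowCardinality(String),
    created_at DateTime64(6, 'UTC'),
    -- Data-skipping index: lets ClickHouse skip granules when filtering on a non-key column.
    INDEX author_id_index author_id TYPE minmax GRANULARITY 4
)
ENGINE = MergeTree
-- The sparse primary index is built from the sorting key columns.
ORDER BY (project_id, created_at);
```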
## Data types & Partitioning
ClickHouse offers SQL-compatible [data types](https://clickhouse.com/docs/en/sql-reference/data-types) and a few specialized data types, such as:
- [`LowCardinality`](https://clickhouse.com/docs/en/sql-reference/data-types/lowcardinality)
- [UUID](https://clickhouse.com/docs/en/sql-reference/data-types/uuid)
- [Maps](https://clickhouse.com/docs/en/sql-reference/data-types/map)
- [Nested](https://clickhouse.com/docs/en/sql-reference/data-types/nested-data-structures/nested), which is interesting because it simulates a table inside a column.
One key design aspect that comes up early when designing a table is the partitioning key. The partitioning key can be an arbitrary expression, but usually it is a time duration such as a month, day, or week. ClickHouse takes a best-effort approach to minimize the data read by using the smallest possible set of partitions (see the sketch after the reading list below).
Suggested reads:
- [Choose a low cardinality partitioning key](https://clickhouse.com/docs/en/optimize/partitioning-key)
- [Custom partitioning key](https://clickhouse.com/docs/en/engines/table-engines/mergetree-family/custom-partitioning-key).
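A sketch of a monthly partitioned table; the names are illustrative, and the partitioning expression is the important part:
```sql
CREATE TABLE audit_events
(
    id         UInt64,
    author_id  UInt64,
    created_at DateTime64(6, 'UTC')
)
ENGINE = MergeTree
-- One partition per month: queries that filter on created_at only read the
-- partitions overlapping the requested time range.
PARTITION BY toYYYYMM(created_at)
ORDER BY (author_id, created_at, id);
```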
## Sharding and replication
Sharding is a feature that splits the data across multiple ClickHouse nodes to increase throughput and decrease latency. Sharding uses the Distributed table engine, which is backed by local tables. A Distributed table is a "virtual" table that does not store any data; it is used as an interface to insert and query data.
See [the ClickHouse documentation](https://clickhouse.com/docs/en/engines/table-engines/special/distributed) and this section on [replication and sharding](https://clickhouse.com/docs/en/architecture/replication#replication-and-sharding-configuration). ClickHouse can use either Zookeeper or its own compatible API via a component called [ClickHouse Keeper](https://clickhouse.com/docs/en/operations/clickhouse-keeper) to maintain consensus.
After the nodes are set up, the topology becomes transparent to the clients, and both write and read queries can be issued to any node.
In most cases, clusters start with a fixed number of nodes (shards). [Rebalancing shards](https://clickhouse.com/docs/en/guides/sre/scaling-clusters) is operationally heavy and requires rigorous testing.
Replication is supported by the MergeTree table engine; see the [replication section](https://clickhouse.com/docs/en/engines/table-engines/mergetree-family/replication) in the documentation for details on how to define replicated tables.
ClickHouse relies on a distributed coordination component (either ZooKeeper or ClickHouse Keeper) to track the participating nodes in the quorum. Replication is asynchronous and multi-leader. Inserts can be issued to any node, and they can appear on other nodes with some latency. If desired, stickiness to a specific node can be used to make sure that reads observe the latest written data.
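A sketch of how a Distributed table sits on top of local shard tables; the cluster and table names are illustrative:
```sql
-- Local table, created on every node of the (illustrative) `events_cluster`.
CREATE TABLE events_local ON CLUSTER events_cluster
(
    id         UInt64,
    project_id UInt64,
    created_at DateTime64(6, 'UTC')
)
ENGINE = MergeTree
ORDER BY (project_id, created_at, id);

-- "Virtual" table: stores no data, routes reads and writes to the shards.
CREATE TABLE events_distributed ON CLUSTER events_cluster AS events_local
ENGINE = Distributed(events_cluster, currentDatabase(), events_local, rand());
```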
## Materialized views
One of the defining features of ClickHouse is materialized views. Functionally they resemble insert triggers for ClickHouse.
We recommend reading the [views](https://clickhouse.com/docs/en/sql-reference/statements/create/view#materialized-view) section from the official documentation to get a better understanding of how they work.
Quoting the [documentation](https://clickhouse.com/docs/en/sql-reference/statements/create/view#materialized-view):
> Materialized views in ClickHouse are implemented more like insert triggers.
> If there's some aggregation in the view query, it's applied only to the batch
> of freshly inserted data. Any changes to existing data of the source table
> (like update, delete, drop a partition, etc.) do not change the materialized view.
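To make the trigger-like behavior concrete, here is a hedged sketch of a materialized view that maintains daily counts for the hypothetical `events` table from the earlier sketch:

```sql
-- Illustrative sketch only: the aggregation runs against each freshly
-- inserted batch; pre-existing rows in `events` are not reprocessed.
CREATE MATERIALIZED VIEW events_daily_mv
ENGINE = SummingMergeTree
ORDER BY (action, day)
AS SELECT
    action,
    toDate(created_at) AS day,
    count() AS events_count
FROM events
GROUP BY action, day;
```

Because partial aggregates are only combined during background merges, queries against such a view should aggregate again (for example, `sum(events_count)` grouped by `action` and `day`).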
## Secure and sensible defaults
ClickHouse instances should follow these security recommendations:
### Users
Files: `users.xml` and `config.xml`.
| Topic | Security Requirement | Reason |
| ----- | -------------------- | ------ |
| [`user_name/password`](https://clickhouse.com/docs/en/operations/settings/settings-users#user-namepassword) | Usernames **must not** be blank. Passwords **must** use `password_sha256_hex` and **must not** be blank. | `plaintext` and `password_double_sha1_hex` are insecure. If username isn't specified, [`default` is used with no password](https://clickhouse.com/docs/en/operations/settings/settings-users). |
| [`access_management`](https://clickhouse.com/docs/en/operations/settings/settings-users#access_management-user-setting) | Use Server [configuration files](https://clickhouse.com/docs/en/operations/configuration-files) `users.xml` and `config.xml`. Avoid SQL-driven workflow. | SQL-driven workflow implies that at least one user has `access_management` which can be avoided via configuration files. These files are easier to audit and monitor too, considering that ["You can't manage the same access entity by both configuration methods simultaneously."](https://clickhouse.com/docs/en/operations/access-rights#access-control). |
| [`user_name/networks`](https://clickhouse.com/docs/en/operations/settings/settings-users#user-namenetworks) | At least one of `<ip>`, `<host>`, `<host_regexp>` **must** be set. Do not use `<ip>::/0</ip>` to open access for any network. | Network controls. ([Trust cautiously](https://handbook.gitlab.com/handbook/security/architecture/#trust-cautiously) principle) |
| [`user_name/profile`](https://clickhouse.com/docs/en/operations/settings/settings-users#user-nameprofile) | Use profiles to set similar properties across multiple users and set limits (from the user interface). | [Least privilege](https://handbook.gitlab.com/handbook/security/architecture/#assign-the-least-privilege-possible) principle and limits. |
| [`user_name/quota`](https://clickhouse.com/docs/en/operations/settings/settings-users#user-namequota) | Set quotas for users whenever possible. | Limit resource usage over a period of time or track the use of resources. |
| [`user_name/databases`](https://clickhouse.com/docs/en/operations/settings/settings-users#user-namedatabases) | Restrict access to data, and avoid users with full access. | [Least privilege](https://handbook.gitlab.com/handbook/security/architecture/#assign-the-least-privilege-possible) principle. |
### Network
Files: `config.xml`
| Topic | Security Requirement | Reason |
| ----- | -------------------- | ------ |
| [`mysql_port`](https://clickhouse.com/docs/en/operations/server-configuration-parameters/settings#server_configuration_parameters-mysql_port) | Disable MySQL access unless strictly necessary:<br/> `<!-- <mysql_port>9004</mysql_port> -->`. | Close unnecessary ports and features exposure. ([Defense in depth](https://handbook.gitlab.com/handbook/security/architecture/#implement-defense-in-depth) principle) |
| [`postgresql_port`](https://clickhouse.com/docs/en/operations/server-configuration-parameters/settings#server_configuration_parameters-postgresql_port) | Disable PostgreSQL access unless strictly necessary:<br/> `<!-- <postgresql_port>9005</postgresql_port> -->`. | Close unnecessary ports and features exposure. ([Defense in depth](https://handbook.gitlab.com/handbook/security/architecture/#implement-defense-in-depth) principle) |
| [`http_port/https_port`](https://clickhouse.com/docs/en/operations/server-configuration-parameters/settings#http-porthttps-port) & [`tcp_port/tcp_port_secure`](https://clickhouse.com/docs/en/operations/server-configuration-parameters/settings#http-porthttps-port) | Configure [SSL-TLS](https://clickhouse.com/docs/en/guides/sre/configuring-ssl), and disable non SSL ports:<br/>`<!-- <http_port>8123</http_port> -->`<br/>`<!-- <tcp_port>9000</tcp_port> -->`<br/>and enable secure ports:<br/>`<https_port>8443</https_port>`<br/>`<tcp_port_secure>9440</tcp_port_secure>` | Encrypt data in transit. ([Defense in depth](https://handbook.gitlab.com/handbook/security/architecture/#implement-defense-in-depth) principle) |
| [`interserver_http_host`](https://clickhouse.com/docs/en/operations/server-configuration-parameters/settings#interserver-http-host) | Disable `interserver_http_host` in favor of `interserver_https_host` (`<interserver_https_port>9010</interserver_https_port>`) if ClickHouse is configured as a cluster. | Encrypt data in transit. ([Defense in depth](https://handbook.gitlab.com/handbook/security/architecture/#implement-defense-in-depth) principle) |
### Storage
| Topic | Security Requirement | Reason |
| ----- | -------------------- | ------ |
| Permissions | ClickHouse runs by default with the `clickhouse` user. Running as `root` is never needed. Use the principle of least privileges for the folders: `/etc/clickhouse-server`, `/var/lib/clickhouse`, `/var/log/clickhouse-server`. These folders must belong to the `clickhouse` user and group, and no other system user must have access to them. | Default passwords, ports and rules are "open doors". ([Fail securely & use secure defaults](https://handbook.gitlab.com/handbook/security/architecture/#fail-securely--use-secure-defaults) principle) |
| Encryption | Use an encrypted storage for logs and data if RED data is processed. On Kubernetes, the [StorageClass](https://kubernetes.io/docs/concepts/storage/storage-classes/) used must be encrypted. [GKE](https://cloud.google.com/blog/products/containers-kubernetes/exploring-container-security-use-your-own-keys-to-protect-your-data-on-gke) and [EKS](https://aws.github.io/aws-eks-best-practices/security/docs/data/) encrypt all data at rest already. In this case, using your own key is best but not required. | Encrypt data at rest. ([Defense in depth](https://handbook.gitlab.com/handbook/security/architecture/#implement-defense-in-depth)) |
### Logging
| Topic | Security Requirement | Reason |
| ----- | -------------------- | ------ |
| `logger` | `Log` and `errorlog` **must** be defined and writable by `clickhouse`. | Make sure logs are stored. |
| SIEM | If hosted on GitLab.com, the ClickHouse instance or cluster **must** report [logs to our SIEM](https://internal.gitlab.com/handbook/security/security_operations/security_logging/tooling/devo/) (internal link). | [GitLab logs critical information system activity](https://handbook.gitlab.com/handbook/security/audit-logging-policy/). |
| Log sensitive data | Query masking rules **must** be used if sensitive data can be logged. See [example masking rules](#example-masking-rules). | [Column level encryption](https://clickhouse.com/docs/en/sql-reference/functions/encryption-functions) can be used and leak sensitive data (keys) in logs. |
#### Example masking rules
```xml
<query_masking_rules>
<rule>
<name>hide SSN</name>
<regexp>(^|\D)\d{3}-\d{2}-\d{4}($|\D)</regexp>
<replace>000-00-0000</replace>
</rule>
<rule>
<name>hide encrypt/decrypt arguments</name>
<regexp>
((?:aes_)?(?:encrypt|decrypt)(?:_mysql)?)\s*\(\s*(?:'(?:\\'|.)+'|.*?)\s*\)
</regexp>
<replace>\1(???)</replace>
</rule>
</query_masking_rules>
```
---
redirect_to: ../../organization/sharding/_index.md
remove_date: '2025-09-18'
---
<!-- markdownlint-disable -->
This document was moved to [another location](../../organization/sharding/_index.md).
<!-- This redirect file can be deleted after <2025-09-18>. -->
<!-- Redirects that point to other docs in the same project expire in three months. -->
<!-- Redirects that point to docs in a different project or site (for example, link is not relative and starts with `https:`) expire in one year. -->
<!-- Before deletion, see: https://docs.gitlab.com/development/documentation/redirects -->
---
stage: Data Access
group: Database Frameworks
info: To determine the technical writer assigned to the Stage/Group associated with
this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
description: Learn how to operate on large time-decay data
title: Time-decay data
---
This document describes the *time-decay pattern* introduced in the
[Database Scalability Working Group](https://handbook.gitlab.com/handbook/company/working-groups/database-scalability/#time-decay-data).
We discuss the characteristics of time-decay data, and propose best practices for GitLab development
to consider in this context.
Some datasets are subject to strong time-decay effects, in which recent data is accessed far more
frequently than older data. Another aspect of time-decay is that, over time, some types of data become
less important. This means we can move old data to less durable (and less available) storage,
or, in extreme cases, even delete the data.
Those effects are usually tied to product or application semantics. They can vary in the degree
that older data are accessed, and how useful or required older data are to the users or the
application.
Let's first consider entities with no inherent time-related bias for their data.
A record for a user or a project may be equally important and frequently accessed regardless of when
it was created. We cannot predict from a user's `id` or `created_at` how often the related
record is accessed or updated.
On the other hand, good examples of datasets with extreme time-decay effects are logs and time
series data, such as events recording user actions.
Most of the time, that type of data has no business use after a couple of days or weeks, and
quickly becomes less important even from a data analysis perspective. It represents a snapshot that
quickly becomes less and less relevant to the current state of the application, until at
some point it has no real value.
In the middle of the two extremes, we can find datasets that have useful information that we want to
keep around, but with old records seldom being accessed after an initial (small) time period after
creation.
## Characteristics of time-decay data
We are interested in datasets that show the following characteristics:
- **Size of the dataset**: they are considerably large.
- **Access methods**: we can filter the vast majority of queries accessing the dataset
by a time related dimension or a categorical dimension with time decay effects.
- **Immutability**: the time-decay status does not change.
- **Retention**: whether we want to keep the old data or not, or whether old
data should remain accessible by users through the application.
### Size of the dataset
There can be datasets of variable sizes that show strong time-decay effects, but in the context of
this blueprint, we intend to focus on entities with a **considerably large dataset**.
Smaller datasets do not contribute significantly to the database related resource usage, nor do they
inflict a considerable performance penalty to queries.
In contrast, large datasets over about 50 million records, or 100 GB in size, add a significant
overhead to constantly accessing a really small subset of the data. In those cases, we want to
use the time-decay effect to our advantage and reduce the actively accessed dataset.
### Data access methods
The second and most important characteristic of time-decay data is that most of the time, we are
able to implicitly or explicitly access the data using a date filter,
**restricting our results based on a time-related dimension**.
There can be many such dimensions, but we focus only on the creation date as it is both
the most commonly used, and the one that we can control and optimize against. It:
- Is immutable.
- Is set when the record is created.
- Can be tied to physically clustering the records, without having to move them around.
It's important to add that even if time-decay data are not accessed that way by the application by
default, you can make the vast majority of the queries explicitly filter the data in such
a way. **Time decay data without such a time-decay related access method are of no use from an optimization perspective, as there is no way to set and follow a scaling pattern.**
We are not restricting the definition to data that are always accessed using a time-decay related
access method, as there may be some outlier operations. These may be necessary and we can accept
them not scaling, if the rest of the access methods can scale. An example:
an administrator accessing all past events of a specific type, while all other operations only access
a maximum of a month of events, restricted to 6 months in the past.
### Immutability
The third characteristic of time-decay data is that their **time-decay status does not change**.
Once they are considered "old", they cannot switch back to "new" or relevant again.
This definition may sound trivial, but we have to be able to make operations over "old" data **more**
expensive (for example, by archiving or moving them to less expensive storage) without having to worry about
the repercussions of switching back to being relevant and having important application operations
underperforming.
Consider as a counter example to a time-decay data access pattern an application view that presents
issues by when they were updated. We are also interested in the most recent data from an "update"
perspective, but that definition is volatile and not actionable.
### Retention
Finally, a characteristic that further differentiates time-decay data in sub-categories with
slightly different approaches available is **whether we want to keep the old data or not**
(for example, retention policy) and/or
**whether old data is accessible by users through the application**.
#### (optional) Extended definition of time-decay data
As a side note, if we extend the aforementioned definitions to access patterns that restrict access
to a well defined subset of the data based on a clustering attribute, we could use the time-decay
scaling patterns for many other types of data.
As an example, consider data that are only accessed while they are labeled as active, like To-Dos
not marked as done, or pipelines for unmerged merge requests (or a similar constraint that is not time-based).
In this case, instead of using a time dimension to define the decay, we use a categorical dimension
(that is, one that uses a finite set of values) to define the subset of interest. As long as that
subset is small compared to the overall size of the dataset, we could use the same approach.
Similarly, we may define data as old based both on a time dimension and additional status attributes,
such as CI pipelines that failed more than 6 months ago.
## Time-decay data strategies
### Partition tables
This is the acceptable best practice for addressing time-decay data from a pure database perspective.
You can find more information on table partitioning for PostgreSQL in the
[documentation page for table partitioning](https://www.postgresql.org/docs/16/ddl-partitioning.html).
Partitioning by date intervals (for example, month, year) allows us to create much smaller tables
(partitions) for each date interval and only access the most recent partitions for any
application-related operation.
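As a reference point, a hedged sketch of PostgreSQL declarative partitioning by month; `example_logs` is a simplified, hypothetical table, not the actual GitLab schema:

```sql
-- Illustrative sketch only: a simplified table range-partitioned by month.
CREATE TABLE example_logs (
    id bigint NOT NULL,
    data text,
    created_at timestamptz NOT NULL
) PARTITION BY RANGE (created_at);

CREATE TABLE example_logs_202103 PARTITION OF example_logs
    FOR VALUES FROM ('2021-03-01') TO ('2021-04-01');
CREATE TABLE example_logs_202104 PARTITION OF example_logs
    FOR VALUES FROM ('2021-04-01') TO ('2021-05-01');
```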
We have to set the partitioning key based on the date interval of interest, which may depend on two
factors:
1. **How far back in time do we need to access data for?**
Partitioning by week is of no use if we always access data for a year back, as we would have to
execute queries over 52 different partitions (tables) each time. As an example, consider the
activity feed on the profile of any GitLab user.
In contrast, if we want to just access the last 7 days of created records, partitioning by year
would include too many unnecessary records in each partition, as is the case for `web_hook_logs`.
1. **How large are the partitions created?**
The major purpose of partitioning is accessing tables that are as small as possible. If they get too
large by themselves, queries start underperforming. We may have to re-partition (split) them
into even smaller partitions.
An ideal partitioning scheme keeps **almost all queries over a dataset within a single partition**;
some queries spanning two partitions and only a few spanning multiple partitions is
an acceptable balance. We should also aim for **partitions that are as small as possible**, below
5-10 million records and/or 10 GB each at maximum.
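One way to keep partitions within those limits is to check their sizes periodically; a sketch against the hypothetical `example_logs` table from the previous example:

```sql
-- Illustrative sketch only: list the size of each partition of example_logs.
SELECT child.relname AS partition_name,
       pg_size_pretty(pg_total_relation_size(child.oid)) AS total_size
FROM pg_inherits
JOIN pg_class parent ON parent.oid = pg_inherits.inhparent
JOIN pg_class child ON child.oid = pg_inherits.inhrelid
WHERE parent.relname = 'example_logs'
ORDER BY pg_total_relation_size(child.oid) DESC;
```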
Partitioning can be combined with other strategies to either prune (drop) old partitions, move them
to cheaper storage inside the database or move them outside of the database (archive or use of other
types of storage engines).
As long as we do not want to keep old records and partitioning is used, pruning old data has a
constant, for all intents and purposes zero, cost compared to deleting the data from a huge table
(as described in the following sub-section). We just need a background worker to drop old partitions
whenever all the data inside a partition falls outside the retention policy's period.
As an example, if we only want to keep records no more than 6 months old and we partition by month,
we can safely keep the 7 latest partitions at all times (current month and 6 months in the past).
That means that we can have a worker dropping the 8th oldest partition at the start of each month.
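In SQL terms, such a worker only needs to run a metadata operation per expired partition; a hedged sketch with the hypothetical `example_logs` table:

```sql
-- Illustrative sketch only: dropping a partition that is entirely outside
-- the retention period avoids the row-by-row cost of a DELETE.
ALTER TABLE example_logs DETACH PARTITION example_logs_202103;
DROP TABLE example_logs_202103;
```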
Moving partitions to cheaper storage inside the same database is relatively simple in PostgreSQL
through the use of [tablespaces](https://www.postgresql.org/docs/16/manage-ag-tablespaces.html).
It is possible to specify a tablespace and storage parameters for each partition separately, so the
approach in this case would be to:
1. Create a new tablespace on a cheaper, slow disk.
1. Set higher storage cost parameters (for example, `random_page_cost`) on that new tablespace so that the PostgreSQL optimizer knows that its disks are slower.
1. Move the old partitions to the slow tablespace automatically by using background workers, as sketched below.
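A hedged sketch of those steps, again with the hypothetical `example_logs` partitions; the tablespace name, path, and cost settings are placeholders:

```sql
-- Illustrative sketch only: values are placeholders for a cheaper, slower disk.
CREATE TABLESPACE cold_storage LOCATION '/mnt/slow_disk/postgresql';
ALTER TABLESPACE cold_storage SET (random_page_cost = 8, seq_page_cost = 4);
ALTER TABLE example_logs_202104 SET TABLESPACE cold_storage;
```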
Finally, moving partitions outside of the database can be achieved through database archiving or
manually exporting the partitions to a different storage engine (more details in the dedicated
sub-section).
### Prune old data
If we don't want to keep old data around in any form, we can implement a pruning strategy and
delete old data.
It's a simple-to-implement strategy that uses a pruning worker to delete past data. As an example
that we analyze further below, we prune `web_hook_logs` records older than 90 days.
The disadvantage of such a solution over large, non-partitioned tables is that we have to manually
access and delete all the records that are considered as not relevant any more. That is a very
expensive operation, due to multi-version concurrency control in PostgreSQL. It also leads to the
pruning worker not being able to catch up with new records being created, if that rate exceeds a
threshold, as is the case of [`web_hook_logs`](https://gitlab.com/gitlab-org/gitlab/-/issues/256088)
at the time of writing this document.
For the aforementioned reasons, our proposal is that
**we should base any implementation of a data retention strategy on partitioning**,
unless there are strong reasons not to.
### Move old data outside of the database
In most cases, we consider old data as valuable, so we do not want to prune them. If at the same
time, they are not required for any database related operations (for example, directly accessed or used in
joins and other types of queries), we can move them outside of the database.
That does not mean that they are not directly accessible by users through the application; we could
move data outside the database and use other storage engines or access types for them, similarly to
offloading metadata but only for the case of old data.
In the simplest use case we can provide fast and direct access to recent data, while allowing users
to download an archive with older data. This is an option evaluated in the `audit_events` use case.
Depending on the country and industry, audit events may have a very long retention period, while
only the past few months of data are actively accessed through the GitLab interface.
Additional use cases may include exporting data to a data warehouse or other types of data stores, as
they may be better suited for processing that type of data. An example is JSON logs that we
sometimes store in tables: loading such data into BigQuery or a columnar store like Redshift may
be better for analyzing and querying the data.
We might consider a number of strategies for moving data outside of the database:
1. Streaming this type of data into logs and then moving them to secondary storage options
or loading them into other types of data stores directly (as CSV/JSON data).
1. Creating an ETL process that exports the data to CSV, uploads it to object storage,
drops this data from the database, and then loads the CSV into a different data store (a minimal export step is sketched after this list).
1. Loading the data in the background by using the API provided by the data store.
This may be a not viable solution for large datasets; as long as bulk uploading using files is an
option, it should outperform API calls.
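A minimal sketch of the CSV export step of such an ETL process, using PostgreSQL `COPY` through the `pg` gem's `copy_data` API. The table, date range, and file path are illustrative assumptions; uploading to object storage and deleting the exported rows are separate steps:
```ruby
# frozen_string_literal: true

# Hypothetical helper: export one month of old web_hook_logs rows to a CSV file.
def export_month_to_csv(month_start, path)
  sql = <<~SQL
    COPY (
      SELECT * FROM web_hook_logs
      WHERE created_at >= '#{month_start}'
        AND created_at < '#{month_start.next_month}'
    ) TO STDOUT WITH CSV HEADER
  SQL

  raw = ActiveRecord::Base.connection.raw_connection

  File.open(path, 'w') do |file|
    raw.copy_data(sql) do
      while (row = raw.get_copy_data)
        file.write(row)
      end
    end
  end
end

# Example: export_month_to_csv(Date.new(2020, 7, 1), '/tmp/web_hook_logs_202007.csv')
```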
## Use cases
### Web hook logs
Related epic: [Partitioning: `web_hook_logs` table](https://gitlab.com/groups/gitlab-org/-/epics/5558)
The important characteristics of `web_hook_logs` are the following:
1. Size of the dataset: it is a really large table. At the moment we decided to
partition it (`2021-03-01`), it had roughly 527M records and a total size of roughly 1 TB
- Table: `web_hook_logs`
- Rows: approximately 527M
- Total size: 1.02 TiB (10.46%)
- Table size: 713.02 GiB (13.37%)
- Index(es) size: 42.26 GiB (1.10%)
- TOAST size: 279.01 GiB (38.56%)
1. Access methods: we always request at most the past 7 days of logs.
1. Immutability: it can be partitioned by `created_at`, an attribute that does not change.
1. Retention: there is a 90-day retention policy set for it.
Additionally, we were at the time trying to prune the data by using a background worker
(`PruneWebHookLogsWorker`), which could not [keep up with the rate of inserts](https://gitlab.com/gitlab-org/gitlab/-/issues/256088).
As a result, in March 2021 there were still undeleted records dating back to July 2020, and the table was
growing by more than 2 million records per day instead of staying at a more or less
stable size.
Finally, the rate of inserts had grown to more than 170 GB of data per month by March 2021 and kept
growing, so the only viable solution to pruning old data was through partitioning.
Our approach was to partition the table per month, as that aligned with the 90-day retention policy.
The required process follows:
1. Decide on a partitioning key
Using the `created_at` column is straightforward in this case: it is a natural
partitioning key when a retention policy exists and there were no conflicting access patterns.
1. After we decide on the partitioning key, we can create the partitions and backfill
them (copy data from the existing table). We can't just partition an existing table;
we have to create a new partitioned table.
So, we have to create the partitioned table and all the related partitions, start copying everything
over, and also add sync triggers so that any new data or updates/deletes to existing data can be
mirrored to the new partitioned table.
[MR with all the necessary details on how to start partitioning a table](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/55938)
It required 15 days and 7.6 hours to complete that process.
1. One milestone after the initial partitioning starts, clean up after the background migration
   used to backfill: finish executing any remaining jobs, retry failed jobs, and so on.
[MR with all the necessary details](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/57580)
1. Add any remaining foreign keys and secondary indexes to the partitioned table. This brings
   its schema on par with the original non-partitioned table before we swap them in the next milestone.
   We do not add them at the beginning because they add overhead to each insert and
   would slow down the initial backfilling of the table (in this case for more than half a billion
   records, which can add up significantly). So we create a lightweight, vanilla version of the
   table, copy all the data, and then add any remaining indexes and foreign keys.
1. Swap the base table with the partitioned copy: this is when the partitioned table
   starts actively being used by the application.
   Dropping the original table is a destructive operation, and we want to make sure that there were no
   issues during the process, so we keep the old non-partitioned table. We also switch the sync trigger
the other way around so that the non-partitioned table is still up to date with any operations
happening on the partitioned table. That allows us to swap back the tables if it is necessary.
[MR with all the necessary details](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/60184)
1. Last step, one milestone after the swap: drop the non-partitioned table
[Issue with all the necessary details](https://gitlab.com/gitlab-org/gitlab/-/issues/323678)
1. After the non-partitioned table is dropped, we can add a worker to implement the
pruning strategy by dropping past partitions.
In this case, the worker makes sure that only 4 partitions are always active (as the
retention policy is 90 days) and drops any partitions older than four months. We have to keep 4
months of partitions while the current month is still active, as going 90 days back takes you to
the fourth oldest partition (see the sketch below).
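A minimal sketch of that dropping step (the helper and the `web_hook_logs_YYYYMM` partition naming are assumptions for illustration):
```ruby
# frozen_string_literal: true

# Hypothetical sketch: drop the monthly partition that has fallen out of the
# 90-day retention window, keeping the four most recent partitions.
def drop_expired_web_hook_logs_partition
  expired_month = 4.months.ago.beginning_of_month
  partition_name = "web_hook_logs_#{expired_month.strftime('%Y%m')}"

  ActiveRecord::Base.connection.execute("DROP TABLE IF EXISTS #{partition_name}")
end
```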
### Audit events
Related epic: [Partitioning: Design and implement partitioning strategy for audit events](https://gitlab.com/groups/gitlab-org/-/epics/3206)
The `audit_events` table shares a lot of characteristics with the `web_hook_logs` table discussed
in the previous sub-section, so we focus on the points where they differ.
The consensus was that
[partitioning could solve most of the performance issues](https://gitlab.com/groups/gitlab-org/-/epics/3206#note_338157248).
In contrast to most other large tables, it has no major conflicting access patterns: we could switch
the access patterns to align with partitioning by month. This is not the case for other
tables which, even though they could justify a partitioning approach (for example, by namespace), have many
conflicting access patterns.
In addition, `audit_events` is a write-heavy table with very few reads (queries) over it and has a
very simple schema, not connected with the rest of the database (no incoming or outgoing FK
constraints) and with only two indexes defined over it.
The latter was important at the time, as not having foreign key constraints meant that we could
partition it while we were still on PostgreSQL 11. *This is not a concern any more now that we have
moved to PostgreSQL 12 as a required default, as can be seen in the `web_hook_logs` use case above.*
The migrations and steps required for partitioning the `audit_events` are similar to
the ones described in the previous sub-section for `web_hook_logs`. There is no retention
strategy defined for `audit_events` at the moment, so there is no pruning strategy
implemented over it, but we may implement an archiving solution in the future.
What's interesting in the case of `audit_events` is the discussion of the necessary steps that we
had to follow to implement the UI/UX changes needed to
[encourage optimal querying of the partitioned](https://gitlab.com/gitlab-org/gitlab/-/issues/223260) table.
It can be used as a starting point for the changes required at the application level
to align all access patterns with a specific time-decay related access method.
### CI tables
{{< alert type="note" >}}
Requirements and analysis of the CI tables use case: still a work in progress. We intend
to add more details after the analysis moves forward.
{{< /alert >}}
---
stage: Data Access
group: Database Frameworks
info: To determine the technical writer assigned to the Stage/Group associated with
this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
description: Learn how to scale the database through the use of best-of-class database
scalability patterns
title: Database Scalability Patterns
breadcrumbs:
- doc
- development
- database
- scalability
- patterns
---
- [Read-mostly](read_mostly.md)
- [Time-decay](time_decay.md)
---
stage: Data Access
group: Database Frameworks
info: To determine the technical writer assigned to the Stage/Group associated with
this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
description: Learn how to scale operating on read-mostly data at scale
title: Read-mostly data
breadcrumbs:
- doc
- development
- database
- scalability
- patterns
---
This document describes the *read-mostly* pattern introduced in the
[Database Scalability Working Group](https://handbook.gitlab.com/handbook/company/working-groups/database-scalability/#read-mostly-data).
We discuss the characteristics of *read-mostly* data and propose best practices for GitLab development
to consider in this context.
## Characteristics of read-mostly data
As the name already suggests, *read-mostly* data is about data that is much more often read than
updated. Writing this data through updates, inserts, or deletes is a very rare event compared to
reading this data.
In addition, *read-mostly* data in this context is typically a small dataset. We explicitly don't deal
with large datasets here, even though they often have a "write once, read often" characteristic, too.
### Example: license data
Let's introduce a canonical example: license data in GitLab. A GitLab instance may have a license
attached to use GitLab enterprise features. This license data is held instance-wide, that
is, there typically only exist a few relevant records. This information is kept in a table
`licenses` which is very small.
We consider this *read-mostly* data, because it follows the characteristics outlined above:
- **Rare writes**: license data very rarely sees any writes after having inserted the license.
- **Frequent reads**: license data is read extremely often to check if enterprise features can be used.
- **Small size**: this dataset is very small. On GitLab.com we have 5 records at < 50 kB total relation size.
### Effects of *read-mostly* data at scale
Given this dataset is small and read very often, we can expect data to nearly always reside in
database caches and/or database disk caches. Thus, the concern with *read-mostly* data is typically
not around database I/O overhead, because we typically don't read data from disk anyway.
However, considering the high frequency reads, this has potential to incur overhead in terms of
database CPU load and database context switches. Additionally, those high frequency queries go
through the whole database stack. They also cause overhead on the database connection
multiplexing components and load balancers. The application also spends cycles preparing and
sending queries to retrieve the data, deserializing the results, and allocating new objects to represent
the information gathered - all at high frequency.
In the example of license data above, the query to read license data was
[identified](https://gitlab.com/gitlab-org/gitlab/-/issues/292900) to stand out in terms of query
frequency. In fact, we were seeing around 6,000 queries per second (QPS) on the cluster during peak
times. With the cluster size at that time, we were seeing about 1,000 QPS on each replica, and fewer
than 400 QPS on the primary at peak times. The difference is explained by our
[database load balancing for scaling reads](https://gitlab.com/gitlab-org/gitlab/-/blob/master/ee/lib/gitlab/database/load_balancing.rb),
which favors replicas for pure read-only transactions.

The overall transaction throughput on the database primary at the time varied between 50,000 and
70,000 transactions per second (TPS). In comparison, this query frequency only takes a small
portion of the overall query frequency. However, we do expect this to still have considerable
overhead in terms of context switches. It is worth removing this overhead, if we can.
## How to recognize read-mostly data
It can be difficult to recognize *read-mostly* data, even though there are clear cases like in our
example.
One approach is to look at the [read/write ratio and statistics from, for example, the primary](https://bit.ly/3frdtyz). Here, we look at the TOP20 tables by their read/write ratio over 60 minutes (taken in a peak traffic time):
```plaintext
bottomk(20,
avg by (relname, fqdn) (
(
rate(pg_stat_user_tables_seq_tup_read{env="gprd"}[1h])
+
rate(pg_stat_user_tables_idx_tup_fetch{env="gprd"}[1h])
) /
(
rate(pg_stat_user_tables_seq_tup_read{env="gprd"}[1h])
+ rate(pg_stat_user_tables_idx_tup_fetch{env="gprd"}[1h])
+ rate(pg_stat_user_tables_n_tup_ins{env="gprd"}[1h])
+ rate(pg_stat_user_tables_n_tup_upd{env="gprd"}[1h])
+ rate(pg_stat_user_tables_n_tup_del{env="gprd"}[1h])
)
) and on (fqdn) (pg_replication_is_replica == 0)
)
```
This yields a good impression of which tables are much more often read than written (on the database
primary):

From here, we can [zoom](https://bit.ly/2VmloX1) into for example `gitlab_subscriptions` and realize that index reads peak at above 10k tuples per second overall (there are no seq scans):

We very rarely write to the table (there are no seq scans):

Additionally, the table is only 400 MB in size - so this may be another candidate we may want to
consider in this pattern (see [#327483](https://gitlab.com/gitlab-org/gitlab/-/issues/327483)).
## Best practices for handling read-mostly data at scale
### Cache read-mostly data
To reduce the database overhead, we implement a cache for the data and thus significantly
reduce the query frequency on the database side. There are different scopes for caching available:
- `RequestStore`: per-request in-memory cache (based on [`request_store` gem](https://github.com/steveklabnik/request_store))
- [`ProcessMemoryCache`](https://gitlab.com/gitlab-org/gitlab/blob/master/lib/gitlab/process_memory_cache.rb#L4): per-process in-memory cache (an `ActiveSupport::Cache::MemoryStore`)
- [`Gitlab::Redis::Cache`](https://gitlab.com/gitlab-org/gitlab/blob/master/lib/gitlab/redis/cache.rb) and `Rails.cache`: full-blown cache in Redis
Continuing the above example, we had a `RequestStore` in place to cache license information on a
per-request basis. However, that still leads to one query per request. When we started to cache license information
[using a process-wide in-memory cache](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/50318)
for 1 second, query frequency dramatically dropped:

The choice of caching here highly depends on the characteristics of data in question. A very small
dataset like license data that is nearly never updated is a good candidate for in-memory caching.
A per-process cache is favorable here, because this unties the cache refresh rate from the incoming
request rate.
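A minimal sketch of that per-process approach; the `current_license` helper, the cache store, and the one-second expiry are illustrative and not the exact implementation linked above:
```ruby
# Process-wide in-memory cache: each Rails or Sidekiq process keeps its own
# copy, so the refresh rate is bounded per process rather than per request.
LICENSE_CACHE = ActiveSupport::Cache::MemoryStore.new

# Hypothetical helper: at most one licenses query per process per second.
def current_license
  LICENSE_CACHE.fetch('current_license', expires_in: 1.second) do
    License.order(id: :desc).first
  end
end
```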
A caveat here is that our Redis setup is currently not using Redis secondaries and we rely on a
single node for caching. That is, we need to strike a balance to avoid Redis falling over due to
increased pressure. In comparison, reading data from PostgreSQL replicas can be distributed across
several read-only replicas. Even though a query to the database might be more expensive, the
load is balanced across more nodes.
### Read read-mostly data from replica
With or without caching implemented, we also must make sure to read data from database replicas if
we can. This supports our efforts to scale reads across many database replicas and removes
unnecessary workload from the database primary.
GitLab [database load balancing for reads](https://gitlab.com/gitlab-org/gitlab/-/blob/master/ee/lib/gitlab/database/load_balancing.rb)
sticks to the primary after a first write or when opening an
explicit transaction. In the context of *read-mostly* data, we strive to read this data outside of a
transaction scope and before doing any writes. This is often possible given that this data is only
seldom updated (and thus we're often not concerned with reading slightly stale data, for example).
However, it can be non-obvious that this query cannot be sent to a replica because of a previous
write or transaction. Hence, when we encounter *read-mostly* data, it is a good practice to check the
wider context and make sure this data can be read from a replica.
---
stage: Data Access
group: Database Frameworks
info: Any user with at least the Maintainer role can merge updates to this content.
For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Date range partitioning
breadcrumbs:
- doc
- development
- database
- partitioning
---
## Description
The scheme best supported by the GitLab migration helpers is date-range partitioning,
where each partition in the table contains data for a single month. In this case,
the partitioning key must be a timestamp or date column. For this type of
partitioning to work well, most queries must access data in a
certain date range.
For a more concrete example, consider using the `audit_events` table.
It was the first table to be partitioned in the application database. This
table tracks audit entries of security events that happen in the
application. In almost all cases, users want to see audit activity that
occurs in a certain time frame. As a result, date-range partitioning
was a natural fit for how the data would be accessed.
To look at this in more detail, imagine a simplified `audit_events` schema:
```sql
CREATE TABLE audit_events (
id SERIAL NOT NULL PRIMARY KEY,
author_id INT NOT NULL,
details jsonb NOT NULL,
created_at timestamptz NOT NULL);
```
Now imagine typical queries in the UI would display the data in a
certain date range, like a single week:
```sql
SELECT *
FROM audit_events
WHERE created_at >= '2020-01-01 00:00:00'
AND created_at < '2020-01-08 00:00:00'
ORDER BY created_at DESC
LIMIT 100
```
If the table is partitioned on the `created_at` column the base table would
look like:
```sql
CREATE TABLE audit_events (
id SERIAL NOT NULL,
author_id INT NOT NULL,
details jsonb NOT NULL,
created_at timestamptz NOT NULL,
PRIMARY KEY (id, created_at))
PARTITION BY RANGE(created_at);
```
{{< alert type="note" >}}
The primary key of a partitioned table must include the partition key as
part of the primary key definition.
{{< /alert >}}
And we might have a list of partitions for the table, such as:
```sql
audit_events_202001 FOR VALUES FROM ('2020-01-01') TO ('2020-02-01')
audit_events_202002 FOR VALUES FROM ('2020-02-01') TO ('2020-03-01')
audit_events_202003 FOR VALUES FROM ('2020-03-01') TO ('2020-04-01')
```
Each partition is a separate physical table, with the same structure as
the base `audit_events` table, but contains only data for rows where the
partition key falls in the specified range. For example, the partition
`audit_events_202001` contains rows where the `created_at` column is
greater than or equal to `2020-01-01` and less than `2020-02-01`.
Now, if we look at the previous example query again, the database can
use the `WHERE` clause to recognize that all matching rows are in the
`audit_events_202001` partition. Rather than searching all of the data
in all of the partitions, it can search only the single month's worth
of data in the appropriate partition. In a large table, this can
dramatically reduce the amount of data the database needs to access.
However, imagine a query that does not filter based on the partitioning
key, such as:
```sql
SELECT *
FROM audit_events
WHERE author_id = 123
ORDER BY created_at DESC
LIMIT 100
```
In this example, the database can't prune any partitions from the search,
because matching data could exist in any of them. As a result, it has to
query each partition individually, and aggregate the rows into a single result
set. Because `author_id` would be indexed, the performance impact could
likely be acceptable, but on more complex queries the overhead can be
substantial. Partitioning should only be leveraged if the access patterns
of the data support the partitioning strategy, otherwise performance
suffers.
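Where the product requirements allow it, one mitigation is to also constrain such a query by the partition key so the planner can prune partitions again. A sketch in ActiveRecord terms (the `AuditEvent` usage here is illustrative):
```ruby
# Filtering on created_at (the partition key) in addition to author_id lets
# PostgreSQL prune every partition outside the requested month.
AuditEvent
  .where(author_id: 123)
  .where(created_at: Time.utc(2020, 1, 1)...Time.utc(2020, 2, 1))
  .order(created_at: :desc)
  .limit(100)
```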
## Time-range Partitioning Strategies
GitLab supports two strategies for time-range partitioning:
- Daily partitioning
- Monthly partitioning
### Using Time-range Partitioning
To use time-range partitioning in your model, include the `PartitionedTable` module and configure the partition settings:
```ruby
class WebHookLog < ApplicationRecord
include PartitionedTable
partitioned_by :created_at, strategy: :monthly, retain_for: 1.month
end
```
### Available Strategies
#### Daily Strategy (`:daily`)
The daily strategy creates one partition per day:
```ruby
partitioned_by :created_at, strategy: :daily, retain_for: 7.days
```
#### Monthly Strategy (`:monthly`)
The monthly strategy creates one partition per month:
```ruby
partitioned_by :created_at, strategy: :monthly, retain_for: 3.months, analyze_interval: 3.days
```
### Configuration Options
- `column`: The column to partition on (required, must be a timestamp or date column)
- `strategy`: Either `:daily` or `:monthly` (required)
- `retain_for`: Duration to retain partitions (optional)
- `analyze_interval`: How often to run ANALYZE on new partitions (optional)
Choose `:daily` for high-volume tables that need fine-grained partitioning, or `:monthly` for tables with moderate data volume where daily partitioning would be excessive.
## Example
### Step 1: Creating the partitioned copy (Release N)
The first step is to add a migration to create the partitioned copy of
the original table. This migration creates the appropriate
partitions based on the data in the original table, and installs a
trigger that syncs writes from the original table into the
partitioned copy.
An example migration of partitioning the `audit_events` table by its
`created_at` column would look like:
```ruby
class PartitionAuditEvents < Gitlab::Database::Migration[2.1]
include Gitlab::Database::PartitioningMigrationHelpers
def up
partition_table_by_date :audit_events, :created_at
end
def down
drop_partitioned_table_for :audit_events
end
end
```
After this has executed, any inserts, updates, or deletes in the
original table are also duplicated in the new table. For updates and
deletes, the operation only has an effect if the corresponding row
exists in the partitioned table.
### Step 2: Backfill the partitioned copy (Release N)
The second step is to add a post-deployment migration that schedules
the background jobs that backfill existing data from the original table
into the partitioned copy.
Continuing the above example, the migration would look like:
```ruby
class BackfillPartitionAuditEvents < Gitlab::Database::Migration[2.1]
include Gitlab::Database::PartitioningMigrationHelpers
disable_ddl_transaction!
restrict_gitlab_migration gitlab_schema: :gitlab_main
def up
enqueue_partitioning_data_migration :audit_events
end
def down
cleanup_partitioning_data_migration :audit_events
end
end
```
This step [queues a batched background migration](../batched_background_migrations.md#enqueue-a-batched-background-migration) internally with `BATCH_SIZE` and `SUB_BATCH_SIZE` set to `50,000` and `2,500`. Refer to the [Batched background migrations guide](../batched_background_migrations.md) for more details.
### Step 3: Post-backfill cleanup (Release after a required stop post Step 2)
{{< alert type="warning" >}}
A [required stop](../required_stops.md) must occur between steps 2 and 3 to allow the background migration from step 2 to complete successfully
in GitLab Self-Managed instances.
{{< /alert >}}
In this step,
add another post-deployment migration that cleans up after the
background migration. This includes forcing any remaining jobs to
execute, and copying data that may have been missed, due to dropped or
failed jobs.
Once again, continuing the example, this migration would look like:
```ruby
class CleanupPartitionedAuditEventsBackfill < Gitlab::Database::Migration[2.1]
include Gitlab::Database::PartitioningMigrationHelpers
disable_ddl_transaction!
restrict_gitlab_migration gitlab_schema: :gitlab_main
def up
finalize_backfilling_partitioned_table :audit_events
end
def down
# no op
end
end
```
After this migration completes, the original table and partitioned
table should contain identical data. The trigger installed on the
original table guarantees that the data remains in sync going forward.
### Step 4: Swap the partitioned and non-partitioned tables (Release N+1)
This step replaces the non-partitioned table with its partitioned copy. It should be used only after all other migration steps have completed successfully.
Some limitations to this method MUST be handled before, or during, the swap migration:
- Secondary indexes and foreign keys are not automatically recreated on the partitioned table.
- Some types of constraints (UNIQUE and EXCLUDE) that rely on indexes are not automatically recreated
  on the partitioned table, because the underlying index is not present.
- Foreign keys referencing the original non-partitioned table should be updated to reference the
partitioned table. This is not supported in PostgreSQL 11.
- Views referencing the original table are not automatically updated to reference the partitioned table.
```ruby
# frozen_string_literal: true
class SwapPartitionedAuditEvents < ActiveRecord::Migration[6.0]
include Gitlab::Database::PartitioningMigrationHelpers
def up
replace_with_partitioned_table :audit_events
end
def down
rollback_replace_with_partitioned_table :audit_events
end
end
```
After this migration completes:
- The partitioned table replaces the non-partitioned (original) table.
- The sync trigger created earlier is dropped.
The partitioned table is now ready for use by the application.
---
stage: Data Access
group: Database Frameworks
info: Any user with at least the Maintainer role can merge updates to this content.
For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Int range partitioning
breadcrumbs:
- doc
- development
- database
- partitioning
---
{{< history >}}
- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/132148) in GitLab 16.8.
{{< /history >}}
## Description
Int range partitioning is a technique for dividing a large table into smaller,
more manageable chunks based on an integer column.
This can be particularly useful for tables with large numbers of rows,
as it can significantly improve query performance, reduce storage requirements, and simplify maintenance tasks.
For this type of partitioning to work well, most queries must access data in a
certain int range.
To look at this in more detail, imagine a simplified `merge_request_diff_files` schema:
```sql
CREATE TABLE merge_request_diff_files (
merge_request_diff_id INT NOT NULL,
relative_order INT NOT NULL,
PRIMARY KEY (merge_request_diff_id, relative_order));
```
Now imagine typical queries in the UI would display the data in a certain int range:
```sql
SELECT *
FROM merge_request_diff_files
WHERE merge_request_diff_id > 1 AND merge_request_diff_id < 10
LIMIT 100
```
If the table is partitioned on the `merge_request_diff_id` column the base table would look like:
```sql
CREATE TABLE merge_request_diff_files (
merge_request_diff_id INT NOT NULL,
relative_order INT NOT NULL,
PRIMARY KEY (merge_request_diff_id, relative_order))
PARTITION BY RANGE(merge_request_diff_id);
```
{{< alert type="note" >}}
The primary key of a partitioned table must include the partition key as
part of the primary key definition.
{{< /alert >}}
And we might have a list of partitions for the table, such as:
```sql
merge_request_diff_files_1 FOR VALUES FROM (1) TO (20)
merge_request_diff_files_20 FOR VALUES FROM (20) TO (40)
merge_request_diff_files_40 FOR VALUES FROM (40) TO (60)
```
Each partition is a separate physical table, with the same structure as
the base `merge_request_diff_files` table, but contains only data for rows where the
partition key falls in the specified range. For example, the partition
`merge_request_diff_files_1` contains rows where the `merge_request_diff_id` column is
greater than or equal to `1` and less than `20`.
Now, if we look at the previous example query again, the database can
use the `WHERE` clause to recognize that all matching rows are in the
`merge_request_diff_files_1` partition, rather than searching all of the data
in all of the partitions. In a large table, this can
dramatically reduce the amount of data the database needs to access.
## Example
### Step 1: Creating the partitioned copy (Release N)
The first step is to add a migration to create the partitioned copy of
the original table. This migration creates the appropriate
partitions based on the data in the original table, and installs a
trigger that syncs writes from the original table into the
partitioned copy.
An example migration of partitioning the `merge_request_diff_commits` table by its
`merge_request_diff_id` column would look like:
```ruby
class PartitionMergeRequestDiffCommits < Gitlab::Database::Migration[2.1]
include Gitlab::Database::PartitioningMigrationHelpers
disable_ddl_transaction!
def up
partition_table_by_int_range(
'merge_request_diff_commits',
'merge_request_diff_id',
partition_size: 10_000_000,
primary_key: %w[merge_request_diff_id relative_order]
)
end
def down
drop_partitioned_table_for('merge_request_diff_commits')
end
end
```
After this has executed, any inserts, updates, or deletes in the
original table are also duplicated in the new table. For updates and
deletes, the operation only has an effect if the corresponding row
exists in the partitioned table.
### Step 2: Backfill the partitioned copy (Release N)
The second step is to add a post-deployment migration that schedules
the background jobs that backfill existing data from the original table
into the partitioned copy.
Continuing the above example, the migration would look like:
```ruby
class BackfillPartitionMergeRequestDiffCommits < Gitlab::Database::Migration[2.2]
include Gitlab::Database::PartitioningMigrationHelpers
milestone '16.10'
disable_ddl_transaction!
restrict_gitlab_migration gitlab_schema: :gitlab_main
def up
enqueue_partitioning_data_migration :merge_request_diff_commits
end
def down
cleanup_partitioning_data_migration :merge_request_diff_commits
end
end
```
This step [queues a batched background migration](../batched_background_migrations.md#enqueue-a-batched-background-migration) internally with `BATCH_SIZE` and `SUB_BATCH_SIZE` set to `50,000` and `2,500`. Refer to the [Batched background migrations guide](../batched_background_migrations.md) for more details.
### Step 3: Post-backfill cleanup (Release N+1)
This step must occur at least one release after the release that
includes step (2). This gives time for the background
migration to execute properly in GitLab Self-Managed instances. In this step,
add another post-deployment migration that cleans up after the
background migration. This includes forcing any remaining jobs to
execute, and copying data that may have been missed, due to dropped or
failed jobs.
{{< alert type="warning" >}}
A required stop must occur between steps 2 and 3 to allow the background migration from step 2 to complete successfully.
{{< /alert >}}
Once again, continuing the example, this migration would look like:
```ruby
class CleanupPartitionMergeRequestDiffCommitsBackfill < Gitlab::Database::Migration[2.1]
include Gitlab::Database::PartitioningMigrationHelpers
disable_ddl_transaction!
restrict_gitlab_migration gitlab_schema: :gitlab_main
def up
finalize_backfilling_partitioned_table :merge_request_diff_commits
end
def down
# no op
end
end
```
After this migration completes, the original table and partitioned
table should contain identical data. The trigger installed on the
original table guarantees that the data remains in sync going forward.
### Step 4: Swap the partitioned and non-partitioned tables (Release N+1)
This step replaces the non-partitioned table with its partitioned copy. It should be used only after all other migration steps have completed successfully.
Some limitations to this method MUST be handled before, or during, the swap migration:
- Secondary indexes and foreign keys are not automatically recreated on the partitioned table (see the sketch after this list).
- Some types of constraints (`UNIQUE` and `EXCLUDE`) that rely on indexes are not automatically recreated on the partitioned table, because the underlying index is not present.
- Foreign keys referencing the original non-partitioned table should be updated to reference the
partitioned table. This is not supported in PostgreSQL 11.
- Views referencing the original table are not automatically updated to reference the partitioned table.
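For example, a secondary index from the original table can be recreated on the partitioned copy in its own migration before the swap. The following is a minimal, hedged sketch: the partitioned table name, column, and index name are placeholders, and it assumes the `add_concurrent_partitioned_index` and `remove_concurrent_partitioned_index_by_name` helpers are available through `Gitlab::Database::PartitioningMigrationHelpers`.

```ruby
class AddIndexToPartitionedMergeRequestDiffCommits < Gitlab::Database::Migration[2.1]
  include Gitlab::Database::PartitioningMigrationHelpers

  disable_ddl_transaction!

  # Placeholder names: adjust to the actual partitioned table, column, and index.
  TABLE_NAME = :merge_request_diff_commits_partitioned
  INDEX_NAME = :index_mr_diff_commits_part_on_commit_author_id

  def up
    # Creates the index on each partition first, then on the parent table.
    add_concurrent_partitioned_index(TABLE_NAME, :commit_author_id, name: INDEX_NAME)
  end

  def down
    remove_concurrent_partitioned_index_by_name(TABLE_NAME, INDEX_NAME)
  end
end
```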
```ruby
# frozen_string_literal: true
class SwapPartitionMergeRequestDiffCommits < Gitlab::Database::Migration[2.1]
include Gitlab::Database::PartitioningMigrationHelpers
def up
replace_with_partitioned_table :merge_request_diff_commits
end
def down
rollback_replace_with_partitioned_table :merge_request_diff_commits
end
end
```
After this migration completes:
- The partitioned table replaces the non-partitioned (original) table.
- The sync trigger created earlier is dropped.
The partitioned table is now ready for use by the application.
# Database table partitioning
{{< alert type="warning" >}}
If you have questions not answered below, check for and add them
to [this issue](https://gitlab.com/gitlab-org/gitlab/-/issues/398650).
Tag `@gitlab-org/database-team/triage` and we'll get back to you with an
answer as soon as possible. If you get an answer in Slack, document
it on the issue as well so we can update this document in the future.
{{< /alert >}}
Table partitioning is a powerful database feature that allows a table's
data to be split into smaller physical tables that act as a single large
table. If the application is designed to work with partitioning in mind,
there can be multiple benefits, such as:
- Query performance can be improved greatly, because the database can
cheaply eliminate much of the data from the search space, while still
providing full SQL capabilities.
- Bulk deletes can be achieved with minimal impact on the database by
dropping entire partitions. This is a natural fit for features that need
to periodically delete data that falls outside the retention window.
- Administrative tasks like `VACUUM` and index rebuilds can operate on
individual partitions, rather than across a single massive table.
Unfortunately, not all models fit a partitioning scheme, and there are
significant drawbacks if implemented incorrectly. Additionally,
**tables can only be partitioned at their creation**, making it nontrivial
to apply partitioning to a busy database. A suite of migration tools is available
to enable backend developers to partition existing tables, but the
migration process is rather heavy, taking multiple steps split across
several releases. Due to the limitations of partitioning and the related
migrations, you should understand how partitioning fits your use case
before attempting to leverage this feature.
The partitioning migration helpers work by creating a partitioned duplicate
of the original table and using a combination of a trigger and a background
migration to copy data into the new table. Changes to the original table
schema can be made in parallel with the partitioning migration, but you must take care not to break the underlying mechanism that makes the migration
work. For example, if a column is added to the table that is being
partitioned, both the partitioned table and the trigger definition must
be updated to match.
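As an illustration of that point, the sketch below adds the same column to both the original table and its partitioned copy in one migration. The table and column names are assumptions for this example, and the sync trigger function would also need to be regenerated so that it copies the new column.

```ruby
class AddExampleColumnDuringPartitioning < Gitlab::Database::Migration[2.1]
  # Assumed names, for illustration only.
  TABLES = %i[example_table example_table_partitioned]

  def change
    TABLES.each do |table|
      # Keep both copies of the schema in sync while the partitioning
      # migration is still in progress.
      add_column(table, :example_flag, :boolean, default: false, null: false)
    end
  end
end
```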
## Determine when to use partitioning
While partitioning can be very useful when properly applied, it's
imperative to identify if the data and workload of a table naturally fit a
partitioning scheme. Understand a few details to decide if partitioning
is a good fit for your particular problem:
- **Table partitioning**. A table is partitioned on a partition key, which is a
column or set of columns which determine how the data is split across the
partitions. The partition key is used by the database when reading or
writing data, to decide which partitions must be accessed. The
partition key should be a column that would be included in a `WHERE`
clause on almost all queries accessing that table.
- **How the data is split**. What strategy does the database use
to split the data across the partitions?
## Determine the appropriate partitioning strategy
The available partitioning strategy choices are `date range`, `int range`, `hash`, and `list`.
# List partition
{{< history >}}
- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/96815) in GitLab 15.4.
{{< /history >}}
## Description
Add the partitioning key column to the table you are partitioning.
Include the partitioning key in the following constraints:
- The primary key.
- All foreign keys referencing the table to be partitioned.
- All unique constraints.
## Example
### Step 1 - Add partition key
Add the partitioning key column. For example, in a rails migration:
```ruby
class AddPartitionNumberForPartitioning < Gitlab::Database::Migration[2.1]
TABLE_NAME = :table_name
COLUMN_NAME = :partition_id
DEFAULT_VALUE = 100
def change
add_column(TABLE_NAME, COLUMN_NAME, :bigint, default: 100)
end
end
```
### Step 2 - Create required indexes
Add indexes including the partitioning key column. For example, in a rails migration:
```ruby
class PrepareIndexesForPartitioning < Gitlab::Database::Migration[2.1]
disable_ddl_transaction!
TABLE_NAME = :table_name
INDEX_NAME = :index_name
def up
add_concurrent_index(TABLE_NAME, [:id, :partition_id], unique: true, name: INDEX_NAME)
end
def down
remove_concurrent_index_by_name(TABLE_NAME, INDEX_NAME)
end
end
```
### Step 3 - Enforce unique constraint
Change all unique indexes to include the partitioning key column,
including the primary key index. You can start by adding a unique
index on `[primary_key_column, :partition_id]`, which will be
required for the next two steps. For example, in a rails migration:
```ruby
class PrepareUniqueConstraintForPartitioning < Gitlab::Database::Migration[2.1]
disable_ddl_transaction!
TABLE_NAME = :table_name
OLD_UNIQUE_INDEX_NAME = :index_name_unique
NEW_UNIQUE_INDEX_NAME = :new_index_name
def up
add_concurrent_index(TABLE_NAME, [:id, :partition_id], unique: true, name: NEW_UNIQUE_INDEX_NAME)
remove_concurrent_index_by_name(TABLE_NAME, OLD_UNIQUE_INDEX_NAME)
end
def down
add_concurrent_index(TABLE_NAME, :id, unique: true, name: OLD_UNIQUE_INDEX_NAME)
remove_concurrent_index_by_name(TABLE_NAME, NEW_UNIQUE_INDEX_NAME)
end
end
```
### Step 4 - Enforce foreign key constraint
Enforce foreign keys including the partitioning key column. For example, in a rails migration:
```ruby
class PrepareForeignKeyForPartitioning < Gitlab::Database::Migration[2.1]
disable_ddl_transaction!
SOURCE_TABLE_NAME = :source_table_name
TARGET_TABLE_NAME = :target_table_name
COLUMN = :foreign_key_id
TARGET_COLUMN = :id
FK_NAME = :fk_365d1db505_p
PARTITION_COLUMN = :partition_id
def up
add_concurrent_foreign_key(
SOURCE_TABLE_NAME,
TARGET_TABLE_NAME,
column: [PARTITION_COLUMN, COLUMN],
target_column: [PARTITION_COLUMN, TARGET_COLUMN],
validate: false,
on_update: :cascade,
name: FK_NAME
)
# This should be done in a separate post migration when dealing with a high traffic table
validate_foreign_key(SOURCE_TABLE_NAME, [PARTITION_COLUMN, COLUMN], name: FK_NAME)
end
def down
with_lock_retries do
remove_foreign_key_if_exists(SOURCE_TABLE_NAME, name: FK_NAME)
end
end
end
```
The `on_update: :cascade` option is mandatory if we want the partitioning column
to be updated. This will cascade the update to all dependent rows. Without
specifying it, updating the partition column on the target table would
result in a `Key is still referenced from table ...` error and updating the
partition column on the source table would raise a
`Key is not present in table ...` error.
### Step 5 - Swap primary key
Swap the primary key including the partitioning key column. This can be done only after the partition key has been included in all referencing foreign keys. For example, in a rails migration:
```ruby
class PreparePrimaryKeyForPartitioning < Gitlab::Database::Migration[2.1]
disable_ddl_transaction!
TABLE_NAME = :table_name
PRIMARY_KEY = :primary_key
OLD_INDEX_NAME = :old_index_name
NEW_INDEX_NAME = :new_index_name
def up
swap_primary_key(TABLE_NAME, PRIMARY_KEY, NEW_INDEX_NAME)
end
def down
add_concurrent_index(TABLE_NAME, :id, unique: true, name: OLD_INDEX_NAME)
add_concurrent_index(TABLE_NAME, [:id, :partition_id], unique: true, name: NEW_INDEX_NAME)
unswap_primary_key(TABLE_NAME, PRIMARY_KEY, OLD_INDEX_NAME)
# We need to add back referenced FKs if any, eg: https://gitlab.com/gitlab-org/gitlab/-/merge_requests/113725/diffs
end
end
```
{{< alert type="note" >}}
Do not forget to set the primary key explicitly in your model as `ActiveRecord` does not support composite primary keys.
{{< /alert >}}
```ruby
class Model < ApplicationRecord
self.primary_key = :id
end
```
### Step 6 - Create parent table and attach existing table as the initial partition
You can now create the parent table attaching the existing table as the initial
partition by using the following helpers provided by the database team.
For example, using list partitioning in Rails post migrations:
```ruby
class PrepareTableConstraintsForListPartitioning < Gitlab::Database::Migration[2.1]
include Gitlab::Database::PartitioningMigrationHelpers::TableManagementHelpers
disable_ddl_transaction!
TABLE_NAME = :table_name
PARENT_TABLE_NAME = :p_table_name
FIRST_PARTITION = 100
PARTITION_COLUMN = :partition_id
def up
prepare_constraint_for_list_partitioning(
table_name: TABLE_NAME,
partitioning_column: PARTITION_COLUMN,
parent_table_name: PARENT_TABLE_NAME,
initial_partitioning_value: FIRST_PARTITION
)
end
def down
revert_preparing_constraint_for_list_partitioning(
table_name: TABLE_NAME,
partitioning_column: PARTITION_COLUMN,
parent_table_name: PARENT_TABLE_NAME,
initial_partitioning_value: FIRST_PARTITION
)
end
end
```
{{< alert type="note" >}}
`initial_partitioning_value` could be an array of values. It must contain all of the
values for the existing partitions. See [this issue](https://gitlab.com/gitlab-org/gitlab/-/issues/465859)
for more details.
{{< /alert >}}
```ruby
class ConvertTableToListPartitioning < Gitlab::Database::Migration[2.1]
include Gitlab::Database::PartitioningMigrationHelpers::TableManagementHelpers
disable_ddl_transaction!
TABLE_NAME = :table_name
PARENT_TABLE_NAME = :p_table_name
FIRST_PARTITION = 100
PARTITION_COLUMN = :partition_id
def up
convert_table_to_first_list_partition(
table_name: TABLE_NAME,
partitioning_column: PARTITION_COLUMN,
parent_table_name: PARENT_TABLE_NAME,
initial_partitioning_value: FIRST_PARTITION
)
end
def down
revert_converting_table_to_first_list_partition(
table_name: TABLE_NAME,
partitioning_column: PARTITION_COLUMN,
parent_table_name: PARENT_TABLE_NAME,
initial_partitioning_value: FIRST_PARTITION
)
end
end
```
{{< alert type="note" >}}
Do not forget to set the sequence name explicitly in your model because it will
be owned by the routing table and `ActiveRecord` can't determine it. This can
be cleaned up after the `table_name` is changed to the routing table.
{{< /alert >}}
```ruby
class Model < ApplicationRecord
self.sequence_name = 'model_id_seq'
end
```
If the partitioning constraint migration takes [more than 10 minutes](../../migration_style_guide.md#how-long-a-migration-should-take) to finish,
it can be made to run asynchronously to avoid running the post-migration during busy hours.
Prepend the migration `AsyncPrepareTableConstraintsForListPartitioning` and use the `async: true` option. This change marks the partitioning constraint as `NOT VALID` and enqueues a scheduled job to validate the existing data in the table during the weekend. The second post-migration `PrepareTableConstraintsForListPartitioning` then only marks the partitioning constraint as validated, because the existing data was already validated during the previous weekend.
For example:
```ruby
class AsyncPrepareTableConstraintsForListPartitioning < Gitlab::Database::Migration[2.1]
include Gitlab::Database::PartitioningMigrationHelpers::TableManagementHelpers
disable_ddl_transaction!
TABLE_NAME = :table_name
PARENT_TABLE_NAME = :p_table_name
FIRST_PARTITION = 100
PARTITION_COLUMN = :partition_id
def up
prepare_constraint_for_list_partitioning(
table_name: TABLE_NAME,
partitioning_column: PARTITION_COLUMN,
parent_table_name: PARENT_TABLE_NAME,
initial_partitioning_value: FIRST_PARTITION,
async: true
)
end
def down
revert_preparing_constraint_for_list_partitioning(
table_name: TABLE_NAME,
partitioning_column: PARTITION_COLUMN,
parent_table_name: PARENT_TABLE_NAME,
initial_partitioning_value: FIRST_PARTITION
)
end
end
```
### Step 7 - Re-point foreign keys to parent table
The tables that reference the initial partition must be updated to point to the
parent table now. Without this change, the records from those tables
will not be able to locate the rows in the next partitions because they will look
for them in the initial partition.
Steps:
- Add the foreign key to the partitioned table and validate it asynchronously,
  [for example](https://gitlab.com/gitlab-org/gitlab/-/blob/65d63f6a00196c3a7d59f15191920f271ab2b145/db/post_migrate/20230524135543_replace_ci_build_pending_states_foreign_key.rb). A hedged sketch also follows this list.
- Validate it synchronously after the asynchronously validation was completed on GitLab.com,
[for example](https://gitlab.com/gitlab-org/gitlab/-/blob/65d63f6a00196c3a7d59f15191920f271ab2b145/db/post_migrate/20230530140456_validate_fk_ci_build_pending_states_p_ci_builds.rb).
- Remove the old foreign key and rename the new one to the old name,
[for example](https://gitlab.com/gitlab-org/gitlab/-/blob/65d63f6a00196c3a7d59f15191920f271ab2b145/db/post_migrate/20230615083713_replace_old_fk_ci_build_pending_states_to_builds.rb#L9).
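The following is a minimal sketch of the first step above, using assumed table, column, and constraint names (the linked real migrations remain the authoritative references). The new foreign key targets the parent (routing) table and is created with `validate: false` so it can be validated asynchronously afterwards.

```ruby
class ReplaceExampleForeignKeyWithPartitionedTarget < Gitlab::Database::Migration[2.1]
  disable_ddl_transaction!

  # Assumed names, for illustration only.
  SOURCE_TABLE_NAME = :example_referencing_table
  PARENT_TABLE_NAME = :p_table_name
  FK_NAME = :fk_example_to_p_table_name

  def up
    # Point the reference at the parent (routing) table, including the
    # partitioning column, and skip validation for now.
    add_concurrent_foreign_key(
      SOURCE_TABLE_NAME,
      PARENT_TABLE_NAME,
      column: [:partition_id, :example_id],
      target_column: [:partition_id, :id],
      on_update: :cascade,
      validate: false,
      name: FK_NAME
    )
  end

  def down
    with_lock_retries do
      remove_foreign_key_if_exists(SOURCE_TABLE_NAME, name: FK_NAME)
    end
  end
end
```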
### Step 8 - Ensure ID uniqueness across partitions
Because all uniqueness constraints must include the partitioning key, duplicate IDs could otherwise occur across partitions. To solve this, we enforce that only the database can set ID values, using a single sequence to generate them, because sequences are guaranteed to generate unique values.
For example:
```ruby
class EnsureIdUniquenessForPCiBuilds < Gitlab::Database::Migration[2.1]
include Gitlab::Database::PartitioningMigrationHelpers::UniquenessHelpers
TABLE_NAME = :p_ci_builds
SEQ_NAME = :ci_builds_id_seq
def up
ensure_unique_id(TABLE_NAME, seq: SEQ_NAME)
end
def down
revert_ensure_unique_id(TABLE_NAME, seq: SEQ_NAME)
end
end
```
### Step 9 - Analyze the partitioned table and create new partitions
The autovacuum daemon does not process partitioned tables. It is necessary to
periodically run a manual `ANALYZE` to keep the statistics of the table hierarchy
up to date.
Models that implement `Ci::Partitionable` with `partitioned: true` option are
analyzed by default on a weekly basis. To enable this and create new partitions
you need to register the model in the [PostgreSQL initializer](https://gitlab.com/gitlab-org/gitlab/-/blob/b7f0e3f1bcd2ffc220768bbc373364151775ca8e/config/initializers/postgres_partitioning.rb).
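A hedged sketch of what that registration might look like, assuming the initializer exposes a `register_models` helper and using a made-up model name; check the linked initializer for the current API before copying this.

```ruby
# config/initializers/postgres_partitioning.rb (illustrative excerpt)
Gitlab::Database::Partitioning.register_models(
  [
    Ci::ExamplePartitionedModel # assumed model name for this sketch
  ]
)
```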
### Step 10 - Update the application to use the partitioned table
Now that the parent table is ready, we can update the application to use it:
```ruby
class Model < ApplicationRecord
self.table_name = :partitioned_table
end
```
Depending on the model, it might be safer to use a [change management issue](https://gitlab.com/gitlab-com/gl-infra/production/-/issues/16387).
# Hash Partitioning
Hash partitioning is a method of dividing a large table into smaller, more manageable partitions based on a hash function applied to a specified column, typically the ID column. It offers unique advantages for certain use cases, but it also comes with limitations.
Key points:
- Data distribution: Rows are assigned to partitions based on the hash value of their ID and a modulus-remainder calculation.
  For example, if partitioning by `HASH(ID)` with `MODULUS 64` and `REMAINDER 1`, rows with `hash(ID) % 64 == 1` would go into the corresponding partition. A DDL sketch follows this list.
- Query requirements: Hash partitioning works best when most queries include a `WHERE hashed_column = ?` condition,
as this allows PostgreSQL to quickly identify the relevant partition.
- ID uniqueness: It's the only partitioning method (aside from complex list partitioning) that can guarantee ID uniqueness across multiple partitions at the database level.
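To make the modulus-remainder layout concrete, here is a hedged sketch with made-up table and column names. It is not a GitLab migration helper, just raw PostgreSQL DDL wrapped in a migration, using four partitions for brevity.

```ruby
class CreateExampleHashPartitionedTable < Gitlab::Database::Migration[2.1]
  def up
    execute(<<~SQL)
      CREATE TABLE example_hash_table (
        id bigint NOT NULL,
        payload text,
        PRIMARY KEY (id)
      ) PARTITION BY HASH (id);

      -- One partition per remainder: rows with hash(id) % 4 == n land in partition n.
      CREATE TABLE example_hash_table_0 PARTITION OF example_hash_table
        FOR VALUES WITH (MODULUS 4, REMAINDER 0);
      CREATE TABLE example_hash_table_1 PARTITION OF example_hash_table
        FOR VALUES WITH (MODULUS 4, REMAINDER 1);
      CREATE TABLE example_hash_table_2 PARTITION OF example_hash_table
        FOR VALUES WITH (MODULUS 4, REMAINDER 2);
      CREATE TABLE example_hash_table_3 PARTITION OF example_hash_table
        FOR VALUES WITH (MODULUS 4, REMAINDER 3);
    SQL
  end

  def down
    drop_table :example_hash_table
  end
end
```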
Upfront decisions:
- The number of partitions must be chosen before table creation and cannot be easily added later. This makes it crucial to anticipate future data growth.
Unsupported query types:
- Range queries (`WHERE id BETWEEN ? AND ?`) and lookups by other keys (`WHERE other_id = ?`) are not directly supported on hash-partitioned tables.
Considerations:
- Choose a large number of partitions to accommodate future growth.
- Ensure application queries align with hash partitioning requirements.
- Evaluate alternatives like range partitioning or list partitioning if range queries or lookups by other keys are essential.
In summary, hash partitioning is a valuable tool for specific scenarios, particularly when ID uniqueness across partitions is crucial. However, it's essential to carefully consider its limitations and query patterns before implementation.
# Experiment rollouts and feature flags
## Experiment rollout issue
Each experiment should have an [experiment rollout](https://gitlab.com/groups/gitlab-org/-/boards/1352542) issue to track the experiment from rollout through to cleanup and removal.
The rollout issue is similar to a feature flag rollout issue, and is also used to track the status of an experiment.
When an experiment is deployed, the due date of the issue should be set (this depends on the experiment but can be up to a few weeks in the future).
After the deadline, the issue must be resolved and either:
- It was successful and the experiment becomes the new default.
- It was not successful and all code related to the experiment is removed.
In either case, an outcome of the experiment should be posted to the issue with the reasoning for the decision.
## Turn off all experiments
When there is a case on GitLab.com (SaaS) that necessitates turning off all experiments, we have this control.
You can toggle experiments on SaaS on and off using the `gitlab_experiment` [feature flag](../feature_flags/_index.md).
This can be done via ChatOps:
- [disable](../feature_flags/controls.md#disabling-feature-flags): `/chatops run feature set gitlab_experiment false`
- [enable](../feature_flags/controls.md#process): `/chatops run feature delete gitlab_experiment`
- This allows the `default_enabled` [value of true in the YAML](https://gitlab.com/gitlab-org/gitlab/-/blob/016430f6751b0c34abb24f74608c80a1a8268f20/config/feature_flags/ops/gitlab_experiment.yml#L8) to be honored.
## Notes on feature flags
{{< alert type="note" >}}
We use the terms "enabled" and "disabled" here, even though it's against our
[documentation style guide recommendations](../documentation/styleguide/word_list.md#enable)
because these are the terms that the feature flag documentation uses.
{{< /alert >}}
You may already be familiar with the concept of feature flags in GitLab, but using
feature flags in experiments is a bit different. While in general terms, a feature flag
is viewed as being either `on` or `off`, this isn't accurate for experiments.
Generally, `off` means that when we ask if a feature flag is enabled, it always
returns `false`, and `on` means that it always returns `true`. An interim state,
considered `conditional`, also exists. We take advantage of this trinary state of
feature flags. To understand this `conditional` aspect: consider that either of these
settings puts a feature flag into this state:
- Setting a `percentage_of_actors` of any percent greater than 0%.
- Enabling it for a single user or group.
Conditional means that it returns `true` in some situations, but not all situations.
When a feature flag is disabled (meaning the state is `off`), the experiment is
considered _inactive_. You can visualize this in the [decision tree diagram](https://gitlab.com/gitlab-org/ruby/gems/gitlab-experiment#how-it-works)
as reaching the first `Running?` node, and traversing the negative path.
When a feature flag is rolled out to a `percentage_of_actors` or similar (meaning the
state is `conditional`) the experiment is considered to be _running_
where sometimes the control is assigned, and sometimes the candidate is assigned.
We don't refer to this as being enabled, because that's a confusing and overloaded
term here. In experiment terms, our experiment is _running_, and the feature flag is
`conditional`.
When a feature flag is enabled (meaning the state is `on`), the candidate is always
assigned.
We should try to be consistent with our terms, and so for experiments, we have an
_inactive_ experiment until we set the feature flag to `conditional`, after which our experiment is considered _running_. If you choose to "enable" your feature flag,
you should consider the experiment to be _resolved_, because everyone is assigned
the candidate unless they've opted out of experimentation.
# Experiment code reviews
Experiments' code quality can fail our standards for several reasons. These
reasons can include the code not staying in the codebase for long, or the need for fast iteration to retrieve data. However, having the experiment run (or not
run) shouldn't impact GitLab availability. To avoid or identify issues,
experiments are initially deployed to a small number of users. Regardless,
experiments still need tests.
Experiments must have corresponding [frontend or feature tests](../testing_guide/_index.md) to ensure they
exist in the application. These tests should help prevent the experiment code from
being removed before the [experiment cleanup process](https://handbook.gitlab.com/handbook/marketing/growth/engineering/experimentation/#experiment-cleanup-issue) starts.
If, as a reviewer or maintainer, you find code that would usually fail review
but is acceptable for now, mention your concerns with a note stating that there's no
need to change the code. The author can then add a comment to this piece of code
and link to the issue that resolves the experiment. The author or reviewer can add a link to this concern in the
experiment rollout issue under the `Experiment Successful Cleanup Concerns` section of the description.
If the experiment is successful and becomes part of the product, any items that appear under this section are addressed.
# Testing experiments
## Testing experiments with RSpec
In the course of working with experiments, you might want to use the RSpec
tooling that's built in. This happens automatically for files in `spec/experiments`, but
for other files and specs you want to include it in, you can specify the `:experiment` type:
```ruby
it "tests experiments nicely", :experiment do
end
```
### Stub helpers
You can stub experiments using `stub_experiments`. Pass it a hash using experiment
names as the keys, and the variants you want each to resolve to, as the values:
```ruby
# Ensures the experiments named `:example` & `:example2` are both "enabled" and
# that each will resolve to the given variant (`:my_variant` and `:control`).
stub_experiments(example: :my_variant, example2: :control)
experiment(:example) do |e|
e.enabled? # => true
e.assigned.name # => 'my_variant'
end
experiment(:example2) do |e|
e.enabled? # => true
e.assigned.name # => 'control'
end
```
### Exclusion, segmentation, and behavior matchers
You can also test things like the registered behaviors, the exclusions, and
segmentations using the matchers.
```ruby
class ExampleExperiment < ApplicationExperiment
control { }
candidate { '_candidate_' }
exclude { context.actor.first_name == 'Richard' }
segment(variant: :candidate) { context.actor.username == 'jejacks0n' }
end
excluded = double(username: 'rdiggitty', first_name: 'Richard')
segmented = double(username: 'jejacks0n', first_name: 'Jeremy')
# register_behavior matcher
expect(experiment(:example)).to register_behavior(:control)
expect(experiment(:example)).to register_behavior(:candidate).with('_candidate_')
# exclude matcher
expect(experiment(:example)).to exclude(actor: excluded)
expect(experiment(:example)).not_to exclude(actor: segmented)
# segment matcher
expect(experiment(:example)).to segment(actor: segmented).into(:candidate)
expect(experiment(:example)).not_to segment(actor: excluded)
```
### Tracking matcher
Tracking events is a major aspect of experimentation. We try
to provide a flexible way to ensure your tracking calls are covered.
You can do this on the instance level or at an "any instance" level:
```ruby
subject = experiment(:example)
expect(subject).to track(:my_event)
subject.track(:my_event)
```
You can use the `on_next_instance` chain method to specify that it happens
on the next instance of the experiment. This helps you if you're calling
`experiment(:example).track` downstream:
```ruby
expect(experiment(:example)).to track(:my_event).on_next_instance
experiment(:example).track(:my_event)
```
A full example of the methods you can chain onto the `track` matcher:
```ruby
expect(experiment(:example)).to track(:my_event, value: 1, property: '_property_')
.on_next_instance
.with_context(foo: :bar)
.for(:variant_name)
experiment(:example, :variant_name, foo: :bar).track(:my_event, value: 1, property: '_property_')
```
## Test with Jest
### Stub helpers
You can stub experiments using the `stubExperiments` helper defined in `spec/frontend/__helpers__/experimentation_helper.js`.
```javascript
import { stubExperiments } from 'helpers/experimentation_helper';
import { getExperimentData } from '~/experimentation/utils';
describe('when my_experiment is enabled', () => {
beforeEach(() => {
stubExperiments({ my_experiment: 'candidate' });
});
it('sets the correct data', () => {
expect(getExperimentData('my_experiment')).toEqual({ experiment: 'my_experiment', variant: 'candidate' });
});
});
```
{{< alert type="note" >}}
This method of stubbing in Jest specs does not automatically un-stub itself at the end of the test. We merge our stubbed experiment in with all the other global data in `window.gl`. If you must remove the stubbed experiments after your test or ensure a clean global object before your test, you must manage the global object directly yourself:
{{< /alert >}}
```javascript
describe('tests that care about global state', () => {
const originalObjects = [];
beforeEach(() => {
// For backwards compatibility for now, we're using both window.gon & window.gl
originalObjects.push(window.gon, window.gl);
});
afterEach(() => {
[window.gon, window.gl] = originalObjects;
});
it('stubs experiment in fresh global state', () => {
stubExperiments({ my_experiment: 'candidate' });
// ...
});
})
```
|
---
stage: Growth
group: Acquisition
info: Any user with at least the Maintainer role can merge updates to this content.
For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Testing experiments
breadcrumbs:
- doc
- development
- experiment_guide
---
## Testing experiments with RSpec
In the course of working with experiments, you might want to use the RSpec
tooling that's built in. This happens automatically for files in `spec/experiments`, but
for other files and specs you want to include it in, you can specify the `:experiment` type:
```ruby
it "tests experiments nicely", :experiment do
end
```
### Stub helpers
You can stub experiments using `stub_experiments`. Pass it a hash using experiment
names as the keys, and the variants you want each to resolve to, as the values:
```ruby
# Ensures the experiments named `:example` & `:example2` are both "enabled" and
# that each will resolve to the given variant (`:my_variant` and `:control`).
stub_experiments(example: :my_variant, example2: :control)
experiment(:example) do |e|
e.enabled? # => true
e.assigned.name # => 'my_variant'
end
experiment(:example2) do |e|
e.enabled? # => true
e.assigned.name # => 'control'
end
```
### Exclusion, segmentation, and behavior matchers
You can also test things like the registered behaviors, the exclusions, and
segmentations using the matchers.
```ruby
class ExampleExperiment < ApplicationExperiment
control { }
candidate { '_candidate_' }
exclude { context.actor.first_name == 'Richard' }
segment(variant: :candidate) { context.actor.username == 'jejacks0n' }
end
excluded = double(username: 'rdiggitty', first_name: 'Richard')
segmented = double(username: 'jejacks0n', first_name: 'Jeremy')
# register_behavior matcher
expect(experiment(:example)).to register_behavior(:control)
expect(experiment(:example)).to register_behavior(:candidate).with('_candidate_')
# exclude matcher
expect(experiment(:example)).to exclude(actor: excluded)
expect(experiment(:example)).not_to exclude(actor: segmented)
# segment matcher
expect(experiment(:example)).to segment(actor: segmented).into(:candidate)
expect(experiment(:example)).not_to segment(actor: excluded)
```
### Tracking matcher
Tracking events is a major aspect of experimentation. We try
to provide a flexible way to ensure your tracking calls are covered.
You can do this on the instance level or at an "any instance" level:
```ruby
subject = experiment(:example)
expect(subject).to track(:my_event)
subject.track(:my_event)
```
You can use the `on_next_instance` chain method to specify that it happens
on the next instance of the experiment. This helps you if you're calling
`experiment(:example).track` downstream:
```ruby
expect(experiment(:example)).to track(:my_event).on_next_instance
experiment(:example).track(:my_event)
```
A full example of the methods you can chain onto the `track` matcher:
```ruby
expect(experiment(:example)).to track(:my_event, value: 1, property: '_property_')
.on_next_instance
.with_context(foo: :bar)
.for(:variant_name)
experiment(:example, :variant_name, foo: :bar).track(:my_event, value: 1, property: '_property_')
```
## Test with Jest
### Stub Helpers
You can stub experiments using the `stubExperiments` helper defined in `spec/frontend/__helpers__/experimentation_helper.js`.
```javascript
import { stubExperiments } from 'helpers/experimentation_helper';
import { getExperimentData } from '~/experimentation/utils';
describe('when my_experiment is enabled', () => {
beforeEach(() => {
stubExperiments({ my_experiment: 'candidate' });
});
it('sets the correct data', () => {
expect(getExperimentData('my_experiment')).toEqual({ experiment: 'my_experiment', variant: 'candidate' });
});
});
```
{{< alert type="note" >}}
This method of stubbing in Jest specs does not automatically un-stub itself at the end of the test. We merge our stubbed experiment in with all the other global data in `window.gl`. If you must remove the stubbed experiments after your test or ensure a clean global object before your test, you must manage the global object directly yourself:
{{< /alert >}}
```javascript
describe('tests that care about global state', () => {
const originalObjects = [];
beforeEach(() => {
// For backwards compatibility for now, we're using both window.gon & window.gl
originalObjects.push(window.gon, window.gl);
});
afterEach(() => {
[window.gon, window.gl] = originalObjects;
});
it('stubs experiment in fresh global state', () => {
stubExperiments({ my_experiment: 'candidate' });
// ...
});
});
```
# Implementing an A/B/n experiment
## Implementing an experiment
[Examples](https://gitlab.com/groups/gitlab-org/growth/-/wikis/GLEX-How-Tos)
Start by generating a feature flag using the `bin/feature-flag` command as you
usually would for a development feature flag, making sure to use `experiment` for
the type. For the sake of documentation let's name our feature flag (and experiment)
`pill_color`.
```shell
bin/feature-flag pill_color -t experiment
```
After you generate the desired feature flag, you can immediately implement an
experiment in code. A basic experiment implementation can be:
```ruby
experiment(:pill_color, actor: current_user) do |e|
e.control { 'control' }
e.variant(:red) { 'red' }
e.variant(:blue) { 'blue' }
end
```
When this code executes, the experiment is run, a variant is assigned, and (if in a
controller or view) a `window.gl.experiments.pill_color` object is available in the
client layer, with details like:
- The assigned variant.
- The context key for client tracking events.
In addition, when an experiment runs, an event is tracked for
the experiment `:assignment`. We cover more about events, tracking, and
the client layer later.
In local development, you can make the experiment active by using the feature flag
interface. You can also target specific cases by providing the relevant experiment
to the call to enable the feature flag:
```ruby
# Enable for everyone
Feature.enable(:pill_color)
# Get the `experiment` method -- already available in controllers, views, and mailers.
include Gitlab::Experiment::Dsl
# Enable for only the first user
Feature.enable(:pill_color, experiment(:pill_color, actor: User.first))
```
To roll out your experiment feature flag on an environment, run
the following command using ChatOps (which is covered in more depth in the
[Feature flags in development of GitLab](../feature_flags/_index.md) documentation).
This command creates a scenario where half of everyone who encounters
the experiment would be assigned the _control_, 25% would be assigned the _red_
variant, and 25% would be assigned the _blue_ variant:
```plaintext
/chatops run feature set pill_color 50 --actors
```
For an even distribution in this example, change the command to set it to 66% instead
of 50.
{{< alert type="note" >}}
To immediately stop running an experiment, use the
`/chatops run feature set pill_color false` command.
{{< /alert >}}
{{< alert type="warning" >}}
We strongly recommend using the `--actors` flag when using the ChatOps commands,
as anything else may give odd behaviors due to how the caching of variant assignment is
handled.
{{< /alert >}}
We can also implement this experiment in a HAML file with HTML wrappings:
```haml
#cta-interface
- experiment(:pill_color, actor: current_user) do |e|
- e.control do
.pill-button control
- e.variant(:red) do
.pill-button.red red
- e.variant(:blue) do
.pill-button.blue blue
```
### The importance of context
In our previous example experiment, our context (this is an important term) is a hash
that's set to `{ actor: current_user }`. Context must be unique based on how you
want to run your experiment, and should be understood at a lower level.
It's expected, and recommended, that you use some of these
contexts to simplify reporting:
- `{ actor: current_user }`: Assigns a variant and is "sticky" to each user
(or "client" if `current_user` is nil) who enters the experiment.
- `{ project: project }`: Assigns a variant and is "sticky" to the project
being viewed. If running your experiment is more useful when viewing a project,
rather than when a specific user is viewing any project, consider this approach.
- `{ group: group }`: Similar to the project example, but applies to a wider
scope of projects and users.
- `{ actor: current_user, project: project }`: Assigns a variant and is "sticky"
to the user who is viewing the given project. This creates a different variant
assignment possibility for every project that `current_user` views. Understand this
can create a large cache size if an experiment like this is run in a highly trafficked part
of the application.
- `{ wday: Time.current.wday }`: Assigns a variant based on the current day of the
week. In this example, it would consistently assign one variant on Friday, and a
potentially different variant on Saturday.
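For example, the same experiment can be scoped to a project or a group instead of a user. This is a minimal sketch that assumes `project` and `group` are available in scope:
```ruby
# Sticky per project: every viewer of this project gets the same variant.
experiment(:pill_color, project: project).run
# Sticky per group: one assignment shared across the group's projects and users.
experiment(:pill_color, group: group).run
```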
Context is critical to how you define and report on your experiment. It's usually
the most important aspect of how you choose to implement your experiment, so consider
it carefully, and discuss it with the wider team if needed. Also, take into account
that the context you choose affects our cache size.
After the above examples, we can state the general case: *given a specific
and consistent context, we can provide a consistent experience and track events for
that experience.* To dive a bit deeper into the implementation details: a context key
is generated from the context that's provided. Use this context key to:
- Determine the assigned variant.
- Identify events tracked against that context key.
We can think about this as the experience that we've rendered, which is both dictated
and tracked by the context key. The context key is used to track the interaction and
results of the experience we've rendered to that context key. These concepts are
somewhat abstract and hard to understand initially, but this approach enables us to
communicate about experiments as something that's wider than just user behavior.
{{< alert type="note" >}}
Using `actor:` uses cookies if the `current_user` is nil. If you don't need
cookies though - meaning that the exposed functionality would only be visible to
authenticated users - `{ user: current_user }` would be just as effective.
{{< /alert >}}
{{< alert type="warning" >}}
The caching of variant assignment is done by using this context, and so consider
your impact on the cache size when defining your experiment. If you use
`{ time: Time.current }` you would be inflating the cache size every time the
experiment is run. Not only that, your experiment would not be "sticky" and events
wouldn't be resolvable.
{{< /alert >}}
### Advanced experimentation
There are two ways to implement an experiment:
1. The basic experiment style described previously.
1. A more advanced style where an experiment class is provided.
The advanced style is handled by naming convention, and works similarly to what you
would expect in Rails.
To generate a custom experiment class that can override the defaults in
`ApplicationExperiment`, use the Rails generator:
```shell
rails generate gitlab:experiment pill_color control red blue
```
This generates an experiment class in `app/experiments/pill_color_experiment.rb`
with the _behaviors_ we've provided to the generator. Here's an example
of how that class would look after migrating our previous example into it:
```ruby
class PillColorExperiment < ApplicationExperiment
control { 'control' }
variant(:red) { 'red' }
variant(:blue) { 'blue' }
end
```
We can now simplify where we run our experiment: instead of providing the block we
provided initially, we explicitly call `run`:
```ruby
experiment(:pill_color, actor: current_user).run
```
The _behaviors_ we defined in our experiment class represent the default
implementation. You can still use the block syntax to override these _behaviors_
however, so the following would also be valid:
```ruby
experiment(:pill_color, actor: current_user) do |e|
e.control { '<strong>control</strong>' }
end
```
{{< alert type="note" >}}
When passing a block to the `experiment` method, it is implicitly invoked as
if `run` has been called.
{{< /alert >}}
#### Segmentation rules
You can use runtime segmentation rules to, for instance, segment contexts into a specific
variant. The `segment` method is a callback (like `before_action`) and so allows providing
a block or method name.
In this example, any user named `'Richard'` would always be assigned the _red_
variant, and any account older than 2 weeks would be assigned the _blue_ variant:
```ruby
class PillColorExperiment < ApplicationExperiment
# ...registered behaviors
segment(variant: :red) { context.actor.first_name == 'Richard' }
segment :old_account?, variant: :blue
private
def old_account?
context.actor.created_at < 2.weeks.ago
end
end
```
When an experiment runs, the segmentation rules are executed in the order they're
defined. The first segmentation rule to produce a truthy result assigns the variant.
In our example, any user named `'Richard'`, regardless of account age, is always
assigned the _red_ variant. If you want the opposite logic, flip the order.
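For example, this sketch of the class above flips the order so the account-age rule takes precedence over the name rule:
```ruby
class PillColorExperiment < ApplicationExperiment
  # ...registered behaviors
  # Evaluated first: accounts older than 2 weeks get the blue variant...
  segment :old_account?, variant: :blue
  # ...and only newer accounts named 'Richard' reach this rule.
  segment(variant: :red) { context.actor.first_name == 'Richard' }
  private
  def old_account?
    context.actor.created_at < 2.weeks.ago
  end
end
```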
{{< alert type="note" >}}
Keep in mind when defining segmentation rules: after a truthy result, the remaining
segmentation rules are skipped to achieve optimal performance.
{{< /alert >}}
#### Exclusion rules
Exclusion rules are similar to segmentation rules, but are intended to determine
if a context should even be considered as something we should include in the experiment
and track events toward. Exclusion means we don't care about the events in relation
to the given context.
This example excludes all users named `'Richard'`, as well as any account
older than 2 weeks. Not only are they given the control behavior - which could
be nothing - but no events are tracked for them either.
```ruby
class PillColorExperiment < ApplicationExperiment
# ...registered behaviors
exclude :old_account?, ->{ context.actor.first_name == 'Richard' }
private
def old_account?
context.actor.created_at < 2.weeks.ago
end
end
```
You may also need to check exclusion in custom tracking logic by calling `should_track?`:
```ruby
class PillColorExperiment < ApplicationExperiment
# ...registered behaviors
def expensive_tracking_logic
return unless should_track?
track(:my_event, value: expensive_method_call)
end
end
```
### Tracking events
One of the most important aspects of experiments is gathering data and reporting on
it. You can use the `track` method to track events across an experimental implementation.
Events are tracked consistently against an experiment as long as you provide the same
context in each call. If you're unsure what that means, review
[The importance of context](#the-importance-of-context) before continuing.
We can assume we run the experiment in one or a few places, but
track events potentially in many places. The tracking call remains the same, with
the arguments you would usually use when
tracking events using Snowplow. The easiest example
of tracking an event in Ruby would be:
```ruby
experiment(:pill_color, actor: current_user).track(:clicked)
```
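The same optional arguments shown elsewhere in this guide (such as `property` and `value`) can be passed along with the event name; the values here are purely illustrative:
```ruby
# Track a click and attach extra Snowplow fields to the event.
experiment(:pill_color, actor: current_user).track(:clicked, property: 'red', value: 1)
```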
When you run an experiment with any of the examples so far, an `:assignment` event
is tracked automatically by default. All events that are tracked from an
experiment have a special
[experiment context](https://gitlab.com/gitlab-org/iglu/-/blob/master/public/schemas/com.gitlab/gitlab_experiment/jsonschema/1-0-3)
added to the event. This can be used - typically by the data team - to create a connection
between the events on a given experiment.
If a user hasn't yet encountered the experiment (that is, the code path where the
experiment is run) and we track an event for them, they are assigned a variant at that
point. If they encounter the experiment later, they see that same variant, and the
`:assignment` event is tracked for them at that time.
{{< alert type="note" >}}
GitLab tries to be sensitive and respectful of our customers regarding tracking,
so our experimentation library allows us to implement an experiment without ever tracking identifying
IDs. It's not always possible, though, based on experiment reporting requirements.
You may be asked from time to time to track a specific record ID in experiments.
The approach is largely up to the PM and engineer creating the implementation.
No recommendations are provided here at this time.
{{< /alert >}}
## Experiments in the client layer
Any experiment that's been run in the request lifecycle surfaces in `window.gl.experiments`,
and matches [this schema](https://gitlab.com/gitlab-org/iglu/-/blob/master/public/schemas/com.gitlab/gitlab_experiment/jsonschema/1-0-3)
so it can be used when resolving experimentation in the client layer.
Given that we've defined a class for our experiment, and have defined the variants for it, we can publish that experiment in a couple of ways.
The first way is by running the experiment. Assuming the experiment has been run, it surfaces in the client layer without having to do anything special.
The second way doesn't run the experiment and is intended to be used if the experiment must only surface in the client layer. To accomplish this we can `.publish` the experiment. This does not run any logic, but does surface the experiment details in the client layer so they can be used there.
An example might be to publish an experiment in a `before_action` in a controller. Assuming we've defined the `PillColorExperiment` class, like we have above, we can surface it to the client by publishing it instead of running it:
```ruby
before_action -> { experiment(:pill_color).publish }, only: [:show]
```
You can then see this surface in the JavaScript console:
```javascript
window.gl.experiments // => { pill_color: { excluded: false, experiment: "pill_color", key: "ca63ac02", variant: "candidate" } }
```
### Using experiments in Vue
With the `gitlab-experiment` component, you can define slots that match the names of the
variants pushed to `window.gl.experiments`.
We can make use of the named slots in the Vue component, which match the behaviors defined in the experiment class:
```vue
<script>
import GitlabExperiment from '~/experimentation/components/gitlab_experiment.vue';
export default {
components: { GitlabExperiment }
}
</script>
<template>
<gitlab-experiment name="pill_color">
<template #control>
<button class="bg-default">Click default button</button>
</template>
<template #red>
<button class="bg-red">Click red button</button>
</template>
<template #blue>
<button class="bg-blue">Click blue button</button>
</template>
</gitlab-experiment>
</template>
```
{{< alert type="note" >}}
When there is no experiment data in the `window.gl.experiments` object for the given experiment name, the `control` slot is used, if it exists.
{{< /alert >}}
# Experiment Guide
Experiments can be conducted by any GitLab team, most often the teams from the
[Growth Sub-department](https://handbook.gitlab.com/handbook/marketing/growth/engineering/).
Experiments are not tied to releases because they primarily target GitLab.com.
Experiments are run as an A/B/n test, and are behind an [experiment feature flag](../feature_flags/_index.md#experiment-type)
to turn the test on or off. Based on the data the experiment generates, the team decides
if the experiment had a positive impact and should be made the new default, or rolled back.
Experiments in GitLab are tightly coupled with the concepts provided by
[Feature flags in development of GitLab](../feature_flags/_index.md). You're strongly encouraged
to read and understand the [Feature flags in development of GitLab](../feature_flags/_index.md)
portion of the documentation before considering running experiments. Experiments add additional
concepts which may seem confusing or advanced without understanding the underpinnings of how GitLab
uses feature flags in development. One concept: experiments can be run with multiple variants,
which are sometimes referred to as A/B/n tests.
We use the [`gitlab-experiment` gem](https://gitlab.com/gitlab-org/ruby/gems/gitlab-experiment),
sometimes referred to as GLEX, to run our experiments. The gem exists in a separate repository
so it can be shared across any GitLab property that uses Ruby. You should feel comfortable reading
the documentation on that project if you want to dig into more advanced topics or open issues. Be
aware that the documentation there reflects what's in the main branch and may not be the same as
the version being used in GitLab.
## Glossary
To ensure a shared language, you should understand these fundamental terms we use
when communicating about experiments (a short code sketch follows this list):
- `experiment`: Any deviation of code paths we want to run at some times, but not others.
- `context`: A consistent experience we provide in an experiment.
- `control`: The default, or "original" code path.
- `candidate`: Defines an experiment with only one code path.
- `variant(s)`: Defines an experiment with multiple code paths.
- `behaviors`: Used to reference all possible code paths of an experiment, including the control.
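A minimal sketch (the experiment name and return values are illustrative) of how these terms map to code:
```ruby
class ExampleExperiment < ApplicationExperiment
  # `control`: the default, or "original", code path.
  control { 'original behavior' }
  # `candidate`: the single alternative code path. Use `variant(:name)` blocks
  # instead when an experiment has multiple code paths.
  candidate { 'new behavior' }
  # Together, the control and candidate (or variants) are the experiment's `behaviors`.
end
# `context`: the hash passed when running the experiment, for example:
#   experiment(:example, actor: current_user).run
```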
## Implementing an experiment
[GLEX](https://gitlab.com/gitlab-org/ruby/gems/gitlab-experiment) - or `Gitlab::Experiment`, the `gitlab-experiment` gem - is the preferred option for implementing an experiment in GitLab.
For more information, see [Implementing an A/B/n experiment using GLEX](implementing_experiments.md).
This uses [experiment](../feature_flags/_index.md#experiment-type) feature flags.
### Add new icons and illustrations for experiments
Some experiments may require you to add custom icons or illustrations to our codebase.
This process is lengthy and, at this stage, the outcome of the experiment is uncertain.
Therefore, you should postpone this effort until the [experiment cleanup process](https://handbook.gitlab.com/handbook/engineering/development/growth/experimentation/#experiment-cleanup-issue).
We recommend the following workflow:
1. Review the Pajamas guidelines for [icons](https://design.gitlab.com/product-foundations/iconography/) and [illustrations](https://design.gitlab.com/product-foundations/illustration/).
1. Add an icon or illustration as an `.svg` file in the `/app/assets/images` (or EE) path in the GitLab repository.
1. Use `image_tag` or `image_path` to render it via the asset pipeline, as shown in the example after this list.
1. **If the experiment is a success**, designers add the new icon or illustration to the Pajamas UI kit as part of the cleanup process.
Engineers can then add it to the [SVG library](https://gitlab-org.gitlab.io/gitlab-svgs/) and modify the implementation based on the
[Frontend Development Guidelines](../fe_guide/icons.md#usage-in-hamlrails-2).
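For the rendering step, a minimal sketch (the file path and alt text are illustrative):
```ruby
# In a Rails view (or prefixed with `=` in HAML), render the asset through the asset pipeline.
image_tag('illustrations/pill_color_cta.svg', alt: 'Pill color experiment')
# Or resolve only the path, for example for a data attribute.
image_path('illustrations/pill_color_cta.svg')
```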
## Related topics
- [Experiments API](../../api/experiments.md)
# Deprecating GitLab features
For details about the terms used on this page, see [the terminology](../../update/terminology.md).
## Breaking Change Policy
Any change counts as a breaking change if customers need to take action to ensure their GitLab workflows aren't disrupted.
A breaking change could come from sources such as:
- An intentional product change
- A configuration update
- A third-party deprecation
- Or many other sources
For many of our users, GitLab is a tier zero system. It is critical in creating, releasing, operating, and scaling users' businesses. The consequence of a breaking change can be serious.
Product and Engineering Managers are responsible and accountable for customer impacts due to the changes they make to the platform. The burden is on GitLab, not the customer, to own change management.
**We aim to eliminate all breaking changes from GitLab.** If you have exhausted the alternatives and believe you have a strong case for why a breaking change should be allowed, you can follow the process below to seek an exception.
## How do I get approval to move forward with a breaking change?
**By default, no breaking change is allowed unless the breaking change implementation plan has been granted explicit approval by following the process below.**
1. Open an issue using the [Breaking Change Exception template](https://gitlab.com/gitlab-com/Product/-/issues/new?description_template=Breaking-Change-Exception) and fill in all of the required sections.
1. **If your breaking change meets any of the below criteria**, please call it out in the request. It doesn't guarantee the request will be approved but it helps make a good argument. Most breaking changes that are approved will fall into at least one of these categories:
1. The impact of the breaking change has been **fully mitigated via an automated migration** that requires no action from the customer.
1. The breaking change will have **negligible customer impact** as measured by actual product usage tracking across GitLab Self-Managed, GitLab.com, and GitLab Dedicated. For instance, if it impacts less than 1% of the GitLab customer base.
1. The breaking change is being implemented due to a **significant security risk (Severity 1 or 2)**.
1. Once the issue is ready for review, follow the instructions in the template for who to tag to get the approval process started.
1. Wait until you get approval before publicly sharing the news or confirming your proposed timeline. The time from initial submission to approval or denial will vary, so **submit a minimum of six months in advance** of the proposed removal time frame.
## What details are part of the request template?
- Executive Summary
- Impact Assessment
- Rollout & Communication Plan
- Internal Communication
- Customer Communication
[Request template](https://gitlab.com/gitlab-com/Product/-/issues/new?description_template=Breaking-Change-Exception)
## After you have an approved breaking change, what's next?
1. Create a public [deprecation issue](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.gitlab/issue_templates/Deprecations.md) that will serve as a source of truth for customers regarding the change.
1. Ensure the change is added to the deprecations docs page by following the guidance below.
1. Follow the Rollout & Communications plan that was approved in your request.
## Update the deprecations and removals documentation
The [deprecations and removals](../../update/deprecations.md)
documentation is generated from the YAML files located in
[`gitlab/data/deprecations`](https://gitlab.com/gitlab-org/gitlab/-/tree/master/data/deprecations).
To update the deprecations and removals page when a YAML file is added,
edited, or removed:
1. From the command line, go to your local clone of the [`gitlab-org/gitlab`](https://gitlab.com/gitlab-org/gitlab) project.
1. Create, edit, or remove the YAML file under [`data/deprecations`](https://gitlab.com/gitlab-org/gitlab/-/tree/master/data/deprecations).
1. Compile the deprecations and removals documentation:
```shell
bin/rake gitlab:docs:compile_deprecations
```
1. If needed, you can verify the documentation is up to date with:
```shell
bin/rake gitlab:docs:check_deprecations
```
1. Commit the updated documentation and push the changes.
1. Create a merge request using the [Deprecations and Removals](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.gitlab/merge_request_templates/Deprecations.md)
template.
Related Handbook pages:
- <https://handbook.gitlab.com/handbook/marketing/blog/release-posts/#deprecations-removals-and-breaking-changes>
- <https://handbook.gitlab.com/handbook/marketing/blog/release-posts/#update-the-deprecations-doc>
## Update the breaking change windows documentation
The [breaking change windows](../../update/breaking_windows.md)
documentation is generated based on the `window` value in the YAML files located in
[`gitlab/data/deprecations`](https://gitlab.com/gitlab-org/gitlab/-/tree/master/data/deprecations).
To update the breaking change windows page when a YAML file is added,
edited, or removed:
1. From the command line, go to your local clone of the [`gitlab-org/gitlab`](https://gitlab.com/gitlab-org/gitlab) project.
1. Create, edit, or remove the YAML file under [`data/deprecations`](https://gitlab.com/gitlab-org/gitlab/-/tree/master/data/deprecations).
1. Compile the breaking change windows documentation:
```shell
bin/rake gitlab:docs:compile_windows
```
1. Update the deprecations documentation:
```shell
bin/rake gitlab:docs:compile_deprecations
```
1. If needed, you can verify the documentation is up to date with:
```shell
bin/rake gitlab:docs:check_windows
```
1. Commit the updated documentation and push the changes.
1. Create a merge request.
## Update the related documentation
When features are deprecated and removed, [update the related documentation](../documentation/styleguide/deprecations_and_removals.md).
## API deprecations and breaking changes
Our APIs have special rules regarding deprecations and breaking changes.
### REST API v4
REST API v4 [cannot have breaking changes made to it](../api_styleguide.md#breaking-changes)
unless the API feature was previously
[marked as experimental or beta](../api_styleguide.md#experimental-beta-and-generally-available-features).
See [What to do instead of a breaking change?](../api_styleguide.md#what-to-do-instead-of-a-breaking-change)
### GraphQL API
The GraphQL API has a requirement for a [longer deprecation cycle](../../api/graphql/_index.md#deprecation-and-removal-process)
than the standard cycle before a breaking change can be made.
See the [GraphQL deprecation process](../api_graphql_styleguide.md#deprecating-schema-items).
## Webhook breaking changes
We cannot make breaking changes to webhook payloads.
For a list of what constitutes a breaking webhook payload change and what to do instead, see the
[Webhook breaking changes guide](../../development/webhooks.md#breaking-changes).
## How are Community Contributions to a deprecated feature handled?
Development on deprecated features is restricted to Priority 1 / Severity 1 bug fixes. Any community contributions to deprecated features are unlikely to be prioritized during milestone planning.
However, at GitLab, we [give agency](https://handbook.gitlab.com/handbook/values/#give-agency) to our team members. So, a member of the team associated with the contribution may decide to review and merge it at their discretion.
## Other guidelines
For configuration removals, see the [Omnibus deprecation policy](../../administration/package_information/deprecation_policy.md).
For versioning and upgrade details, see our [Release and Maintenance policy](../../policy/maintenance.md).
# Feature Categorization
Each Sidekiq worker, batched background migration, controller action, [test example](../testing_guide/best_practices.md#feature-category-metadata), or API endpoint
must declare a `feature_category` attribute. This attribute maps each
of these to a [feature category](https://handbook.gitlab.com/handbook/product/categories/). This
is done for error budgeting, alert routing, and team attribution.
The list of feature categories can be found in the file `config/feature_categories.yml`.
This file is generated from the
[`stages.yml`](https://gitlab.com/gitlab-com/www-gitlab-com/blob/master/data/stages.yml)
data file used in the GitLab Handbook and other GitLab resources.
## Updating `config/feature_categories.yml`
Occasionally new features will be added to GitLab stages, groups, and
product categories. When this occurs, you can automatically update
`config/feature_categories.yml` by running
`scripts/update-feature-categories`. This script will fetch and parse
[`stages.yml`](https://gitlab.com/gitlab-com/www-gitlab-com/blob/master/data/stages.yml)
and generate a new version of the file, which needs to be committed to
the repository.
The [Scalability team](https://handbook.gitlab.com/handbook/engineering/infrastructure/team/scalability/)
currently maintains the `feature_categories.yml` file. They will automatically be
notified on Slack when the file becomes outdated.
## Gemfile
For each Ruby gem dependency, we should specify which feature category requires it.
This clarifies ownership, and we can delegate upgrades to the respective group
owning the feature.
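For example, assuming the `feature_category:` keyword convention used in the GitLab `Gemfile` (the gem names below come from the subsections that follow; version constraints are omitted), entries might look like this:
```ruby
# Gemfile (sketch): each entry declares the category that owns the dependency.
gem 'knapsack', feature_category: :tooling
gem 'capybara', feature_category: :test_platform
gem 'rails', feature_category: :shared
```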
### Tooling feature category
For Developer Experience internal tooling we use `feature_category: :tooling`.
For example, `knapsack` and `gitlab-crystalball` are both used to run RSpec test
suites in CI and they don't belong to any product groups.
### Test platform feature category
For gems that are primarily maintained by the [Test Platform sub department](https://handbook.gitlab.com/handbook/engineering/infrastructure/test-platform/), we use `feature_category: :test_platform`.
For example, `capybara` is defined in both `Gemfile` and `qa/Gemfile` to run tests involving UI. They don't belong to a specific product group.
### Shared feature category
For gems that are used across different product groups we use
`feature_category: :shared`. For example, `rails` is used through out the
application and it's shared with multiple groups.
## Sidekiq workers
The declaration uses the `feature_category` class method, as shown below.
```ruby
class SomeScheduledTaskWorker
include ApplicationWorker
# Declares that this worker is part of the
# `continuous_integration` feature category
feature_category :continuous_integration
# ...
end
```
The feature categories specified using `feature_category` should be
defined in
[`config/feature_categories.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/config/feature_categories.yml). If
not, the specs will fail.
### Excluding Sidekiq workers from feature categorization
A few Sidekiq workers, that are used across all features, cannot be mapped to a
single category. These should be declared as such using the
`feature_category :not_owned`
declaration, as shown below:
```ruby
class SomeCrossCuttingConcernWorker
include ApplicationWorker
# Declares that this worker does not map to a feature category
feature_category :not_owned # rubocop:disable Gitlab/AvoidFeatureCategoryNotOwned
# ...
end
```
When possible, workers marked as "not owned" use their caller's
category (worker or HTTP endpoint) in metrics and logs.
For instance, `ReactiveCachingWorker` can have multiple feature
categories in metrics and logs.
## Batched background migrations
Long-running migrations (as per the [time limits guidelines](../migration_style_guide.md#how-long-a-migration-should-take))
are pulled out as [batched background migrations](../database/batched_background_migrations.md).
They should define a `feature_category`, like this:
```ruby
# Filename: lib/gitlab/background_migration/my_background_migration_job.rb
class MyBackgroundMigrationJob < BatchedMigrationJob
feature_category :gitaly
#...
end
```
{{< alert type="note" >}}
[`RuboCop::Cop::BackgroundMigration::FeatureCategory`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/rubocop/cop/background_migration/feature_category.rb) cop ensures a valid `feature_category` is defined.
{{< /alert >}}
## Rails controllers
Specifying feature categories on controller actions can be done using
the `feature_category` class method.
A feature category can be specified on an entire controller
using:
```ruby
class Boards::ListsController < ApplicationController
feature_category :kanban_boards
end
```
The feature category can be limited to a list of actions using the
second argument:
```ruby
class DashboardController < ApplicationController
feature_category :team_planning, [:issues, :issues_calendar]
feature_category :code_review_workflow, [:merge_requests]
end
```
These forms cannot be mixed: if a controller has more than one category,
every single action must be listed.
### Excluding controller actions from feature categorization
In the rare case an action cannot be tied to a feature category this
can be done using the `not_owned` feature category.
```ruby
class Admin::LogsController < ApplicationController
feature_category :not_owned
end
```
### Ensuring feature categories are valid
The `spec/controllers/every_controller_spec.rb` will iterate over all
defined routes, and check the controller to see if a category is
assigned to all actions.
The spec also validates if the used feature categories are known. And if
the actions used in configuration still exist as routes.
## API endpoints
The [GraphQL API](../../api/graphql/_index.md) is currently categorized
as `not_owned`. For now, no extra specification is needed. For more
information, see
[`gitlab-com/gl-infra/scalability#583`](https://gitlab.com/gitlab-com/gl-infra/scalability/-/issues/583/).
Grape API endpoints can use the `feature_category` class method, like
[Rails controllers](#rails-controllers) do:
```ruby
module API
class Issues < ::API::Base
feature_category :team_planning
end
end
```
The second argument can be used to specify feature categories for
specific routes:
```ruby
module API
class Users < ::API::Base
feature_category :user_profile, ['/users/:id/custom_attributes', '/users/:id/custom_attributes/:key']
end
end
```
Or the feature category can be specified in the action itself:
```ruby
module API
class Users < ::API::Base
get ':id', feature_category: :user_profile do
end
end
end
```
As with Rails controllers, an API class must specify the category for
every single action unless the same category is used for every action
within that class.
## RSpec Examples
You must set feature category metadata for each RSpec example. This information is used for flaky test
issues to identify the group that owns the feature.
The `feature_category` should be a value from [`config/feature_categories.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/config/feature_categories.yml).
The `feature_category` metadata can be set:
- [In the top-level `RSpec.describe` blocks](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/104274/diffs#6bd01173381e873f3e1b6c55d33cdaa3d897156b_5_5).
- [In `describe` blocks](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/104274/diffs#a520db2677a30e7f1f5593584f69c49031b894b9_12_12).
Consider splitting the file in the case there are multiple feature categories identified in the same file.
Example:
```ruby
RSpec.describe Admin::Geo::SettingsController, :geo, feature_category: :geo_replication do
```
For examples that don't have a `feature_category` set we add a warning when running them in local environment.
To disable the warning use `RSPEC_WARN_MISSING_FEATURE_CATEGORY=false` when running RSpec tests:
```shell
RSPEC_WARN_MISSING_FEATURE_CATEGORY=false bin/rspec spec/<test_file>
```
Additionally, we flag the offenses via `RSpec/FeatureCategory` RuboCop rule.
### Tooling feature category
For Engineering Productivity internal tooling we use `feature_category: :tooling`.
For example in [`spec/tooling/danger/specs_spec.rb`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/spec/tooling/danger/specs_spec.rb#L12).
### Shared feature category
For features that support developers and they are not specific to a product group we use `feature_category: :shared`
For example [`spec/lib/gitlab/job_waiter_spec.rb`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/spec/lib/gitlab/job_waiter_spec.rb)
### Admin section
Adding feature categories is equally important when adding new parts to the Admin section. Historically, Admin sections were often marked as `not_owned` in the code. Now
you must ensure each new addition to the Admin section is properly annotated using `feature_category` notation.
|
---
stage: Enablement
group: Infrastructure
info: Any user with at least the Maintainer role can merge updates to this content. For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Feature Categorization
---
Each Sidekiq worker, batched background migration, controller action, [test example](../testing_guide/best_practices.md#feature-category-metadata), or API endpoint must declare a `feature_category` attribute. This attribute maps each of these to a [feature category](https://handbook.gitlab.com/handbook/product/categories/). This mapping is used for error budgeting, alert routing, and team attribution.
The list of feature categories can be found in the file `config/feature_categories.yml`.
This file is generated from the
[`stages.yml`](https://gitlab.com/gitlab-com/www-gitlab-com/blob/master/data/stages.yml)
data file used in the GitLab Handbook and other GitLab resources.
## Updating `config/feature_categories.yml`
Occasionally new features will be added to GitLab stages, groups, and
product categories. When this occurs, you can automatically update
`config/feature_categories.yml` by running
`scripts/update-feature-categories`. This script will fetch and parse
[`stages.yml`](https://gitlab.com/gitlab-com/www-gitlab-com/blob/master/data/stages.yml)
and generate a new version of the file, which needs to be committed to
the repository.
The [Scalability team](https://handbook.gitlab.com/handbook/engineering/infrastructure/team/scalability/)
currently maintains the `feature_categories.yml` file. They will automatically be
notified on Slack when the file becomes outdated.
## Gemfile
For each Ruby gem dependency, we should specify which feature category requires the dependency. This clarifies ownership and lets us delegate upgrades to the group that owns the feature.
### Tooling feature category
For Developer Experience internal tooling we use `feature_category: :tooling`.
For example, `knapsack` and `gitlab-crystalball` are both used to run RSpec test
suites in CI and they don't belong to any product groups.
### Test platform feature category
For gems that are primarily maintained by the [Test Platform sub department](https://handbook.gitlab.com/handbook/engineering/infrastructure/test-platform/), we use `feature_category: :test_platform`.
For example, `capybara` is defined in both `Gemfile` and `qa/Gemfile` to run tests involving UI. They don't belong to a specific product group.
### Shared feature category
For gems that are used across different product groups, we use `feature_category: :shared`. For example, `rails` is used throughout the application and is shared with multiple groups.
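Putting these conventions together, `Gemfile` entries look roughly like the following (a sketch; the version constraints are illustrative):
```ruby
# Gemfile: each dependency declares the feature category that owns it
gem 'knapsack', '~> 4.0', feature_category: :tooling
gem 'capybara', '~> 3.40', feature_category: :test_platform
gem 'rails', '~> 7.1', feature_category: :shared
```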
## Sidekiq workers
The declaration uses the `feature_category` class method, as shown below.
```ruby
class SomeScheduledTaskWorker
  include ApplicationWorker

  # Declares that this worker is part of the
  # `continuous_integration` feature category
  feature_category :continuous_integration

  # ...
end
```
The feature categories specified using `feature_category` should be
defined in
[`config/feature_categories.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/config/feature_categories.yml). If
not, the specs will fail.
### Excluding Sidekiq workers from feature categorization
A few Sidekiq workers that are used across all features cannot be mapped to a single category. These should be declared as such using the `feature_category :not_owned` declaration, as shown below:
```ruby
class SomeCrossCuttingConcernWorker
  include ApplicationWorker

  # Declares that this worker does not map to a feature category
  feature_category :not_owned # rubocop:disable Gitlab/AvoidFeatureCategoryNotOwned

  # ...
end
```
When possible, workers marked as "not owned" use their caller's
category (worker or HTTP endpoint) in metrics and logs.
For instance, `ReactiveCachingWorker` can have multiple feature
categories in metrics and logs.
## Batched background migrations
Long-running migrations (as per the [time limits guidelines](../migration_style_guide.md#how-long-a-migration-should-take))
are pulled out as [batched background migrations](../database/batched_background_migrations.md).
They should define a `feature_category`, like this:
```ruby
# Filename: lib/gitlab/background_migration/my_background_migration_job.rb
class MyBackgroundMigrationJob < BatchedMigrationJob
  feature_category :gitaly

  # ...
end
```
{{< alert type="note" >}}
The [`RuboCop::Cop::BackgroundMigration::FeatureCategory`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/rubocop/cop/background_migration/feature_category.rb) cop ensures a valid `feature_category` is defined.
{{< /alert >}}
## Rails controllers
Specifying feature categories on controller actions can be done using
the `feature_category` class method.
A feature category can be specified on an entire controller
using:
```ruby
class Boards::ListsController < ApplicationController
  feature_category :kanban_boards
end
```
The feature category can be limited to a list of actions using the
second argument:
```ruby
class DashboardController < ApplicationController
  feature_category :team_planning, [:issues, :issues_calendar]
  feature_category :code_review_workflow, [:merge_requests]
end
```
These forms cannot be mixed: if a controller has more than one category,
every single action must be listed.
### Excluding controller actions from feature categorization
In the rare case that an action cannot be tied to a feature category, use the `not_owned` feature category.
```ruby
class Admin::LogsController < ApplicationController
  feature_category :not_owned
end
```
### Ensuring feature categories are valid
The `spec/controllers/every_controller_spec.rb` spec iterates over all defined routes and checks the controller to see if a category is assigned to all actions. The spec also validates that the feature categories used are known, and that the actions used in the configuration still exist as routes.
## API endpoints
The [GraphQL API](../../api/graphql/_index.md) is currently categorized
as `not_owned`. For now, no extra specification is needed. For more
information, see
[`gitlab-com/gl-infra/scalability#583`](https://gitlab.com/gitlab-com/gl-infra/scalability/-/issues/583/).
Grape API endpoints can use the `feature_category` class method, like
[Rails controllers](#rails-controllers) do:
```ruby
module API
  class Issues < ::API::Base
    feature_category :team_planning
  end
end
```
The second argument can be used to specify feature categories for
specific routes:
```ruby
module API
  class Users < ::API::Base
    feature_category :user_profile, ['/users/:id/custom_attributes', '/users/:id/custom_attributes/:key']
  end
end
```
Or the feature category can be specified in the action itself:
```ruby
module API
  class Users < ::API::Base
    get ':id', feature_category: :user_profile do
    end
  end
end
```
As with Rails controllers, an API class must specify the category for
every single action unless the same category is used for every action
within that class.
## RSpec Examples
You must set feature category metadata for each RSpec example. This information is used in flaky test issues to identify the group that owns the feature.
The `feature_category` should be a value from [`config/feature_categories.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/config/feature_categories.yml).
The `feature_category` metadata can be set:
- [In the top-level `RSpec.describe` blocks](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/104274/diffs#6bd01173381e873f3e1b6c55d33cdaa3d897156b_5_5).
- [In `describe` blocks](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/104274/diffs#a520db2677a30e7f1f5593584f69c49031b894b9_12_12).
Consider splitting the file if multiple feature categories are identified in the same file.
Example:
```ruby
RSpec.describe Admin::Geo::SettingsController, :geo, feature_category: :geo_replication do
```
For examples that don't have a `feature_category` set, a warning is shown when running them in a local environment. To disable the warning, set `RSPEC_WARN_MISSING_FEATURE_CATEGORY=false` when running RSpec tests:
```shell
RSPEC_WARN_MISSING_FEATURE_CATEGORY=false bin/rspec spec/<test_file>
```
Additionally, we flag offenses via the `RSpec/FeatureCategory` RuboCop rule.
### Tooling feature category
For Engineering Productivity internal tooling we use `feature_category: :tooling`.
For example in [`spec/tooling/danger/specs_spec.rb`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/spec/tooling/danger/specs_spec.rb#L12).
### Shared feature category
For features that support developers and are not specific to a product group, we use `feature_category: :shared`. For example, see [`spec/lib/gitlab/job_waiter_spec.rb`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/spec/lib/gitlab/job_waiter_spec.rb).
### Admin section
Adding feature categories is equally important when adding new parts to the Admin section. Historically, Admin sections were often marked as `not_owned` in the code. You must now ensure that each new addition to the Admin section is properly annotated with a `feature_category`.
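For example, a new Admin controller should declare its category explicitly instead of falling back to `not_owned` (a sketch; the controller name and category shown here are illustrative):
```ruby
module Admin
  class ExampleFeatureController < Admin::ApplicationController
    # Use the category of the group that owns this Admin page
    feature_category :fleet_visibility
  end
end
```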
---
stage: Monitor
group: Platform Insights
info: Any user with at least the Maintainer role can merge updates to this content. For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
description: Documentation for developers interested in contributing features or bugfixes for GitLab Observability.
title: GitLab Observability development guidelines
---
## GitLab Observability development setup
There are several options for developing and debugging GitLab Observability:
- [Run GDK and GitLab Observability Backend locally all-in-one](https://gitlab.com/gitlab-org/gitlab-development-kit/-/blob/main/doc/howto/gitlab_observability_backend.md): This is the simplest and recommended approach for those looking to make or verify changes to Rails, Sidekiq, or Workhorse.
- [Run GDK locally and connect to the staging instance](#run-gdk-and-connect-to-the-staging-instance-of-gitlab-observability-backend) of [GitLab Observability Backend](https://gitlab.com/gitlab-org/opstrace/opstrace): This is an alternative approach for those looking to make or verify changes to Rails, Sidekiq, or Workhorse.
- [Use the purpose-built `devvm`](#use-the-purpose-built-devvm): This is more involved, but includes a development deployment of the [GitLab Observability Backend](https://gitlab.com/gitlab-org/opstrace/opstrace). It is recommended for those who want to make changes to the GitLab Observability Backend component.
- [Run GDK with mocked Observability data](#run-gdk-with-mocked-observability-data): This could be useful if you only need to work on a frontend or Rails change and do not need the full stack, or when providing reproduction steps for an MR reviewer, who probably won't want to set up the full stack just to review an MR.
### Run GDK and connect to the staging instance of GitLab Observability Backend
This method takes advantage of our Cloud Connected Observability Backend. Your GitLab instance will require a valid Cloud License and will be treated as a GitLab Self-Managed instance, connected to a multi-tenant GitLab-hosted instance of the GitLab Observability Backend. See [this design document](https://handbook.gitlab.com/handbook/engineering/architecture/design-documents/observability_for_self_managed/) for more details on how this works.
How to enable:
1. Add a **GitLab Ultimate Self-Managed** subscription to your GDK instance.
1. Sign in to the [staging Customers Portal](https://customers.staging.gitlab.com) by selecting the **Continue with GitLab.com account** button.
If you do not have an existing account, you are prompted to create one.
1. If you do not have an existing cloud activation code, click the **Staging Self-Managed Ultimate Subscription** link on the [new subscription purchase links page](https://gitlab.com/gitlab-org/customers-gitlab-com/-/blob/main/doc/flows/self_service_flow_urls.md#new-subscription-purchase-links).
1. Select enough seats to cover the number of users in your GDK instance (200 should be plenty).
1. Purchase the subscription using [a test credit card](https://gitlab.com/gitlab-org/customers-gitlab-com/#testing-credit-card-information).
After this step is complete, you will have an activation code for a _GitLab Ultimate Self-Managed subscription_.
1. Set environment variables to point customers-dot to staging, and the Observability URL to staging. For GDK, this can be done in `<gdk-root>/env.runit`:
```shell
export GITLAB_SIMULATE_SAAS=0
export GITLAB_LICENSE_MODE=test
export CUSTOMER_PORTAL_URL=https://customers.staging.gitlab.com
export OVERRIDE_OBSERVABILITY_QUERY_URL=https://observe.staging.gitlab.com
export OVERRIDE_OBSERVABILITY_INGEST_URL=https://observe.staging.gitlab.com
```
On a non-GDK/GCK instance, you can set the variables using `gitlab_rails['env']` in the `gitlab.rb` file:
```ruby
gitlab_rails['env'] = {
  'GITLAB_LICENSE_MODE' => 'test',
  'CUSTOMER_PORTAL_URL' => 'https://customers.staging.gitlab.com',
  'OVERRIDE_OBSERVABILITY_QUERY_URL' => 'https://observe.staging.gitlab.com',
  'OVERRIDE_OBSERVABILITY_INGEST_URL' => 'https://observe.staging.gitlab.com'
}
```
1. Enable the feature flag for GitLab Observability features:
1. Start a rails console session:
- GDK: `gdk rails console`
- GCK: `make console`
- GitLab Distribution: [Start a Rails console session](../../administration/operations/rails_console.md#starting-a-rails-console-session)
1. Run `Feature.enable(:observability_features);`
1. Restart your instance (for example, `gdk restart`).
1. Follow the [instructions to activate your new license](../../administration/license.md#activate-gitlab-ee).
1. Test out the GitLab Observability feature by navigating to a project and selecting Tracing, Metrics, or Logs from the Monitor section of the navigation menu.
1. If you are seeing 404 errors, you might need to manually [refresh](../../subscriptions/manage_subscription.md#manually-synchronize-subscription-data) your license data.
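After completing these steps, you can optionally confirm from a Rails console that the feature flag and the override URLs took effect (a quick sanity check, not a required step):
```ruby
# In a Rails console (for example, `gdk rails console`)
Feature.enabled?(:observability_features)  # expected to return true after enabling the flag globally
ENV['OVERRIDE_OBSERVABILITY_QUERY_URL']    # => "https://observe.staging.gitlab.com"
```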
### Use the purpose-built `devvm`
Visit [`devvm`](https://gitlab.com/gitlab-org/opstrace/devvm) and follow the README instructions for setup and developing against it.
## Use the OpenTelemetry Demo app to send data to a project
The [OpenTelemetry Demo app](https://opentelemetry.io/docs/demo/) is a great way to run several Docker containers (representing a distributed system), and to send the logs, metrics, and traces to your local GDK instance.
For instructions on running the demo app, see the [OpenTelemetry Docker deployment documentation](https://opentelemetry.io/docs/demo/docker-deployment/).
### OpenTelemetry Demo app Quickstart
1. Clone the Demo repository:
```shell
git clone https://github.com/open-telemetry/opentelemetry-demo.git
```
1. Change to the demo folder:
```shell
cd opentelemetry-demo/
```
1. Create a project in your local GDK instance. Take note of the project ID.
1. In the newly created project, create a project access token with **Developer** role and **API** scope. Save the token for use in the next step.
1. With an editor, edit the configuration in `src/otelcollector/otelcol-config-extras.yml`. Add the following YAML, replacing:
- `$GDK_HOST` with the host and `$GDK_PORT` with the port number of your GitLab instance.
- `$PROJECT_ID` with the project ID and `$TOKEN` with the token created in the previous steps.
```yaml
exporters:
  otlphttp/gitlab:
    endpoint: http://$GDK_HOST:$GDK_PORT/api/v4/projects/$PROJECT_ID/observability/
    headers:
      "private-token": "$TOKEN"
service:
  pipelines:
    traces:
      exporters: [spanmetrics, otlphttp/gitlab]
    metrics:
      exporters: [otlphttp/gitlab]
    logs:
      exporters: [otlphttp/gitlab]
```
{{< alert type="note" >}}
For GDK and Docker to communicate you may need to set up a [loopback interface](https://gitlab.com/gitlab-org/gitlab-development-kit/-/blob/main/doc/howto/local_network.md#create-loopback-interface).
{{< /alert >}}
1. Save the configuration and start the demo app:
```shell
docker compose up --force-recreate --remove-orphans --detach
```
1. [Visit the UI to generate data](https://opentelemetry.io/docs/demo/docker-deployment/#verify-the-web-store-and-telemetry).
1. Verify telemetry by exploring logs, metrics, and traces under the **Monitor** menu in your GitLab project.
### Run GDK with mocked Observability data
Apply the following [patch](https://gitlab.com/gitlab-org/opstrace/opstrace/-/snippets/3747939) to override Observability API calls with local mocks:
```shell
git apply < <(curl --silent "https://gitlab.com/gitlab-org/opstrace/opstrace/-/snippets/3747939/raw/main/mock.patch")
```
---
stage: Package
group: Package Registry
info: Any user with at least the Maintainer role can merge updates to this content. For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Debian Repository
---
This guide explains:
1. A basic overview of how Debian packages are structured
1. What package managers, clients, and tools are used to manage Debian packages
1. How the GitLab Debian repository functions
## Debian package basics
There are two types of [Debian packages](https://www.debian.org/doc/manuals/debian-faq/pkg-basics.en.html): binary and source.
- **Binary** - These are usually `.deb` files and contain executables, config files, and other data. A binary package must match your OS or architecture since it is already compiled. These are usually installed using `dpkg`. Dependencies must already exist on the system when installing a binary package.
- **Source** - These are usually made up of `.dsc` files and compressed `.tar` files. A source package may be compiled on your system.
Packages are fetched with [`apt`](https://manpages.debian.org/bullseye/apt/apt.8.en.html) and installed with `dpkg`. When you use `apt`, it also fetches and installs any dependencies.
The `.deb` file follows the naming convention `<PackageName>_<VersionNumber>-<DebianRevisionNumber>_<DebianArchitecture>.deb`.
It includes a `control file` that contains metadata about the package. You can view the control file by using `dpkg --info <deb_file>`.
The [`.changes` file](https://www.debian.org/doc/debian-policy/ch-controlfields.html#debian-changes-files-changes) is used to tell the Debian repository how to process updates to packages. It contains a variety of metadata for the package, including architecture, distribution, and version. In addition to the metadata, it contains three lists of checksums: `sha1`, `sha256`, and `md5` in the `Files` section. Refer to [sample_1.2.3~alpha2_amd64.changes](https://gitlab.com/gitlab-org/gitlab/-/blob/dd1e70d3676891025534dc4a1e89ca9383178fe7/spec/fixtures/packages/debian/sample_1.2.3~alpha2_amd64.changes) for an example of how these files are structured.
## How do people get Debian packages?
While you can download a single `.deb` file and install it with [`dpkg`](https://manpages.debian.org/bullseye/dpkg/dpkg.1.en.html), most users consume Debian packages with [`apt`](https://manpages.debian.org/bullseye/apt/apt.8.en.html) using `apt-get`. `apt` wraps `dpkg`, adding dependency management and compilation.
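In practice, the two flows look something like this (package names and versions are placeholders):
```shell
# apt resolves and installs dependencies from the configured repositories
sudo apt-get update
sudo apt-get install <package>

# dpkg installs a single, already-downloaded .deb; its dependencies must already be present
sudo dpkg --install <package>_1.2.3-1_amd64.deb
```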
## How do people publish Debian packages?
It is not uncommon to use `curl` to publish packages depending on the type of Debian repository you are working with. However, `dput-ng` is the best tool to use as it will upload the relevant files based on the `.changes` file.
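As a sketch, a raw `curl` upload to the GitLab Debian API looks roughly like the following (host, project ID, credentials, and file name are placeholders; see the upload API documentation linked later on this page for the exact options):
```shell
curl --request PUT \
     --user "<username>:<personal_access_token>" \
     --upload-file sample_1.2.3~alpha2_amd64.changes \
     "https://gitlab.example.com/api/v4/projects/<project_id>/packages/debian/sample_1.2.3~alpha2_amd64.changes"
```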
## What is all this distribution business?
When it comes to Debian, packages don't exist on their own. They belong to a _distribution_. This can mean many things, but the main thing to note is that users expect to specify a distribution when fetching or publishing packages.
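For example, a typical `apt` source line names both the distribution (a suite or codename) and one or more components:
```plaintext
# /etc/apt/sources.list
deb http://deb.debian.org/debian bullseye main contrib
```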
## What does a Debian Repository look like?
- A [Debian repository](https://wiki.debian.org/DebianRepository) is made up of many releases.
- Each release is given a stable **codename**. For the public Debian repository, these are names like "bullseye" and "jessie".
- There is also the concept of **suites** which are essentially aliases of codenames synonymous with release channels like "stable" and "edge". Over time they change and point to different _codenames_.
- Each release has many **components**. In the public repository, these are "main", "contrib", and "non-free".
- Each release has many **architectures** such as "amd64", "arm64", or "i386".
- Each release has a signed **Release** file (see [GPG signing](#what-are-gpg-keys-and-what-are-signed-releases) below).
A standard directory-based Debian repository would be organized as:
```plaintext
dists\
  |--jessie/
  |--bullseye\
    |Changelog
    |Release
    |InRelease
    |Release.gpg
    |--main\
      |--amd64\
      |--arm64\
    |--contrib\
    |--non-free\
pool\
  |--this is where the .deb files for all releases live
```
You can explore a mirror of the public Debian repository here: <http://ftp.us.debian.org/debian/>
In the public Debian repository, the entire directory structure, release files, GPG keys, and other files are all generated by a series of scripts called the [Debian Archive Kit, or dak](https://salsa.debian.org/ftp-team/dak).
In the GitLab Debian repository, we don't deal with specific file directories. Instead, we use code and an underlying [PostgreSQL database to organize the relationships](structure.md#debian-packages) between these different pieces.
## What does a Debian Repository do?
The Debian community created many package repository systems before things like object storage existed, and they used FTP to upload artifacts to a remote server. Most current package repositories and registries are just directories on a server somewhere. Packages added to the [official Debian distribution](https://www.debian.org/distrib/packages) exist in a central public repository that a group of open source maintainers curates. The package maintainers use the [Debian Archive Kit, or dak](https://salsa.debian.org/ftp-team/dak) scripts to generate release files and do other maintenance tasks. So, in addition to storing and serving files, a complete Debian repository needs to accomplish the same behavior that dak provides. This behavior is what the GitLab Debian registry aims to do.
## What are GPG keys, and what are signed releases
A [GPG key](https://www.gnupg.org/) is a public/private key pair for secure data transmission. Similar to an SSH key, there is a private and public key. Whoever has the _public key can encrypt data_, and whoever has the _private key can decrypt data_ that was encrypted using the public key. You can also use GPG keys to sign data. Whoever has the private key can sign data or a file, and whoever has the public key can then check the signature and trust it came from the person with the matching private key.
We use GPG to sign the release file for the Debian packages. The release file is an index of all packages within a given distribution and their respective digests.
In the GitLab Debian registry, a background process generates a new release file whenever a user publishes a new package to their Debian repository. A GPG key is created for each distribution. If a user requests a release for that distribution, they can request the signed version and the public GPG key to verify the authenticity of that release file.
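For example, a client that has downloaded the distribution's public key can verify the detached signature of the release file with standard GPG tooling (file names are illustrative):
```shell
gpg --import public-key.asc
gpg --verify Release.gpg Release
```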
## GitLab repository internals
When a [file upload](../../api/packages/debian.md#upload-a-package-file) occurs:
1. A new "incoming" package record is found or created. All new files are assigned to the "incoming" package. It is a holding area used until we know what package the file is actually associated with.
1. A new "unknown" file is stored. It is unknown because we do not yet know if this file belongs to an existing package or not.
Once we know which package the file belongs to, it is associated with that package, and the "incoming" package is removed if no more files remain. The "unknown" status of the file is updated to the correct file type.
Next, if the file is a `.changes` format:
1. The `.changes` file is parsed and any files listed within it are updated. All uploaded non-`.changes` files are correctly associated with various distributions and packages.
1. The `::Packages::Debian::GenerateDistributionWorker` and thus `::Packages::Debian::GenerateDistributionService` are run.
1. Component files are created or updated. Since we just updated package files that were listed in the `.changes` file, we now check the component/architecture files based on the changed checksum values.
1. A new release is generated:
1. A new GPG key is generated if one does not already exist for the distribution
1. A [Release file](https://wiki.debian.org/DebianRepository/Format#A.22Release.22_files) is written, signed by the GPG key, and then stored.
1. Old component files are destroyed.
The following three diagrams show the path taken after a file is uploaded to the Debian API:
```mermaid
sequenceDiagram
autonumber
actor Client
Client->>+DebianProjectPackages: PUT projects/:id/packages/debian/:file_name
Note over DebianProjectPackages: If `.changes` file or distribution param present
DebianProjectPackages->>+CreateTemporaryPackageService: Create temporary package
Note over DebianProjectPackages: Else
DebianProjectPackages->>+FindOrCreateIncomingService: Create "incoming" package
Note over DebianProjectPackages: Finally
DebianProjectPackages->>+CreatePackageFileService: Create "unknown" file
Note over CreatePackageFileService: If `.changes` file or distribution param present
CreatePackageFileService->>+ProcessPackageFileWorker: Schedule worker to process the file
DebianProjectPackages->>+Client: 202 Created
ProcessPackageFileWorker->>+ProcessPackageFileService: Start service
```
`ProcessPackageFileWorker` background job:
```mermaid
sequenceDiagram
autonumber
ProcessPackageFileWorker->>+ProcessPackageFileService: Start service
ProcessPackageFileService->>+ExtractChangesMetadataService: Extract changes metadata
ExtractChangesMetadataService->>+ExtractMetadataService: Extract file metadata
ExtractMetadataService->>+ParseDebian822Service: run `dpkg --field` to get control file
ExtractMetadataService->>+ExtractDebMetadataService: If .deb, .udeb or ddeb
ExtractDebMetadataService->>+ParseDebian822Service: run `dpkg --field` to get control file
ParseDebian822Service-->>-ExtractDebMetadataService: Parse String as Debian RFC822 control data format
ExtractDebMetadataService-->>-ExtractMetadataService: Return the parsed control file
ExtractMetadataService->>+ParseDebian822Service: if .dsc, .changes, or buildinfo
ParseDebian822Service-->>-ExtractMetadataService: Parse String as Debian RFC822 control data format
ExtractMetadataService-->>-ExtractChangesMetadataService: Parse Metadata file
ExtractChangesMetadataService-->>-ProcessPackageFileService: Return list of files and hashes from the .changes file
loop process files listed in .changes
ProcessPackageFileService->>+ExtractMetadataService: Process file
ExtractMetadataService->>+ParseDebian822Service: run `dpkg --field` to get control file
ExtractMetadataService->>+ExtractDebMetadataService: If .deb, .udeb or ddeb
ExtractDebMetadataService->>+ParseDebian822Service: run `dpkg --field` to get control file
ParseDebian822Service-->>-ExtractDebMetadataService: Parse String as Debian RFC822 control data format
ExtractDebMetadataService-->>-ExtractMetadataService: Return the parsed control file
ExtractMetadataService->>+ParseDebian822Service: if .dsc, .changes, or buildinfo
ParseDebian822Service-->>-ExtractMetadataService: Parse String as Debian RFC822 control data format
ExtractMetadataService-->>-ProcessPackageFileService: Use parsed metadata to update "unknown" (or known) file
end
ProcessPackageFileService->>+GenerateDistributionWorker: Find distribution and start service
GenerateDistributionWorker->>+GenerateDistributionService: Generate distribution
```
`GenerateDistributionWorker` background job:
```mermaid
sequenceDiagram
autonumber
GenerateDistributionWorker->>+GenerateDistributionService: Generate distribution
GenerateDistributionService->>+GenerateDistributionService: generate component files based on new archs and updates from .changes
GenerateDistributionService->>+GenerateDistributionKeyService: generate GPG key for distribution
GenerateDistributionKeyService-->>-GenerateDistributionService: GPG key
GenerateDistributionService-->>-GenerateDistributionService: Generate distribution file
GenerateDistributionService->>+SignDistributionService: Sign release file with GPG key
SignDistributionService-->>-GenerateDistributionService: Save the signed release file
GenerateDistributionService->>+GenerateDistributionService: destroy no longer used component files
```
### Distributions
You must create a distribution before publishing a package to it. When you create or update a distribution using the project or group distribution API, in addition to creating the initial backing records in the database, the `GenerateDistributionService` runs as shown in the sequence diagram above.
---
stage: Package
group: Container Registry
info: Any user with at least the Maintainer role can merge updates to this content. For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Harbor registry
---
## Enable Harbor registry
To enable the Harbor registry, you must configure the Harbor integration for your group or project.
The Harbor configuration requires four fields: `url`, `project_name`, `username`, and `password`.
| Field | Description |
| --- | --- |
| `url` | The URL of the Harbor instance. |
| `project_name` | The project name of the Harbor instance. |
| `username` | The username used to sign in to the Harbor instance. |
| `password` | The password used to sign in to the Harbor instance. |
You can use [GitLab CI/CD predefined variables](../../ci/variables/_index.md) along with the following Harbor registry variables to request data from the Harbor instance.
| Variable | Description |
| --- | --- |
| `HARBOR_URL` | The URL of the Harbor instance. |
| `HARBOR_HOST` | The host of the Harbor instance URL. |
| `HARBOR_OCI` | The OCI URL of the Harbor instance. |
| `HARBOR_PROJECT` | The project name of the Harbor instance. |
| `HARBOR_USERNAME` | The username used to sign in to the Harbor instance. |
| `HARBOR_PASSWORD` | The password used to sign in to the Harbor instance. |
### Test settings
When testing the settings, a request is sent to `/api/v2.0/ping` of the Harbor instance. A successful test returns status code `200`. This test is primarily to verify that the Harbor instance is configured correctly. It doesn't verify that the `username` and `password` are correct.
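Conceptually, the check is equivalent to the following Ruby sketch; the Harbor URL is a placeholder, and the snippet only assumes the `/api/v2.0/ping` endpoint mentioned above:

```ruby
# Minimal sketch of the settings test: a plain GET to the ping endpoint.
# A 200 response only proves the instance is reachable and configured;
# it does not validate the username and password.
require 'net/http'
require 'uri'

harbor_url = 'https://harbor.example.com' # placeholder
response = Net::HTTP.get_response(URI("#{harbor_url}/api/v2.0/ping"))

puts response.code == '200' ? 'Harbor instance reachable' : "Unexpected status: #{response.code}"
```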
## Code structure
```shell
app/controllers/concerns/harbor
├── access.rb
├── artifact.rb
├── repository.rb
└── tag.rb
app/controllers/projects/harbor
├── application_controller.rb
├── artifacts_controller.rb
├── repositories_controller.rb
└── tags_controller.rb
app/controllers/groups/harbor
├── application_controller.rb
├── artifacts_controller.rb
├── repositories_controller.rb
└── tags_controller.rb
app/models/integrations/harbor.rb
app/serializers/integrations/harbor_serializers
├── artifact_entity.rb
├── artifact_serializer.rb
├── repository_entity.rb
├── repository_serializer.rb
├── tag_entity.rb
└── tag_serializer.rb
lib/gitlab/harbor
├── client.rb
└── query.rb
```
The controllers under `app/controllers/projects/harbor` and `app/controllers/groups/harbor` provide the API interface for front-end calls.
The modules under `app/controllers/concerns/harbor` provide some common methods used by controllers.
The Harbor integration model is under `app/models/integrations`, and it contains some configuration information for Harbor integration.
The serializers under `app/serializers/integrations/harbor_serializers` are used by the controllers under `app/controllers/projects/harbor` and `app/controllers/groups/harbor`, and they help controllers to serialize the JSON data in the response.
The `lib/gitlab/harbor` directory contains the Harbor client, which sends API requests to the Harbor instances to retrieve data.
## Sequence diagram
```mermaid
sequenceDiagram
Client->>+GitLab: Request Harbor registry
GitLab->>+Harbor instance: Request repositories data via API
Harbor instance->>+GitLab: Repositories data
GitLab->>+Client: Return repositories data
Client->>+GitLab: Request Harbor registry artifacts
GitLab->>+Harbor instance: Request artifacts data via API
Harbor instance->>+GitLab: Artifacts data
GitLab->>+Client: Return artifacts data
Client->>+GitLab: Request Harbor registry tags
GitLab->>+Harbor instance: Request tags data via API
Harbor instance->>+GitLab: Tags data
GitLab->>+Client: Return tags data
```
## Policy
The `read_harbor_registry` policy for groups and projects is used to control whether users have access to the Harbor registry.
This policy is enabled for every user with at least the Reporter role.
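From a Rails console, the permission can be checked with the standard ability API. The `user` and `project` variables below are assumed to be existing records, and the commented policy rule is an illustrative sketch rather than the exact source:

```ruby
# Check whether a given user can see the Harbor registry of a project.
Ability.allowed?(user, :read_harbor_registry, project)
# => true for members with at least the Reporter role

# Conceptually, the policy rule behaves like this (illustrative sketch only):
# rule { reporter }.enable :read_harbor_registry
```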
## Frontend Development
The relevant front-end code is located in the `app/assets/javascripts/packages_and_registries/harbor_registry/` directory. The file structure is as follows:
```shell
├── components
│ ├── details
│ │ ├── artifacts_list_row.vue
│ │ ├── artifacts_list.vue
│ │ └── details_header.vue
│ ├── list
│ │ ├── harbor_list_header.vue
│ │ ├── harbor_list_row.vue
│ │ └── harbor_list.vue
│ ├── tags
│ │ ├── tags_header.vue
│ │ ├── tags_list_row.vue
│ │ └── tags_list.vue
│ └── harbor_registry_breadcrumb.vue
├── constants
│ ├── common.js
│ ├── details.js
│ ├── index.js
│ └── list.js
├── pages
│ ├── details.vue
│ ├── harbor_tags.vue
│ ├── index.vue
│ └── list.vue
├── index.js
├── router.js
└── utils.js
```
{{< alert type="note" >}}
You can check out this [discussion](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/82777#note_1017875324) to see why we use the REST API instead of GraphQL.
{{< /alert >}}
The file `harbor_registry/pages/index.vue` only contains a single Vue router-view component, which goes to the `images list`, `image detail`, and `tags list` pages via `router.js`.
Because the `registry_breadcrumb.vue` component does not support multi-level paths, we reimplemented it as the `harbor_registry/components/harbor_registry_breadcrumb.vue` component.
A multi-level breadcrumb component can be generated by passing a path array to `harbor_registry_breadcrumb.vue`.
```javascript
// Build the breadcrumb path as parallel lists of route names and hrefs,
// then push them to the shared breadcrumb state.
const routeNameList = [];
const hrefList = [];

this.breadCrumbState.updateName(routeNameList);
this.breadCrumbState.updateHref(hrefList);
```
The Dependency Proxy is a pull-through-cache for public registry images from DockerHub. This document describes how this
feature is constructed in GitLab.
## Container registry
The Dependency Proxy for the container registry acts as a stand-in for a remote container registry. In our case,
the remote registry is the public DockerHub registry.
```mermaid
flowchart TD
id1([$ docker]) --> id2([GitLab Dependency Proxy])
id2 --> id3([DockerHub])
```
From the user's perspective, the GitLab instance is just a container registry that they interact with to pull images by using `docker login gitlab.com`.
When you use `docker login gitlab.com`, the Docker client uses the [v2 API](https://distribution.github.io/distribution/spec/api/)
to make requests.
To support authentication, we must include one route:
- [API Version Check](https://distribution.github.io/distribution/spec/api/#api-version-check)
To support `docker pull` requests, we must include two additional routes:
- [Pulling an image manifest](https://distribution.github.io/distribution/spec/api/#pulling-an-image-manifest)
- [Pulling an image layer (blob)](https://distribution.github.io/distribution/spec/api/#pulling-a-layer)
These routes are defined in [`gitlab-org/gitlab/config/routes/group.rb`](https://gitlab.com/gitlab-org/gitlab/-/blob/3f76455ac9cf90a927767e55c837d6b07af818df/config/routes/group.rb#L164-175).
In its simplest form, the Dependency Proxy manages three requests:
- Logging in / returning a JWT
- Fetching a manifest
- Fetching a blob
Here is what the general request sequence looks like for the Dependency Proxy:
```mermaid
sequenceDiagram
Client->>+GitLab: Login? / request token
GitLab->>+Client: JWT
Client->>+GitLab: request a manifest for an image
GitLab->>+ExternalRegistry: request JWT
ExternalRegistry->>+GitLab : JWT
GitLab->>+ExternalRegistry : request manifest
ExternalRegistry->>+GitLab : return manifest
GitLab->>+GitLab : store manifest
GitLab->>+Client : return manifest
loop request image layers
Client->>+GitLab: request a blob from the manifest
GitLab->>+ExternalRegistry: request JWT
ExternalRegistry->>+GitLab : JWT
GitLab->>+ExternalRegistry : request blob
ExternalRegistry->>+GitLab : return blob
GitLab->>+GitLab : store blob
GitLab->>+Client : return blob
end
```
### Authentication and authorization
When a Docker client authenticates with a registry, the registry tells the client where to get a JSON Web Token
(JWT) and to use it for all subsequent requests. This allows the authentication service to live in a separate
application from the registry. For example, the GitLab container registry directs Docker clients to get a token
from `https://gitlab.com/jwt/auth`. This endpoint is part of the `gitlab-org/gitlab` project, also known as the
rails project or web service.
When a user tries to sign in to the dependency proxy with a Docker client, we must tell it where to get a JWT. We
can use the same endpoint we use with the container registry: `https://gitlab.com/jwt/auth`. But in our case,
we tell the Docker client to specify `service=dependency_proxy` in the parameters so we can use a separate underlying
service to generate the token.
This sequence diagram shows the request flow for logging into the Dependency Proxy.
```mermaid
sequenceDiagram
autonumber
participant C as Docker CLI
participant R as GitLab (Dependency Proxy)
Note right of C: User tries `docker login gitlab.com` and enters username/password
C->>R: GET /v2/
Note left of R: Check for Authorization header, return 401 if none, return 200 if token exists and is valid
R->>C: 401 Unauthorized with header "WWW-Authenticate": "Bearer realm=\"http://gitlab.com/jwt/auth\",service=\"registry.docker.io\""
Note right of C: Request OAuth token using HTTP Basic Auth
C->>R: GET /jwt/auth
Note left of R: Token is returned
R->>C: 200 OK (with Bearer token included)
Note right of C: original request is tested again
C->>R: GET /v2/ (this time with `Authorization: Bearer [token]` header)
Note right of C: Login Succeeded
R->>C: 200 OK
```
The dependency proxy uses its own authentication service, separate from the authentication managed by the UI
(`ApplicationController`) and API (`ApiGuard`). Once the service has created a JWT, the `DependencyProxy::ApplicationController`
manages authentication and authorization for the rest of the requests. It manages the user by using `Gitlab::Auth::Result` and
is similar to the authentication implemented by the Git client requests in `GitHttpClientController`.
### Caching
Blobs are cached artifacts with no logic around them. We cache them by digest. When we receive a request for a new blob,
we check to see if we have a blob with the requested digest, and return it. Otherwise we fetch it from the external
registry and cache it.
Manifests are more complicated, partially due to [rate limiting on DockerHub](https://www.docker.com/increase-rate-limits/).
A manifest is essentially a recipe for creating an image. It has a list of blobs to create a certain image. So
`alpine:latest` has a manifest associated with it that specifies the blobs needed to create the `alpine:latest`
image. The interesting part is that `alpine:latest` can change over time, so we can't just cache the manifest and
assume it is OK to use forever. Instead, we must check the digest of the manifest, which is an ETag. This gets
interesting because the requests for manifests often don't include the digest. So how do we know if the manifest
we have cached is still the most up-to-date `alpine:latest`? DockerHub allows free HEAD requests that don't count
toward their rate limit. The HEAD request returns the manifest digest so we can tell whether or not the one we
have is stale.
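Conceptually, the freshness check looks like the following Ruby sketch. The registry host, token handling, and cached digest are simplified placeholders; the snippet only assumes the `HEAD` request and the `Docker-Content-Digest` response header:

```ruby
# Rough sketch of the manifest freshness check: issue a HEAD request upstream
# and compare the returned digest with the digest cached in the database.
require 'net/http'
require 'uri'

image = 'library/alpine'
tag   = 'latest'
uri   = URI("https://registry-1.docker.io/v2/#{image}/manifests/#{tag}")

request = Net::HTTP::Head.new(uri)
request['Authorization'] = "Bearer #{ENV.fetch('DOCKERHUB_TOKEN')}" # placeholder token
request['Accept'] = 'application/vnd.docker.distribution.manifest.v2+json'

response = Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |http| http.request(request) }
upstream_digest = response['Docker-Content-Digest']

cached_digest = 'sha256:...' # digest stored alongside the cached manifest
puts(upstream_digest == cached_digest ? 'cache is fresh' : 'refetch the manifest')
```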
With this knowledge, we have built the following logic to manage manifest requests:
```mermaid
graph TD
A[Receive manifest request] --> | We have the manifest cached.| B{Docker manifest HEAD request}
A --> | We do not have manifest cached.| C{Docker manifest GET request}
B --> | Digest matches the one in the DB | D[Fetch manifest from cache]
B --> | HEAD request error, network failure, cannot reach DockerHub | D[Fetch manifest from cache]
B --> | Digest does not match the one in DB | C
C --> E[Save manifest to cache, save digest to database]
D --> F
E --> F[Return manifest]
```
### Workhorse for file handling
Management of file uploads and caching happens in [Workhorse](../workhorse/_index.md). This explains the additional
[`POST` routes](https://gitlab.com/gitlab-org/gitlab/-/blob/3f76455ac9cf90a927767e55c837d6b07af818df/config/routes/group.rb#L170-173)
that we have for the Dependency Proxy.
The [`send_dependency`](https://gitlab.com/gitlab-org/gitlab/-/blob/7359d23f4e078479969c872924150219c6f1665f/app/helpers/workhorse_helper.rb#L46-53)
method makes a request to Workhorse including the previously fetched JWT from the external registry. Workhorse then
can use that token to request the manifest or blob the user originally requested. The Workhorse code lives in
[`workhorse/internal/dependencyproxy/dependencyproxy.go`](https://gitlab.com/gitlab-org/gitlab/-/blob/b8f44a8f3c26efe9932c2ada2df75ef7acb8417b/workhorse/internal/dependencyproxy/dependencyproxy.go#L4).
Once we put it all together, the sequence for requesting an image file looks like this:
```mermaid
sequenceDiagram
Client->>Workhorse: GET /v2/*group_id/dependency_proxy/containers/*image/manifests/*tag
Workhorse->>Rails: GET /v2/*group_id/dependency_proxy/containers/*image/manifests/*tag
Rails->>Rails: Check DB. Is manifest persisted in cache?
alt In Cache
Rails->>Workhorse: Respond with send-url injector
Workhorse->>Client: Send the file to the client
else Not In Cache
Rails->>Rails: Generate auth token and download URL for the manifest in upstream registry
Rails->>Workhorse: Respond with send-dependency injector
Workhorse->>External Registry: Request the manifest
External Registry->>Workhorse: Download the manifest
Workhorse->>Rails: GET /v2/*group_id/dependency_proxy/containers/*image/manifest/*tag/authorize
Rails->>Workhorse: Respond with upload instructions
Workhorse->>Client: Send the manifest file to the client with original headers
Workhorse->>Object Storage: Save the manifest file with some of its header values
Workhorse->>Rails: Finalize the upload
end
```
### Cleanup policies
The cleanup policies for the Dependency Proxy work as time-to-live policies. They allow users to set the number
of days a file is allowed to remain cached if it has been unread. Since there is no way to associate the blobs
with the images they belong to (to do this, we would need to build the metadata database that the container registry
folks built), we can set up rules like "if this blob has not been pulled in 90 days, delete it". This means that
any files that are continuously getting pulled will not be removed from the cache, but if, for example,
`alpine:latest` changes and one of the underlying blobs is no longer used, it will eventually get cleaned up
because it has stopped getting pulled. We use the `read_at` attribute to track the last time a given
`dependency_proxy_blob` or `dependency_proxy_manifest` was pulled.
These work using a cron worker, [DependencyProxy::CleanupDependencyProxyWorker](https://gitlab.com/gitlab-org/gitlab/-/blob/7359d23f4e078479969c872924150219c6f1665f/app/workers/dependency_proxy/cleanup_dependency_proxy_worker.rb#L4),
that will kick off two [limited capacity](../sidekiq/limited_capacity_worker.md) workers: one to delete blobs,
and one to delete manifests. The capacity is set in an [application setting](settings.md#container-registry).
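As a rough illustration of the time-to-live rule, the workers effectively look for records whose `read_at` is older than the configured number of days. The query below is a simplified sketch, not the actual worker implementation, and hard-codes 90 days instead of the per-group policy value:

```ruby
# Simplified sketch: find dependency proxy files that have not been pulled
# within the TTL. The real workers use dedicated scopes, batching, and the
# group-level policy value rather than a hard-coded 90 days.
cutoff = 90.days.ago

DependencyProxy::Blob.where('read_at < ?', cutoff)     # stale blobs
DependencyProxy::Manifest.where('read_at < ?', cutoff) # stale manifests
```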
### Historic reference links
- [Dependency proxy for private groups](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/46042) - initial authentication implementation
- [Manifest caching](https://gitlab.com/gitlab-org/gitlab/-/issues/241639) - initial manifest caching implementation
- [Workhorse for blobs](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/71890) - initial workhorse implementation
- [Workhorse for manifest](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/73033) - moving manifest cache logic to Workhorse
- [Deploy token support](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/64363) - authorization largely updated
- [SSO support](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/67373) - changes how policies are checked
This page includes an exhaustive list of settings related to and maintained by the package stage.
## Instance Settings
### Package registry
| Setting | Table | Description |
| ------- | ----- | -----------|
| `nuget_skip_metadata_url_validation` | `application_settings` | Indicates whether to skip metadata URL validation for the NuGet package. |
| `npm_package_requests_forwarding` | `application_settings` | Enables or disables npm package forwarding for the instance. |
| `pypi_package_requests_forwarding` | `application_settings` | Enables or disables PyPI package forwarding for the instance. |
| `packages_cleanup_package_file_worker_capacity` | `application_settings` | Number of concurrent workers allowed for package file cleanup. |
| `package_registry_allow_anyone_to_pull_option` | `application_settings` | Enables or disables the `Allow anyone to pull from package registry` toggle. |
| `throttle_unauthenticated_packages_api_requests_per_period` | `application_settings` | Request limit for unauthenticated package API requests in the period defined by `throttle_unauthenticated_packages_api_period_in_seconds`. |
| `throttle_unauthenticated_packages_api_period_in_seconds` | `application_settings` | Period in seconds to measure unauthenticated package API requests. |
| `throttle_authenticated_packages_api_requests_per_period` | `application_settings` | Request limit for authenticated package API requests in the period defined by `throttle_authenticated_packages_api_period_in_seconds`. |
| `throttle_authenticated_packages_api_period_in_seconds` | `application_settings` | Period in seconds to measure authenticated package API requests. |
| `throttle_unauthenticated_packages_api_enabled` | `application_settings` | Enables or disables request limits/throttling for unauthenticated package API requests. |
| `throttle_authenticated_packages_api_enabled` | `application_settings` | Enables or disables request limits/throttling for authenticated package API requests. |
| `conan_max_file_size` | `plan_limits` | Maximum file size for a Conan package file. |
| `maven_max_file_size` | `plan_limits` | Maximum file size for a Maven package file. |
| `npm_max_file_size` | `plan_limits` | Maximum file size for an npm package file. |
| `nuget_max_file_size` | `plan_limits` | Maximum file size for a NuGet package file. |
| `pypi_max_file_size` | `plan_limits` | Maximum file size for a PyPI package file. |
| `generic_packages_max_file_size` | `plan_limits` | Maximum file size for a generic package file. |
| `golang_max_file_size` | `plan_limits` | Maximum file size for a GoProxy package file. |
| `debian_max_file_size` | `plan_limits` | Maximum file size for a Debian package file. |
| `rubygems_max_file_size` | `plan_limits` | Maximum file size for a RubyGems package file. |
| `terraform_module_max_file_size` | `plan_limits` | Maximum file size for a Terraform package file. |
| `helm_max_file_size` | `plan_limits` | Maximum file size for a Helm package file. |
| `helm_max_packages_count` | `application_settings` | Maximum number of Helm packages that can be listed per channel. Must be at least 1. Default is 1000. |
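Most of these values can be inspected from a Rails console. The snippet below is illustrative; setting names match the table above, and the exact return values depend on the instance:

```ruby
# application_settings values are exposed through Gitlab::CurrentSettings,
# while plan_limits values hang off a plan record.
Gitlab::CurrentSettings.npm_package_requests_forwarding
# => true or false

Gitlab::CurrentSettings.packages_cleanup_package_file_worker_capacity
# => for example, 2

Plan.default.actual_limits.npm_max_file_size
# => maximum npm package file size, in bytes
```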
### Container registry
| Setting | Table | Description |
| ------- | ----- | -----------|
| `container_registry_token_expire_delay` | `application_settings` | The time in minutes before the container registry auth token (JWT) expires. |
| `container_expiration_policies_enable_historic_entries` | `application_settings` | Allow or prevent projects created before GitLab 12.8 from using container cleanup policies. |
| `container_registry_vendor` | `application_settings` | The vendor of the container registry. `gitlab` for the GitLab container registry, other values for external registries. |
| `container_registry_version` | `application_settings` | The current version of the container registry. |
| `container_registry_features` | `application_settings` | Features supported by the connected container registry. For example, tag deletion. |
| `container_registry_delete_tags_service_timeout` | `application_settings` | The maximum time (in seconds) that the cleanup process can take to delete a batch of tags. |
| `container_registry_expiration_policies_worker_capacity` | `application_settings` | Number of concurrent container image cleanup policy workers allowed. |
| `container_registry_cleanup_tags_service_max_list_size` | `application_settings` | The maximum number of tags that can be deleted in a cleanup policy single execution. Additional tags must be deleted in another execution. |
| `container_registry_expiration_policies_caching` | `application_settings` | Enable or disable tag creation timestamp caching during execution of cleanup policies. |
| `container_registry_import_max_tags_count` | `application_settings` | **Deprecated** in 17.0. The migration for GitLab.com is now complete, so we are starting to clean up this field. This field returns 0 until it is removed. |
| `container_registry_import_max_retries` | `application_settings` | **Deprecated** in 17.0. The migration for GitLab.com is now complete, so we are starting to clean up this field. This field returns 0 until it is removed. |
| `container_registry_import_start_max_retries` | `application_settings` | **Deprecated** in 17.0. The migration for GitLab.com is now complete, so we are starting to clean up this field. This field returns 0 until it is removed. |
| `container_registry_import_max_step_duration` | `application_settings` | **Deprecated** in 17.0. The migration for GitLab.com is now complete, so we are starting to clean up this field. This field returns 0 until it is removed. |
| `container_registry_import_target_plan` | `application_settings` | **Deprecated** in 17.0. The migration for GitLab.com is now complete, so we are starting to clean up this field. This field returns an empty string ('') until it is removed. |
| `container_registry_import_created_before` | `application_settings` | **Deprecated** in 17.0. The migration for GitLab.com is now complete, so we are starting to clean up this field. This field returns an empty string ('') until it is removed. |
| `container_registry_pre_import_timeout` | `application_settings` | **Deprecated** in 17.0. The migration for GitLab.com is now complete, so we are starting to clean up this field. This field returns an empty string ('') until it is removed. |
| `container_registry_import_timeout` | `application_settings` | **Deprecated** in 17.0. The migration for GitLab.com is now complete, so we are starting to clean up this field. This field returns an empty string ('') until it is removed. |
| `dependency_proxy_ttl_group_policy_worker_capacity` | `application_settings` | Number of concurrent dependency proxy cleanup policy workers allowed. |
## Namespace/Group Settings
| Setting | Table | Description |
| ------- | ----- | -----------|
| `maven_duplicates_allowed` | `namespace_package_settings` | Allow or prevent duplicate Maven packages. |
| `maven_duplicate_exception_regex` | `namespace_package_settings` | Regex defining Maven packages that are allowed to be duplicate when duplicates are not allowed. This matches the name and version of the package. |
| `generic_duplicates_allowed` | `namespace_package_settings` | Allow or prevent duplicate generic packages. |
| `generic_duplicate_exception_regex` | `namespace_package_settings` | Regex defining generic packages that are allowed to be duplicate when duplicates are not allowed. |
| `nuget_duplicates_allowed` | `namespace_package_settings` | Allow or prevent duplicate NuGet packages. |
| `nuget_duplicate_exception_regex` | `namespace_package_settings` | Regex defining NuGet packages that are allowed to be duplicate when duplicates are not allowed. |
| `nuget_symbol_server_enabled` | `namespace_package_settings` | Enable or disable the NuGet symbol server. |
| `terraform_module_duplicates_allowed` | `namespace_package_settings` | Allow or prevent duplicate Terraform module packages. |
| `terraform_module_duplicate_exception_regex` | `namespace_package_settings` | Regex defining Terraform module packages that are allowed to be duplicate when duplicates are not allowed. |
| Dependency Proxy Cleanup Policies - `ttl` | `dependency_proxy_image_ttl_group_policies` | Number of days to retain an unused Dependency Proxy file before it is removed. |
| Dependency Proxy - `enabled` | `dependency_proxy_image_ttl_group_policies` | Enable or disable the Dependency Proxy cleanup policy. |
## Project Settings
| Setting | Table | Description |
| ------- | ----- | -----------|
| Container Cleanup Policies - `next_run_at` | `container_expiration_policies` | When the project qualifies for the next container cleanup policy cron worker. |
| Container Cleanup Policies - `name_regex` | `container_expiration_policies` | Regex defining image names to remove with the container cleanup policy. |
| Container Cleanup Policies - `cadence` | `container_expiration_policies` | How often the container cleanup policy should run. |
| Container Cleanup Policies - `older_than` | `container_expiration_policies` | Age of images to remove with the container cleanup policy. |
| Container Cleanup Policies - `keep_n` | `container_expiration_policies` | Number of images to retain in a container cleanup policy. |
| Container Cleanup Policies - `enabled` | `container_expiration_policies` | Enable or disable a container cleanup policy. |
| Container Cleanup Policies - `name_regex_keep` | `container_expiration_policies` | Regex defining image names to always keep regardless of other rules with the container cleanup policy. |
The documentation for package and container registry development is split into two groups.
## Package registry development
Development and architectural documentation for the package registry:
- [Debian repository structure](debian_repository.md)
- [Developing a new format](new_format_development.md)
- [Settings](settings.md)
- [Structure / Schema](structure.md)
- API documentation
- [Composer](../../api/packages/composer.md)
- [Conan v1](../../api/packages/conan_v1.md)
- [Conan v2](../../api/packages/conan_v2.md)
- [Debian](../../api/packages/debian.md)
- [Generic](../../user/packages/generic_packages/_index.md)
- [Go Proxy](../../api/packages/go_proxy.md)
- [Helm](../../api/packages/helm.md)
- [Maven](../../api/packages/maven.md)
- [npm](../../api/packages/npm.md)
- [NuGet](../../api/packages/nuget.md)
- [PyPI](../../api/packages/pypi.md)
- [Ruby Gems](../../api/packages/rubygems.md)
## Container registry development
Development and architectural documentation for the container registry:
- [Dependency proxy structure](dependency_proxy.md)
- [Settings](settings.md)
- [Structure / Schema](structure.md)
- [Cleanup policies](cleanup_policies.md)
## Harbor registry development
Development and architectural documentation for the Harbor registry:
- [Development documentation](harbor_registry_development.md)
This document guides you through adding support to GitLab for a new [package management system](../../administration/packages/_index.md).
See the already supported formats in the [Packages and registries documentation](../../user/packages/_index.md).
It is possible to add a new format with only backend changes.
This guide is superficial and does not cover the way the code should be written.
However, you can find a good example by looking at the following merge requests:
- [npm registry support](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/8673)
- [Maven repository](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/6607)
- [Instance-level API for Maven repository](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/8757)
- [NuGet group-level API](https://gitlab.com/gitlab-org/gitlab/-/issues/36423)
## General information
The existing database model requires the following:
- Every package belongs to a project.
- Every package file belongs to a package.
- A package can have one or more package files.
- The package model is based on storing information about the package and its version.
### API endpoints
Package systems work with GitLab via API. For example `lib/api/npm_project_packages.rb`
implements API endpoints to work with npm clients. So, the first thing to do is to
add a new `lib/api/your_name_project_packages.rb` file with API endpoints that are
necessary to make the package system client work. Usually that means having
endpoints like:
- GET package information.
- GET package file content.
- PUT upload package.
Because the packages belong to a project, it's expected to have a project-level endpoint (remote)
for uploading and downloading them. For example:
```plaintext
GET https://gitlab.com/api/v4/projects/<your_project_id>/packages/npm/
PUT https://gitlab.com/api/v4/projects/<your_project_id>/packages/npm/
```
Group-level and instance-level endpoints should only be considered after the project-level endpoint is available in production.
#### Remote hierarchy
Packages are scoped within various levels of access, which is generally configured by setting your remote. A
remote endpoint may be set at the project level, meaning when installing packages, only packages belonging to that
project are visible. Alternatively, a group-level endpoint may be used to allow visibility to all packages
in a given group. Lastly, an instance-level endpoint can be used to allow visibility to all packages in an
entire GitLab instance.
As an MVC, we recommend beginning with a project-level endpoint. A typical iteration plan for remote hierarchies is to go from:
- Publish and install in a project
- Install from a group
- Publish and install in an instance (this is for self-managed customers)
Using instance-level endpoints requires [stricter naming conventions](#naming-conventions).
{{< alert type="note" >}}
Composer package naming scope is Instance Level.
{{< /alert >}}
### Naming conventions
To avoid name conflict for instance-level endpoints you must define a package naming convention
that gives a way to identify the project that the package belongs to. This generally involves using the project
ID or full project path in the package name. For more information with an example, see
[Package recipe naming convention for instance remotes](../../user/packages/conan_1_repository/_index.md#package-recipe-naming-convention-for-instance-remotes).
For group and project-level endpoints, naming can be less constrained and it is up to the group and project
members to be certain that there is no conflict between two package names. However, the system should prevent
a user from reusing an existing name within a given scope.
Otherwise, naming should follow the package manager's naming conventions and include a validation in the `package.md`
model for that package type.
### Services and finders
Logic for performing tasks such as creating package or package file records or finding packages should not live
in the API file, but should live in services and finders. Existing services and finders should be used or
extended when possible to keep the common package logic grouped as much as possible.
### Configuration
GitLab has a `packages` section in its configuration file (`gitlab.rb` or `gitlab.yml`).
It applies to all package systems supported by GitLab. Usually you don't need
to add anything there.
Packages can be configured to use object storage, therefore your code must support it.
## MVC Approach
The way new package systems are integrated in GitLab is using an [MVC](https://handbook.gitlab.com/handbook/values/#minimal-viable-change-mvc). Therefore, the first iteration should support the bare minimum user actions:
- Authentication with a GitLab job, personal access, project access, or deploy token
- Uploading a package and displaying basic metadata in the user interface
- Pulling a package
- Required actions
Required actions are all the additional requests that GitLab must handle so the corresponding package manager CLI can work properly. It could be a search feature or an endpoint providing meta information about a package. For example:
- For NuGet, the search request was implemented during the first MVC iteration, to support Visual Studio.
- For npm, there is a metadata endpoint used by `npm` to get the tarball URL.
For the first MVC iteration, it's recommended to stay at the project level of the [remote hierarchy](#remote-hierarchy). Other levels can be tackled with [future Merge Requests](#future-work).
The MVC usually has two phases:
- [Analysis](#analysis)
- [Implementation](#implementation)
### Keep iterations small
When implementing a new package manager, it is tempting to create one large merge request containing all of the
necessary endpoints and services necessary to support basic usage. Instead:
1. Put the API endpoints behind a [feature flag](../feature_flags/_index.md).
1. Submit each endpoint or behavior (download, upload, etc) in a different merge request to shorten the review process.
### Analysis
During this phase, the idea is to collect as much information as possible about the API used by the package system. Here some aspects that can be useful to include:
- **Authentication**: What authentication mechanisms are available (OAuth, Basic
Authorization, other). Keep in mind that GitLab users often want to use their
[personal access tokens](../../user/profile/personal_access_tokens.md).
Although not needed for the MVC first iteration, the [CI/CD job tokens](../../ci/jobs/ci_job_token.md)
have to be supported at some point in the future.
- **Requests**: Which requests are needed to have a working MVC. Ideally, produce
a list of all the requests needed for the MVC (including required actions). Further
investigation could provide an example for each request with the request and the response bodies.
- **Upload**: Carefully analyze how the upload process works. This request is likely the most
complex to implement. A detailed analysis is desired here as uploads can be
encoded in different ways (body or multipart) and can even be in a totally different
format (for example, a JSON structure where the package file is a Base64 value of
a particular field). These different encodings lead to slightly different implementations
on GitLab and GitLab Workhorse. For more detailed information, review [file uploads](#file-uploads).
- **Endpoints**: Suggest a list of endpoint URLs to implement in GitLab.
- **Split work**: Suggest a list of changes to do to incrementally build the MVC.
This gives a good idea of how much work there is to be done. Here is an example
list that must be adapted on a case by case basis:
1. Empty file structure (API file, base service for this package)
1. Authentication system for "logging in" to the package manager
1. Identify metadata and create applicable tables
1. Workhorse route for [object storage direct upload](../uploads/_index.md#direct-upload)
1. Endpoints required for upload/publish
1. Endpoints required for install/download
1. Endpoints required for required actions
The analysis usually takes a full milestone to complete, though it's not impossible to start the implementation in the same milestone.
In particular, the upload request can have some [requirements in the GitLab Workhorse project](#file-uploads). This project has a different release cycle than the rails backend. It's **strongly** recommended that you open an issue there as soon as the upload request analysis is done. This way GitLab Workhorse is already ready when the upload request is implemented on the rails backend.
### Implementation
The implementation of the different Merge Requests varies between different package system integrations. Contributors should take into account some important aspects of the implementation phase.
#### Authentication
The MVC must support [personal access tokens](../../user/profile/personal_access_tokens.md) right from the start. We support two options for these tokens: OAuth and Basic Access.
OAuth authentication is already supported. You can see an example in the [npm API](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/api/npm_project_packages.rb).
[Basic Access authentication](https://developer.mozilla.org/en-US/docs/Web/HTTP/Authentication)
support is done by overriding a specific function in the API helpers, like
[this example in the Conan API](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/api/conan_packages.rb).
For this authentication mechanism, keep in mind that some clients can send an unauthenticated
request first, wait for the 401 Unauthorized response with the [`WWW-Authenticate`](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/WWW-Authenticate)
field, then send an updated (authenticated) request. This case is more involved as
GitLab must handle the `401 Unauthorized` response. The [NuGet API](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/api/nuget_packages.rb)
supports this case.
#### Authorization
Project permissions and group permissions exist for `read_package`, `create_package`, and `destroy_package`. Each
endpoint should
[authorize the requesting user](https://gitlab.com/gitlab-org/gitlab/-/blob/398fef1ca26ae2b2c3dc89750f6b20455a1e5507/ee/lib/api/conan_packages.rb)
against the project or group before continuing.
#### Database and handling metadata
The current database model allows you to store a name and a version for each package.
Every time you upload a new package, you can either create a new record of `Package`
or add files to existing record. `PackageFile` should be able to store all file-related
information like the file `name`, `side`, `sha1`, and so on.
If there is specific data necessary to be stored for only one package system support,
consider creating a separate metadata model. See `packages_maven_metadata` table
and `Packages::Maven::Metadatum` model as an example for package specific data, and `packages_conan_file_metadata` table
and `Packages::Conan::FileMetadatum` model as an example for package file specific data.
If there is package specific behavior for a given package manager, add those methods to the metadata models and
delegate from the package model.
The existing package UI only displays information in the `packages_packages` and `packages_package_files`
tables. If the data stored in the metadata tables must be displayed, a `~frontend` change is required.
#### File uploads
File uploads should be handled by GitLab Workhorse using object accelerated uploads. What this means is that
the workhorse proxy that checks all incoming requests to GitLab intercept the upload request,
upload the file, and forward a request to the main GitLab codebase only containing the metadata
and file location rather than the file itself. An overview of this process can be found in the
[development documentation](../uploads/_index.md#direct-upload).
In terms of code, this means a route must be added to the
[GitLab Workhorse project](https://gitlab.com/gitlab-org/gitlab-workhorse) for each upload endpoint being added
(instance, group, project). [This merge request](https://gitlab.com/gitlab-org/gitlab-workhorse/-/merge_requests/412/diffs)
demonstrates adding an instance-level endpoint for Conan to workhorse. You can also see the Maven project level endpoint
implemented in the same file.
After the route has been added, you must add an additional `/authorize` version of the upload endpoint to your API file.
[This example](https://gitlab.com/gitlab-org/gitlab/-/blob/398fef1ca26ae2b2c3dc89750f6b20455a1e5507/ee/lib/api/maven_packages.rb#L164)
shows the additional endpoint added for Maven. The `/authorize` endpoint verifies and authorizes the request from workhorse,
then the typical upload endpoint is implemented below, consuming the metadata that Workhorse provides to
create the package record. Workhorse provides a variety of file metadata such as type, size, and different checksum formats.
For testing purposes, you may want to [enable object storage](https://gitlab.com/gitlab-org/gitlab-development-kit/blob/main/doc/howto/object_storage.md)
in your local development environment.
#### File size limits
Files uploaded to the GitLab package registry are [limited by format](../../administration/instance_limits.md#package-registry-limits).
On GitLab.com, these are typically set to 5 GB to help prevent timeout issues and abuse.
When a new package type is added to the `Packages::Package` model, a size limit must be added
similar to [this example](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/52639/diffs#382f879fb09b0212e3cedd99e6c46e2083867216),
or the [related test](https://gitlab.com/gitlab-org/gitlab/-/blob/fe4ba43766781371cebfacd78364a1de762917cd/spec/models/packages/package_spec.rb#L761)
must be updated if file size limits do not apply. The only reason a size limit does not apply is if
the package format does not upload and store package files.
#### Rate Limits on GitLab.com
Package manager clients can make rapid requests that exceed the
[GitLab.com standard API rate limits](../../user/gitlab_com/_index.md#rate-limits-on-gitlabcom).
This results in a `429 Too Many Requests` error.
We have opened a set of paths to allow higher rate limits. Unless it is not possible,
new package managers should follow these conventions so they can take advantage of the
expanded package rate limit.
These route prefixes guarantee a higher rate limit:
```plaintext
/api/v4/packages/
/api/v4/projects/:project_id/packages/
/api/v4/groups/:group_id/-/packages/
```
### MVC Checklist
When adding support to GitLab for a new package manager, the first iteration must contain the
following features. You can add the features through many merge requests as needed, but all the
features must be implemented when the feature flag is removed.
- Project-level API
- Push event tracking
- Pull event tracking
- Authentication with personal access tokens
- Authentication with Job Tokens
- Authentication with Deploy Tokens (group and project)
- File size [limit](#file-size-limits)
- File format guards (only accept valid file formats for the package type)
- Name regex with validation
- Version regex with validation
- Workhorse route for [accelerated](../uploads/working_with_uploads.md) uploads
- Background workers for extracting package metadata (if applicable)
- Documentation (how to use the feature)
- API Documentation (individual endpoints with curl examples)
- Seeding in [`db/fixtures/development/26_packages.rb`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/db/fixtures/development/26_packages.rb)
- Update the [runbook](https://gitlab.com/gitlab-com/runbooks/-/blob/31fb4959e89db25fddf865bc81734c222daf32dd/dashboards/stage-groups/package.dashboard.jsonnet#L74) for the Grafana charts
- End-to-end feature tests for (at the minimum) publishing and installing a package
### Future Work
While working on the MVC, contributors might find features that are not mandatory for the MVC but can provide a better user experience. It's generally a good idea to keep an eye on those and open issues.
Here are some examples
1. Endpoints required for search
1. Front end updates to display additional package information and metadata
1. Limits on file sizes
1. Tracking for metrics
1. Read more metadata fields from the package to make it available to the front end. For example, it's usual to be able to tag a package. Those tags can be read and saved by backend and then displayed on the packages UI.
1. Endpoints for the upper levels of the [remote hierarchy](#remote-hierarchy). This step might require you to create a [naming convention](#naming-conventions)
## Exceptions
This documentation is just guidelines on how to implement a package manager to match the existing structure and logic
already present in GitLab. While the structure is intended to be extendable and flexible enough to allow for
any given package manager, if there is good reason to stray due to the constraints or needs of a given package
manager, then it should be raised and discussed in the implementation issue or merge request to work towards
the most efficient outcome.
|
---
stage: Package
group: Package Registry
info: Any user with at least the Maintainer role can merge updates to this content.
For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Developing support for a new package format
---
This document guides you through adding support to GitLab for a new [package management system](../../administration/packages/_index.md).
See the already supported formats in the [Packages and registries documentation](../../user/packages/_index.md).
It is possible to add a new format with only backend changes.
This guide is a high-level overview and does not cover how the code should be written.
However, you can find a good example by looking at the following merge requests:
- [npm registry support](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/8673)
- [Maven repository](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/6607)
- [Instance-level API for Maven repository](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/8757)
- [NuGet group-level API](https://gitlab.com/gitlab-org/gitlab/-/issues/36423)
## General information
The existing database model requires the following:
- Every package belongs to a project.
- Every package file belongs to a package.
- A package can have one or more package files.
- The package model is based on storing information about the package and its version.
### API endpoints
Package systems work with GitLab via API. For example, `lib/api/npm_project_packages.rb`
implements API endpoints to work with npm clients. So, the first thing to do is to
add a new `lib/api/your_name_project_packages.rb` file with API endpoints that are
necessary to make the package system client work. Usually that means having
endpoints like:
- GET package information.
- GET package file content.
- PUT upload package.
Because packages belong to a project, it's expected to have a project-level endpoint (remote)
for uploading and downloading them. For example:
```plaintext
GET https://gitlab.com/api/v4/projects/<your_project_id>/packages/npm/
PUT https://gitlab.com/api/v4/projects/<your_project_id>/packages/npm/
```
Group-level and instance-level endpoints should only be considered after the project-level endpoint is available in production.
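To make the shape of such an API file concrete, here is a minimal sketch written in plain Grape. It is illustrative only: the real GitLab API files inherit from an internal base class and use shared helpers, and the class, route, and parameter names below are assumptions.

```ruby
require 'grape'

# Illustrative skeleton of a project-level package API. The class, route, and
# parameter names are placeholders, not GitLab's actual implementation.
class YourNameProjectPackages < Grape::API
  format :json

  namespace 'projects/:id/packages/your_name' do
    desc 'Returns metadata about a package'
    get ':package_name' do
      # Look up the package in the project and render its metadata.
    end

    desc 'Returns the content of a package file'
    get ':package_name/:file_name' do
      # Stream the requested package file back to the client.
    end

    desc 'Uploads a package file'
    put ':package_name/:file_name' do
      # Hand the upload off to a create service (see "Services and finders").
    end
  end
end
```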
#### Remote hierarchy
Packages are scoped within various levels of access, which is generally configured by setting your remote. A
remote endpoint may be set at the project level, meaning when installing packages, only packages belonging to that
project are visible. Alternatively, a group-level endpoint may be used to allow visibility to all packages
in a given group. Lastly, an instance-level endpoint can be used to allow visibility to all packages in an
entire GitLab instance.
As an MVC, we recommend beginning with a project-level endpoint. A typical iteration plan for remote hierarchies is to go from:
- Publish and install in a project
- Install from a group
- Publish and install in an instance (this is for self-managed customers)
Using instance-level endpoints requires [stricter naming conventions](#naming-conventions).
{{< alert type="note" >}}
Composer package naming is scoped to the instance level.
{{< /alert >}}
### Naming conventions
To avoid name conflicts for instance-level endpoints, you must define a package naming convention
that gives a way to identify the project that the package belongs to. This generally involves using the project
ID or full project path in the package name. For more information with an example, see
[Package recipe naming convention for instance remotes](../../user/packages/conan_1_repository/_index.md#package-recipe-naming-convention-for-instance-remotes).
For group and project-level endpoints, naming can be less constrained and it is up to the group and project
members to be certain that there is no conflict between two package names. However, the system should prevent
a user from reusing an existing name within a given scope.
Otherwise, naming should follow the package manager's naming conventions and include a validation in the
`Packages::Package` model for that package type.
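As an illustration of the kind of check this implies, the standalone sketch below validates a package name against a made-up regular expression. In GitLab the validation lives on the package model; the class and regex here are assumptions.

```ruby
require 'active_model'

# Standalone illustration of a package name validation. The class name and
# regular expression are made up for this example.
class YourFormatPackageName
  include ActiveModel::Model

  NAME_REGEX = /\A[a-z0-9]+(?:[._-][a-z0-9]+)*\z/

  attr_accessor :name

  validates :name, presence: true, format: { with: NAME_REGEX }
end

YourFormatPackageName.new(name: 'my-package').valid?  # => true
YourFormatPackageName.new(name: 'My Package!').valid? # => false
```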
### Services and finders
Logic for performing tasks such as creating package or package file records or finding packages should not live
in the API file, but should live in services and finders. Existing services and finders should be used or
extended when possible to keep the common package logic grouped as much as possible.
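A minimal sketch of the shape such a service usually takes is shown below; the module, class, and method bodies are hypothetical.

```ruby
# Hypothetical create service skeleton: a plain Ruby object with an #execute
# method, namespaced under Packages. Names are illustrative.
module Packages
  module YourFormat
    class CreatePackageService
      def initialize(project:, current_user:, params:)
        @project = project
        @current_user = current_user
        @params = params
      end

      def execute
        # 1. Find an existing package for the given name and version, or build one.
        # 2. Create the package file record from the uploaded file metadata.
        # 3. Return a success or error result.
      end
    end
  end
end
```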
### Configuration
GitLab has a `packages` section in its configuration file (`gitlab.rb` or `gitlab.yml`).
It applies to all package systems supported by GitLab. Usually you don't need
to add anything there.
Packages can be configured to use object storage, therefore your code must support it.
## MVC Approach
New package systems are integrated in GitLab as an [MVC](https://handbook.gitlab.com/handbook/values/#minimal-viable-change-mvc). Therefore, the first iteration should support the bare minimum user actions:
- Authentication with a GitLab job, personal access, project access, or deploy token
- Uploading a package and displaying basic metadata in the user interface
- Pulling a package
- Required actions
Required actions are all the additional requests that GitLab must handle so the corresponding package manager CLI can work properly. It could be a search feature or an endpoint providing meta information about a package. For example:
- For NuGet, the search request was implemented during the first MVC iteration, to support Visual Studio.
- For npm, there is a metadata endpoint used by `npm` to get the tarball URL.
For the first MVC iteration, it's recommended to stay at the project level of the [remote hierarchy](#remote-hierarchy). Other levels can be tackled with [future Merge Requests](#future-work).
The MVC usually has two phases:
- [Analysis](#analysis)
- [Implementation](#implementation)
### Keep iterations small
When implementing a new package manager, it is tempting to create one large merge request containing all of the
endpoints and services necessary to support basic usage. Instead:
1. Put the API endpoints behind a [feature flag](../feature_flags/_index.md).
1. Submit each endpoint or behavior (download, upload, and so on) in a different merge request to shorten the review process.
### Analysis
During this phase, the idea is to collect as much information as possible about the API used by the package system. Here are some aspects that can be useful to include:
- **Authentication**: What authentication mechanisms are available (OAuth, Basic
Authorization, other). Keep in mind that GitLab users often want to use their
[personal access tokens](../../user/profile/personal_access_tokens.md).
Although not needed for the MVC first iteration, the [CI/CD job tokens](../../ci/jobs/ci_job_token.md)
have to be supported at some point in the future.
- **Requests**: Which requests are needed to have a working MVC. Ideally, produce
a list of all the requests needed for the MVC (including required actions). Further
investigation could provide an example for each request with the request and the response bodies.
- **Upload**: Carefully analyze how the upload process works. This request is likely the most
complex to implement. A detailed analysis is desired here as uploads can be
encoded in different ways (body or multipart) and can even be in a totally different
format (for example, a JSON structure where the package file is a Base64 value of
a particular field). These different encodings lead to slightly different implementations
on GitLab and GitLab Workhorse. For more detailed information, review [file uploads](#file-uploads).
- **Endpoints**: Suggest a list of endpoint URLs to implement in GitLab.
- **Split work**: Suggest a list of changes to do to incrementally build the MVC.
This gives a good idea of how much work there is to be done. Here is an example
list that must be adapted on a case by case basis:
1. Empty file structure (API file, base service for this package)
1. Authentication system for "logging in" to the package manager
1. Identify metadata and create applicable tables
1. Workhorse route for [object storage direct upload](../uploads/_index.md#direct-upload)
1. Endpoints required for upload/publish
1. Endpoints required for install/download
1. Endpoints required for required actions
The analysis usually takes a full milestone to complete, though it's not impossible to start the implementation in the same milestone.
In particular, the upload request can have some [requirements in the GitLab Workhorse project](#file-uploads). This project has a different release cycle than the Rails backend. It's **strongly** recommended that you open an issue there as soon as the upload request analysis is done. This way, GitLab Workhorse is ready when the upload request is implemented on the Rails backend.
### Implementation
The implementation of the different merge requests varies between package system integrations. Contributors should take into account some important aspects of the implementation phase.
#### Authentication
The MVC must support [personal access tokens](../../user/profile/personal_access_tokens.md) right from the start. We support two options for these tokens: OAuth and Basic Access.
OAuth authentication is already supported. You can see an example in the [npm API](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/api/npm_project_packages.rb).
[Basic Access authentication](https://developer.mozilla.org/en-US/docs/Web/HTTP/Authentication)
support is done by overriding a specific function in the API helpers, like
[this example in the Conan API](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/api/conan_packages.rb).
For this authentication mechanism, keep in mind that some clients can send an unauthenticated
request first, wait for the 401 Unauthorized response with the [`WWW-Authenticate`](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/WWW-Authenticate)
field, then send an updated (authenticated) request. This case is more involved as
GitLab must handle the `401 Unauthorized` response. The [NuGet API](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/api/nuget_packages.rb)
supports this case.
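The challenge/response flow can be sketched as follows. This is plain Grape with an illustrative realm and header handling, not GitLab's actual authentication helpers.

```ruby
require 'grape'

# Sketch of the Basic authentication challenge flow described above.
# The realm and endpoint are illustrative.
class YourNamePackagesAuthExample < Grape::API
  helpers do
    def authenticate!
      # Some clients send no credentials first and expect a 401 challenge.
      return if env['HTTP_AUTHORIZATION']

      header 'WWW-Authenticate', 'Basic realm="GitLab Package Registry"'
      error!('401 Unauthorized', 401)
    end
  end

  get 'ping' do
    authenticate!
    { status: 'ok' }
  end
end
```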
#### Authorization
Project permissions and group permissions exist for `read_package`, `create_package`, and `destroy_package`. Each
endpoint should
[authorize the requesting user](https://gitlab.com/gitlab-org/gitlab/-/blob/398fef1ca26ae2b2c3dc89750f6b20455a1e5507/ee/lib/api/conan_packages.rb)
against the project or group before continuing.
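Conceptually, each endpoint performs a check along the lines of the sketch below. The helper module is illustrative and assumes `can?`, `current_user`, and `forbidden!` are provided by the API class that includes it.

```ruby
# Illustrative helpers only. The permission names match the ones above;
# can?, current_user, and forbidden! are assumed to come from the API class.
module PackageAuthorizationHelpers
  def authorize_read_package!(project)
    forbidden! unless can?(current_user, :read_package, project)
  end

  def authorize_create_package!(project)
    forbidden! unless can?(current_user, :create_package, project)
  end

  def authorize_destroy_package!(project)
    forbidden! unless can?(current_user, :destroy_package, project)
  end
end
```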
#### Database and handling metadata
The current database model allows you to store a name and a version for each package.
Every time you upload a new package, you can either create a new record of `Package`
or add files to an existing record. `PackageFile` should be able to store all file-related
information like the file `name`, `size`, `sha1`, and so on.
If there is specific data necessary to be stored for only one package system support,
consider creating a separate metadata model. See `packages_maven_metadata` table
and `Packages::Maven::Metadatum` model as an example for package specific data, and `packages_conan_file_metadata` table
and `Packages::Conan::FileMetadatum` model as an example for package file specific data.
If there is package specific behavior for a given package manager, add those methods to the metadata models and
delegate from the package model.
The existing package UI only displays information in the `packages_packages` and `packages_package_files`
tables. If the data stored in the metadata tables must be displayed, a `~frontend` change is required.
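A hypothetical format-specific metadata model could look like the sketch below. The table, class, and association names are assumptions that mirror the Maven and Conan examples above.

```ruby
# Hypothetical metadata model for a new format. The table name and association
# are illustrative; see Packages::Maven::Metadatum for a real example.
module Packages
  module YourFormat
    class Metadatum < ApplicationRecord
      self.table_name = 'packages_your_format_metadata'

      # Assumes the package model declares `has_one :your_format_metadatum`.
      belongs_to :package, inverse_of: :your_format_metadatum

      validates :package, presence: true
    end
  end
end
```

The package model would then `delegate` any format-specific readers to this metadatum, as described above.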
#### File uploads
File uploads should be handled by GitLab Workhorse using accelerated uploads backed by object storage. This means that
the Workhorse proxy that checks all incoming requests to GitLab intercepts the upload request,
uploads the file, and forwards a request to the main GitLab codebase containing only the metadata
and file location rather than the file itself. An overview of this process can be found in the
[development documentation](../uploads/_index.md#direct-upload).
In terms of code, this means a route must be added to the
[GitLab Workhorse project](https://gitlab.com/gitlab-org/gitlab-workhorse) for each upload endpoint being added
(instance, group, project). [This merge request](https://gitlab.com/gitlab-org/gitlab-workhorse/-/merge_requests/412/diffs)
demonstrates adding an instance-level endpoint for Conan to workhorse. You can also see the Maven project level endpoint
implemented in the same file.
After the route has been added, you must add an additional `/authorize` version of the upload endpoint to your API file.
[This example](https://gitlab.com/gitlab-org/gitlab/-/blob/398fef1ca26ae2b2c3dc89750f6b20455a1e5507/ee/lib/api/maven_packages.rb#L164)
shows the additional endpoint added for Maven. The `/authorize` endpoint verifies and authorizes the request from workhorse,
then the typical upload endpoint is implemented below, consuming the metadata that Workhorse provides to
create the package record. Workhorse provides a variety of file metadata such as type, size, and different checksum formats.
For testing purposes, you may want to [enable object storage](https://gitlab.com/gitlab-org/gitlab-development-kit/blob/main/doc/howto/object_storage.md)
in your local development environment.
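The endpoint pair can be sketched roughly as below, again in plain Grape with illustrative names. The real endpoints rely on internal Workhorse helpers for authorizing the request and reading the forwarded file metadata.

```ruby
require 'grape'

# Rough sketch of the upload endpoint pair. Route names are placeholders and
# the Workhorse handshake is reduced to comments.
class YourNamePackageUploads < Grape::API
  namespace 'projects/:id/packages/your_name' do
    desc 'Workhorse asks this endpoint whether the upload may proceed'
    put ':file_name/authorize' do
      # Authenticate and authorize the user, then respond with upload
      # instructions (for example, object storage parameters).
      status 200
    end

    desc 'Workhorse forwards the stored file metadata here after the upload'
    put ':file_name' do
      # The params now describe the file (size, checksums, storage path)
      # rather than containing the file body. Create the package and
      # package file records from them.
      status 201
    end
  end
end
```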
#### File size limits
Files uploaded to the GitLab package registry are [limited by format](../../administration/instance_limits.md#package-registry-limits).
On GitLab.com, these are typically set to 5 GB to help prevent timeout issues and abuse.
When a new package type is added to the `Packages::Package` model, a size limit must be added
similar to [this example](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/52639/diffs#382f879fb09b0212e3cedd99e6c46e2083867216),
or the [related test](https://gitlab.com/gitlab-org/gitlab/-/blob/fe4ba43766781371cebfacd78364a1de762917cd/spec/models/packages/package_spec.rb#L761)
must be updated if file size limits do not apply. The only reason a size limit does not apply is if
the package format does not upload and store package files.
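As a plain-Ruby illustration of the check itself (in GitLab the limit is read from the per-format plan limits rather than a constant):

```ruby
# Plain-Ruby illustration only: the real limit comes from the per-format
# plan limits, not a hardcoded constant.
MAX_FILE_SIZE = 5 * 1024**3 # 5 GB, the GitLab.com default mentioned above

def file_size_allowed?(declared_size_in_bytes)
  declared_size_in_bytes <= MAX_FILE_SIZE
end

file_size_allowed?(10 * 1024**2) # => true
file_size_allowed?(6 * 1024**3)  # => false
```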
#### Rate Limits on GitLab.com
Package manager clients can make rapid requests that exceed the
[GitLab.com standard API rate limits](../../user/gitlab_com/_index.md#rate-limits-on-gitlabcom).
This results in a `429 Too Many Requests` error.
We have opened a set of paths to allow higher rate limits. Whenever possible,
new package managers should follow these conventions so they can take advantage of the
expanded package rate limit.
These route prefixes guarantee a higher rate limit:
```plaintext
/api/v4/packages/
/api/v4/projects/:project_id/packages/
/api/v4/groups/:group_id/-/packages/
```
### MVC Checklist
When adding support to GitLab for a new package manager, the first iteration must contain the
following features. You can add the features through many merge requests as needed, but all the
features must be implemented when the feature flag is removed.
- Project-level API
- Push event tracking
- Pull event tracking
- Authentication with personal access tokens
- Authentication with Job Tokens
- Authentication with Deploy Tokens (group and project)
- File size [limit](#file-size-limits)
- File format guards (only accept valid file formats for the package type)
- Name regex with validation
- Version regex with validation
- Workhorse route for [accelerated](../uploads/working_with_uploads.md) uploads
- Background workers for extracting package metadata (if applicable)
- Documentation (how to use the feature)
- API Documentation (individual endpoints with curl examples)
- Seeding in [`db/fixtures/development/26_packages.rb`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/db/fixtures/development/26_packages.rb)
- Update the [runbook](https://gitlab.com/gitlab-com/runbooks/-/blob/31fb4959e89db25fddf865bc81734c222daf32dd/dashboards/stage-groups/package.dashboard.jsonnet#L74) for the Grafana charts
- End-to-end feature tests for (at the minimum) publishing and installing a package
### Future Work
While working on the MVC, contributors might find features that are not mandatory for the MVC but can provide a better user experience. It's generally a good idea to keep an eye on those and open issues.
Here are some examples:
1. Endpoints required for search
1. Front end updates to display additional package information and metadata
1. Limits on file sizes
1. Tracking for metrics
1. Read more metadata fields from the package to make them available to the front end. For example, it's common to be able to tag a package. Those tags can be read and saved by the backend and then displayed in the packages UI.
1. Endpoints for the upper levels of the [remote hierarchy](#remote-hierarchy). This step might require you to create a [naming convention](#naming-conventions).
## Exceptions
This documentation provides guidelines on how to implement a package manager to match the existing structure and logic
already present in GitLab. While the structure is intended to be extendable and flexible enough to allow for
any given package manager, if there is good reason to stray due to the constraints or needs of a given package
manager, then it should be raised and discussed in the implementation issue or merge request to work towards
the most efficient outcome.
---
stage: Package
group: Package Registry
info: Any user with at least the Maintainer role can merge updates to this content.
For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Package Structure
---
## Package registry
```mermaid
erDiagram
projects }|--|| namespaces : ""
packages_package_files }o--|| packages_packages : ""
packages_package_file_build_infos }o--|| packages_package_files : ""
packages_build_infos }o--|| packages_packages : ""
packages_tags }o--|| packages_packages : ""
packages_packages }|--|| projects : ""
packages_maven_metadata |o--|| packages_packages : ""
packages_nuget_metadata |o--|| packages_packages : ""
packages_composer_metadata |o--|| packages_packages : ""
packages_conan_metadata |o--|| packages_packages : ""
packages_pypi_metadata |o--|| packages_packages : ""
packages_npm_metadata |o--|| packages_packages : ""
package_conan_file_metadatum |o--|| packages_package_files : ""
package_helm_file_metadatum |o--|| packages_package_files : ""
packages_nuget_dependency_link_metadata |o--|| packages_dependency_links: ""
packages_dependencies ||--o| packages_dependency_links: ""
packages_packages ||--o{ packages_dependency_links: ""
namespace_package_settings |o--|| namespaces: ""
```
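For orientation, the central tables in the diagram map to ActiveRecord models roughly as in the trimmed-down sketch below; it is not the full model definitions.

```ruby
# Trimmed-down sketch of the core associations shown in the diagram above.
module Packages
  class Package < ApplicationRecord
    belongs_to :project

    has_many :package_files
    has_many :tags
  end

  class PackageFile < ApplicationRecord
    belongs_to :package
  end
end
```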
### Debian packages
Debian packages use a larger number of dedicated tables, so they are displayed here separately:
```mermaid
erDiagram
projects }|--|| namespaces : ""
packages_packages }|--|| projects : ""
packages_package_files }o--|| packages_packages : ""
packages_debian_group_architectures }|--|| packages_debian_group_distributions : ""
packages_debian_group_component_files }|--|| packages_debian_group_components : ""
packages_debian_group_component_files }|--|| packages_debian_group_architectures : ""
packages_debian_group_components }|--|| packages_debian_group_distributions : ""
packages_debian_group_distribution_keys }|--|| packages_debian_group_distributions : ""
packages_debian_group_distributions }o--|| namespaces : ""
packages_debian_project_architectures }|--|| packages_debian_project_distributions : ""
packages_debian_project_component_files }|--|| packages_debian_project_components : ""
packages_debian_project_component_files }|--|| packages_debian_project_architectures : ""
packages_debian_project_components }|--|| packages_debian_project_distributions : ""
packages_debian_project_distribution_keys }|--|| packages_debian_project_distributions : ""
packages_debian_project_distributions }o--|| projects : ""
packages_debian_publications }|--|| packages_debian_project_distributions : ""
packages_debian_publications |o--|| packages_packages : ""
packages_debian_project_distributions |o--|| packages_packages : ""
packages_debian_group_distributions |o--|| namespaces : ""
packages_debian_file_metadata |o--|| packages_package_files : ""
```
## Container registry
```mermaid
erDiagram
projects }|--|| namespaces : ""
container_repositories }|--|| projects : ""
container_expiration_policy |o--|| projects : ""
```
## Dependency Proxy
```mermaid
erDiagram
dependency_proxy_blobs }o--|| namespaces : ""
dependency_proxy_manifests }o--|| namespaces : ""
dependency_proxy_image_ttl_group_policies |o--|| namespaces : ""
dependency_proxy_group_settings |o--|| namespaces : ""
```
---
stage: Package
group: Container Registry
info: Any user with at least the Maintainer role can merge updates to this content.
For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Cleanup policies
---
Cleanup policies are recurrent background processes that automatically remove
objects according to some parameters set by users.
## Container registry
Cleanup policies for the container registry work on all the container repositories
hosted in a single project. All tags that match the cleanup parameters are removed.
### Parameters
The [ContainerExpirationPolicy](https://gitlab.com/gitlab-org/gitlab/-/blob/37a76cbfb54a9a3f0dba3c3748eaaac82fb8bf4b/app/models/container_expiration_policy.rb)
holds all parameters for the container registry cleanup policies.
The parameters are split into two groups:
- The parameters that define tags to keep:
- `keep_n`. Keep the `n` most recent tags.
- `name_regex_keep`. Keep tags matching this regular expression.
- The parameters that define tags to destroy:
- `older_than`. Destroy tags older than this timestamp.
- `name_regex`. Destroy tags matching this regular expression.
The remaining parameters impact when the policy is executed:
- `enabled`. Defines if the policy is enabled or not.
- `cadence`. Defines the execution cadence of the policy.
- `next_run_at`. Defines when the next execution should happen.
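Putting these together, a single policy record carries both parameter groups. The attribute values below are made up for illustration:

```ruby
# Illustrative values only; the attribute names are the ones listed above.
policy = ContainerExpirationPolicy.new(
  enabled: true,
  cadence: '1d',
  keep_n: 10,                                # keep the 10 most recent tags
  name_regex_keep: '\A(main|release-.*)\z',  # always keep tags matching this
  older_than: '14d',                         # destroy tags older than 14 days
  name_regex: '.*'                           # among tags matching this regex
)
```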
### Execution
Due to the large number of policies we need to process on GitLab.com, the execution
follows this design:
- Policy executions are limited in time.
- Policy executions are either complete or partial.
- The background jobs select the next policy to execute based on two priorities:
- Policies with a `next_run_at` in the past.
- Partially executed policies.
To track the cleanup policy status on a container repository,
we have an `expiration_policy_cleanup_status` on the `ContainerRepository`
model.
Background jobs for this execution are organized as follows:
- A cron background job that runs every hour.
- A set of background jobs that loop over container repositories that need
a policy execution.
#### The cron background job
The [cron background job](https://gitlab.com/gitlab-org/gitlab/-/blob/36454d77a8de76a25896efd7c051d6796985f579/app/workers/container_expiration_policy_worker.rb)
is quite simple.
Its main tasks are:
1. Check if there are any container repositories in need of a cleanup. If any,
enqueue as many limited capacity jobs as necessary, up to a limit.
1. Compute metrics for cleanup policies and log them.
#### The limited capacity job
This [job](https://gitlab.com/gitlab-org/gitlab/-/blob/36454d77a8de76a25896efd7c051d6796985f579/app/workers/container_expiration_policies/cleanup_container_repository_worker.rb)
is based on the [limited capacity concern](../sidekiq/limited_capacity_worker.md).
This job will run in parallel up to [a specific capacity](settings.md#container-registry).
The primary responsibility of this job is to select the next container
repository that requires cleaning and call the related service on it.
This is where the two priorities are evaluated in order. If a container repository
is found, the cleanup service is called on it.
To ensure that only one cleanup is executed on a given container repository
at any time, we use a database lock along with the
`expiration_policy_cleanup_status` column.
This job will re-enqueue itself until no more container repositories require cleanup.
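A bare-bones version of such a worker, following the interface described in the linked limited capacity documentation, might look like this. The class name and helper calls are assumptions.

```ruby
# Illustrative skeleton following the LimitedCapacity::Worker interface.
class YourCleanupContainerRepositoryWorker
  include ApplicationWorker
  include LimitedCapacity::Worker

  def perform_work(*)
    repository = next_container_repository_requiring_cleanup # hypothetical helper
    return unless repository

    # Call the cleanup service for this repository. The concern re-enqueues
    # the worker while remaining_work_count stays positive.
  end

  def remaining_work_count(*)
    # Number of container repositories that still require a policy execution.
  end

  def max_running_jobs
    # The capacity configured in the application settings.
  end
end
```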
#### Services
Here are the service calls that happen from the limited capacity job:
```mermaid
flowchart TD
job[Limited capacity job] --> cleanup([ContainerExpirationPolicies::CleanupService])
cleanup --> cleanup_tags([Projects::ContainerRepository::CleanupTagsService])
cleanup_tags --> delete_tags([Projects::ContainerRepository::DeleteTagsService])
```
- [`ContainerExpirationPolicies::CleanupService`](https://gitlab.com/gitlab-org/gitlab/-/blob/6546ffc6fe4e9b447a1b7f050edddb8926fe4a3d/app/services/container_expiration_policies/cleanup_service.rb).
This service mainly deals with container repository `expiration_policy_cleanup_status`
updates and will call the cleanup tags service.
- [`Projects::ContainerRepository::CleanupTagsService`](https://gitlab.com/gitlab-org/gitlab/-/blob/f23d70b7d638c38d71af102cfd32a3f6751596f9/app/services/projects/container_repository/cleanup_tags_service.rb).
This service receives the policy parameters and builds the list of tags to
destroy on the container registry.
- [`Projects::ContainerRepository::DeleteTagsService`](https://gitlab.com/gitlab-org/gitlab/-/blob/f23d70b7d638c38d71af102cfd32a3f6751596f9/app/services/projects/container_repository/delete_tags_service.rb).
This service receives a list of tags and loops over that list. For each tag,
the service will call the container registry API endpoint to destroy the target tag.
The cleanup tags service uses a very specific [execution order](../../user/packages/container_registry/reduce_container_registry_storage.md#how-the-cleanup-policy-works)
to build the list of tags to destroy.
Lastly, the cleanup tags service and delete tags service work using facades.
The actual implementation depends on the type of container registry connected.
If the GitLab container registry is connected, several improvements are available
and used during cleanup policies execution, such as [better use of the container registry API](https://gitlab.com/groups/gitlab-org/-/epics/8379).
### Historic reference links
- [First iteration](https://gitlab.com/gitlab-org/gitlab/-/issues/15398)
- [Throttling policy executions](https://gitlab.com/gitlab-org/gitlab/-/issues/208193)
- [Adding caching](https://gitlab.com/gitlab-org/gitlab/-/issues/339129)
- [Further improvements](https://gitlab.com/groups/gitlab-org/-/epics/8379)
---
stage: Create
group: Import
info: Any user with at least the Maintainer role can merge updates to this content. For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Add new relations to the direct transfer importer
---
At a high level, to add a new relation to the direct transfer importer, you must:
1. Add a new relation to the list of exported data.
1. Add a new ETL (Extract/Transform/Load) Pipeline on the import side with data processing instructions.
1. Add the newly created pipeline to the list of importing stages.
1. Add a label for the newly created relation to display in the UI.
1. Ensure sufficient test coverage.
{{< alert type="note" >}}
To mitigate the risk of introducing bugs and performance issues, newly added relations should be put behind a feature flag.
{{< /alert >}}
## Export from source
There are a few types of relations we export:
- ActiveRecord associations. Read from the `import_export.yml` file, serialized to JSON, and written to an NDJSON file. Each relation is exported to either a `.gz` file, or a `.tar.gz`
file if it is a collection, then uploaded and served through the REST API for the destination instance of GitLab to download and import.
- Binary files. For example, uploads or LFS objects.
- A handful of relations that are not exported but are read from the GraphQL API directly during import.
For ActiveRecord associations, you should use NDJSON over the GraphQL API for performance reasons. Heavily nested associations can produce a lot of network
requests which can slow down the overall migration.
### Exporting an ActiveRecord relation
The direct transfer importer's underlying behavior is heavily based on file-based importer,
which uses the [`import_export.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/import_export/project/import_export.yml) file that
describes a list of `Project` associations to be included in the export.
A similar [`import_export.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/import_export/group/import_export.yml) is available for `Group`.
For example, to add import support for a new `Project` association called `documents`, you must:
1. Add it to `import_export.yml` file.
1. Add test coverage for the new relation.
1. Verify that the added relation is exporting as expected.
#### Add it to `import_export.yml` file
{{< alert type="note" >}}
Associations listed in this file are imported from top to bottom. If you have an association that is order-dependent, put the dependencies before the
associations that require them. For example, documents must be imported before merge requests, otherwise they are not valid.
{{< /alert >}}
1. Add your association to `tree.project` within the `import_export.yml`.
```diff
diff --git a/lib/gitlab/import_export/project/import_export.yml b/lib/gitlab/import_export/project/import_export.yml
index 43d66e0e67b7..0880a27dfce2 100644
--- a/lib/gitlab/import_export/project/import_export.yml
+++ b/lib/gitlab/import_export/project/import_export.yml
@@ -122,6 +122,7 @@ tree:
- label:
- :priorities
- :service_desk_setting
+ - :documents
group_members:
- :user
```
{{< alert type="note" >}}
If your association relates to an Enterprise Edition-only feature, add it to the `ee.tree.project` tree at the end of the file so that it is only exported
and imported in Enterprise Edition instances of GitLab.
{{< /alert >}}
If your association doesn't need to include any sub-relations, then this is enough. But if it needs more sub-relations to be included (for example, notes),
you must list them out. For example, documents can have notes (with award emojis on notes) and award emojis (on documents), which we want to migrate. In this
case, our relation becomes the following:
```diff
diff --git a/lib/gitlab/import_export/project/import_export.yml b/lib/gitlab/import_export/project/import_export.yml
index 43d66e0e67b7..0880a27dfce2 100644
--- a/lib/gitlab/import_export/project/import_export.yml
+++ b/lib/gitlab/import_export/project/import_export.yml
@@ -122,6 +122,7 @@ tree:
- label:
- :priorities
- :service_desk_setting
+ - documents:
+   - :award_emoji
+   - notes:
+     - :award_emoji
group_members:
- :user
```
1. Add `included_attributes` of the relation. By default, any relation attribute that is not listed in `included_attributes` of the YAML file is filtered
out on both export and import. To include the attributes you need, you must add them to the `included_attributes` list as follows:
```diff
diff --git a/lib/gitlab/import_export/project/import_export.yml b/lib/gitlab/import_export/project/import_export.yml
index 43d66e0e67b7..dbf0e1275ecf 100644
--- a/lib/gitlab/import_export/project/import_export.yml
+++ b/lib/gitlab/import_export/project/import_export.yml
@@ -142,6 +142,9 @@ import_only_tree:
# Only include the following attributes for the models specified.
included_attributes:
+ documents:
+ - :title
+ - :description
user:
- :id
- :public_email
```
1. Add `excluded_attributes` of the relation. We also have an `excluded_attributes` list in the file. You don't need to add excluded attributes for
`Project`, but you do still need to do it for `Group`. This list represents attributes that should not be included in the export and should be ignored
on import. These attributes usually are:
- Anything that ends in `_id` or `_ids`
- Anything that includes `attributes` (except `custom_attributes`)
- Anything that ends in `_html`
- Anything sensitive (for example, tokens, encrypted data)
See a full list of prohibited references [here](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/import_export/attribute_cleaner.rb#L14-21).
1. Add `methods` of the relation. If your relation has a method (for example, `document.signature`) that must also be exported, you can add it in the `methods` section.
   The exported value is included in the export, and you can use it on import, for example by assigning it to a field.
   As an existing example, we export the return value of the `note_diff_file.diff_export` [method](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/import_export/project/import_export.yml#L1161-1161) and on import
   [set `note_diff_file.diff`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/import_export/project/relation_factory.rb#L149-151) to the exported value of this method.
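For our hypothetical `documents` relation, the corresponding `methods` entry might look like the following sketch. The `signature` method is an assumption used only for illustration; the real entry depends on the methods your model exposes:

```yaml
# Sketch only: assumes the Document model exposes a `signature` method we want to export.
methods:
  documents:
    - :signature
```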
#### Add test coverage for new relation
Because the direct transfer importer uses the file-based importer under the hood, test coverage for a new relation is added in the scope of the file-based
importer, which also covers the export side of the direct transfer importer. Add tests to:
1. `spec/lib/gitlab/import_export/project/tree_saver_spec.rb`. A similar file is available for `Group`.
1. `ee/spec/lib/ee/gitlab/import_export/project/tree_saver_spec.rb` for EE-specific relations.
Follow the example of other relations when adding the new tests.
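As a rough illustration, a spec addition for a hypothetical `documents` relation could look like the sketch below. The factory and the way the saved tree is exposed are assumptions; follow the structure that already exists in `tree_saver_spec.rb`:

```ruby
# Sketch only: assumes the spec already builds `project` with associated
# records and exposes the saved relations through a `subject` hash.
context 'with documents' do
  let_it_be(:document) { create(:document, project: project, title: 'Design doc') } # hypothetical factory

  it 'saves the documents' do
    expect(subject['documents']).to include(a_hash_including('title' => 'Design doc'))
  end
end
```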
#### Verify the added relation is exported as expected
Any newly-added relation specified in `import_export.yml` is automatically added to the export files written on disk, so no extra actions are required.
After the relation and its tests are added, we can manually check that the relation is exported. It is automatically included in both:
- File-based imports and exports. Use the [project export functionality](../../user/project/settings/import_export.md#export-a-project-and-its-data) to export,
download, and inspect the exported data.
- Direct transfer exports. Use the [`export_relations` API](../../api/project_relations_export.md) to export, download, and inspect exported relations
(it might be exported in batches).
### Export a binary relation
If adding support for a binary relation:
1. Create a new export service that performs export on disk. See example
[`BulkImports::LfsObjectsExportService`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/services/bulk_imports/lfs_objects_export_service.rb).
1. Add the relation to the
[list of `file_relations`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/models/bulk_imports/file_transfer/project_config.rb).
1. Add the relation to `BulkImports::FileExportService`.
[Example](https://gitlab.com/gitlab-org/gitlab/-/commit/7867db2c22fb9c9850e1dcb49f26fa2b89a665c6)
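As a rough illustration of the first step, a disk-based export service for a hypothetical `documents` relation could follow the same shape as `BulkImports::LfsObjectsExportService`. The class and attribute names below are assumptions made for the example, not existing GitLab code:

```ruby
# Sketch only: a minimal service that copies each document's attached file into
# the export directory so it can be archived and served for download.
module BulkImports
  class DocumentsExportService
    def initialize(portable, export_path)
      @portable = portable       # the project being exported
      @export_path = export_path # directory the export is written to
    end

    def execute
      @portable.documents.find_each do |document|
        # `document.file` is assumed to be an attached file with a local path.
        FileUtils.copy_file(document.file.path, File.join(@export_path, document.file.filename))
      end
    end
  end
end
```

The files written to the export path are then packaged and served by `BulkImports::FileExportService`.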
## Import on destination
As mentioned above, there are three kinds of relations in direct transfer imports:
1. NDJSON-exported relations, downloaded from the `export_relations` API. For example, `documents.ndjson.gz`.
1. GraphQL API relations. For example, `members` information is fetched using GraphQL to import group and project user memberships.
1. Binary relations, downloaded from the `export_relations` API. For example, `lfs_objects.tar.gz`.
Because the direct transfer importer is based on the Extract/Transform/Load data processing technique, to start importing a relation we must define:
- A new relation importing pipeline. For example, `DocumentsPipeline`.
- A data extractor for the pipeline to know where and how to extract the data. For example, `NdjsonPipeline`.
- A list of transformers, which is a set of classes that are going to transform the data to the format you need.
- A loader, which is going to persist data somewhere. For example, save a row in the database or create a new LFS object.
No matter what type of relation is being imported, the Pipeline class structure is the same:
```ruby
module BulkImports
module Common
module Pipelines
class DocumentsPipeline
include Pipeline
def extract(context)
BulkImports::Pipeline::ExtractedData.new(data: file_paths)
end
def transform(context, object)
...
end
def load(context, object)
document.save!
end
end
end
end
end
```
### Importing a relation from NDJSON
#### Defining a pipeline
From the previous example, our `documents` relation is exported to an NDJSON file, in which case we can use both:
- `NdjsonPipeline`, which includes automatic data transformation from JSON to an ActiveRecord object (using the file-based importer under the hood).
- `NdjsonExtractor`, which downloads the `.ndjson.gz` file from the source instance using the `/export_relations/download` REST API endpoint.
Each step of the ETL pipeline can be defined as a method or a class.
```ruby
class DocumentsPipeline
include NdjsonPipeline
relation_name 'documents'
extractor ::BulkImports::Common::Extractors::NdjsonExtractor, relation: relation
end
```
This new pipeline will now:
1. Download the `documents.ndjson.gz` file from the source instance.
1. Read the contents of the NDJSON file and deserialize each JSON line into an ActiveRecord object.
1. Save it in the database in the scope of the project.
A pipeline can be placed under either:
- The `BulkImports::Common::Pipelines` namespace if it's shared and to be used in both Group and Project migrations. For example, `LabelsPipeline` is a common
pipeline and is referenced in both Group and Project stage lists.
- The `BulkImports::Projects::Pipelines` namespace if a pipeline belongs to a Project migration.
- The `BulkImports::Groups::Pipelines` namespace if a pipeline belongs to a Group migration.
#### Adding a new pipeline to stages
The direct transfer importer performs migration of groups and projects in stages. The list of stages is defined in:
- For `Project`: `lib/bulk_imports/projects/stage.rb`.
- For `Group`: `lib/bulk_imports/groups/stage.rb`.
Each stage:
- Can have multiple pipelines that run in parallel.
- Must fully complete before moving to the next stage.
Let's add our pipeline to the `Project` stage:
```ruby
module BulkImports
  module Projects
    class Stage < ::BulkImports::Stage
      private

      def config
        {
          project: {
            pipeline: BulkImports::Projects::Pipelines::ProjectPipeline,
            stage: 0
          },
          repository: {
            pipeline: BulkImports::Projects::Pipelines::RepositoryPipeline,
            maximum_source_version: '15.0.0',
            stage: 1
          },
          documents: {
            pipeline: BulkImports::Projects::Pipelines::DocumentsPipeline,
            minimum_source_version: '16.11.0',
            stage: 2
          }
        }
      end
    end
  end
end
```
We specified:
- `stage: 2`, so the project and repository stages must complete before our pipeline runs in stage 2.
- `minimum_source_version: '16.11.0'`. Because we introduced the `documents` relation for exports in this milestone, it's not available in previous GitLab versions,
  so this pipeline only runs if the source version is 16.11 or later.
{{< alert type="note" >}}
If a relation is deprecated and the pipeline needs to run only up to a certain source version, we can specify the `maximum_source_version` attribute.
{{< /alert >}}
#### Covering a pipeline with tests
Because we already covered the export side with tests, we must do the same for the import side. For the direct transfer importer, each pipeline has a separate spec
file that would look something like [this example](https://gitlab.com/gitlab-org/gitlab/-/blob/master/spec/lib/bulk_imports/common/pipelines/milestones_pipeline_spec.rb).
#### Importing a relation with a custom association name
Associations exist that do not match their ActiveRecord class names. For example:
```ruby
class Release
has_many :links, class_name: 'Releases::Link'
end
```
An association like this is exported under `links` in `releases.ndjson`. However, on import, whenever we constantize a relation class, we can't constantize
`links` because the class does not exist. The class should be `Releases::Link`.
In this case, we must add this association name to the
[`OVERRIDES`](https://gitlab.com/gitlab-org/gitlab/blob/master/lib/gitlab/import_export/project/relation_factory.rb#L7) hash, which represents a map of
associations and their corresponding ActiveRecord classes so that the importer knows how to constantize them correctly.
```ruby
module Gitlab
module ImportExport
module Project
class RelationFactory < Base::RelationFactory
OVERRIDES = {
links: 'Releases::Link'
}
end
end
end
end
```
This way, the importer maps each exported `link` to the corresponding `Releases::Link` class.
#### Importing an existing object that is referenced by multiple other relations
If relations are referenced across multiple associations (or within a single association across multiple records), we don't want to import duplicates.
For example, consider a label that is applied on a number of different issues and merge requests. Whenever we export issues and merge requests, the exported
label is contained within each of the records as its subrelation. When we import exported issues and merge requests, we want to import the label only once
and reuse it across all of the records. Otherwise, we end up with duplicates (multiple labels with the same name).
To import an object like this only once and reuse it in multiple places, we must define the object as an existing object relation.
First, we must add the label association to
[`EXISTING_OBJECT_RELATIONS`](https://gitlab.com/gitlab-org/gitlab/blob/master/lib/gitlab/import_export/project/relation_factory.rb#L54-54). After the
relation is added to the list of existing object relations, the importer knows that such a relation must be treated differently from the others and goes
through a different import flow. Instead of importing such a relation by using the regular route, it uses an
[`ObjectBuilder`](https://gitlab.com/gitlab-org/gitlab/blob/master/lib/gitlab/import_export/base/relation_factory.rb#L280).
`ObjectBuilder` attempts to either:
- [Find an existing object](https://gitlab.com/gitlab-org/gitlab/blob/master/lib/gitlab/import_export/base/object_builder.rb#L32-36) in the database based on
the parameters you define and returns it.
- Create a new one if it doesn't exist.
To add a new relation to the ObjectBuilder, you must:
1. Add your relation to `EXISTING_OBJECT_RELATIONS` as mentioned above.
1. Update either the Group or Project `ObjectBuilder`, depending on whether it's a project or group association.
1. Define which attributes should be used to perform an existing object lookup. For example, for labels, we want to search by `title`, `description`, and
   `created_at`. If a label with these attributes already exists in the project, it is reused instead of creating a new one.
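Conceptually, the lookup that `ObjectBuilder` performs boils down to a find-or-create on the attributes you choose. The following snippet is a simplified illustration of that idea for labels, not the actual `ObjectBuilder` implementation or API:

```ruby
# Simplified illustration of the find-or-create behaviour: reuse an existing
# label that matches the identifying attributes, create one only if none exists.
identifying_attributes = attributes.slice('title', 'description', 'created_at')

label = project.labels.find_by(identifying_attributes) || project.labels.create!(attributes)
```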
### Importing a relation from GraphQL API
If your relation is available through the GraphQL API, you can use `GraphQlExtractor` and perform transformations and loading within the pipeline class.
`MembersPipeline` example:
```ruby
module BulkImports
module Common
module Pipelines
class MembersPipeline
include Pipeline
transformer Common::Transformers::ProhibitedAttributesTransformer
transformer Common::Transformers::MemberAttributesTransformer
def extract(context)
graphql_extractor.extract(context)
end
def load(_context, data)
...
member.save!
end
private
def graphql_extractor
@graphql_extractor ||= BulkImports::Common::Extractors::GraphqlExtractor
.new(query: BulkImports::Common::Graphql::GetMembersQuery)
end
end
end
end
end
```
The rest of the steps are identical to the steps above.
### Import a binary relation
A binary relation pipeline has the same structure as other pipelines; all you need to do is define what happens during the extract, transform, and load steps.
`LfsObjectsPipeline` example:
```ruby
module BulkImports
module Common
module Pipelines
class LfsObjectsPipeline
include Pipeline
file_extraction_pipeline!
def extract(_context)
download_service.execute
decompression_service.execute
extraction_service.execute
...
end
def load(_context, file_path)
...
lfs_object.save!
end
end
end
end
end
```
There are a number of helper service classes to assist with data download:
- `BulkImports::FileDownloadService`: Downloads a file from a given location.
- `BulkImports::FileDecompressionService`: Gzip decompression service with required validations.
- `BulkImports::ArchiveExtractionService`: Tar extraction service.
## Adapt the UI
### Add a label for the new relation
After a new relation is added to the direct transfer importer, you must make sure that the relation is displayed in human-readable form in the UI.
1. Add a new key-value pair to [`BULK_IMPORT_STATIC_ITEMS`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/assets/javascripts/import/constants.js#L9):
```diff
diff --git a/app/assets/javascripts/import/constants.js b/app/assets/javascripts/import/constants.js
index 439f453cd9d3..d6b4119a0af9 100644
--- a/app/assets/javascripts/import/constants.js
+++ b/app/assets/javascripts/import/constants.js
@@ -31,6 +31,7 @@ export const BULK_IMPORT_STATIC_ITEMS = {
service_desk_setting: __('Service Desk'),
vulnerabilities: __('Vulnerabilities'),
commit_notes: __('Commit notes'),
+ documents: __('Documents'),
};
const STATISTIC_ITEMS = {
```
---
stage: Create
group: Code Review
info: Any user with at least the Maintainer role can merge updates to this content.
For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
description: Developer documentation for the Code Intelligence feature.
title: Code intelligence development guidelines
breadcrumbs:
- doc
- development
- code_intelligence
---
This document describes the design behind [Code Intelligence](../../user/project/code_intelligence.md).
The built-in Code Intelligence in GitLab is powered by
[LSIF](https://lsif.dev) and comes down to generating an LSIF document for a
project in a CI job, processing the data, uploading it as a CI artifact and
displaying this information for the files in the project.
Here is a sequence diagram for uploading an LSIF artifact:
```mermaid
%%{init: { "fontFamily": "GitLab Sans" }}%%
sequenceDiagram
accTitle: Sequence diagram for LSIF artifact uploads
accDescr: The process of how Runner, Workhorse, Rails, and object storage work together to upload an artifact.
participant Runner
participant Workhorse
participant Rails
participant Object Storage
Runner->>+Workhorse: POST /v4/jobs/:id/artifacts
Workhorse->>+Rails: POST /:id/artifacts/authorize
Rails-->>-Workhorse: Respond with ProcessLsif header
Note right of Workhorse: Process LSIF file
Workhorse->>+Object Storage: Put file
Object Storage-->>-Workhorse: request results
Workhorse->>+Rails: POST /:id/artifacts
Rails-->>-Workhorse: request results
Workhorse-->>-Runner: request results
```
1. The CI/CD job generates a document in an LSIF format (usually `dump.lsif`) using
[an indexer](https://lsif.dev) for the language of a project. The format
[describes](https://sourcegraph.com/docs/code-search/code-navigation/writing_an_indexer#writing-an-indexer)
interactions between a method or function and its definitions or references. The
document is marked to be stored as an LSIF report artifact (a CI configuration sketch follows this list).
1. After receiving a request for storing the artifact, Workhorse asks
GitLab Rails to authorize the upload.
1. GitLab Rails validates whether the artifact can be uploaded and sends the
   `ProcessLsif: true` header if the LSIF artifact can be processed.
1. Workhorse reads the LSIF document line by line and generates code intelligence
data for each file in the project. The output is a zipped directory of JSON
files which imitates the structure of the project:
Project:
```code
app
controllers
application_controller.rb
models
application.rb
```
Generated data:
```code
app
controllers
application_controller.rb.json
models
application.rb.json
```
1. The zipped directory is stored as a ZIP artifact. Workhorse replaces the
original LSIF document with a set of JSON files in the ZIP artifact and
generates metadata for it. The metadata makes it possible to view a single
file in a ZIP file without unpacking or loading the whole file. That allows us
to access code intelligence data for a single file.
1. When a file is viewed in the GitLab application, the frontend fetches code
   intelligence data for the file directly from object storage. The file
   contains information about code units in the file. For example:
```json
[
{
"definition_path": "cmd/check/main.go#L4",
"hover": [
{
"language": "go",
"tokens": [
[
{
"class": "kn",
"value": "package"
},
{
"value": " "
},
{
"class": "s",
"value": "\"fmt\""
}
]
]
},
{
"value": "Package fmt implements formatted I/O with functions analogous to C's printf and scanf. The format 'verbs' are derived from C's but are simpler. \n\n### hdr-PrintingPrinting\nThe verbs: \n\nGeneral: \n\n```\n%v\tthe value in a default format\n\twhen printing st..."
}
],
"start_char": 2,
"start_line": 33
}
...
]
```
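For reference, the CI job that produces the LSIF document only needs to run an indexer and declare the output as an LSIF report artifact. The following `.gitlab-ci.yml` sketch assumes a Go project and the `lsif-go` indexer; the image tag and indexer invocation are illustrative:

```yaml
code_navigation:
  image: sourcegraph/lsif-go:v1   # illustrative; pick an indexer for your language
  allow_failure: true             # optional: don't block the pipeline on indexing failures
  script:
    - lsif-go                     # writes dump.lsif by default
  artifacts:
    reports:
      lsif: dump.lsif             # marks the file as an LSIF report artifact
```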
---
stage: Software Supply Chain Security
group: Authorization
info: Any user with at least the Maintainer role can merge updates to this content.
For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Job token permission development guidelines
breadcrumbs:
- doc
- development
- permissions
---
## Background
Job token permissions allow fine-grained access control for CI/CD job tokens that access GitLab API endpoints.
When enabled, the job token can only perform actions allowed for the project.
Historically, job tokens have provided broad access to resources by default. With the introduction of
fine-grained permissions for job tokens, we can enable granular access controls while adhering to the
principle of least privilege.
This topic provides guidance on the requirements and contribution guidelines for new job token permissions.
## Requirements
Before being accepted, all new job token permissions must:
- Be opt-in and disabled by default.
- Complete a review by the GitLab security team.
  - Tag `@gitlab-com/gl-security/product-security/appsec` for review.
These requirements ensure that new permissions allow users to maintain explicit control over their security configuration, prevent unintended privilege escalation, and adhere to the principle of least privilege.
## Add a job token permission
Job token permissions are defined in several locations. When adding new permissions, ensure the following files are updated:
- **Backend permission definitions**: [`lib/ci/job_token/policies.rb`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/ci/job_token/policies.rb) - Lists the available permissions.
- **JSON schema validation**: [`app/validators/json_schemas/ci_job_token_policies.json`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/validators/json_schemas/ci_job_token_policies.json) - Defines the validation schema for the `job_token_policies` attribute of the `Ci::JobToken::GroupScopeLink` and `Ci::JobToken::ProjectScopeLink` models.
- **Frontend constants**: [`app/assets/javascripts/token_access/constants.js`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/assets/javascripts/token_access/constants.js) - Lists the permission definitions for the UI
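As a rough illustration, adding a hypothetical `read_documents`/`admin_documents` pair to the backend definitions might look like the sketch below. The surrounding module structure is an assumption for the example; check `lib/ci/job_token/policies.rb` for the actual layout before copying anything:

```ruby
# Sketch only: illustrates adding new fine-grained permission symbols.
# Existing permissions are omitted; the real file defines the full list.
module Ci
  module JobToken
    module Policies
      POLICIES = %i[
        read_documents
        admin_documents
      ].freeze
    end
  end
end
```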
## Add an API endpoint to a job token permission scope
### Route settings
To add job token policy support to an API endpoint, you need to configure two route settings:
#### `route_setting :authentication`
This setting controls which authentication methods are allowed for the endpoint.
**Parameters**:
- `job_token_allowed: true` - Enables CI/CD job tokens to authenticate against this endpoint
#### `route_setting :authorization`
This setting defines the permission level and access controls for job token access.
**Parameters**:
- `job_token_policies`: The required permission level. Available policies are listed in [lib/ci/job_token/policies.rb](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/ci/job_token/policies.rb).
- `allow_public_access_for_enabled_project_features`: Optional. Allows access based on the visibility settings of the project feature. See [public access configuration](#public-access-configuration).
#### Example usage
This example shows how to add support for `tags` API endpoints to the job token policy's `repository` resource:
```ruby
# In lib/api/tags.rb
resource :projects do
# Enable job token authentication for this endpoint
route_setting :authentication, job_token_allowed: true
# Require the `read_repository` policy for reading tags
route_setting :authorization, job_token_policies: :read_repository,
allow_public_access_for_enabled_project_features: :repository
get ':id/repository/tags' do
# ... existing endpoint implementation
end
# Enable job token authentication for this endpoint
route_setting :authentication, job_token_allowed: true
# Require the `admin_repository` policy for creating tags
route_setting :authorization, job_token_policies: :admin_repository
post ':id/repository/tags' do
# ... existing endpoint implementation
end
end
```
### Key considerations
#### Permission level selection
Choose the appropriate permission level based on the operation:
- **Read operations** (GET requests): Use `:read_*` permissions
- **Write/Delete operations** (POST, PUT, DELETE requests): Use `:admin_*` permissions
#### Public access configuration
The `allow_public_access_for_enabled_project_features` parameter allows job tokens to access endpoints when:
- The project has appropriate visibility.
- The project feature is enabled.
- The project feature has appropriate visibility.
- Job token permissions are not explicitly configured for the resource.
This provides backward compatibility while enabling fine-grained control when the project feature is not publicly accessible.
### Testing
When implementing job token permissions for API endpoints, use the shared RSpec example `'enforcing job token policies'` to test the authorization behavior. This shared example provides comprehensive coverage for all job token policy scenarios.
#### Usage
Add the shared example to your API endpoint tests by including it with the required parameters:
```ruby
describe 'GET /projects/:id/repository/tags' do
let(:route) { "/projects/#{project.id}/repository/tags" }
it_behaves_like 'enforcing job token policies', :read_repository,
allow_public_access_for_enabled_project_features: :repository do
let(:user) { developer }
let(:request) do
get api(route), params: { job_token: target_job.token }
end
end
# Your other endpoint-specific tests...
end
```
#### Parameters
The shared example takes the following parameters:
- The job token policy that should be enforced (for example, `:read_repository`).
- `allow_public_access_for_enabled_project_features` - (Optional) The project feature that the endpoint controls (for example, `:repository`).
- `expected_success_status` - (Optional) The expected success status of the request (default: `:success`).
#### What the shared example tests
The `'enforcing job token policies'` shared example automatically tests:
1. **Access granted**: Job tokens can access the endpoint when the required permissions are configured for the accessed project.
1. **Access denied**: Job tokens cannot access the endpoint when the required permissions are not configured for the accessed project.
1. **Public access fallback**: `allow_public_access_for_enabled_project_features` behavior when permissions aren't configured.
### Documentation
After you add job token support for a new API endpoint, you must update the [fine-grained permissions for CI/CD job tokens](../../ci/jobs/fine_grained_permissions.md#available-api-endpoints) documentation.
Run the following command to regenerate this topic:
```shell
bundle exec rake ci:job_tokens:compile_docs
```
## Instance
### User types
Each user can be one of the following types:
- Regular.
- External - access to groups and projects only if direct member.
- [Internal users](../../administration/internal_users.md) - system created.
- [Auditor](https://gitlab.com/gitlab-org/gitlab/-/blob/master/ee/app/policies/ee/base_policy.rb#L9):
- No access to projects or groups settings menu.
- No access to **Admin** area.
- Read-only access to everything else.
- [Administrator](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/policies/base_policy.rb#L6) - read-write access.
See the [permissions page](../../user/permissions.md) for details on how each user type is used.
## Groups and Projects
### General permissions
Groups and projects can have the following visibility levels:
- public (`20`) - an entity is visible to everyone
- internal (`10`) - an entity is visible to authenticated users
- private (`0`) - an entity is visible only to the approved members of the entity
By default, subgroups can **not** have higher visibility levels.
For example, if you create a new private group, it cannot include a public subgroup.
The visibility level of a group can be changed only if all subgroups and
sub-projects have the same or lower visibility level. For example, a group can be set
to internal only if all subgroups and projects are internal or private.
{{< alert type="warning" >}}
If you migrate an existing group to a lower visibility level, that action does not migrate subgroups
in the same way. This is a [known issue](https://gitlab.com/gitlab-org/gitlab/-/issues/22406).
{{< /alert >}}
Visibility levels can be found in the `Gitlab::VisibilityLevel` module.
### Feature specific permissions
Additionally, the following project features can have different visibility levels:
- Issues
- Repository
- Merge request
- Forks
- Pipelines
- Analytics
- Requirements
- Security and compliance
- Wiki
- Snippets
- Pages
- Operations
- Metrics Dashboard
These features can be set to "Everyone with Access" or "Only Project Members".
They make sense only for public or internal projects because private projects
can be accessed only by project members by default.
### Members
Users can be members of multiple groups and projects. The following access
levels are available (defined in the
[`Gitlab::Access`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/access.rb)
module):
- No access (`0`)
- [Minimal access](../../user/permissions.md#users-with-minimal-access) (`5`)
- Guest (`10`)
- Planner (`15`)
- Reporter (`20`)
- Developer (`30`)
- Maintainer (`40`)
- Owner (`50`)
If a user is a member of both a project and the project parent groups, the
highest permission is the applied access level for the project.
If a user is a member of a project, but not the parent groups, they
can still view the groups and their entities (like epics).
Project membership (where the group membership is already taken into account)
is stored in the `project_authorizations` table.
{{< alert type="note" >}}
Projects in personal namespaces have a maximum role of Owner.
{{< /alert >}}
#### Guest role
A user with the Guest role in GitLab can view project plans, blockers and other
progress indicators. While unable to modify data they have not created, Guests
can contribute to a project by creating and linking project work items. Guests
can also view high-level project information such as:
- Analytics.
- Incident information.
- Issues and epics.
- Licenses.
For more information, see [project member permissions](../../user/permissions.md#project-members-permissions).
### Confidential issues
[Confidential issues](../../user/project/issues/confidential_issues.md) can be accessed
only by project members who have at least the Reporter role (they can't be accessed by
users with the Guest role). Additionally, they can be accessed by their authors and assignees.
### Licensed features
Some features can be accessed only if the user has the correct license plan.
## Permission dependencies
Feature policies can be quite complex and consist of multiple rules.
Quite often, one permission can be based on another.
Designing good permissions means reusing existing permissions as much as possible
and making access to features granular.
In the case of a complex resource, it should be broken into smaller pieces of information
and each piece should be granted a different permission.
A good example in this case is the _Merge Request widget_ and the _Security reports_.
Depending on the visibility level of the _Pipelines_, the _Security reports_ are either visible
in the widget or not. So, the _Merge Request widget_, the _Pipelines_, and the _Security reports_,
have separate permissions. Moreover, the permissions for the _Merge Request widget_
and the _Pipelines_ are dependencies of the _Security reports_.
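In `DeclarativePolicy` terms, such a dependency is usually expressed by enabling one ability only when another is already granted. A rough sketch (the ability names here are placeholders, not the actual policy code):
```ruby
# Hedged sketch of a permission dependency inside a policy class.
rule { can?(:read_pipeline) }.enable :read_security_reports
```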
### Permission dependencies of Secure features
Secure features have complex permissions because they are integrated
into other features, like merge requests and the CI flow.
Here is a list of some permission dependencies.
| Activity level | Resource | Locations | Permission dependency |
|----------------|----------|-----------|-----|
| View | License information | Dependency list, License Compliance | Can view repository |
| View | Dependency information | Dependency list, License Compliance | Can view repository |
| View | Vulnerabilities information | Dependency list | Can view security findings |
| View | Black/Whitelisted licenses for the project | License Compliance, merge request | Can view repository |
| View | Security findings | merge request, CI job page, Pipeline security tab | Can read the project and CI jobs |
| View | Vulnerability feedback | merge request | Can read security findings |
| View | Dependency List page | Project | Can access Dependency information |
| View | License Compliance page | Project | Can access License information |
|
https://docs.gitlab.com/development/authorizations
|
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/development/authorizations.md
|
2025-08-13
|
doc/development/permissions
|
[
"doc",
"development",
"permissions"
] |
authorizations.md
|
Software Supply Chain Security
|
Authorization
|
Any user with at least the Maintainer role can merge updates to this content. For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
|
Authorization
| null |
## Where should permissions be checked?
When deciding where to check permissions, apply defense-in-depth by implementing multiple checks at
different layers: start with low-level layers, such as finders and services,
followed by high-level layers, such as GraphQL, the public REST API, and controllers.
For more information, see [guidelines for reusing abstractions](../reusing_abstractions.md).
Protecting the same resources at many points means that if one layer of defense is compromised
or missing, customer data is still protected by the additional layers.
For more information on permissions, see the permissions section in the [secure coding guidelines](../secure_coding_guidelines.md#permissions).
### Considerations
Services or finders are appropriate locations because:
- Multiple endpoints share services or finders so downstream logic is more likely to be re-used.
- Sometimes authorization logic must be incorporated in DB queries to filter records.
- You should avoid permission checks at the display layer except to provide better UX,
and not as a security check. For example, showing and hiding non-data elements like buttons.
The downsides to defense-in-depth are:
- `DeclarativePolicy` rules are relatively performant, but conditions may perform database calls.
- Higher maintenance costs.
### Exceptions
Developers can choose to do authorization in only a single area after weighing
the risks and drawbacks for their specific case.
Prefer domain logic (services or finders) as the source of truth when making exceptions.
Logic, like backend worker logic, might not need authorization based on the current user.
If the service or finder's constructor does not expect `current_user`, then it typically does not
check permissions.
### Frontend
When using an ability check in UI elements, make sure to also use an ability
check for the underlying backend code, if there is any. This ensures there is
absolutely no way to use the feature until the user has proper access.
If the UI element is HAML, you can use embedded Ruby to check if
`Ability.allowed?(user, action, subject)`.
If the UI element is JavaScript or Vue, use the `push_frontend_ability` method,
which is available to all controllers that inherit from `ApplicationController`.
You can use this method to expose the ability, for example:
```ruby
before_action do
push_frontend_ability(ability: :read_project, resource: @project, user: current_user)
end
```
You can then check the state of the ability in JavaScript as follows:
```javascript
if (gon.abilities.readProject) {
// ...
}
```
The name of the ability in JavaScript is always camelCase,
so checking for `gon.abilities.read_project` would not work.
To check for an ability in a Vue template, see the
[developer documentation for access abilities in Vue](../fe_guide/vue.md#accessing-abilities).
### Tips
If a class accepts `current_user`, then it may be responsible for authorization.
### Example: Adding a new API endpoint
By default, we authorize at the endpoint. Checking an existing ability may make sense; if not, then we probably need to add one.
As an aside, most endpoints can be cleanly categorized as a CRUD (create, read, update, destroy) action on a resource. The services and abilities follow suit, which is why many are named like `Projects::CreateService` or `:read_project`.
Say, for example, we extract the whole endpoint into a service. The `can?` check will now be in the service. Say the service reuses an existing finder, which we are modifying for our purposes. Should we make the finder check an ability?
- If the finder does not accept `current_user`, and therefore does not check permissions, then probably no.
- If the finder accepts `current_user`, and does not check permissions, then you should double-check other usages of the finder, and you might consider adding authorization.
- If the finder accepts `current_user`, and already checks permissions, then either we need to add our case, or the existing checks are appropriate.
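To make the finder case concrete, here is a minimal sketch of a finder that accepts `current_user` and therefore takes responsibility for the check (the class, ability, and association names are hypothetical):
```ruby
# Hedged sketch only; not an existing GitLab finder.
class MyWidgetsFinder
  def initialize(current_user, project)
    @current_user = current_user
    @project = project
  end

  def execute
    # The finder owns the authorization decision because it receives current_user.
    return @project.my_widgets.none unless Ability.allowed?(@current_user, :read_my_widget, @project)

    @project.my_widgets
  end
end
```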
|
https://docs.gitlab.com/development/custom_roles
|
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/development/custom_roles.md
|
2025-08-13
|
doc/development/permissions
|
[
"doc",
"development",
"permissions"
] |
custom_roles.md
|
Software Supply Chain Security
|
Authorization
|
Any user with at least the Maintainer role can merge updates to this content. For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
|
Custom role development guidelines
| null |
Ultimate customers can create custom roles and define those roles by assigning specific abilities.
For example, a user could create an "Engineer" role with `read code` and `admin merge requests` abilities, but without abilities like `admin issues`.
In this context, the terms "permission" and "ability" are often used interchangeably.
- "Ability" is an action a user can do. These map to [Declarative Policy abilities](https://gitlab.com/gitlab-org/ruby/gems/declarative-policy/-/blob/main/doc/defining-policies.md#rules) and live in Policy classes in `ee/app/policies/*`.
- "Permission" is how we refer to an ability [in user-facing documentation](../../user/permissions.md). The documentation of permissions is manually generated so there is not necessarily a 1:1 mapping of the permissions listed in documentation and the abilities defined in Policy classes.
## Custom roles vs default roles
In GitLab 15.9 and earlier, GitLab only had [default roles](predefined_roles.md) as a permission system. In this system, there are a few predefined roles that are statically assigned to certain abilities. These default roles are not customizable by customers.
With custom roles, the customers can decide which abilities they want to assign to certain user groups. For example:
- In the default role system, reading of vulnerabilities is limited to a Developer role.
- In the custom role system, a customer can assign this ability to a new custom role based on any default role.
Like default roles, custom roles are [inherited](../../user/project/members/_index.md#membership-types) within a group hierarchy. If a user has a custom role for a group, that user will also have a custom role for any projects or subgroups within the group.
## Technical overview
- Individual custom roles are stored in the `member_roles` table (`MemberRole` model).
- A `member_roles` record is associated with top-level groups (not subgroups) via the `namespace_id` foreign key.
- A Group or project membership (`members` record) is associated with a custom role via the `member_role_id` foreign key.
- A Group or project membership can be associated with any custom role that is defined on the root-level group of the group or project.
- The `member_roles` table includes individual permissions and a `base_access_level` value.
- The `base_access_level` must be a [valid access level](../../api/access_requests.md#valid-access-levels).
The `base_access_level` determines which abilities are included in the custom role. For example, if the `base_access_level` is `10`, the custom role will include any abilities that a default Guest role would receive, plus any additional abilities that are enabled by the `member_roles` record by setting an attribute, such as `read_code`, to true.
- A custom role can enable additional abilities for a `base_access_level` but it cannot disable a permission. As a result, custom roles are "additive only". The rationale for this choice is [in this comment](https://gitlab.com/gitlab-org/gitlab/-/issues/352891#note_1059561579).
- Custom role abilities are supported at the project level and the group level (see the console sketch after this list).
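Putting these points together, creating and assigning a custom role from the Rails console might look roughly like the following. This is a hedged sketch: the attribute and association names follow the description above but may differ from the actual schema.
```ruby
# Hedged sketch: a Guest-based custom role that additionally grants read_code.
role = MemberRole.create!(
  namespace: group,                          # must be a top-level group
  base_access_level: Gitlab::Access::GUEST,  # 10
  read_code: true                            # one of the per-ability boolean attributes
)

# Associate an existing membership with the custom role (member_role_id).
group.members.find_by(user_id: user.id).update!(member_role: role)
```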
## Refactoring abilities
### Finding existing abilities checks
Abilities are often [checked in multiple locations](authorizations.md#where-should-permissions-be-checked) for a single endpoint or web request. Therefore, it can be difficult to find the list of authorization checks that are run for a given endpoint.
To assist with this, you can locally set `GITLAB_DEBUG_POLICIES=true`.
This outputs information about which abilities are checked in the requests
made in any specs that you run. The output also includes the line of code where the
authorization check was made. Caller information is especially helpful in cases
where there is metaprogramming used because those cases are difficult to find by
grepping for ability name strings.
For example:
```shell
# example spec run
GITLAB_DEBUG_POLICIES=true bundle exec rspec spec/controllers/groups_controller_spec.rb:162
# permissions debug output when spec is run; if multiple policy checks are run they will all be in the debug output.
POLICY CHECK DEBUG -> policy: GlobalPolicy, ability: create_group, called_from: ["/gitlab/app/controllers/application_controller.rb:245:in `can?'", "/gitlab/app/controllers/groups_controller.rb:255:in `authorize_create_group!'"]
```
Use this setting to learn more about authorization checks while
refactoring. You should not keep this setting enabled for any specs on the default branch.
### Understanding logic for individual abilities
References to an ability may appear in a `DeclarativePolicy` class many times
and depend on conditions and rules which reference other abilities. As a result,
it can be challenging to know exactly which conditions apply to a particular
ability.
`DeclarativePolicy` provides an `ability_map` for each policy class, which
pulls all rules for an ability into an array.
For example:
```ruby
> GroupPolicy.ability_map.map.select { |k,v| k == :read_group_member }
=> {:read_group_member=>[[:enable, #<Rule can?(:read_group)>], [:prevent, #<Rule ~can_read_group_member>]]}
> GroupPolicy.ability_map.map.select { |k,v| k == :read_group }
=> {:read_group=>
[[:enable, #<Rule public_group>],
[:enable, #<Rule logged_in_viewable>],
[:enable, #<Rule guest>],
[:enable, #<Rule admin>],
[:enable, #<Rule has_projects>],
[:enable, #<Rule read_package_registry_deploy_token>],
[:enable, #<Rule write_package_registry_deploy_token>],
[:prevent, #<Rule all?(~public_group, ~admin, user_banned_from_group)>],
[:enable, #<Rule auditor>],
[:prevent, #<Rule needs_new_sso_session>],
[:prevent, #<Rule all?(ip_enforcement_prevents_access, ~owner, ~auditor)>]]}
```
`DeclarativePolicy` also provides a `debug` method that can be used to
understand the logic tree for a specific object and actor. The output is similar
to the list of rules from `ability_map`. But, `DeclarativePolicy` stops
evaluating rules after you `prevent` an ability, so it is possible that
not all conditions are called.
Example:
```ruby
policy = GroupPolicy.new(User.last, Group.last)
policy.debug(:read_group)
- [0] enable when public_group ((@custom_guest_user1 : Group/139))
- [0] enable when logged_in_viewable ((@custom_guest_user1 : Group/139))
- [0] enable when admin ((@custom_guest_user1 : Group/139))
- [0] enable when auditor ((@custom_guest_user1 : Group/139))
- [14] prevent when all?(~public_group, ~admin, user_banned_from_group) ((@custom_guest_user1 : Group/139))
- [14] prevent when needs_new_sso_session ((@custom_guest_user1 : Group/139))
- [16] enable when guest ((@custom_guest_user1 : Group/139))
- [16] enable when has_projects ((@custom_guest_user1 : Group/139))
- [16] enable when read_package_registry_deploy_token ((@custom_guest_user1 : Group/139))
- [16] enable when write_package_registry_deploy_token ((@custom_guest_user1 : Group/139))
[21] prevent when all?(ip_enforcement_prevents_access, ~owner, ~auditor) ((@custom_guest_user1 : Group/139))
=> #<DeclarativePolicy::Runner::State:0x000000015c665050
@called_conditions=
#<Set: {
"/dp/condition/GroupPolicy/public_group/Group:139",
"/dp/condition/GroupPolicy/logged_in_viewable/User:83,Group:139",
"/dp/condition/BasePolicy/admin/User:83",
"/dp/condition/BasePolicy/auditor/User:83",
"/dp/condition/GroupPolicy/user_banned_from_group/User:83,Group:139",
"/dp/condition/GroupPolicy/needs_new_sso_session/User:83,Group:139",
"/dp/condition/GroupPolicy/guest/User:83,Group:139",
"/dp/condition/GroupPolicy/has_projects/User:83,Group:139",
"/dp/condition/GroupPolicy/read_package_registry_deploy_token/User:83,Group:139",
"/dp/condition/GroupPolicy/write_package_registry_deploy_token/User:83,Group:139"}>,
@enabled=false,
@prevented=true>
```
### Abilities consolidation
Every feature added to custom roles should have minimal abilities. For most features, having `read_*` and `admin_*` should be enough. You should consolidate all:
- View-related abilities under `read_*`. For example, viewing a list or detail.
- Object updates under `admin_*`. For example, updating an object, adding assignees, or closing that object. Usually, a role that enables `admin_*` also has to have the `read_*` abilities enabled. This is defined in the `requirement` option in the `ALL_CUSTOMIZABLE_PERMISSIONS` hash on the `MemberRole` model.
There might be features that require additional abilities but try to minimize those. You can always ask members of the Authentication and Authorization group for their opinion or help.
This is also where your work should begin. Take all the abilities for the feature you work on, and consolidate those abilities into `read_`, `admin_`, or additional abilities if necessary.
Many abilities in the `GroupPolicy` and `ProjectPolicy` classes have many
redundant policies. There is an [epic for consolidating these Policy classes](https://gitlab.com/groups/gitlab-org/-/epics/6689).
If you encounter similar permissions in these classes, consider refactoring so
that they have the same name.
For example, `GroupPolicy` has an ability called `read_group_security_dashboard` and
`ProjectPolicy` has an ability called `read_project_security_dashboard`. You'd like to
make both customizable. Rather than adding a row to the `member_roles` table for each
ability, consider renaming them to `read_security_dashboard` and adding
`read_security_dashboard` to the `member_roles` table. Enabling `read_security_dashboard`
on the parent group allows the custom role to access the group security dashboard and
the project security dashboard for each project in that group. Enabling the same
permission on a specific project allows access to that project's security dashboard.
## How to add support for an ability to custom roles
If adding an existing ability, consider [refactoring and consolidating abilities for the feature](#refactoring-abilities)
in a separate merge request before completing the steps below.
### Step 1. Generate a configuration file
- Run `./ee/bin/custom-ability <ABILITY_NAME>` to generate a configuration file for the new ability.
- This generates a YAML file in `ee/config/custom_abilities` that follows this schema:
| Field | Required | Description |
| ----- | -------- |--------------|
| `name` | yes | Unique, lowercase and underscored name describing the custom ability. Must match the filename. |
| `title` | yes | Human-readable title of the custom ability. |
| `description` | yes | Human-readable description of the custom ability. |
| `feature_category` | yes | Name of the feature category. For example, `vulnerability_management`. |
| `introduced_by_issue` | yes | Issue URL that proposed the addition of this custom ability. |
| `introduced_by_mr` | yes | MR URL that added this custom ability. |
| `milestone` | yes | Milestone in which this custom ability was added. |
| `admin_ability` | no | Boolean value to indicate whether this ability is checked at the admin level. |
| `group_ability` | yes | Boolean value to indicate whether this ability is checked on group level. |
| `enabled_for_group_access_levels` | if `group_ability = true` | The array of access levels that already have access to this custom ability in a group. See the section on [understanding logic for individual abilities](#understanding-logic-for-individual-abilities) for help on determining the base access level for an ability. This is for information only and has no impact on how custom roles operate. |
| `project_ability` | yes | Boolean value to indicate whether this ability is checked on the project level. |
| `enabled_for_project_access_levels` | if `project_ability = true` | The array of access levels that already have access to this custom ability in a project. See the section on [understanding logic for individual abilities](#understanding-logic-for-individual-abilities) for help on determining the base access level for an ability. This is for information only and has no impact on how custom roles operate. |
| `requirements` | no | The list of custom permissions this ability is dependent on. For instance `admin_vulnerability` is dependent on `read_vulnerability`. If none, then enter `[]` |
| `available_from_access_level` | no | The access level of the predefined role from which this ability is available, if applicable. See the section on [understanding logic for individual abilities](#understanding-logic-for-individual-abilities) for help on determining the base access level for an ability. This is for information only and has no impact on how custom roles operate. |
### Step 2: Create a spec file and update validation schema
- Run `bundle exec rails generate gitlab:custom_roles:code --ability <ABILITY_NAME>` which will update the permissions validation schema file and create an empty spec file.
### Step 3: Create a feature flag (optional)
- If you would like to toggle the custom ability using a [feature flag](../feature_flags/_index.md), create a feature flag named `custom_ability_<name>`. For example, for the ability `read_code`, the feature flag is `custom_ability_read_code`. When this feature flag is disabled, the custom ability is hidden when creating a new custom role and when fetching custom abilities for a user.
### Step 4: Update policies
- If the ability is checked on a group level, add rules to `GroupPolicy` to enable the ability.
- For example, if the ability we would like to add is `read_dependency`, then an update to `ee/app/policies/ee/group_policy.rb` would look as follows:
```ruby
rule { custom_role_enables_read_dependency }.enable(:read_dependency)
```
- Similarly, if the ability is checked on a project level, add rules to `ProjectPolicy` to enable the ability.
- For example, if the ability we would like to add is `read_dependency`, then an update to `ee/app/policies/ee/project_policy.rb` would look as follows:
```ruby
rule { custom_role_enables_read_dependency }.enable(:read_dependency)
```
- Not all abilities need to be enabled on both levels. For instance, `admin_terraform_state` allows users to manage a project's Terraform state, so it only needs to be enabled on the project level (not the group level) and therefore only needs to be configured in `ee/app/policies/ee/project_policy.rb`.
### Step 5: Verify role access
- Ensure SaaS mode is enabled with `GITLAB_SIMULATE_SAAS=1`.
- Go to any Group that you are an owner of, then go to `Settings -> Roles and permissions`.
- Select `New role` and create a custom role with the permission you have just created.
- Go to the Group's `Manage -> Members` page and assign a member to this newly created custom role.
- Next, sign in as that member and ensure that you are able to access the page that the custom ability is intended for.
### Step 6: Assess impact to advanced search
Custom roles may impact [advanced search functionality](../../user/search/advanced_search.md#available-scopes) if the ability impacts data that is indexed by Advanced search.
- Enable [Advanced search and index the instance](https://gitlab.com/gitlab-org/gitlab-development-kit/-/blob/main/doc/howto/elasticsearch.md#enable-elasticsearch-in-the-gdk).
- Sign in as a member with the custom role assigned to any group.
- Perform a global search by navigating to `Search or go to...`. Type in a search term and select to search in `all GitLab`.
- Verify that the user can search for data impacted by the custom role.
- Perform a group search by navigating to the group page, then `Search or go to...`. Type in a search term and select to search in the group.
- Verify that the user can search for data impacted by the custom role.
- Update [search authorization](../advanced_search.md#updating-authorization) if needed.
### Step 7: Add specs
- Add the ability as a trait in the `MemberRoles` factory, `ee/spec/factories/member_roles.rb`.
- Add tests to `ee/spec/requests/custom_roles/<ABILITY_NAME>/request_spec.rb` to ensure that once the user has been assigned the custom ability, they can successfully access the controllers, REST API endpoints and GraphQL API endpoints.
- Below is an example of the typical setup that is required to test a Rails Controller endpoint.
```ruby
let_it_be(:user) { create(:user) }
let_it_be(:project) { create(:project, :repository, :in_group) }
let_it_be(:role) { create(:member_role, :guest, :custom_permission, namespace: project.group) }
let_it_be(:membership) { create(:project_member, :guest, member_role: role, user: user, project: project) }
before do
stub_licensed_features(custom_roles: true)
sign_in(user)
end
describe MyController do
describe '#show' do
it 'allows access' do
get my_controller_path(project)
expect(response).to have_gitlab_http_status(:ok)
expect(response).to render_template(:show)
end
end
end
```
- Below is an example of the typical setup that is required to test a GraphQL mutation.
```ruby
let_it_be(:user) { create(:user) }
let_it_be(:project) { create(:project, :repository, :in_group) }
let_it_be(:role) { create(:member_role, :guest, :custom_permission, namespace: project.group) }
let_it_be(:membership) { create(:project_member, :guest, member_role: role, user: user, project: project) }
before do
stub_licensed_features(custom_roles: true)
sign_in(user)
end
describe MyMutation do
include GraphqlHelpers
describe '#show' do
let(:mutation) { graphql_mutation(:my_mutation) }
it_behaves_like 'a working graphql query'
end
end
```
- Add tests to `ProjectPolicy` and/or `GroupPolicy`. Below is an example for testing `ProjectPolicy` related changes.
```ruby
context 'for a member role with read_dependency true' do
let(:member_role_abilities) { { read_dependency: true } }
let(:allowed_abilities) { [:read_dependency] }
it_behaves_like 'custom roles abilities'
end
```
- Add [advanced search permissions tests](../advanced_search.md#permissions-tests) for impacted scopes if needed
### Step 8: Update documentation
Follow the [Contribute to the GitLab documentation](../documentation/_index.md) page to make the following changes to the documentation:
- Update the list of custom abilities by running `bundle exec rake gitlab:custom_roles:compile_docs`.
- Update the GraphQL documentation by running `bundle exec rake gitlab:graphql:compile_docs`.
### Privilege escalation consideration
A base role typically has permissions that allow creation or management of artifacts corresponding to the base role when interacting with that artifact. For example, when a `Developer` creates an access token for a project, it is created with `Developer` access encoded into that credential. It is important to keep in mind that as new custom permissions are created, there might be a risk of elevated privileges when interacting with GitLab artifacts, and appropriate safeguards or base role checks should be added.
### Consuming seats
If a new user with the Guest role is assigned a member role that enables an ability that is **not** in the `CUSTOMIZABLE_PERMISSIONS_EXEMPT_FROM_CONSUMING_SEAT` array, a seat is consumed. We want to make sure we are charging Ultimate customers for guest users who have "elevated" abilities. This only applies to billable users on SaaS (billable users that are counted towards the namespace subscription). More details about this topic can be found in [this issue](https://gitlab.com/gitlab-org/gitlab/-/issues/390269).
### Modular Policies
In an effort to support the [GitLab Modular Monolith design document](https://handbook.gitlab.com/handbook/engineering/architecture/design-documents/modular_monolith/), the [Authorization group](https://handbook.gitlab.com/handbook/engineering/development/sec/govern/authorization/) is [collaborating](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/153348) with the [Create:IDE group](https://handbook.gitlab.com/handbook/engineering/development/dev/create/ide/). Once a POC is implemented, the findings will be [discussed](https://gitlab.com/gitlab-org/gitlab/-/issues/454934) and the Authorization group will decide what the modular design of policies will be going forward.
|
---
stage: Software Supply Chain Security
group: Authorization
info: Any user with at least the Maintainer role can merge updates to this content.
For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Custom role development guidelines
breadcrumbs:
- doc
- development
- permissions
---
Ultimate customers can create custom roles and define those roles by assigning specific abilities.
For example, a user could create an "Engineer" role with `read code` and `admin merge requests` abilities, but without abilities like `admin issues`.
In this context, the terms "permission" and "ability" are often used interchangeably.
- "Ability" is an action a user can do. These map to [Declarative Policy abilities](https://gitlab.com/gitlab-org/ruby/gems/declarative-policy/-/blob/main/doc/defining-policies.md#rules) and live in Policy classes in `ee/app/policies/*`.
- "Permission" is how we refer to an ability [in user-facing documentation](../../user/permissions.md). The documentation of permissions is manually generated so there is not necessarily a 1:1 mapping of the permissions listed in documentation and the abilities defined in Policy classes.
## Custom roles vs default roles
In GitLab 15.9 and earlier, GitLab only had [default roles](predefined_roles.md) as a permission system. In this system, there are a few predefined roles that are statically assigned to certain abilities. These default roles are not customizable by customers.
With custom roles, the customers can decide which abilities they want to assign to certain user groups. For example:
- In the default role system, reading of vulnerabilities is limited to a Developer role.
- In the custom role system, a customer can assign this ability to a new custom role based on any default role.
Like default roles, custom roles are [inherited](../../user/project/members/_index.md#membership-types) within a group hierarchy. If a user has custom role for a group, that user will also have a custom role for any projects or subgroups within the group.
## Technical overview
- Individual custom roles are stored in the `member_roles` table (`MemberRole` model).
- A `member_roles` record is associated with top-level groups (not subgroups) via the `namespace_id` foreign key.
- A Group or project membership (`members` record) is associated with a custom role via the `member_role_id` foreign key.
- A Group or project membership can be associated with any custom role that is defined on the root-level group of the group or project.
- The `member_roles` table includes individual permissions and a `base_access_level` value.
- The `base_access_level` must be a [valid access level](../../api/access_requests.md#valid-access-levels).
The `base_access_level` determines which abilities are included in the custom role. For example, if the `base_access_level` is `10`, the custom role will include any abilities that a default Guest role would receive, plus any additional abilities that are enabled by the `member_roles` record by setting an attribute, such as `read_code`, to true.
- A custom role can enable additional abilities for a `base_access_level` but it cannot disable a permission. As a result, custom roles are "additive only". The rationale for this choice is [in this comment](https://gitlab.com/gitlab-org/gitlab/-/issues/352891#note_1059561579).
- Custom role abilities are supported at project level and group level.
## Refactoring abilities
### Finding existing abilities checks
Abilities are often [checked in multiple locations](authorizations.md#where-should-permissions-be-checked) for a single endpoint or web request. Therefore, it can be difficult to find the list of authorization checks that are run for a given endpoint.
To assist with this, you can locally set `GITLAB_DEBUG_POLICIES=true`.
This outputs information about which abilities are checked in the requests
made in any specs that you run. The output also includes the line of code where the
authorization check was made. Caller information is especially helpful in cases
where there is metaprogramming used because those cases are difficult to find by
grepping for ability name strings.
For example:
```shell
# example spec run
GITLAB_DEBUG_POLICIES=true bundle exec rspec spec/controllers/groups_controller_spec.rb:162
# permissions debug output when spec is run; if multiple policy checks are run they will all be in the debug output.
POLICY CHECK DEBUG -> policy: GlobalPolicy, ability: create_group, called_from: ["/gitlab/app/controllers/application_controller.rb:245:in `can?'", "/gitlab/app/controllers/groups_controller.rb:255:in `authorize_create_group!'"]
```
Use this setting to learn more about authorization checks while
refactoring. You should not keep this setting enabled for any specs on the default branch.
### Understanding logic for individual abilities
References to an ability may appear in a `DeclarativePolicy` class many times
and depend on conditions and rules which reference other abilities. As a result,
it can be challenging to know exactly which conditions apply to a particular
ability.
`DeclarativePolicy` provides a `ability_map` for each policy class, which
pulls all rules for an ability into an array.
For example:
```ruby
> GroupPolicy.ability_map.map.select { |k,v| k == :read_group_member }
=> {:read_group_member=>[[:enable, #<Rule can?(:read_group)>], [:prevent, #<Rule ~can_read_group_member>]]}
> GroupPolicy.ability_map.map.select { |k,v| k == :read_group }
=> {:read_group=>
[[:enable, #<Rule public_group>],
[:enable, #<Rule logged_in_viewable>],
[:enable, #<Rule guest>],
[:enable, #<Rule admin>],
[:enable, #<Rule has_projects>],
[:enable, #<Rule read_package_registry_deploy_token>],
[:enable, #<Rule write_package_registry_deploy_token>],
[:prevent, #<Rule all?(~public_group, ~admin, user_banned_from_group)>],
[:enable, #<Rule auditor>],
[:prevent, #<Rule needs_new_sso_session>],
[:prevent, #<Rule all?(ip_enforcement_prevents_access, ~owner, ~auditor)>]]}
```
`DeclarativePolicy` also provides a `debug` method that can be used to
understand the logic tree for a specific object and actor. The output is similar
to the list of rules from `ability_map`. But, `DeclarativePolicy` stops
evaluating rules after you `prevent` an ability, so it is possible that
not all conditions are called.
Example:
```ruby
policy = GroupPolicy.new(User.last, Group.last)
policy.debug(:read_group)
- [0] enable when public_group ((@custom_guest_user1 : Group/139))
- [0] enable when logged_in_viewable ((@custom_guest_user1 : Group/139))
- [0] enable when admin ((@custom_guest_user1 : Group/139))
- [0] enable when auditor ((@custom_guest_user1 : Group/139))
- [14] prevent when all?(~public_group, ~admin, user_banned_from_group) ((@custom_guest_user1 : Group/139))
- [14] prevent when needs_new_sso_session ((@custom_guest_user1 : Group/139))
- [16] enable when guest ((@custom_guest_user1 : Group/139))
- [16] enable when has_projects ((@custom_guest_user1 : Group/139))
- [16] enable when read_package_registry_deploy_token ((@custom_guest_user1 : Group/139))
- [16] enable when write_package_registry_deploy_token ((@custom_guest_user1 : Group/139))
[21] prevent when all?(ip_enforcement_prevents_access, ~owner, ~auditor) ((@custom_guest_user1 : Group/139))
=> #<DeclarativePolicy::Runner::State:0x000000015c665050
@called_conditions=
#<Set: {
"/dp/condition/GroupPolicy/public_group/Group:139",
"/dp/condition/GroupPolicy/logged_in_viewable/User:83,Group:139",
"/dp/condition/BasePolicy/admin/User:83",
"/dp/condition/BasePolicy/auditor/User:83",
"/dp/condition/GroupPolicy/user_banned_from_group/User:83,Group:139",
"/dp/condition/GroupPolicy/needs_new_sso_session/User:83,Group:139",
"/dp/condition/GroupPolicy/guest/User:83,Group:139",
"/dp/condition/GroupPolicy/has_projects/User:83,Group:139",
"/dp/condition/GroupPolicy/read_package_registry_deploy_token/User:83,Group:139",
"/dp/condition/GroupPolicy/write_package_registry_deploy_token/User:83,Group:139"}>,
@enabled=false,
@prevented=true>
```
### Abilities consolidation
Every feature added to custom roles should have minimal abilities. For most features, having `read_*` and `admin_*` should be enough. You should consolidate all:
- View-related abilities under `read_*`. For example, viewing a list or detail.
- Object updates under `admin_*`. For example, updating an object, adding assignees or closing it that object. Usually, a role that enables `admin_` has to have also `read_` abilities enabled. This is defined in `requirement` option in the `ALL_CUSTOMIZABLE_PERMISSIONS` hash on `MemberRole` model.
There might be features that require additional abilities but try to minimize those. You can always ask members of the Authentication and Authorization group for their opinion or help.
This is also where your work should begin. Take all the abilities for the feature you work on, and consolidate those abilities into `read_`, `admin_`, or additional abilities if necessary.
Many abilities in the `GroupPolicy` and `ProjectPolicy` classes have many
redundant policies. There is an [epic for consolidating these Policy classes](https://gitlab.com/groups/gitlab-org/-/epics/6689).
If you encounter similar permissions in these classes, consider refactoring so
that they have the same name.
For example, you see in `GroupPolicy` that there is an ability called
`read_group_security_dashboard` and in `ProjectPolicy` has an ability called
`read_project_security_dashboard`. You'd like to make both customizable. Rather
than adding a row to the `member_roles` table for each ability, consider
renaming them to `read_security_dashboard` and adding `read_security_dashboard`
to the `member_roles` table. Enabling `read_security_dashboard` on
the parent group will allow the custom role to access the group security dashboard and the project security dashboard
for each project in that group. Enabling the same permission on a specific project will allow access to that projects'
security dashboard.
## How to add support for an ability to custom roles
If adding an existing ability, consider [refactoring & consolidating abilities for the feature](#refactoring-abilities)
before in a separate merge request, before completing the below.
### Step 1. Generate a configuration file
- Run `./ee/bin/custom-ability <ABILITY_NAME>` to generate a configuration file for the new ability.
- This will generate a YAML file in `ee/config/custom_abilities` which follows the following schema:
| Field | Required | Description |
| ----- | -------- |--------------|
| `name` | yes | Unique, lowercase and underscored name describing the custom ability. Must match the filename. |
| `title` | yes | Human-readable title of the custom ability. |
| `description` | yes | Human-readable description of the custom ability. |
| `feature_category` | yes | Name of the feature category. For example, `vulnerability_management`. |
| `introduced_by_issue` | yes | Issue URL that proposed the addition of this custom ability. |
| `introduced_by_mr` | yes | MR URL that added this custom ability. |
| `milestone` | yes | Milestone in which this custom ability was added. |
| `admin_ability` | no | Boolean value to indicate whether this ability is checked at the admin level. |
| `group_ability` | yes | Boolean value to indicate whether this ability is checked on group level. |
| `enabled_for_group_access_levels` | if `group_ability = true` | The array of access levels that already have access to this custom ability in a group. See the section on [understanding logic for individual abilities](#understanding-logic-for-individual-abilities) for help on determining the base access level for an ability. This is for information only and has no impact on how custom roles operate. |
| `project_ability` | yes | Boolean value to whether this ability is checked on project level. |
| `enabled_for_project_access_levels` | if `project_ability = true` | The array of access levels that already have access to this custom ability in a project. See the section on [understanding logic for individual abilities](#understanding-logic-for-individual-abilities) for help on determining the base access level for an ability. This is for information only and has no impact on how custom roles operate. |
| `requirements` | no | The list of custom permissions this ability is dependent on. For instance `admin_vulnerability` is dependent on `read_vulnerability`. If none, then enter `[]` |
| `available_from_access_level` | no | The access level of the predefined role from which this ability is available, if applicable. See the section on [understanding logic for individual abilities](#understanding-logic-for-individual-abilities) for help on determining the base access level for an ability. This is for information only and has no impact on how custom roles operate. |
### Step 2: Create a spec file and update validation schema
- Run `bundle exec rails generate gitlab:custom_roles:code --ability <ABILITY_NAME>` which will update the permissions validation schema file and create an empty spec file.
### Step 3: Create a feature flag (optional)
- If you would like to toggle the custom ability using a [feature flag](../feature_flags/_index.md), create a feature flag with name `custom_ability_<name>`. Such as, for ability `read_code`, the feature flag will be `custom_ability_read_code`. When this feature flag is disabled, the custom ability will be hidden when creating a new custom role, or when fetching custom abilities for a user.
### Step 4: Update policies
- If the ability is checked on a group level, add rule(s) to GroupPolicy to enable the ability.
- For example: if the ability we would like to add is `read_dependency`, then an update to `ee/app/policies/ee/group_policy.rb` would look like as follows:
```ruby
rule { custom_role_enables_read_dependency }.enable(:read_dependency)
```
- Similarly, If the ability is checked on a project level, add rule(s) to ProjectPolicy to enable the ability.
- For example: if the ability we would like to add is `read_dependency`, then an update to `ee/app/policies/ee/project_policy.rb` would look like as follows:
```ruby
rule { custom_role_enables_read_dependency }.enable(:read_dependency)
```
- Not all abilities need to be enabled on both levels, for instance `admin_terraform_state` allows users to manage a project's terraform state. It only needs to be enabled on the project level and not the group level, and thus only needs to be configured in `ee/app/policies/ee/project_policy.rb`.
### Step 5: Verify role access
- Ensure SaaS mode is enabled with `GITLAB_SIMULATE_SAAS=1`.
- Go to any Group that you are an owner of, then go to `Settings -> Roles and permissions`.
- Select `New role` and create a custom role with the permission you have just created.
- Go to the Group's `Manage -> Members` page and assign a member to this newly created custom role.
- Next, sign in as that member and ensure that you are able to access the page that the custom ability is intended for.
### Step 6: Assess impact to advanced search
Custom roles may impact [advanced search functionality](../../user/search/advanced_search.md#available-scopes) if the ability impacts data that is indexed by Advanced search.
- Enable [Advanced search and index the instance](https://gitlab.com/gitlab-org/gitlab-development-kit/-/blob/main/doc/howto/elasticsearch.md#enable-elasticsearch-in-the-gdk)
- Sign in as a member with the custom role assigned to any Group
- Perform a global search by navigating to `Search or go to...`. Type in a search term and select to search in `all GitLab`.
- Verify that the user can search for data impacted by the custom role
- Perform a group search by navigating to the group page then `Search or go to...`. Type in a search term and select search in group.
- Verify that the user can search for data impacted by the custom role
- Update [search authorization](../advanced_search.md#updating-authorization) if needed
### Step 7: Add specs
- Add the ability as a trait in the `MemberRoles` factory, `ee/spec/factories/member_roles.rb`.
- Add tests to `ee/spec/requests/custom_roles/<ABILITY_NAME>/request_spec.rb` to ensure that once the user has been assigned the custom ability, they can successfully access the controllers, REST API endpoints and GraphQL API endpoints.
- Below is an example of the typical setup that is required to test a Rails Controller endpoint.
```ruby
let_it_be(:user) { create(:user) }
let_it_be(:project) { create(:project, :repository, :in_group) }
let_it_be(:role) { create(:member_role, :guest, :custom_permission, namespace: project.group) }
let_it_be(:membership) { create(:project_member, :guest, member_role: role, user: user, project: project) }
before do
stub_licensed_features(custom_roles: true)
sign_in(user)
end
describe MyController do
describe '#show' do
it 'allows access' do
get my_controller_path(project)
expect(response).to have_gitlab_http_status(:ok)
expect(response).to render_template(:show)
end
end
end
```
- Below is an example of the typical setup that is required to test a GraphQL mutation.
```ruby
let_it_be(:user) { create(:user) }
let_it_be(:project) { create(:project, :repository, :in_group) }
let_it_be(:role) { create(:member_role, :guest, :custom_permission, namespace: project.group) }
let_it_be(:membership) { create(:project_member, :guest, member_role: role, user: user, project: project) }
before do
stub_licensed_features(custom_roles: true)
sign_in(user)
end
describe MyMutation do
include GraphqlHelpers
describe '#show' do
let(:mutation) { graphql_mutation(:my_mutation) }
it_behaves_like 'a working graphql query'
end
end
```
- Add tests to `ProjectPolicy` and/or `GroupPolicy`. Below is an example for testing `ProjectPolicy` related changes.
```ruby
context 'for a member role with read_dependency true' do
let(:member_role_abilities) { { read_dependency: true } }
let(:allowed_abilities) { [:read_dependency] }
it_behaves_like 'custom roles abilities'
end
```
- Add [advanced search permissions tests](../advanced_search.md#permissions-tests) for impacted scopes if needed
### Step 8: Update documentation
Follow the [Contribute to the GitLab documentation](../documentation/_index.md) page to make the following changes to the documentation:
- Update the list of custom abilities by running `bundle exec rake gitlab:custom_roles:compile_docs`
- Update the GraphQL documentation by running `bundle exec rake gitlab:graphql:compile_docs`
### Privilege escalation consideration
A base role typically has permissions that allow creation or management of artifacts corresponding to the base role when interacting with that artifact. For example, when a `Developer` creates an access token for a project, it is created with `Developer` access encoded into that credential. It is important to keep in mind that as new custom permissions are created, there might be a risk of elevated privileges when interacting with GitLab artifacts, and appropriate safeguards or base role checks should be added.
### Consuming seats
If a user with the `Guest` role is assigned a member role that enables an ability that is **not** in the `CUSTOMIZABLE_PERMISSIONS_EXEMPT_FROM_CONSUMING_SEAT` array, a seat is consumed. This ensures we charge Ultimate customers for Guest users who have "elevated" abilities. It only applies to billable users on SaaS (billable users that are counted towards the namespace subscription). More details about this topic can be found in [this issue](https://gitlab.com/gitlab-org/gitlab/-/issues/390269).
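A minimal sketch of that check is shown below. The constant name is taken from the paragraph above, but its exact location is an assumption; verify it in the codebase before relying on it.
```ruby
# Illustrative only: does enabling `ability` for a Guest through a custom role consume a seat?
# The constant's location (here assumed to be on MemberRole) may differ in the actual codebase.
def consumes_seat?(ability)
  MemberRole::CUSTOMIZABLE_PERMISSIONS_EXEMPT_FROM_CONSUMING_SEAT.exclude?(ability.to_sym)
end
```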
### Modular Policies
In an effort to support the [GitLab Modular Monolith design document](https://handbook.gitlab.com/handbook/engineering/architecture/design-documents/modular_monolith/), the [Authorization group](https://handbook.gitlab.com/handbook/engineering/development/sec/govern/authorization/) is [collaborating](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/153348) with the [Create:IDE group](https://handbook.gitlab.com/handbook/engineering/development/dev/create/ide/). Once a POC is implemented, the findings will be [discussed](https://gitlab.com/gitlab-org/gitlab/-/issues/454934) and the Authorization group will decide what the modular design of policies will be going forward.
---
stage: Software Supply Chain Security
group: Authorization
info: Any user with at least the Maintainer role can merge updates to this content. For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Permissions Conventions
---
## Historical Context
We utilize the [`DeclarativePolicy` framework for authorization in GitLab](../policies.md), making it straightforward to add new permissions. Until 2024, there was no clear guidance on when to introduce new permissions and how to name them. This lack of direction is a significant reason why the number of permissions has become unmanageable.
The purpose of this document is to provide guidance on:
- When to introduce a new permission and when to reuse an existing one
- How to name new permissions
- What should be included in the `Policy` classes and what should not
### Introducing New Permissions
Introduce a new permission only when absolutely necessary. Always try to use an existing one first. For example, there's no need for a `read_issue_description` permission when we already have `read_issue`, and both require the same role. Similarly, with `create_pipeline` available, we don't need `create_build`.
When introducing a new permission, always attempt to follow the naming conventions. Try to create a general permission, not a specific one. For example, it is better to add a permission `create_member_role` than `create_member_role_name`. If you're unsure, consult a Backend Engineer from the [Govern:Authorization team](https://handbook.gitlab.com/handbook/engineering/development/sec/govern/authorization/) for advice or approval for exceptions.
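As a hedged illustration of the difference between a general and an overly specific permission, a policy rule could enable the broader permission once and reuse it everywhere member-role management is authorized. The rule condition below is hypothetical:
```ruby
# Hypothetical DeclarativePolicy rule: prefer one general permission over several narrow ones.
rule { can?(:admin_group_member) }.policy do
  enable :create_member_role        # preferred: general, covers name, description, and abilities
  # enable :create_member_role_name # avoid: too specific
end
```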
### Naming Permissions
Our goal is for all permissions to follow a consistent pattern: `action_resource(_subresource)`. The resource and subresource should always be in the singular and match the object being acted upon. For example, if an action is being evaluated against a `Project`, the permission name should be in the format `action_project`. Additionally, we aim to limit the actions used to ensure clarity. The preferred actions are:
- `create` - for creating an object. For example, `create_issue`.
- `read` - for reading an object. For example, `read_issue`.
- `update` - for updating an object. For example, `update_issue`.
- `delete` - for deleting an object. For example, `delete_issue`.
- `push` and `download` - these are specific actions for file-related permissions. Other industry terms can be permitted after a justification.
We recognize that this set of actions is limited and not applicable to every feature. Here are some actions that, while necessary, should be rephrased to align with the above conventions:
- `approve` - For example, `approve_merge_request`. Though `approve` suggests a lower role than `manage`, it could be rephrased as `create_merge_request_approval`.
#### Preferred Actions
- `create` is preferred over `build` or `import`
- `read` is preferred over `access`
- `push` is preferred over `upload`
- `delete` is preferred over `destroy`
#### Exceptions
If you believe a new permission is needed that does not follow these conventions, consult the [Govern:Authorization team](https://handbook.gitlab.com/handbook/engineering/development/sec/govern/authorization/). We're always open to discussion; these guidelines are meant to make the work of engineers easier, not to complicate it.
### What to Include in Policy Classes
#### Role
Policy classes should include checks for both predefined and custom roles.
Examples:
```ruby
rule { developer } # Static role check
rule { can?(:developer_access) } # Another approach used in some classes
rule { custom_role_enables_read_dependency } # Custom role check
```
#### Checks Related to the Current User
Include checks that vary based on the current user's relationship with the object, such as being an assignee or author.
Examples:
```ruby
rule { is_author }.policy do
enable :read_note
enable :update_note
enable :delete_note
end
```
---
redirect_to: ../duo_agent_platform/_index.md
remove_date: '2025-11-12'
---
<!-- markdownlint-disable -->
This document was moved to [another location](../duo_agent_platform/_index.md).
<!-- This redirect file can be deleted after 2025-11-12. -->
<!-- Redirects that point to other docs in the same project expire in three months. -->
<!-- Redirects that point to docs in a different project or site (for example, link is not relative and starts with `https:`) expire in one year. -->
<!-- Before deletion, see: https://docs.gitlab.com/development/documentation/redirects -->
---
redirect_to: ../duo_agent_platform/create_triage_policy_with_gitlab_duo_agent_platform_guide.md
remove_date: '2025-11-12'
---
<!-- markdownlint-disable -->
This document was moved to [another location](../duo_agent_platform/create_triage_policy_with_gitlab_duo_agent_platform_guide.md).
<!-- This redirect file can be deleted after 2025-11-12. -->
<!-- Redirects that point to other docs in the same project expire in three months. -->
<!-- Redirects that point to docs in a different project or site (for example, link is not relative and starts with `https:`) expire in one year. -->
<!-- Before deletion, see: https://docs.gitlab.com/development/documentation/redirects -->
---
stage: Create
group: Code Review
info: Any user with at least the Maintainer role can merge updates to this content. For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
description: Developer information explaining the process to add a new mergeability check
title: Mergeability framework
---
The initial work started with the [better defined mergeability framework](https://gitlab.com/groups/gitlab-org/-/epics/5598).
Originally, the mergeability knowledge was spread throughout the backend and frontend.
This work was to consolidate some of the mergeability criteria into the same location
in the backend. This allows the frontend to simply consume the API and display the error.
## Add a new check
When adding a new merge check, we must make a few choices:
- Is this check skippable, and part of the **Merge when checks pass** feature?
- Is this check cacheable?
- If so, what is an appropriate cache key?
- Does this check have a setting to turn this check on or off?
After we answer these questions, we can create the new check.
The mergeability checks live under `app/services/merge_requests/mergeability/`.
1. To create a new check, we can use this as a base:
```ruby
# frozen_string_literal: true
module MergeRequests
module Mergeability
class CheckCiStatusService < CheckBaseService
identifier :ci_must_pass # Identifier used to state which check failed
description 'Checks whether CI has passed' # Description of the check returned through GraphQL
def execute
# If the merge check is behind a setting, we return inactive if the setting is false
return inactive unless merge_request.only_allow_merge_if_pipeline_succeeds?
if merge_request.mergeable_ci_state?
success
else
failure
end
end
def skip?
# Here we can check for the param, or return false if it's not skippable
# Skippability of an MR is related to the merge when checks pass functionality
params[:skip_ci_check].present?
end
# If we return true here, we need to create the method def cache_key and provide
# an appropriate cache key that will invalidate correctly.
def cacheable?
false
end
end
end
end
```
1. Add the new check in the `def mergeable_state_checks` method (a sketch follows this list).
1. Add the new check to the GraphQL enum `app/graphql/types/merge_requests/detailed_merge_status_enum.rb`.
1. Update the GraphQL documentation with `bundle exec rake gitlab:graphql:compile_docs`.
1. Update the API documentation in `doc/api/merge_requests.md`.
1. Update the frontend to support the new message: `app/assets/javascripts/vue_merge_request_widget/components/checks/message.vue`.
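A minimal, hypothetical sketch of registering the check from step 1. The real list's location and exact contents differ, so treat this as illustrative only:
```ruby
# Hypothetical sketch: append the new check class to the ordered list of mergeability checks.
def mergeable_state_checks
  [
    # ...existing checks...
    MergeRequests::Mergeability::CheckCiStatusService
  ]
end
```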
## Considerations
1. Should it be skippable? If it is part of the merge when checks pass work,
then we should add the skippable check. Otherwise, you should return `false`.
1. Performance: These mergeability checks are run very frequently, and therefore
performance is a big consideration here. It is critical to check how the new
mergeability check performs. In general, we are expecting around 10-20 ms.
1. Caching is an option too. We can set the `def cacheable?` method to return `true`,
   and in that case, we need to create another method, `def cache_key`, to set the
   cache key for the particular check (a sketch follows this list). Cache invalidation can often be tricky,
   and we must consider all the edge cases in the cache key. If we keep the timing
   around 10-20 ms, then caching is not needed.
1. Time the checks. We time each check through the `app/services/merge_requests/mergeability/logger.rb`
class, which can then be viewed in Kibana.
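For the caching consideration above, here is a hedged sketch of a cacheable check. The cache key components are assumptions; they must cover everything that can change the result:
```ruby
# Illustrative only: opting a check into caching.
def cacheable?
  true
end

def cache_key
  # Include everything that can invalidate the result; otherwise stale
  # statuses are served until the cache expires.
  [:ci_must_pass, merge_request.id, merge_request.diff_head_sha]
end
```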
## How the classes work together
1. The main methods that call the mergeability framework are `def mergeable?` and `DetailedMergeStatusService`.
1. These methods call the `RunChecksService` class, which handles iterating over
   the mergeability checks, caching, and instrumentation.
## Merge when checks pass
When we want to add the check to the Merge When Checks Pass feature, we must:
1. Allow the check to be skipped in the class.
1. Add the parameter to the list in the method `skipped_mergeable_checks`.
## Future work
1. At the moment, the slow performance of the approval check is the main area of
   concern. We have attempted to make this check cacheable, but there are many
   edge cases to consider regarding when the cached value becomes invalid.
---
stage: Create
group: Source Code
info: Any user with at least the Maintainer role can merge updates to this content. For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Application and rate limit guidelines
---
GitLab, like most large applications, enforces limits in certain features.
The absence of limits can affect security, performance, or data integrity, and could even
exhaust the resources allocated to the application.
Every new feature should have safe usage limits included in its implementation.
Limits are applicable for:
- System-level resource pools such as API requests, SSHD connections, database connections, and storage.
- Domain-level objects such as compute quota, groups, and sign-in attempts.
## When limits are required
1. Limits are required if the absence of the limit matches severity 1 - 3 in the severity definitions for [limit-related bugs](https://handbook.gitlab.com/handbook/engineering/infrastructure/engineering-productivity/issue-triage/#limit-related-bugs).
1. [GitLab application limits](../../administration/instance_limits.md) documentation must be updated anytime limits are added, removed, or updated.
## Additional reading
- Existing [GitLab application limits](../../administration/instance_limits.md)
- Product processes: [introducing application limits](https://handbook.gitlab.com/handbook/product/product-processes/#introducing-application-limits)
- Development documentation: [guide for adding application limits](../application_limits.md)
- Infrastructure guide to rate limits: [rate limit architecture](https://handbook.gitlab.com/handbook/engineering/infrastructure/rate-limiting/)
- A guide for when, where, and how to configure rate limits: [managing rate limits](https://handbook.gitlab.com/handbook/engineering/infrastructure/rate-limiting/managing-limits/)
---
stage: Create
group: Code Review
info: Any user with at least the Maintainer role can merge updates to this content. For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
description: Developer documentation explaining the design and workflow of merge request approval rules.
title: Approval rules development guidelines
---
This document explains the backend design and flow of all related functionality
about [merge request approval rules](../../user/project/merge_requests/approvals/_index.md).
This should help contributors understand the code design more easily, and also
help identify parts to improve as the feature and its implementation
evolve.
It's intentional that this document doesn't contain much implementation detail, because
implementation details can change often. The code should explain those things better. The components
mentioned here are the major parts of the application that make the approval rules
feature work.
{{< alert type="note" >}}
This is a living document and should be updated accordingly when parts
of the codebase touched in this document are changed or removed, or when new components
are added.
{{< /alert >}}
## Data Model
```mermaid
%%{init: { "fontFamily": "GitLab Sans" }}%%
erDiagram
accTitle: Approval rules data model
accDescr: Entity relationship diagram of approval rules
Project ||--o{ MergeRequest: " "
Project ||--o{ ApprovalProjectRule: " "
ApprovalProjectRule }o--o{ User: " "
ApprovalProjectRule }o--o{ Group: " "
ApprovalProjectRule }o--o{ ProtectedBranch: " "
MergeRequest ||--|| ApprovalState: " "
ApprovalState ||--o{ ApprovalWrappedRule: " "
MergeRequest ||--o{ Approval: " "
MergeRequest ||--o{ ApprovalMergeRequestRule: " "
ApprovalMergeRequestRule }o--o{ User: " "
ApprovalMergeRequestRule }o--o{ Group: " "
ApprovalMergeRequestRule ||--o| ApprovalProjectRule: " "
```
### `Project` and `MergeRequest`
`Project` and `MergeRequest` models are defined in `ee/app/models/ee/project.rb`
and `ee/app/models/ee/merge_request.rb`. They extend the non-EE versions, because
approval rules are an EE-only feature. Associations and other logic related to
merge request approvals are defined here.
### `ApprovalState`
```mermaid
%%{init: { "fontFamily": "GitLab Sans" }}%%
erDiagram
accTitle: ApprovalState
accDescr: Entity relationship diagram between MergeRequest and ApprovalState
MergeRequest ||--|| ApprovalState: " "
```
`ApprovalState` class is defined in `ee/app/models/approval_state.rb`. It's not
an actual `ActiveRecord` model. This class encapsulates all logic related to the
state of the approvals for a certain merge request like:
- Knowing the approval rules that are applicable to the merge request based on
its target branch.
- Knowing the approval rules that are applicable to a certain target branch.
- Checking if all rules were approved.
- Checking if approval is required.
- Knowing how many approvals were given or still required.
It gets the approval rules data from the project (`ApprovalProjectRule`) or the
merge request (`ApprovalMergeRequestRule`) and wraps it as `ApprovalWrappedRule`.
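As a hedged illustration (the method names below are assumptions; consult `ee/app/models/approval_state.rb` for the current API), the approval state can be inspected from a merge request:
```ruby
# Illustrative only - method names may differ from the current code.
state = merge_request.approval_state

state.wrapped_rules      # ApprovalWrappedRule instances applicable to the merge request
state.approved?          # whether every applicable rule has enough approvals
state.approvals_required # how many approvals are configured as required
```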
### `ApprovalProjectRule`
```mermaid
%%{init: { "fontFamily": "GitLab Sans" }}%%
erDiagram
accTitle: ApprovalProjectRule diagram
accDescr: Entity relationship diagram between projects and ApprovalProjectRule
Project ||--o{ ApprovalProjectRule: " "
ApprovalProjectRule }o--o{ User: " "
ApprovalProjectRule }o--o{ Group: " "
ApprovalProjectRule }o--o{ ProtectedBranch: " "
```
`ApprovalProjectRule` model is defined in `ee/app/models/approval_project_rule.rb`.
A record is created/updated/deleted when an approval rule is added/edited/removed
via project settings or the [approval rules API for projects](../../api/merge_request_approvals.md#approval-rules-for-projects).
The `ApprovalState` model gets these records when approval rules are not
overwritten.
The `protected_branches` attribute is set and used when a rule is scoped to
protected branches. See [Approvals for protected branches](../../user/project/merge_requests/approvals/rules.md#approvals-for-protected-branches)
for more information about the feature.
### `ApprovalMergeRequestRule`
```mermaid
%%{init: { "fontFamily": "GitLab Sans" }}%%
erDiagram
accTitle: ApprovalMergeRequestRule diagram
accDescr: Entity relationship diagram between MergeRequest and ApprovalMergeRequestRule
MergeRequest ||--o{ ApprovalMergeRequestRule: " "
ApprovalMergeRequestRule }o--o{ User: " "
ApprovalMergeRequestRule }o--o{ Group: " "
ApprovalMergeRequestRule ||--o| ApprovalProjectRule: " "
```
`ApprovalMergeRequestRule` model is defined in `ee/app/models/approval_merge_request_rule.rb`.
A record is created/updated/deleted when a rule is added/edited/removed via merge
request create/edit form or the [single merge request approvals API](../../api/merge_request_approvals.md#approval-rules-for-a-merge-request).
The `approval_project_rule` is set when it is based on an existing `ApprovalProjectRule`.
An `ApprovalMergeRequestRule` doesn't have `protected_branches` as it inherits
them from the `approval_project_rule` if not overridden.
### `ApprovalWrappedRule`
```mermaid
%%{init: { "fontFamily": "GitLab Sans" }}%%
erDiagram
accTitle: ApprovalWrappedRule diagram
accDescr: Entity relationship diagram between ApprovalState and ApprovalWrappedRule
ApprovalState ||--o{ ApprovalWrappedRule: " "
```
`ApprovalWrappedRule` is defined in `ee/app/models/approval_wrapped_rule.rb` and
is not an `ActiveRecord` model. It's used to wrap an `ApprovalProjectRule` or
`ApprovalMergeRequestRule` behind a common interface. It also has the following sub
types:
- `ApprovalWrappedAnyApprovalRule` - for wrapping an `any_approver` rule.
- `ApprovalWrappedCodeOwnerRule` - for wrapping a `code_owner` rule.
This class delegates most of the responsibilities to the approval rule it wraps
but it's also responsible for:
- Checking if the approval rule is approved.
- Knowing how many approvals were given or still required for the approval rule.
It gets this information from the approval rule and the `Approval` records from
the merge request.
### `Approval`
```mermaid
%%{init: { "fontFamily": "GitLab Sans" }}%%
erDiagram
accTitle: Approval diagram
accDescr: Entity relationship diagram between MergeRequest and Approval
MergeRequest ||--o{ Approval: " "
```
`Approval` model is defined in `ee/app/models/approval.rb`. This model is
responsible for storing information about an approval made on a merge request.
Whenever an approval is given/revoked, a record is created/deleted.
## Controllers and Services
The following controllers and services are used to make the approval
rules feature work.
### `API::ProjectApprovalSettings`
This private API is defined in `ee/lib/api/project_approval_settings.rb`.
This is used for the following:
- Listing the approval rules in project settings.
- Creating/updating/deleting rules in project settings.
- Listing the approval rules on create merge request form.
### `Projects::MergeRequests::CreationsController`
This controller is defined in `app/controllers/projects/merge_requests/creations_controller.rb`.
The `create` action of this controller is used when the create merge request form is
submitted. It accepts the `approval_rules_attributes` parameter for creating/updating/deleting
`ApprovalMergeRequestRule` records. It passes the parameter along when it executes
`MergeRequests::CreateService`.
### `Projects::MergeRequestsController`
This controller is defined in `app/controllers/projects/merge_requests_controller.rb`.
The `update` action of this controller is used when the edit merge request form is
submitted. It's like `Projects::MergeRequests::CreationsController` but it executes
`MergeRequests::UpdateService` instead.
### `API::MergeRequestApprovals`
This API is defined in `ee/lib/api/merge_request_approvals.rb`.
The [Approvals API endpoint](../../api/merge_request_approvals.md#list-all-approval-rules-for-a-merge-request)
is requested when a merge request page loads.
The `/projects/:id/merge_requests/:merge_request_iid/approval_settings` is a
private API endpoint used for the following:
- Listing the approval rules on edit merge request form.
- Listing the approval rules on the merge request page.
When approving/unapproving an MR via UI and API, the [Approve Merge Request](../../api/merge_request_approvals.md#approve-merge-request)
API endpoint or the [Unapprove Merge Request](../../api/merge_request_approvals.md#unapprove-a-merge-request)
API endpoint are requested. They execute `MergeRequests::ApprovalService` and
`MergeRequests::RemoveApprovalService` accordingly.
### `API::ProjectApprovalRules` and `API::MergeRequestApprovalRules`
These APIs are defined in `ee/lib/api/project_approval_rules.rb` and
`ee/lib/api/merge_request_approval_rules.rb`.
Used to list/create/update/delete project and merge request level rules via
[Merge request approvals API](../../api/merge_request_approvals.md).
Executes `ApprovalRules::CreateService`, `ApprovalRules::UpdateService`,
`ApprovalRules::ProjectRuleDestroyService`, and `ApprovalRules::MergeRequestRuleDestroyService`
accordingly.
### `ApprovalRules::ParamsFilteringService`
This service is defined in `ee/app/services/approval_rules/params_filtering_service.rb`.
It is called only when `MergeRequests::CreateService` and
`MergeRequests::UpdateService` are executed.
It is responsible for parsing the `approval_rules_attributes` parameter to:
- Remove it when a user can't update approval rules.
- Filter the user IDs based on whether they are members of the project.
- Filter the group IDs based on whether they are visible to the user.
- Identify the `any_approver` rule.
- Append hidden groups to it when specified.
- Append user defined inapplicable (rules that do not apply to the merge request's target
branch) approval rules.
### `ApprovalRules::CreateService`
This service is defined in `ee/app/services/approval_rules/create_service.rb`.
It is responsible for creating approval rules at either the merge request or project level.
It is called when:
- Creating approval rules for a project through the UI.
- Creating approval rules for a project through the [API::ProjectApprovalRules](../../api/merge_request_approvals.md#create-an-approval-rule-for-a-project) `/projects/:id/approval_rules` endpoint.
- Creating approval rules for a single merge request through [API::MergeRequestApprovalRules](../../api/merge_request_approvals.md#create-an-approval-rule-for-a-merge-request) `/projects/:id/merge_requests/:merge_request_iid/approval_rules` endpoint.
Merge request approval rules created through the UI do not use this service. See [Projects::MergeRequests::CreationsController](#projectsmergerequestscreationscontroller).
## Flow
These flowcharts should help explain the flow from the controllers down to the
models for different functionalities.
Some CRUD API endpoints are intentionally skipped because they are pretty
straightforward.
### Creating a merge request with approval rules via web UI
```mermaid
%%{init: { "fontFamily": "GitLab Sans" }}%%
graph LR
accTitle: Merge request creation in the UI
accDescr: Flowchart of the creation of a merge request in the web UI, when the merge request contains approval rules
Projects::MergeRequests::CreationsController --> MergeRequests::CreateService
MergeRequests::CreateService --> ApprovalRules::ParamsFilteringService
ApprovalRules::ParamsFilteringService --> MergeRequests::CreateService
MergeRequests::CreateService --> MergeRequest
MergeRequest --> db[(Database)]
MergeRequest --> User
MergeRequest --> Group
MergeRequest --> ApprovalProjectRule
User --> db[(Database)]
Group --> db[(Database)]
ApprovalProjectRule --> db[(Database)]
```
When updating, the same flow is followed but it starts at `Projects::MergeRequestsController`
and executes `MergeRequests::UpdateService` instead.
### Viewing the merge request approval rules on an MR page
```mermaid
%%{init: { "fontFamily": "GitLab Sans" }}%%
graph LR
accTitle: Viewing approval rules on a merge request
accDescr: Flowchart of how the frontend retrieves, then displays, approval rule information on a merge request page
API::MergeRequestApprovals --> MergeRequest
MergeRequest --> ApprovalState
ApprovalState --> id1{approval rules are overridden}
id1{approval rules are overridden} --> |No| ApprovalProjectRule & ApprovalMergeRequestRule
id1{approval rules are overridden} --> |Yes| ApprovalMergeRequestRule
ApprovalState --> ApprovalWrappedRule
ApprovalWrappedRule --> Approval
```
This flow gets initiated by the frontend component. The data returned is
used to display information on the MR widget.
### Approving a merge request
```mermaid
%%{init: { "fontFamily": "GitLab Sans" }}%%
graph LR
accTitle: Approval data flowchart
accDescr: Flowchart of how an approval call to the API reaches the database
API::MergeRequestApprovals --> MergeRequests::ApprovalService
MergeRequests::ApprovalService --> Approval
Approval --> db[(Database)]
```
When unapproving, the same flow is followed but the `MergeRequests::RemoveApprovalService`
is executed instead.
## TODO
1. Add information related to other rule types, such as `code_owner` and `report_approver`.
1. Add information about side effects of approving/unapproving a merge request.
---
stage: Create
group: Code Review
info: Any user with at least the Maintainer role can merge updates to this content. For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Keep-around ref usage guidelines
---
## What are keep-around refs
Keep-around refs protect specific commits from the Git garbage collection process. While Git GC
usually removes unreferenced commits (those not reachable through branches or tags), there are cases
where preserving these orphaned commits is essential - such as maintaining commit comments and CI build
history. By creating a keep-around ref, we ensure these commits remain in the repository even when
they're no longer part of the active branch history.
For more information about developing with Git references on Gitaly, see
[Git references used by Gitaly](../gitaly.md#git-references-used-by-gitaly).
## Downsides of keep-around refs
Keeping the orphaned commits using keep-around refs comes with its own set of challenges.
- Its growth is untenable (`gitlab-org/gitlab` has about 1.2 GB of refs)
- The actual usage of these keep-around refs is spread across so it's hard to know exactly where
these keep-around refs are expected to exist
- It's time consuming to check the needs of keep-around refs as we need to consider all possible places
they could be referenced
- We could be keeping more commits than necessary because the ancestors of already preserved commits
don't have to be kept around, but it's hard to verify that and clean up efficiently
{{< alert type="warning" >}}
Due to the downsides mentioned above, we should not be adding more places where we create keep-around
refs. Instead consider alternative options such as scoped refs
(like `refs/merge-requests/<merge-request-iid>/head`) or avoid creating these refs altogether if at all possible.
{{< /alert >}}
## Usage
Following is a typical way to create a keep-around ref for the given commit SHA.
```ruby
project.repository.keep_around(sha, source: self.class.name)
```
This command creates a ref called `refs/keep-around/<SHA>` where <SHA> is the commit SHA that is being
kept around. This prevents the commit SHA and all parent commits from being garbage collected as
we now have a ref that points to the commit directly. `source` is used as a way for us to attribute
the keep-around ref creations to specific classes.
## Where keep-around refs are currently created
Here are the places where we currently create keep-around refs.
- `MergeRequest#keep_around_commit(merge_commit_sha)` with the `after_save` callback
- `MergeRequestDiff#keep_around_commits(start_commit_sha, head_commit_sha)` for both target and
source projects with the `after_create` callback
- `Note#keep_around_commit(commit_id)` with the `after_save` callback
- `DraftNotes::PublishService#keep_around_commits(shas)` as it publishes draft notes in bulk and `shas`
are from both `original_potion` and `position`
- `DiffNote#Keep_around_commits(sha)` similar to above, but just for a single `DiffNote` with the `after_save`
callback if it was not skipped for bulk insert
- `Ci::Pipeline#keep_around_commits(sha, before_sha)` with the `after_create` callback
## Future work
Due to the uncontrolled growth of keep-around refs and lack of visibility,
[Keep Around Refs Working Group](https://handbook.gitlab.com/handbook/company/working-groups/keep-around-refs/)
is currently working to:
- Reduce the number of existing keep-around refs
- Improve visibility into how and where keep-around refs are used
- Develop alternative solutions with better scalability
We should avoid creating more keep-around refs whenever possible and look for alternative solutions.
`gitlab::keep_around::orphaned` Rake task has been created to help us to identify orphaned keep-around refs.
|
# Merge request concepts
{{< alert type="note" >}}
The documentation below is the single source of truth for the merge request terminology and functionality.
{{< /alert >}}
The merge request is made up of several different key components and ideas that encompass the overall merge request experience. These concepts sometimes have competing and confusing terminology or overlap with other concepts. This page covers the following concepts:
1. Merge widget
1. Report widgets
1. Merge checks
1. Approval rules
When developing new merge request widgets, read the
[merge request widget framework](../fe_guide/merge_request_widgets.md)
documentation. All new widgets should use this framework, and older widgets should
be ported to use it.
## Merge widget
The merge widget is the component of the merge request where the `merge` button exists:

This area of the merge request is where all of the options and commit messages are defined prior to merging. It also contains information about what is in the merge request, what issues are closed, and other information important to the merging process.
## Report widgets
Reports are widgets within the merge request that report information about changes within the merge request. These widgets provide information to better help the author understand the changes and further improvements to the proposed changes.
[Design Documentation](https://design.gitlab.com/patterns/merge-request-reports/)

## Merge checks
Merge checks are statuses that can either pass or fail and conditionally control whether the merge button is available within a merge request. The key distinguishing factor in a merge check is that users do not interact with merge checks inside of the merge request, but they are able to influence whether or not the check passes or fails. Results from the check are processed as true/false to determine whether a merge request can be merged.
Examples of merge checks include:
- Merge conflicts
- Pipeline success
- Threads resolution
- [External status checks](../../user/project/merge_requests/status_checks.md)
- Required approvals
A merge request can be merged only when all of the required merge checks are satisfied.
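Conceptually, the evaluation reduces every required check to a boolean. The following is a purely hypothetical sketch of that idea; the method names are illustrative and not GitLab's actual implementation:

```ruby
# Hypothetical illustration: each check yields true/false, and the merge
# button is enabled only when every required check passes.
required_checks = {
  no_conflicts: !merge_request.has_conflicts?,         # illustrative method
  pipeline_succeeded: merge_request.pipeline_success?, # illustrative method
  threads_resolved: merge_request.discussions_resolved?,
  approved: merge_request.approved?
}

mergeable = required_checks.values.all?
```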
## Approvals
Approval rules specify users that are required to or can optionally approve a merge request based on some kind of organizational policy. When approvals are required, they effectively become a required merge check. The key differentiator between merge checks and approval rules is that users do interact with approval rules, by deciding to approve the merge request.
Additionally, approval settings provide configuration options to define how those approval rules are applied in a merge request. They can set limitations, add requirements, or modify approvals.
Examples of approval rules and settings include:
- [Merge request approval rules](../../user/project/merge_requests/approvals/rules.md)
- [Code owner approvals](../../user/project/codeowners/_index.md)
- [Security approvals](../../user/application_security/policies/merge_request_approval_policies.md)
- [Prevent editing approval rules](../../user/project/merge_requests/approvals/settings.md#prevent-editing-approval-rules-in-merge-requests)
- [Remove all approvals when commits are added](../../user/project/merge_requests/approvals/settings.md#remove-all-approvals-when-commits-are-added-to-the-source-branch)
# Merge Request Performance Guidelines
Each newly introduced merge request **should be performant by default**.
To ensure a merge request does not negatively impact performance of GitLab
_every_ merge request **should** adhere to the guidelines outlined in this
document. There are no exceptions to this rule unless specifically discussed
with and agreed upon by backend maintainers and performance specialists.
It's also highly recommended that you read the following guides:
- [Performance Guidelines](../performance.md)
- [Avoiding downtime in migrations](../database/avoiding_downtime_in_migrations.md)
## Definition
The term `SHOULD` per the [RFC 2119](https://www.ietf.org/rfc/rfc2119.txt) means:
> This word, or the adjective "RECOMMENDED", mean that there
> may exist valid reasons in particular circumstances to ignore a
> particular item, but the full implications must be understood and
> carefully weighed before choosing a different course.
Ideally, each of these tradeoffs should be documented
in separate issues, labeled accordingly, and linked
to the original issue and epic.
## Impact Analysis
**Summary**: think about the impact your merge request may have on performance
and those maintaining a GitLab setup.
Any change submitted can have an impact not only on the application itself but
also those maintaining it and those keeping it up and running (for example, production
engineers). As a result you should think carefully about the impact of your
merge request on not only the application but also on the people keeping it up
and running.
Can the queries used potentially take down any critical services and result in
engineers being woken up in the night? Can a malicious user abuse the code to
take down a GitLab instance? Do my changes make loading a certain page
slower? Does execution time grow exponentially given enough load or data in the
database?
These are all questions one should ask themselves before submitting a merge
request. It may sometimes be difficult to assess the impact, in which case you
should ask a performance specialist to review your code. See the "Reviewing"
section below for more information.
## Performance Review
**Summary**: ask performance specialists to review your code if you're not sure
about the impact.
Sometimes it's hard to assess the impact of a merge request. In this case you
should ask one of the merge request reviewers to review your changes.
([A list of reviewers](https://about.gitlab.com/company/team/) is available.) A reviewer
in turn can request a performance specialist to review the changes.
## Think outside of the box
Everyone has their own perception of how the new feature will be used.
Always consider how users might actually use the feature instead. Users often
exercise our features in very unconventional ways,
for example by brute forcing or abusing the edge conditions that we have.
## Data set
The data set the merge request processes should be known
and documented. The feature should clearly document what the expected
data set is for this feature to process, and what problems it might cause.
Consider the following example, which puts
a strong emphasis on the data set being processed.
The problem is simple: you want to filter a list of files from
some Git repository. Your feature requests a list of all files
from the repository and performs a search across that set of files.
As an author, you should consider the following
in the context of that problem:
1. What repositories are planned to be supported?
1. How long do big repositories, like the Linux kernel, take to process?
1. Is there something that we can do differently to not process such a
big data set?
1. Should we build some fail-safe mechanism to contain
computational complexity? Usually it's better to degrade
the service for a single user instead of all users.
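As a purely hypothetical sketch of such a fail-safe mechanism (the constant, method, and attribute names below are illustrative, not an existing GitLab API), you could cap how much of the data set is processed for a single request:

```ruby
# Degrade gracefully for a single user instead of risking the whole service.
MAX_FILES_TO_SEARCH = 10_000

def matching_files(repository, pattern)
  files = repository.list_files.first(MAX_FILES_TO_SEARCH) # hypothetical method
  files.grep(pattern)
end
```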
## Query plans and database structure
The query plan can tell us if we need additional
indexes, or expensive filtering (such as using sequential scans).
Each query plan should be run against a data set of substantial size.
For example, if you look for issues with specific conditions,
you should consider validating a query against
a small number (a few hundred) and a big number (100_000) of issues.
See how the query behaves when the result set contains a few rows
and when it contains a few thousand.
This is needed as we have users using GitLab for very big projects and
in a very unconventional way. Even if it seems that it's unlikely
that such a big data set is used, it's still plausible that one
of our customers could encounter a problem with the feature.
Understanding ahead of time how it behaves at scale, even if we accept it,
is the desired outcome. We should always have a plan or understanding of what is needed
to optimize the feature for higher usage patterns.
Every database structure should be optimized and sometimes even over-described
in preparation for easy extension. The hardest part after some point is
data migration. Migrating millions of rows is always troublesome and
can have a negative impact on the application.
To better understand how to get help with the query plan reviews
read this section on [how to prepare the merge request for a database review](../database_review.md#how-to-prepare-the-merge-request-for-a-database-review).
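For a quick first look at a plan from the Rails console before requesting a full review, ActiveRecord's `explain` can be used (the filter below is only illustrative):

```ruby
# Prints the PostgreSQL query plan for the relation.
Issue.where(project_id: project.id, confidential: false).limit(100).explain
```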
## Query Counts
**Summary**: a merge request **should not** increase the total number of executed SQL
queries unless absolutely necessary.
The total number of queries executed by the code modified or added by a merge request
must not increase unless absolutely necessary. When building features it's
entirely possible you need some extra queries, but you should try to keep
this at a minimum.
As an example, say you introduce a feature that updates a number of database
rows with the same value. It may be very tempting (and easy) to write this using
the following pseudo code:
```ruby
objects_to_update.each do |object|
object.some_field = some_value
object.save
end
```
This means running one query for every object to update. This code can
easily overload a database given enough rows to update or many instances of this
code running in parallel. This particular problem is known as the
["N+1 query problem"](https://guides.rubyonrails.org/active_record_querying.html#eager-loading-associations). You can write a test with [QueryRecorder](../database/query_recorder.md) to detect this and prevent regressions.
In this particular case the workaround is fairly easy:
```ruby
objects_to_update.update_all(some_field: some_value)
```
This uses ActiveRecord's `update_all` method to update all rows in a single
query. This in turn makes it much harder for this code to overload a database.
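As mentioned above, a QueryRecorder-based test can guard against this kind of regression. A minimal sketch, assuming the `exceed_query_limit` matcher from GitLab's spec support and a hypothetical `bulk_update!` method under test:

```ruby
it 'does not run one UPDATE per row' do
  # Record the number of queries for the initial data set.
  control = ActiveRecord::QueryRecorder.new { bulk_update! }

  # Adding more rows should not add more queries.
  create_list(:issue, 5, project: project)

  expect { bulk_update! }.not_to exceed_query_limit(control)
end
```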
## Use read replicas when possible
In a DB cluster we have many read replicas and one primary. A classic use of scaling the DB is to have read-only actions be performed by the replicas. We use [load balancing](../database/load_balancing.md) to distribute this load. This allows for the replicas to grow as the pressure on the DB grows.
By default, queries use read-only replicas, but due to
[primary sticking](../../administration/postgresql/database_load_balancing.md#primary-sticking), GitLab uses the
primary for some time and reverts to secondaries after they have either caught up or after 30 seconds.
Doing this can lead to a considerable amount of unnecessary load on the primary.
To prevent switching to the primary [merge request 56849](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/56849) introduced the
`without_sticky_writes` block. Typically, this method can be applied to prevent primary stickiness
after a trivial or insignificant write which doesn't affect the following queries in the same session.
To learn when a usage timestamp update can lead the session to stick to the primary and how to
prevent it by using `without_sticky_writes`, see [merge request 57328](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/57328).
As a counterpart of the `without_sticky_writes` utility,
[merge request 59167](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/59167) introduced
`use_replicas_for_read_queries`. This method forces all read-only queries inside its block to read
replicas regardless of the current primary stickiness.
This utility is reserved for cases where queries can tolerate replication lag.
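A sketch of wrapping a lag-tolerant read in that block follows; the exact entry point for the load balancing session has changed between GitLab versions, so treat the constant below as illustrative:

```ruby
::Gitlab::Database::LoadBalancing::Session.current.use_replicas_for_read_queries do
  # Read-only work that can tolerate replication lag.
  Project.where(namespace_id: group.id).count
end
```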
Internally, our database load balancer classifies the queries based on their main statement (`select`, `update`, `delete`, and so on). When in doubt, it redirects the queries to the primary database. Hence, there are some common cases the load balancer sends the queries to the primary unnecessarily:
- Custom queries (via `exec_query`, `execute_statement`, `execute`, and so on)
- Read-only transactions
- In-flight connection configuration set
- Sidekiq background jobs
After the above queries are executed, GitLab
[sticks to the primary](../../administration/postgresql/database_load_balancing.md#primary-sticking).
When writing custom read-only SQL queries, use `select_all` instead of `execute` so that the query can use read-only replicas when possible.
Using `select_all` also prevents the query cache from being cleared.
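For example, a custom read-only query might look like the following sketch (the SQL is illustrative):

```ruby
# `select_all` lets load balancing route the query to a replica and does
# not clear the SQL query cache, unlike `execute`.
result = ApplicationRecord.connection.select_all(
  'SELECT id, name FROM projects ORDER BY id DESC LIMIT 10'
)
result.to_a # => array of row hashes
```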
To make transactions and other ambiguous queries prefer using the replicas,
[merge request 59086](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/59086) introduced
`fallback_to_replicas_for_ambiguous_queries`. This MR is also an example of how we redirected a
costly, time-consuming query to the replicas.
## Use CTEs wisely
Read about [complex queries on the relation object](../database/iterating_tables_in_batches.md#complex-queries-on-the-relation-object)
for considerations on how to use CTEs. We have found in some situations that CTEs can become
problematic in use (similar to the N+1 problem above). In particular, hierarchical recursive
CTE queries such as the CTE in [AuthorizedProjectsWorker](https://gitlab.com/gitlab-org/gitlab/-/issues/325688)
are very difficult to optimize and don't scale. We should avoid them when implementing new features
that require any kind of hierarchical structure.
CTEs have been effectively used as an optimization fence in many simpler cases,
such as this [example](https://gitlab.com/gitlab-org/gitlab-foss/-/issues/43242#note_61416277).
With the supported PostgreSQL versions, the optimization fence behavior must be enabled
with the `MATERIALIZED` keyword. By default, CTEs are inlined and then [optimized](https://paquier.xyz/postgresql-2/postgres-12-with-materialize/).
When building CTE statements, use the `Gitlab::SQL::CTE` class.
By default, this `Gitlab::SQL::CTE` class forces materialization through adding the `MATERIALIZED` keyword.
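A hedged sketch of building a materialized CTE with that class, assuming its `apply_to` helper (the relation and filters are illustrative):

```ruby
# The CTE is materialized by default, acting as an optimization fence.
cte = Gitlab::SQL::CTE.new(:recent_issues, Issue.where('created_at > ?', 1.week.ago))

# Rewrites the relation to select from the CTE.
cte.apply_to(Issue.all).where(confidential: false)
```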
{{< alert type="warning" >}}
Upgrading to GitLab 14.0 requires PostgreSQL 12 or later.
{{< /alert >}}
## Cached Queries
**Summary**: a merge request **should not** execute duplicated cached queries.
Rails provides an [SQL Query Cache](../cached_queries.md),
used to cache the results of database queries for the duration of the request.
See [why cached queries are considered bad](../cached_queries.md#why-cached-queries-are-considered-bad) and
[how to detect them](../cached_queries.md#how-to-detect-cached-queries).
The code introduced by a merge request should not execute multiple duplicated cached queries.
The total number of the queries (including cached ones) executed by the code modified or added by a merge request
should not increase unless absolutely necessary.
The number of executed queries (including cached queries) should not depend on
collection size.
You can write a test by passing the `skip_cached` variable to [QueryRecorder](../database/query_recorder.md) to detect this and prevent regressions.
As an example, say you have a CI pipeline. All pipeline builds belong to the same pipeline,
thus they also belong to the same project (`pipeline.project`):
```ruby
pipeline_project = pipeline.project
# Project Load (0.6ms) SELECT "projects".* FROM "projects" WHERE "projects"."id" = $1 LIMIT $2
build = pipeline.builds.first
build.project == pipeline_project
# CACHE Project Load (0.0ms) SELECT "projects".* FROM "projects" WHERE "projects"."id" = $1 LIMIT $2
# => true
```
When we call `build.project`, it doesn't hit the database; it uses the cached result, but it re-instantiates
the same pipeline project object. It turns out that associated objects do not point to the same in-memory object.
If we try to serialize each build:
```ruby
pipeline.builds.each do |build|
build.to_json(only: [:name], include: [project: { only: [:name]}])
end
```
It re-instantiates the project object for each build, instead of using the same in-memory object.
In this particular case the workaround is fairly easy:
```ruby
ActiveRecord::Associations::Preloader.new(records: pipeline, associations: [builds: :project]).call
pipeline.builds.each do |build|
build.to_json(only: [:name], include: [project: { only: [:name]}])
end
```
`ActiveRecord::Associations::Preloader` uses the same in-memory object for the same project.
This avoids the cached SQL query and also avoids re-instantiation of the project object for each build.
## Executing Queries in Loops
**Summary**: SQL queries **must not** be executed in a loop unless absolutely
necessary.
Executing SQL queries in a loop can result in many queries being executed
depending on the number of iterations in a loop. This may work fine for a
development environment with little data, but in a production environment this
can quickly spiral out of control.
There are some cases where this may be needed. If this is the case this should
be clearly mentioned in the merge request description.
## Batch process
**Summary**: Instead of iterating over items and calling external services (for example, PostgreSQL, Redis, Object Storage)
one item at a time, execute the calls in a **batch style** to reduce connection overheads.
For fetching rows from various tables in a batch style, see the [Eager Loading](#eager-loading) section.
### Example: Delete multiple files from Object Storage
When you delete multiple files from object storage, like GCS,
executing a single REST API call multiple times is quite an expensive
process. Ideally, this should be done in a batch style. For example, S3 provides a
[batch deletion API](https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteObjects.html),
so it'd be a good idea to consider such an approach.
The `FastDestroyAll` module might help in this situation. It's a
small framework for removing a bunch of database rows and their associated data
in a batch style.
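As a rough sketch of the batch-style approach with the AWS SDK (`aws-sdk-s3`), where the bucket name and the `object_key` attribute are illustrative:

```ruby
require 'aws-sdk-s3'

client = Aws::S3::Client.new
keys = artifacts.map { |artifact| { key: artifact.object_key } }

# The S3 batch deletion API accepts up to 1000 keys per request.
keys.each_slice(1000) do |batch|
  client.delete_objects(bucket: 'example-artifacts-bucket', delete: { objects: batch })
end
```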
## Timeout
**Summary**: You should set a reasonable timeout when the system invokes HTTP calls
to external services (such as Kubernetes), and it should be executed in Sidekiq, not
in Puma threads.
Often, GitLab needs to communicate with an external service such as Kubernetes
clusters. In this case, it's hard to estimate when the external service finishes
the requested process. For example, if it's a user-owned cluster that's inactive for some reason,
GitLab might wait for the response forever ([Example](https://gitlab.com/gitlab-org/gitlab/-/issues/31475)).
This could result in a Puma timeout and should be avoided at all costs.
You should set a reasonable timeout, gracefully handle exceptions and surface the
errors in UI or logging internally.
Using [`ReactiveCaching`](../utilities.md#reactivecaching) is one of the best solutions to fetch external data.
## Keep database transaction minimal
**Summary**: You should avoid accessing external services like Gitaly during database
transactions; otherwise it leads to severe contention problems,
as an open transaction basically blocks the release of a PostgreSQL backend connection.
To keep transactions as minimal as possible, consider using the `AfterCommitQueue`
module or the `after_commit` ActiveRecord hook.
Here is [an example](https://gitlab.com/gitlab-org/gitlab/-/issues/36154#note_247228859)
where one request to a Gitaly instance during a transaction triggered a ~"priority::1" issue.
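A minimal sketch using the `run_after_commit` helper provided by `AfterCommitQueue`, assuming the model includes the module (the service shape and worker name are hypothetical):

```ruby
def execute
  merge_request.run_after_commit do
    # Runs only after the surrounding transaction commits, so the external
    # call does not hold a PostgreSQL connection open. `id` is evaluated on
    # the record itself.
    ExternalCleanupWorker.perform_async(id) # hypothetical worker
  end

  merge_request.save!
end
```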
## Eager Loading
**Summary**: always eager load associations when retrieving more than one row.
When retrieving multiple database records for which you need to use any
associations you **must** eager load these associations. For example, if you're
retrieving a list of blog posts and you want to display their authors you
**must** eager load the author associations.
In other words, instead of this:
```ruby
Post.all.each do |post|
puts post.author.name
end
```
You should use this:
```ruby
Post.all.includes(:author).each do |post|
puts post.author.name
end
```
Also consider using [QueryRecorder tests](../database/query_recorder.md) to prevent a regression when eager loading.
## Memory Usage
**Summary**: merge requests **must not** increase memory usage unless absolutely
necessary.
A merge request must not increase the memory usage of GitLab by more than the
absolute bare minimum required by the code. This means that if you have to parse
some large document (for example, an HTML document) it's best to parse it as a stream
whenever possible, instead of loading the entire input into memory. Sometimes
this isn't possible, in that case this should be stated explicitly in the merge
request.
## Lazy Rendering of UI Elements
**Summary**: only render UI elements when they are actually needed.
Certain UI elements may not always be needed. For example, when hovering over a
diff line there's a small icon displayed that can be used to create a new
comment. Instead of always rendering these kind of elements they should only be
rendered when actually needed. This ensures we don't spend time generating
Haml/HTML when it's not used.
## Use of Caching
**Summary**: cache data in memory or in Redis when it's needed multiple times in
a transaction or has to be kept around for a certain time period.
Sometimes certain bits of data have to be re-used in different places during a
transaction. In these cases this data should be cached in memory to remove the
need for running complex operations to fetch the data. You should use Redis if
data should be cached for a certain time period instead of the duration of the
transaction.
For example, say you process multiple snippets of text containing username
mentions (for example, `Hello @alice` and `How are you doing @alice?`). By caching the
user objects for every username we can remove the need for running the same
query for every mention of `@alice`.
Caching data per transaction can be done using
[RequestStore](https://github.com/steveklabnik/request_store) (use
`Gitlab::SafeRequestStore` to avoid having to remember to check
`RequestStore.active?`). Caching data in Redis can be done using
[Rails' caching system](https://guides.rubyonrails.org/caching_with_rails.html).
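For example (the cache keys are illustrative):

```ruby
# Cached only for the duration of the current request:
user = Gitlab::SafeRequestStore.fetch("user:#{username}") do
  User.find_by_username(username)
end

# Cached in Redis for a fixed period:
count = Rails.cache.fetch(['project', project.id, 'open_issues_count'], expires_in: 10.minutes) do
  project.issues.opened.count
end
```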
## Pagination
Each feature that renders a list of items as a table needs to include pagination.
The main styles of pagination are:
1. Offset-based pagination: user goes to a specific page, like 1. User sees the next page number,
and the total number of pages. This style is well supported by all components of GitLab.
1. Offset-based pagination, but without the count: user goes to a specific page, like 1.
User sees only the next page number, but does not see the total amount of pages.
1. Next page using keyset-based pagination: user can only go to next page, as we don't know how many pages
are available.
1. Infinite scrolling pagination: user scrolls the page and the next items are loaded asynchronously. This is ideal,
   as it has exactly the same benefits as the previous one.
The ultimately scalable solution for pagination is to use keyset-based pagination.
However, we don't have support for that at GitLab at the moment. You
can follow the progress by looking at [API: Keyset Pagination](https://gitlab.com/groups/gitlab-org/-/epics/2039).
Take into consideration the following when choosing a pagination strategy:
1. It's very inefficient to calculate the number of objects that pass the filtering;
   this operation can take seconds, and can time out.
1. It's very inefficient to get entries for pages at higher ordinals, like 1000.
   The database has to sort and iterate over all previous items, and this operation
   can put substantial load on the database.
You can find useful tips related to pagination in the [pagination guidelines](../database/pagination_guidelines.md).
## Badge counters
Counters should always be truncated, meaning that we don't want to present
the exact number above some threshold. The reason is that calculating the
exact number of items requires filtering each of them just to know how many match.
From a UX perspective it's often acceptable to see that you have 1000+ pipelines,
instead of the exact 40000+ count, if showing the exact number would make the page load 2 seconds longer.
An example of this pattern is the list of pipelines and jobs. We truncate numbers to `1000+`,
but we show an accurate number of running pipelines, which is the most interesting information.
There's a helper method that can be used for this purpose, `NumbersHelper.limited_counter_with_delimiter`,
which accepts an upper limit for counting rows.
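A sketch of how this could be used in a view, assuming the helper accepts the relation plus a `limit` option:

```ruby
# Renders "1,000+" once the real count exceeds the limit.
limited_counter_with_delimiter(@project.all_pipelines, limit: 1_000)
```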
In some cases it's desired that badge counters are loaded asynchronously.
This can speed up the initial page load and give a better user experience overall.
## Usage of feature flags
Each feature that has performance-critical elements or a known performance deficiency
needs to come with a feature flag to disable it.
The feature flag makes our team happier, because they can monitor the system and
react quickly without our users noticing the problem.
Performance deficiencies should be addressed right away after we merge the initial
changes.
Read more about when and how feature flags should be used in
[Feature flags in GitLab development](https://handbook.gitlab.com/handbook/product-development-flow/feature-flag-lifecycle/#how-to-use-feature-flags).
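A sketch of guarding a performance-sensitive path behind a flag (the flag and method names are illustrative):

```ruby
if Feature.enabled?(:expensive_widget_rendering, project)
  render_expensive_widget
else
  render_widget_placeholder
end
```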
## Storage
We can consider the following types of storages:
- **Local temporary storage** (very short-term storage). This type of storage is system-provided storage, like a `/tmp` folder.
  This is the type of storage that you should ideally use for all your temporary tasks.
  The fact that each node has its own temporary storage makes scaling significantly easier.
  This storage is also very often SSD-based, and thus significantly faster.
  The local storage can easily be configured for the application with
  the `TMPDIR` environment variable.
- **Shared temporary storage** (short-term storage). This type of storage is network-based temporary storage,
  usually run with a common NFS server. As of Feb 2020, we still use this type of storage
  for most of our implementations. Even though this allows the above limit to be significantly larger,
  it does not really mean that you can use more. The shared temporary storage is shared by
  all nodes. Thus, a job that uses a significant amount of that space or performs a lot
  of operations creates contention on the execution of all other jobs and requests
  across the whole application, which can easily impact the stability of the whole GitLab instance.
  Be respectful of that.
- **Shared persistent storage** (long-term storage) This type of storage uses
shared network-based storage (for example, NFS). This solution is mostly used by customers running small
installations consisting of a few nodes. The files on shared storage are easily accessible,
  but any job that is uploading or downloading data can create serious contention for all other jobs.
  This is also the approach used by default by Omnibus.
- **Object-based persistent storage** (long-term storage). This type of storage uses external
  services like [AWS S3](https://en.wikipedia.org/wiki/Amazon_S3). The Object Storage
  can be treated as infinitely scalable and redundant. Accessing this storage usually requires
  downloading the file to manipulate it. The Object Storage can be considered the ultimate
  solution, as by definition it can be assumed to handle unlimited concurrent uploads
  and downloads of files. This is also the solution required to ensure that the application can
  run with ease in containerized deployments (Kubernetes).
### Temporary storage
The storage on production nodes is really sparse. The application should be built
in a way that accommodates running under very limited temporary storage.
You can expect that the system on which your code runs has a total of `1G-10G`
of temporary storage. However, this storage is really shared across all
jobs being run. If your job requires more than `100MB` of that space,
you should reconsider the approach you have taken.
Whatever your needs are, you should clearly document if you need to process files.
If you require more than `100MB`, consider asking for help from a maintainer
to work with you to possibly discover a better solution.
#### Local temporary storage
The usage of local storage is the desired solution,
especially since we are working on deploying applications to Kubernetes clusters.
When would you want to use `Dir.mktmpdir`? For example, when you want
to extract or create archives, perform extensive manipulation of existing data, and so on.
```ruby
Dir.mktmpdir('designs') do |path|
# do manipulation on path
# the path will be removed once
# we go out of the block
end
```
#### Shared temporary storage
The usage of shared temporary storage is required if your intent
is to persist the file to disk-based storage, and not Object Storage.
[Workhorse direct upload](../uploads/_index.md#direct-upload), when accepting a file,
can write it to shared storage, and later GitLab Rails can perform a move operation.
A move operation on the same destination is instantaneous.
Instead of performing a `copy` operation, the system just re-attaches the file in a new place.
Since this introduces extra complexity into the application, you should only try
to re-use well-established patterns (for example, the `ObjectStorage` concern) instead of re-implementing them.
The usage of shared temporary storage is otherwise deprecated for all other usages.
### Persistent storage
#### Object Storage
It is required that all features holding persistent files support saving data
to Object Storage. Having persistent storage in the form of a shared volume across nodes
is not scalable, as it creates contention on data access across all nodes.
GitLab offers the [ObjectStorage concern](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/uploaders/object_storage.rb)
that implements a seamless support for Shared and Object Storage-based persistent storage.
#### Data access
Each feature that accepts data uploads or allows downloading them needs to use
[Workhorse direct upload](../uploads/_index.md#direct-upload). This means that uploads need to be
saved directly to Object Storage by Workhorse, and all downloads need to be served
by Workhorse.
Performing uploads/downloads via Puma is an expensive operation,
as it blocks the whole processing slot (thread) for the duration of the upload.
Performing uploads/downloads via Puma also has a problem where the operation
can time out, which is especially problematic for slow clients. If clients take a long time
to upload/download, the processing slot might be killed due to a request processing
timeout (usually between 30 and 60 seconds).
For the above reasons it is required that [Workhorse direct upload](../uploads/_index.md#direct-upload) is implemented
for all file uploads and downloads.
|
---
stage: none
group: unassigned
info: Any user with at least the Maintainer role can merge updates to this content.
For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Merge Request Performance Guidelines
breadcrumbs:
- doc
- development
- merge_request_concepts
---
Each new introduced merge request **should be performant by default**.
To ensure a merge request does not negatively impact performance of GitLab
_every_ merge request **should** adhere to the guidelines outlined in this
document. There are no exceptions to this rule unless specifically discussed
with and agreed upon by backend maintainers and performance specialists.
It's also highly recommended that you read the following guides:
- [Performance Guidelines](../performance.md)
- [Avoiding downtime in migrations](../database/avoiding_downtime_in_migrations.md)
## Definition
The term `SHOULD` per the [RFC 2119](https://www.ietf.org/rfc/rfc2119.txt) means:
> This word, or the adjective "RECOMMENDED", mean that there
> may exist valid reasons in particular circumstances to ignore a
> particular item, but the full implications must be understood and
> carefully weighed before choosing a different course.
Ideally, each of these tradeoffs should be documented
in the separate issues, labeled accordingly and linked
to original issue and epic.
## Impact Analysis
**Summary**: think about the impact your merge request may have on performance
and those maintaining a GitLab setup.
Any change submitted can have an impact not only on the application itself but
also those maintaining it and those keeping it up and running (for example, production
engineers). As a result you should think carefully about the impact of your
merge request on not only the application but also on the people keeping it up
and running.
Can the queries used potentially take down any critical services and result in
engineers being woken up in the night? Can a malicious user abuse the code to
take down a GitLab instance? Do my changes make loading a certain page
slower? Does execution time grow exponentially given enough load or data in the
database?
These are all questions one should ask themselves before submitting a merge
request. It may sometimes be difficult to assess the impact, in which case you
should ask a performance specialist to review your code. See the "Reviewing"
section below for more information.
## Performance Review
**Summary**: ask performance specialists to review your code if you're not sure
about the impact.
Sometimes it's hard to assess the impact of a merge request. In this case you
should ask one of the merge request reviewers to review your changes.
([A list of reviewers](https://about.gitlab.com/company/team/) is available.) A reviewer
in turn can request a performance specialist to review the changes.
## Think outside of the box
Everyone has their own perception of how to use the new feature.
Always consider how users might be using the feature instead. Usually,
users test our features in a very unconventional way,
like by brute forcing or abusing edge conditions that we have.
## Data set
The data set the merge request processes should be known
and documented. The feature should clearly document what the expected
data set is for this feature to process, and what problems it might cause.
If you would think about the following example that puts
a strong emphasis of data set being processed.
The problem is simple: you want to filter a list of files from
some Git repository. Your feature requests a list of all files
from the repository and perform search for the set of files.
As an author you should in context of that problem consider
the following:
1. What repositories are planned to be supported?
1. How long it do big repositories like Linux kernel take?
1. Is there something that we can do differently to not process such a
big data set?
1. Should we build some fail-safe mechanism to contain
computational complexity? Usually it's better to degrade
the service for a single user instead of all users.
## Query plans and database structure
The query plan can tell us if we need additional
indexes, or expensive filtering (such as using sequential scans).
Each query plan should be run against substantial size of data set.
For example, if you look for issues with specific conditions,
you should consider validating a query against
a small number (a few hundred) and a big number (100_000) of issues.
See how the query behaves if the result is a few
and a few thousand.
This is needed as we have users using GitLab for very big projects and
in a very unconventional way. Even if it seems that it's unlikely
that such a big data set is used, it's still plausible that one
of our customers could encounter a problem with the feature.
Understanding ahead of time how it behaves at scale, even if we accept it,
is the desired outcome. We should always have a plan or understanding of what is needed
to optimize the feature for higher usage patterns.
Every database structure should be optimized and sometimes even over-described
in preparation for easy extension. The hardest part after some point is
data migration. Migrating millions of rows is always troublesome and
can have a negative impact on the application.
To better understand how to get help with the query plan reviews
read this section on [how to prepare the merge request for a database review](../database_review.md#how-to-prepare-the-merge-request-for-a-database-review).
## Query Counts
**Summary**: a merge request **should not** increase the total number of executed SQL
queries unless absolutely necessary.
The total number of queries executed by the code modified or added by a merge request
must not increase unless absolutely necessary. When building features it's
entirely possible you need some extra queries, but you should try to keep
this at a minimum.
As an example, say you introduce a feature that updates a number of database
rows with the same value. It may be very tempting (and easy) to write this using
the following pseudo code:
```ruby
objects_to_update.each do |object|
object.some_field = some_value
object.save
end
```
This means running one query for every object to update. This code can
easily overload a database given enough rows to update or many instances of this
code running in parallel. This particular problem is known as the
["N+1 query problem"](https://guides.rubyonrails.org/active_record_querying.html#eager-loading-associations). You can write a test with [QueryRecorder](../database/query_recorder.md) to detect this and prevent regressions.
In this particular case the workaround is fairly easy:
```ruby
objects_to_update.update_all(some_field: some_value)
```
This uses ActiveRecord's `update_all` method to update all rows in a single
query. This in turn makes it much harder for this code to overload a database.
## Use read replicas when possible
In a DB cluster we have many read replicas and one primary. A classic use of scaling the DB is to have read-only actions be performed by the replicas. We use [load balancing](../database/load_balancing.md) to distribute this load. This allows for the replicas to grow as the pressure on the DB grows.
By default, queries use read-only replicas, but due to
[primary sticking](../../administration/postgresql/database_load_balancing.md#primary-sticking), GitLab uses the
primary for some time and reverts to secondaries after they have either caught up or after 30 seconds.
Doing this can lead to a considerable amount of unnecessary load on the primary.
To prevent switching to the primary [merge request 56849](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/56849) introduced the
`without_sticky_writes` block. Typically, this method can be applied to prevent primary stickiness
after a trivial or insignificant write which doesn't affect the following queries in the same session.
To learn when a usage timestamp update can lead the session to stick to the primary and how to
prevent it by using `without_sticky_writes`, see [merge request 57328](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/57328)
As a counterpart of the `without_sticky_writes` utility,
[merge request 59167](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/59167) introduced
`use_replicas_for_read_queries`. This method forces all read-only queries inside its block to read
replicas regardless of the current primary stickiness.
This utility is reserved for cases where queries can tolerate replication lag.
Internally, our database load balancer classifies the queries based on their main statement (`select`, `update`, `delete`, and so on). When in doubt, it redirects the queries to the primary database. Hence, there are some common cases the load balancer sends the queries to the primary unnecessarily:
- Custom queries (via `exec_query`, `execute_statement`, `execute`, and so on)
- Read-only transactions
- In-flight connection configuration set
- Sidekiq background jobs
After the above queries are executed, GitLab
[sticks to the primary](../../administration/postgresql/database_load_balancing.md#primary-sticking).
When writing custom read-only SQL queries, use `select_all` instead of `execute` so that the query can use read-only replicas when possible.
Using `select_all` also prevents the query cache from being cleared.
To make transactions and other ambiguous queries prefer using the replicas,
[merge request 59086](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/59086) introduced
`fallback_to_replicas_for_ambiguous_queries`. This MR is also an example of how we redirected a
costly, time-consuming query to the replicas.
## Use CTEs wisely
Read about [complex queries on the relation object](../database/iterating_tables_in_batches.md#complex-queries-on-the-relation-object)
for considerations on how to use CTEs. We have found in some situations that CTEs can become
problematic in use (similar to the N+1 problem above). In particular, hierarchical recursive
CTE queries such as the CTE in [AuthorizedProjectsWorker](https://gitlab.com/gitlab-org/gitlab/-/issues/325688)
are very difficult to optimize and don't scale. We should avoid them when implementing new features
that require any kind of hierarchical structure.
CTEs have been effectively used as an optimization fence in many simpler cases,
such as this [example](https://gitlab.com/gitlab-org/gitlab-foss/-/issues/43242#note_61416277).
With the supported PostgreSQL versions, the optimization fence behavior must be enabled
with the `MATERIALIZED` keyword. By default CTEs are inlined then [optimized by default](https://paquier.xyz/postgresql-2/postgres-12-with-materialize/).
When building CTE statements, use the `Gitlab::SQL::CTE` class.
By default, this `Gitlab::SQL::CTE` class forces materialization through adding the `MATERIALIZED` keyword.
{{< alert type="warning" >}}
Upgrading to GitLab 14.0 requires PostgreSQL 12 or later.
{{< /alert >}}
## Cached Queries
**Summary**: a merge request **should not** execute duplicated cached queries.
Rails provides an [SQL Query Cache](../cached_queries.md),
used to cache the results of database queries for the duration of the request.
See [why cached queries are considered bad](../cached_queries.md#why-cached-queries-are-considered-bad) and
[how to detect them](../cached_queries.md#how-to-detect-cached-queries).
The code introduced by a merge request, should not execute multiple duplicated cached queries.
The total number of the queries (including cached ones) executed by the code modified or added by a merge request
should not increase unless absolutely necessary.
The number of executed queries (including cached queries) should not depend on
collection size.
You can write a test by passing the `skip_cached` variable to [QueryRecorder](../database/query_recorder.md) to detect this and prevent regressions.
As an example, say you have a CI pipeline. All pipeline builds belong to the same pipeline,
thus they also belong to the same project (`pipeline.project`):
```ruby
pipeline_project = pipeline.project
# Project Load (0.6ms) SELECT "projects".* FROM "projects" WHERE "projects"."id" = $1 LIMIT $2
build = pipeline.builds.first
build.project == pipeline_project
# CACHE Project Load (0.0ms) SELECT "projects".* FROM "projects" WHERE "projects"."id" = $1 LIMIT $2
# => true
```
When we call `build.project`, it doesn't hit the database, it uses the cached result, but it re-instantiates
the same pipeline project object. It turns out that associated objects do not point to the same in-memory object.
If we try to serialize each build:
```ruby
pipeline.builds.each do |build|
build.to_json(only: [:name], include: [project: { only: [:name]}])
end
```
It re-instantiates project object for each build, instead of using the same in-memory object.
In this particular case the workaround is fairly easy:
```ruby
ActiveRecord::Associations::Preloader.new(records: pipeline, associations: [builds: :project]).call
pipeline.builds.each do |build|
build.to_json(only: [:name], include: [project: { only: [:name]}])
end
```
`ActiveRecord::Associations::Preloader` uses the same in-memory object for the same project.
This avoids the cached SQL query and also avoids re-instantiation of the project object for each build.
## Executing Queries in Loops
**Summary**: SQL queries **must not** be executed in a loop unless absolutely
necessary.
Executing SQL queries in a loop can result in many queries being executed
depending on the number of iterations in a loop. This may work fine for a
development environment with little data, but in a production environment this
can quickly spiral out of control.
There are some cases where this may be needed. If this is the case this should
be clearly mentioned in the merge request description.
## Batch process
**Summary**: Iterating a single process to external services (for example, PostgreSQL, Redis, Object Storage)
should be executed in a **batch-style** to reduce connection overheads.
For fetching rows from various tables in a batch-style, see [Eager Loading](#eager-loading) section.
### Example: Delete multiple files from Object Storage
When you delete multiple files from object storage, like GCS,
executing a single REST API call multiple times is a quite expensive
process. Ideally, this should be done in a batch-style, for example, S3 provides
[batch deletion API](https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteObjects.html),
so it'd be a good idea to consider such an approach.
The `FastDestroyAll` module might help this situation. It's a
small framework when you remove a bunch of database rows and its associated data
in a batch style.
## Timeout
**Summary**: You should set a reasonable timeout when the system invokes HTTP calls
to external services (such as Kubernetes), and it should be executed in Sidekiq, not
in Puma threads.
Often, GitLab needs to communicate with an external service such as Kubernetes
clusters. In this case, it's hard to estimate when the external service finishes
the requested process, for example, if it's a user-owned cluster that's inactive for some reason,
GitLab might wait for the response forever ([Example](https://gitlab.com/gitlab-org/gitlab/-/issues/31475)).
This could result in Puma timeout and should be avoided at all cost.
You should set a reasonable timeout, gracefully handle exceptions and surface the
errors in UI or logging internally.
Using [`ReactiveCaching`](../utilities.md#reactivecaching) is one of the best solutions to fetch external data.
## Keep database transaction minimal
**Summary**: You should avoid accessing to external services like Gitaly during database
transactions, otherwise it leads to severe contention problems
as an open transaction basically blocks the release of a PostgreSQL backend connection.
For keeping transaction as minimal as possible, consider using `AfterCommitQueue`
module or `after_commit` AR hook.
Here is [an example](https://gitlab.com/gitlab-org/gitlab/-/issues/36154#note_247228859)
that one request to Gitaly instance during transaction triggered a ~"priority::1" issue.
## Eager Loading
**Summary**: always eager load associations when retrieving more than one row.
When retrieving multiple database records for which you need to use any
associations you **must** eager load these associations. For example, if you're
retrieving a list of blog posts and you want to display their authors you
**must** eager load the author associations.
In other words, instead of this:
```ruby
Post.all.each do |post|
puts post.author.name
end
```
You should use this:
```ruby
Post.all.includes(:author).each do |post|
puts post.author.name
end
```
Also consider using [QueryRecoder tests](../database/query_recorder.md) to prevent a regression when eager loading.
## Memory Usage
**Summary**: merge requests **must not** increase memory usage unless absolutely
necessary.
A merge request must not increase the memory usage of GitLab by more than the
absolute bare minimum required by the code. This means that if you have to parse
some large document (for example, an HTML document) it's best to parse it as a stream
whenever possible, instead of loading the entire input into memory. Sometimes
this isn't possible, in that case this should be stated explicitly in the merge
request.
## Lazy Rendering of UI Elements
**Summary**: only render UI elements when they are actually needed.
Certain UI elements may not always be needed. For example, when hovering over a
diff line there's a small icon displayed that can be used to create a new
comment. Instead of always rendering these kind of elements they should only be
rendered when actually needed. This ensures we don't spend time generating
Haml/HTML when it's not used.
## Use of Caching
**Summary**: cache data in memory or in Redis when it's needed multiple times in
a transaction or has to be kept around for a certain time period.
Sometimes certain bits of data have to be re-used in different places during a
transaction. In these cases this data should be cached in memory to remove the
need for running complex operations to fetch the data. You should use Redis if
data should be cached for a certain time period instead of the duration of the
transaction.
For example, say you process multiple snippets of text containing username
mentions (for example, `Hello @alice` and `How are you doing @alice?`). By caching the
user objects for every username we can remove the need for running the same
query for every mention of `@alice`.
Caching data per transaction can be done using
[RequestStore](https://github.com/steveklabnik/request_store) (use
`Gitlab::SafeRequestStore` to avoid having to remember to check
`RequestStore.active?`). Caching data in Redis can be done using
[Rails' caching system](https://guides.rubyonrails.org/caching_with_rails.html).
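A minimal sketch of both layers (the cache keys and the expensive query are illustrative): `Gitlab::SafeRequestStore` caches for the duration of the request, while `Rails.cache` keeps the value for a configured time period:
```ruby
# Cached once per request: repeated mentions of the same username reuse the object.
def user_for_mention(username)
  Gitlab::SafeRequestStore.fetch("user-mention:#{username}") do
    User.find_by(username: username)
  end
end

# Cached across requests for five minutes in the Rails cache store (Redis).
def cached_expensive_count(project)
  Rails.cache.fetch("expensive-count:#{project.id}", expires_in: 5.minutes) do
    expensive_count_query(project) # placeholder for the real query
  end
end
```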
## Pagination
Each feature that renders a list of items as a table needs to include pagination.
The main styles of pagination are:
1. Offset-based pagination: the user goes to a specific page, like page 1. The user sees the next page number
   and the total number of pages. This style is well supported by all components of GitLab.
1. Offset-based pagination, but without the count: the user goes to a specific page, like page 1.
   The user sees only the next page number, but does not see the total number of pages.
1. Next page using keyset-based pagination: the user can only go to the next page, as we don't know how many pages
   are available.
1. Infinite scrolling pagination: the user scrolls the page and the next items are loaded asynchronously. This is ideal,
   as it has the exact same benefits as the previous one.
The ultimately scalable solution for pagination is to use keyset-based pagination.
However, we don't have support for that at GitLab at the moment. You
can follow the progress by looking at [API: Keyset Pagination](https://gitlab.com/groups/gitlab-org/-/epics/2039).
Take into consideration the following when choosing a pagination strategy:
1. It's very inefficient to calculate the number of objects that pass the filtering.
   This operation can take seconds and can time out.
1. It's very inefficient to get entries for pages at higher ordinals, like page 1000.
   The database has to sort and iterate through all previous items, which
   can put substantial load on the database.
You can find useful tips related to pagination in the [pagination guidelines](../database/pagination_guidelines.md).
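For illustration, a simplified keyset-style query (not GitLab's actual pagination library): fetch the next page using the last seen `id` instead of an `OFFSET`, so the database doesn't have to sort and skip all earlier rows:
```ruby
# Returns the next page of records ordered by id, starting after `after_id`.
def next_page(relation, after_id: nil, per_page: 20)
  scope = relation.order(:id).limit(per_page)
  after_id ? scope.where('id > ?', after_id) : scope
end

# first_page  = next_page(Project.all)
# second_page = next_page(Project.all, after_id: first_page.last.id)
```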
## Badge counters
Counters should always be truncated. This means that we don't want to present
the exact number over some threshold. The reason is that calculating the exact
number of items requires filtering each of them just to know how many match.
From a UX perspective, it's often acceptable to see that you have over 1000+ pipelines,
instead of knowing that you have exactly 40000 pipelines, when the alternative is a page that loads 2 seconds longer.
An example of this pattern is the list of pipelines and jobs. We truncate numbers to `1000+`,
but we show an accurate number of running pipelines, which is the most interesting information.
There's a helper method that can be used for that purpose - `NumbersHelper.limited_counter_with_delimiter` -
that accepts an upper limit for counting rows.
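The underlying idea can be sketched like this (a simplified stand-in for `limited_counter_with_delimiter`, not its actual implementation): count at most `limit + 1` rows so the query stays cheap no matter how large the table is:
```ruby
# Returns "1000+" once the relation exceeds the limit, otherwise the exact count.
def truncated_count(relation, limit: 1000)
  count = relation.limit(limit + 1).count
  count > limit ? "#{limit}+" : count.to_s
end
```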
In some cases it's desired that badge counters are loaded asynchronously.
This can speed up the initial page load and give a better user experience overall.
## Usage of feature flags
Each feature that has performance-critical elements or a known performance deficiency
needs to come with a feature flag to disable it.
The feature flag makes our team happier, because they can monitor the system and
quickly react without our users noticing the problem.
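A minimal sketch of guarding a performance-sensitive code path behind a flag (the flag name and methods are illustrative):
```ruby
if Feature.enabled?(:expensive_diff_stats, project)
  render_expensive_diff_stats
else
  render_simple_diff_stats
end
```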
Performance deficiencies should be addressed right away after the initial
changes are merged.
Read more about when and how feature flags should be used in
[Feature flags in GitLab development](https://handbook.gitlab.com/handbook/product-development-flow/feature-flag-lifecycle/#how-to-use-feature-flags).
## Storage
We can consider the following types of storages:
- **Local temporary storage** (very short-term storage) This type of storage is system-provided storage, like a `/tmp` folder.
  This is the type of storage that you should ideally use for all your temporary tasks.
  The fact that each node has its own temporary storage makes scaling significantly easier.
  This storage is also very often SSD-based, and thus significantly faster.
  The local storage can easily be configured for the application with
  the `TMPDIR` environment variable.
- **Shared temporary storage** (short-term storage) This type of storage is network-based temporary storage,
  usually run with a common NFS server. As of Feb 2020, we still use this type of storage
  for most of our implementations. Even though this allows the above limit to be significantly larger,
  it does not really mean that you can use more. The shared temporary storage is shared by
  all nodes. Thus, a job that uses a significant amount of that space or performs a lot
  of operations creates contention on the execution of all other jobs and requests
  across the whole application. This can easily impact the stability of the whole GitLab instance.
  Be respectful of that.
- **Shared persistent storage** (long-term storage) This type of storage uses
  shared network-based storage (for example, NFS). This solution is mostly used by customers running small
  installations consisting of a few nodes. The files on shared storage are easily accessible,
  but any job that is uploading or downloading data can create serious contention for all other jobs.
  This is also the default approach used by Omnibus.
- **Object-based persistent storage** (long-term storage) This type of storage uses external
  services like [AWS S3](https://en.wikipedia.org/wiki/Amazon_S3). Object Storage
  can be treated as infinitely scalable and redundant. Accessing this storage usually requires
  downloading the file to manipulate it. Object Storage can be considered the ultimate
  solution, as by definition it can be assumed to handle unlimited concurrent uploads
  and downloads of files. This is also the solution required to ensure that the application can
  run with ease in containerized deployments (Kubernetes).
### Temporary storage
The storage on production nodes is really scarce. The application should be built
in a way that accommodates running under very limited temporary storage.
You can expect the system on which your code runs to have a total of `1G-10G`
of temporary storage. However, this storage is really shared across all
jobs being run. If your job requires more than `100MB` of that space,
you should reconsider the approach you have taken.
Whatever your needs are, you should clearly document if you need to process files.
If you require more than `100MB`, consider asking for help from a maintainer
to work with you to possibly discover a better solution.
#### Local temporary storage
Using local storage is the desired solution,
especially since we are working on deploying the application to Kubernetes clusters.
When would you want to use `Dir.mktmpdir`? For example, when you want
to extract or create archives, perform extensive manipulation of existing data, and so on.
```ruby
Dir.mktmpdir('designs') do |path|
# do manipulation on path
# the path will be removed once
# we go out of the block
end
```
#### Shared temporary storage
Using shared temporary storage is required if your intent
is to persist a file to disk-based storage, and not Object Storage.
[Workhorse direct upload](../uploads/_index.md#direct-upload), when accepting a file,
can write it to shared storage, and GitLab Rails can later perform a move operation.
A move operation to a destination on the same filesystem is instantaneous:
instead of performing a `copy` operation, the system just re-attaches the file in a new place.
Since this introduces extra complexity into the application, you should only try
to re-use well-established patterns (for example, the `ObjectStorage` concern) instead of re-implementing them.
Shared temporary storage is deprecated for all other usages.
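A tiny sketch of the pattern described above (the paths are illustrative): because the source and destination live on the same shared mount, the move is a rename, not a byte-by-byte copy:
```ruby
require 'fileutils'

# Workhorse already wrote the upload to shared temporary storage;
# Rails only needs to move it into its final location.
FileUtils.mkdir_p(File.dirname(final_path))
FileUtils.mv(uploaded_tmp_path, final_path)
```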
### Persistent storage
#### Object Storage
It is required that all features holding persistent files support saving data
to Object Storage. Having persistent storage in the form of a shared volume across nodes
is not scalable, as it creates contention on data access from all nodes.
GitLab offers the [ObjectStorage concern](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/uploaders/object_storage.rb)
that implements seamless support for shared and Object Storage-based persistent storage.
#### Data access
Each feature that accepts data uploads or allows downloading them needs to use
[Workhorse direct upload](../uploads/_index.md#direct-upload). This means that uploads need to be
saved directly to Object Storage by Workhorse, and all downloads need to be served
by Workhorse.
Performing uploads/downloads via Puma is an expensive operation,
as it blocks the whole processing slot (thread) for the duration of the upload.
Performing uploads/downloads via Puma also has a problem where the operation
can time out, which is especially problematic for slow clients. If clients take a long time
to upload/download, the processing slot might be killed due to the request processing
timeout (usually between 30s-60s).
For the above reasons it is required that [Workhorse direct upload](../uploads/_index.md#direct-upload) is implemented
for all file uploads and downloads.
---
stage: Create
group: Code Review
info: Any user with at least the Maintainer role can merge updates to this content.
  For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
description: Developer documentation explaining how the different parts of the Vue-based
  frontend diffs are generated.
title: Merge request diffs frontend overview
---
This document provides an overview of how the frontend diffs Vue application works, and
the various parts that exist. It should help contributors:
- Understand how the diffs Vue app is set up.
- Identify any areas that need improvement.
This document is a living document. Update it whenever anything significant changes in
the diffs application.
## Diffs Vue app
### Components
The Vue app for rendering diffs uses many different Vue components, some of which are shared
with other areas of the GitLab app. The chart below shows the order in which the components
get rendered.
This chart contains several types of items:
| Legend item | Interpretation |
| ----------- | -------------- |
| `xxx~~`, `ee-xxx~~` | A shortened directory path name. Can be found in `[ee]/app/assets/javascripts`, and omits `0..n` nested folders. |
| Rectangular nodes | Files. |
| Oval nodes | Plain language describing a deeper concept. |
| Double-rectangular nodes | Simplified code branch. |
| Diamond and circle nodes | Branches that have 2 (diamond) or 3+ (circle) options. |
| Pendant / banner nodes (left notch, right square) | A parent directory to shorten nested paths. |
| `./` | A path relative to the closest parent directory pendant node. Non-relative paths nested under parent pendant nodes are not in that directory. |
```mermaid
%%{init: { "fontFamily": "GitLab Sans" }}%%
flowchart TB
accTitle: Component rendering
accDescr: Flowchart of how components are rendered in the GitLab front end
classDef code font-family: monospace;
A["diffs~~app.vue"]
descVirtualScroller(["Virtual Scroller"])
codeForFiles[["v-for(diffFiles)"]]
B["diffs~~diff_file.vue"]
C["diffs~~diff_file_header.vue"]
D["diffs~~diff_stats.vue"]
E["diffs~~diff_content.vue"]
boolFileIsText{isTextFile}
boolOnlyWhitespace{isWhitespaceOnly}
boolNotDiffable{notDiffable}
boolNoPreview{noPreview}
descShowChanges(["Show button to 'Show changes'"])
%% Non-text changes
dirDiffViewer>"vue_shared~~diff_viewer"]
F["./viewers/not_diffable.vue"]
G["./viewers/no_preview.vue"]
H["./diff_viewer.vue"]
I["diffs~~diff_view.vue"]
boolIsRenamed{isRenamed}
boolIsModeChanged{isModeChanged}
boolFileHasNoPath{hasNewPath}
boolIsImage{isImage}
J["./viewers/renamed.vue"]
K["./viewers/mode_changed.vue"]
descNoViewer(["No viewer is rendered"])
L["./viewers/image_diff_viewer.vue"]
M["./viewers/download.vue"]
N["vue_shared~~download_diff_viewer.vue"]
boolImageIsReplaced{isReplaced}
O["vue_shared~~image_viewer.vue"]
switchImageMode((image_diff_viewer.mode))
P["./viewers/image_diff/onion_skin_viewer.vue"]
Q["./viewers/image_diff/swipe_viewer.vue"]
R["./viewers/image_diff/two_up_viewer.vue"]
S["diffs~~image_diff_overlay.vue"]
codeForImageDiscussions[["v-for(discussions)"]]
T["vue_shared~~design_note_pin.vue"]
U["vue_shared~~user_avatar_link.vue"]
V["diffs~~diff_discussions.vue"]
W["batch_comments~~diff_file_drafts.vue"]
codeForTwoUpDiscussions[["v-for(discussions)"]]
codeForTwoUpDrafts[["v-for(drafts)"]]
X["notes~~notable_discussion.vue"]
%% Text-file changes
codeForDiffLines[["v-for(diffLines)"]]
Y["diffs~~diff_expansion_cell.vue"]
Z["diffs~~diff_row.vue"]
AA["diffs~~diff_line.vue"]
AB["batch_comments~~draft_note.vue"]
AC["diffs~~diff_comment_cell.vue"]
AD["diffs~~diff_gutter_avatars.vue"]
AE["ee-diffs~~inline_findings_gutter_icon_dropdown.vue"]
AF["notes~~noteable_note.vue"]
AG["notes~~note_actions.vue"]
AH["notes~~note_body.vue"]
AI["notes~~note_header.vue"]
AJ["notes~~reply_button.vue"]
AK["notes~~note_awards_list.vue"]
AL["notes~~note_edited_text.vue"]
AM["notes~~note_form.vue"]
AN["vue_shared~~awards_list.vue"]
AO["emoji~~picker.vue"]
AP["emoji~~emoji_list.vue"]
descEmojiVirtualScroll(["Virtual Scroller"])
AQ["emoji~~category.vue"]
AR["emoji~emoji_category.vue"]
AS["vue_shared~~markdown_editor.vue"]
class codeForFiles,codeForImageDiscussions code;
class codeForTwoUpDiscussions,codeForTwoUpDrafts code;
class codeForDiffLines code;
%% Also apply code styling to this switch node
class switchImageMode code;
%% Also apply code styling to these boolean nodes
class boolFileIsText,boolOnlyWhitespace,boolNotDiffable,boolNoPreview code;
class boolIsRenamed,boolIsModeChanged,boolFileHasNoPath,boolIsImage code;
class boolImageIsReplaced code;
A --> descVirtualScroller
A -->|"Virtual Scroller is
disabled when
Find in page search
(Cmd/Ctrl+f) is used."|codeForFiles
descVirtualScroller --> codeForFiles
codeForFiles --> B --> C --> D
B --> E
%% File view flags cascade
E --> boolFileIsText
boolFileIsText --> |yes| I
boolFileIsText --> |no| boolOnlyWhitespace
boolOnlyWhitespace --> |yes| descShowChanges
boolOnlyWhitespace --> |no| dirDiffViewer
dirDiffViewer --> H
H --> boolNotDiffable
boolNotDiffable --> |yes| F
boolNotDiffable --> |no| boolNoPreview
boolNoPreview --> |yes| G
boolNoPreview --> |no| boolIsRenamed
boolIsRenamed --> |yes| J
boolIsRenamed --> |no| boolIsModeChanged
boolIsModeChanged --> |yes| K
boolIsModeChanged --> |no| boolFileHasNoPath
boolFileHasNoPath --> |yes| boolIsImage
boolFileHasNoPath --> |no| descNoViewer
boolIsImage --> |yes| L
boolIsImage --> |no| M
M --> N
%% Image diff viewer
L --> boolImageIsReplaced
boolImageIsReplaced --> |yes| switchImageMode
boolImageIsReplaced --> |no| O
switchImageMode -->|"'twoup' (default)"| R
switchImageMode -->|'onion'| P
switchImageMode -->|'swipe'| Q
P & Q --> S
S --> codeForImageDiscussions
S --> AM
R-->|"Rendered in
note container div"|U & W & V
%% Do not combine this with the "P & Q --> S" statement above
%% The order of these node relationships defines the
%% layout of the graph, and we need it in this order.
R --> S
V --> codeForTwoUpDiscussions
W --> codeForTwoUpDrafts
%% This invisible link forces `noteable_discussion`
%% to render above `design_note_pin`
X ~~~ T
codeForTwoUpDrafts --> AB
codeForImageDiscussions & codeForTwoUpDiscussions & codeForTwoUpDrafts --> T
codeForTwoUpDiscussions --> X
%% Text file diff viewer
I --> codeForDiffLines
codeForDiffLines --> Z
codeForDiffLines -->|"isMatchLine?"| Y
codeForDiffLines -->|"hasCodeQuality?"| AA
codeForDiffLines -->|"hasDraftNote(s)?"| AB
Z -->|"hasCodeQuality?"| AE
Z -->|"hasDiscussions?"| AD
AA --> AC
%% Draft notes
AB --> AF
AF --> AG & AH & AI
AG --> AJ
AH --> AK & AL & AM
AK --> AN --> AO --> AP --> descEmojiVirtualScroll --> AQ --> AR
AM --> AS
```
Some of the components are rendered more often than others, but the main component is `diff_row.vue`.
This component renders every diff line in a diff file. For performance reasons, this
component is a functional component. However, when we upgrade to Vue 3, this will no longer
be required.
The main diff app component is the entry point to the diffs app. One of the most important responsibilities
of this component is to dispatch the action that assigns discussions to diff lines. This action
gets dispatched after the metadata request is completed, and after the batch diffs requests are
finished. There is also a watcher set up that watches for changes in both the diff files array and the notes
array. Whenever a change happens here, the set discussion action gets dispatched.
The DiffRow component is set up in a way that allows us to store the diff line data in one format.
Previously, we had to request two different formats for inline and side-by-side. The DiffRow component
then uses this standard format to render the diff line data. With this standard format, the user
can then switch between inline and side-by-side without the need to re-fetch any data.
{{< alert type="note" >}}
For this component, a lot of the data used and rendered gets memoized and cached, based on
various conditions. It is possible that data sometimes gets cached between each different
component render.
{{< /alert >}}
### Vuex store
The Vuex store for the diffs app consists of 3 different modules:
- Notes
- Diffs
- Batch comments
The notes module is responsible for the discussions, including diff discussions. In this module,
the discussions get fetched, and the polling for new discussions is set up. This module is shared
with the issue app as well, so changes here need to be tested in both issues and merge requests.
The diffs module is responsible for everything related to diffs. This includes, but is not limited
to, fetching diffs, assigning diff discussions to lines, and creating diff discussions.
Finally, the batch comments module is not complex, and is responsible only for the draft comments feature.
However, this module does dispatch actions in the notes and diff modules whenever draft comments
are published.
### API Requests
#### Metadata
The diffs metadata endpoint exists to fetch the base data the diffs app requires quickly, without
the need to fetch all the diff files. This includes, but is not limited to:
- Diff filenames, including some extra metadata for diff files
- Added and removed line numbers
- Branch names
- Diff versions
The most important part of the metadata response is the diff filenames. This data allows the diffs
app to render the file browser inside of the diffs app, without waiting for all batch diffs
requests to complete.
When the metadata response is received, the diff file data is processed into the correct structure
that the frontend requires to render the file browser in either tree view or list view.
The structure for this file object is:
```javascript
{
"key": "",
"path": "",
"name": "",
"type": "",
"tree": [],
"changed": true,
"diffLoaded": false,
"filePaths": {
"old": file.old_path,
"new": file.new_path
},
"tempFile": false,
"deleted": false,
"fileHash": "",
"addedLines": 1,
"removedLines": 1,
"parentPath": "/",
"submodule": false
}
```
#### Batch diffs
To reduce the response size of the diffs endpoint, we split this response up into different
requests. This:
- Reduces the response size of each request.
- Allows the diffs app to start rendering diffs as soon as the first request finishes.
To make the first request quicker, it asks for only a small number of
diffs. The number of diffs requested then increases, until it reaches the maximum of 30 diffs per request.
When a request finishes, the diffs app formats the data received into a format that makes
it easier for the diffs app to render the diff lines.
```mermaid
%%{init: { "fontFamily": "GitLab Sans" }}%%
graph TD
accTitle: Formatting diffs
accDescr: A flowchart of steps taken when rendering a diff, including retrieval and display preparations
A[fetchDiffFilesBatch] -->
B[commit SET_DIFF_DATA_BATCH] -->
C[prepareDiffData] -->
D[prepareRawDiffFile] -->
E[ensureBasicDiffFileLines] -->
F[prepareDiffFileLines] -->
G[finalizeDiffFile] -->
H[deduplicateFilesList]
```
After this has been completed, the diffs app can begin to render the diff lines. However, before
anything can be rendered, the diffs app performs one more formatting pass. It takes the diff line data and maps
it into a format that makes switching between inline and side-by-side modes easier. This
formatting happens in a computed property inside the `diff_content.vue` component.
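As a simplified sketch of that idea (not the actual `diff_content.vue` implementation; the property names are illustrative), a computed property can derive both view modes from the single stored line format:
```javascript
export default {
  props: {
    diffLines: { type: Array, required: true },
    inline: { type: Boolean, required: true },
  },
  computed: {
    renderedLines() {
      if (this.inline) {
        // Inline mode: one row per line, in order.
        return this.diffLines.map((line) => ({ left: line, right: line }));
      }

      // Side-by-side mode: removals go to the left column, additions to the right.
      return this.diffLines.map((line) => ({
        left: line.type === 'new' ? null : line,
        right: line.type === 'old' ? null : line,
      }));
    },
  },
};
```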
### Render queue
{{< alert type="note" >}}
This might not be required any more. Some investigation work is required to decide
the future of the render queue. The virtual scroll bar we created has probably removed
any performance benefit we got from this approach.
{{< /alert >}}
To render diffs quickly, we have a render queue that allows the diffs to render only when the
browser is idle. This prevents the browser from freezing when rendering a lot of large diffs at once,
and allows us to reduce the total blocking time.
This pipeline of rendering files happens only if all the below conditions are `true` for every
diff file. If any of these are `false`, this render queue is skipped and the diffs get
rendered as usual.
- Are the diffs in this file already rendered?
- Does this diff have a viewer? (Meaning, is it not a download?)
- Is the diff expanded?
This chart gives a brief overview of the pipeline that happens:
```mermaid
%%{init: { "fontFamily": "GitLab Sans" }}%%
graph TD
accTitle: Render queue pipeline
accDescr: Flowchart of the steps in the render queue pipeline
A[startRenderDiffsQueue] -->B
B[commit RENDER_FILE current file index] -->C
C[canRenderNextFile?]
C -->|Yes| D[Render file] -->B
C -->|No| E[Re-run requestIdleCallback] -->C
```
The checks that happen:
- Is the idle time remaining less than 5 ms?
- Have we already tried to render this file 4 times?
After these checks happen, the file is marked in Vuex as `renderable`, which allows the diffs
app to start rendering the diff lines and discussions.
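A minimal sketch of this kind of idle-time queue (the `canRenderNextFile` and `renderNextFile` helpers are hypothetical, not the actual Vuex actions):
```javascript
const MIN_IDLE_MS = 5;
const MAX_ATTEMPTS = 4;

function processRenderQueue(state) {
  requestIdleCallback((deadline) => {
    while (canRenderNextFile(state)) {
      const outOfIdleTime = deadline.timeRemaining() < MIN_IDLE_MS;
      const tooManyAttempts = state.attempts >= MAX_ATTEMPTS;

      if (outOfIdleTime && !tooManyAttempts) {
        // Not enough idle time left: try again during the next idle period.
        state.attempts += 1;
        return processRenderQueue(state);
      }

      state.attempts = 0;
      renderNextFile(state); // marks the file as renderable and renders it
    }
  });
}
```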
---
stage: Create
group: Code Review
info: Any user with at least the Maintainer role can merge updates to this content.
  For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
description: Developer documentation for the backend design and flow of merge request diffs.
title: Merge request diffs development guide
---
This document explains the backend design and flow of merge request diffs.
It should help contributors:
- Understand the code design.
- Identify areas for improvement through contribution.
It's intentional that it doesn't contain too many implementation details, as they
can change often. The code better explains these details. The components
mentioned here are the major parts of the application for how merge request diffs
are generated, stored, and returned to users.
{{< alert type="note" >}}
This page is a living document. Update it accordingly when the parts
of the codebase touched in this document are changed or removed, or when new components
are added.
{{< /alert >}}
## Data model
Four main ActiveRecord models represent what we collectively refer to
as _diffs._ These database-backed records replicate data contained in the
project's Git repository, and are in part a cache against excessive access requests
to [Gitaly](../../gitaly.md). Additionally, they provide a logical place for:
- Calculated and retrieved metadata about the pieces of the diff.
- General class- and instance- based logic.
```mermaid
%%{init: { "fontFamily": "GitLab Sans" }}%%
erDiagram
accTitle: Data model of diffs
accDescr: Data model of the four ActiveRecord models used in diffs
MergeRequest ||--|{ MergeRequestDiff: ""
MergeRequestDiff |{--|{ MergeRequestDiffCommit: ""
MergeRequestDiff |{--|| MergeRequestDiffDetail: ""
MergeRequestDiff |{--|{ MergeRequestDiffFile: ""
MergeRequestDiffCommit |{--|| MergeRequestDiffCommitUser: ""
```
### `MergeRequestDiff`
`MergeRequestDiff` is defined in `app/models/merge_request_diff.rb`. This
class holds metadata and context related to the diff resulting from a set of
commits. It defines methods that are the primary means for interacting with diff
contents, individual commits, and the files containing changes.
```ruby
#<MergeRequestDiff:0x00007fd1ed63b4d0
id: 28,
state: "collected",
merge_request_id: 28,
created_at: Tue, 06 Sep 2022 18:56:02.509469000 UTC +00:00,
updated_at: Tue, 06 Sep 2022 18:56:02.754201000 UTC +00:00,
base_commit_sha: "ae73cb07c9eeaf35924a10f713b364d32b2dd34f",
real_size: "9",
head_commit_sha: "bb5206fee213d983da88c47f9cf4cc6caf9c66dc",
start_commit_sha: "0b4bc9a49b562e85de7cc9e834518ea6828729b9",
commits_count: 6,
external_diff: "diff-28",
external_diff_store: 1,
stored_externally: nil,
files_count: 9,
patch_id_sha: "d504412d5b6e6739647e752aff8e468dde093f2f",
sorted: true,
diff_type: "regular",
verification_checksum: nil>
```
Diff content is usually accessed through this class. Logic is often applied
to diff, file, and commit content before it is returned to a user.
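For example, a hedged Rails console sketch of accessing diff content through this class (the record ID is illustrative):
```ruby
diff = MergeRequest.find(28).merge_request_diff

diff.commits_count          # cached number of commits for this diff version
diff.diffs.diff_files.size  # diff file collection built from the associated records
```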
#### `MergeRequestDiff#commits_count`
When `MergeRequestDiff` is saved, associated `MergeRequestDiffCommit` records are
counted and cached into the `commits_count` column. This number displays on the
merge request page as the counter for the **Commits** tab.
If `MergeRequestDiffCommit` records are deleted, the counter doesn't update.
### `MergeRequestDiffCommit`
`MergeRequestDiffCommit` is defined in `app/models/merge_request_diff_commit.rb`.
This class corresponds to a single commit contained in its corresponding `MergeRequestDiff`,
and holds header information about the commit.
```ruby
#<MergeRequestDiffCommit:0x00007fd1dfc6c4c0
authored_date: Wed, 06 Aug 2022 06:35:52.000000000 UTC +00:00,
committed_date: Wed, 06 Aug 2022 06:35:52.000000000 UTC +00:00,
merge_request_diff_id: 28,
relative_order: 0,
sha: "bb5206fee213d983da88c47f9cf4cc6caf9c66dc",
message: "Feature conflict added\n\nSigned-off-by: Sample User <sample.user@example.com>\n",
trailers: {},
commit_author_id: 19,
committer_id: 19>
```
Every `MergeRequestDiffCommit` has a corresponding `MergeRequest::DiffCommitUser`
record it `:belongs_to`, in ActiveRecord parlance. These records are `:commit_author`
and `:committer`, and could be distinct individuals.
### `MergeRequest::DiffCommitUser`
`MergeRequest::DiffCommitUser` is defined in `app/models/merge_request/diff_commit_user.rb`.
It captures the `name` and `email` of a given commit, but contains no connection
itself to any `User` records.
```ruby
#<MergeRequest::DiffCommitUser:0x00007fd1dff7c930
id: 19,
name: "Sample User",
email: "sample.user@example.com">
```
### `MergeRequestDiffFile`
`MergeRequestDiffFile` is defined in `app/models/merge_request_diff_file.rb`.
A record of this class represents the diff of a single file contained in the
`MergeRequestDiff`. It holds both meta and specific information about the file's
relationship to the change, such as:
- Whether it is added or renamed.
- Its ordering in the diff.
- The raw diff output itself.
#### External diff storage
By default, diff data of a `MergeRequestDiffFile` is stored in the `diff` column of
the `merge_request_diff_files` table. On some installations, the table can grow
too large, so they're configured to store diffs on external storage to save space.
To configure it, see [Merge request diffs storage](../../../administration/merge_request_diffs.md).
When configured to use external storage:
- The `diff` column in the database is left `NULL`.
- The associated `MergeRequestDiff` record sets the `stored_externally` attribute
to `true` on creation of `MergeRequestDiff`.
A cron job named `ScheduleMigrateExternalDiffsWorker` is also scheduled at
minute 15 of every hour. This migrates diffs that are still stored in the
database to external storage.
### `MergeRequestDiffDetail`
`MergeRequestDiffDetail` is defined in `app/models/merge_request_diff_detail.rb`.
This class provides verification information for Geo replication, but otherwise
is not used for user-facing diffs.
```ruby
#<MergeRequestDiffFile:0x00007fd1ef7c9048
merge_request_diff_id: 28,
relative_order: 0,
new_file: true,
renamed_file: false,
deleted_file: false,
too_large: false,
a_mode: "0",
b_mode: "100644",
new_path: "files/ruby/feature.rb",
old_path: "files/ruby/feature.rb",
diff:
"@@ -0,0 +1,4 @@\n+# This file was changed in feature branch\n+# We put different code here to make merge conflict\n+class Conflict\n+end\n",
binary: false,
external_diff_offset: nil,
external_diff_size: nil>
```
## Flow
These flowcharts should help explain the flow from the controllers down to the
models for different features. This page is not intended to document the entirety
of options for accessing and working with diffs; it focuses solely on the most common.
### Generation of `MergeRequestDiff*` records
As explained above, we use database tables to cache information from Gitaly when displaying
diffs on merge requests. When enabled, we also use object storage when storing diffs.
We have 2 types of merge request diffs: base diff and `HEAD` diff. Each type
is generated differently.
#### Base diff
On every push to a merge request branch, we create a new merge request diff version.
This flowchart shows a basic explanation of how each component is used in this case.
```mermaid
%%{init: { "fontFamily": "GitLab Sans" }}%%
flowchart TD
accTitle: Flowchart of generating a new diff version
accDescr: High-level flowchart of components used when creating a new diff version, based on a Git push to a branch
A[PostReceive worker] --> B[MergeRequests::RefreshService]
B --> C[Reload diff of merge requests]
C --> D[Create merge request diff]
D --> K[(Database)]
D --> E[Ensure commit SHAs]
E --> L[Gitaly]
E --> F[Set patch-id]
F --> L[Gitaly]
F --> G[Save commits]
G --> L[Gitaly]
G --> K[(Database)]
G --> H[Save diffs]
H --> L[Gitaly]
H --> K[(Database)]
H --> M[(Object Storage)]
H --> I[Keep around commits]
I --> L[Gitaly]
I --> J[Clear highlight and stats cache]
J --> N[(Redis)]
```
This sequence diagram shows a more detailed explanation of this flow.
```mermaid
%%{init: { "fontFamily": "GitLab Sans" }}%%
sequenceDiagram
accTitle: Data flow of building a new diff
accDescr: Detailed model of the data flow through the components that build a new diff version
PostReceive-->>+MergeRequests_RefreshService: execute()
Note over MergeRequests_RefreshService: Reload diff of merge requests
MergeRequests_RefreshService-->>+MergeRequest: reload_diff()
Note over MergeRequests_ReloadDiffsService: Create merge request diff
MergeRequest-->>+MergeRequests_ReloadDiffsService: execute()
MergeRequests_ReloadDiffsService-->>+MergeRequest: create_merge_request_diff()
MergeRequest-->>+MergeRequestDiff: create()
Note over MergeRequestDiff: Ensure commit SHAs
MergeRequestDiff-->>+MergeRequest: source_branch_sha()
MergeRequest-->>+Repository: commit()
Repository-->>+Gitaly: FindCommit RPC
Gitaly-->>-Repository: Gitlab::Git::Commit
Repository-->>+Commit: new()
Commit-->>-Repository: Commit
Repository-->>-MergeRequest: Commit
MergeRequest-->>-MergeRequestDiff: Commit SHA
Note over MergeRequestDiff: Set patch-id
MergeRequestDiff-->>+Repository: get_patch_id()
Repository-->>+Gitaly: GetPatchID RPC
Gitaly-->>-Repository: Patch ID
Repository-->>-MergeRequestDiff: Patch ID
Note over MergeRequestDiff: Save commits
MergeRequestDiff-->>+Gitaly: ListCommits RPC
Gitaly-->>-MergeRequestDiff: Commits
MergeRequestDiff-->>+MergeRequestDiffCommit: create_bulk()
Note over MergeRequestDiff: Save diffs
MergeRequestDiff-->>+Gitaly: ListCommits RPC
Gitaly-->>-MergeRequestDiff: Commits
opt When external diffs is enabled
MergeRequestDiff-->>+ObjectStorage: upload diffs
end
MergeRequestDiff-->>+MergeRequestDiffFile: legacy_bulk_insert()
Note over MergeRequestDiff: Keep around commits
MergeRequestDiff-->>+Repository: keep_around()
Repository-->>+Gitaly: WriteRef RPC
Note over MergeRequests_ReloadDiffsService: Clear highlight and stats cache
MergeRequests_ReloadDiffsService->>+Gitlab_Diff_HighlightCache: clear()
MergeRequests_ReloadDiffsService->>+Gitlab_Diff_StatsCache: clear()
Gitlab_Diff_HighlightCache-->>+Redis: cache
Gitlab_Diff_StatsCache-->>+Redis: cache
```
#### `HEAD` diff
Whenever the mergeability of a merge request is checked and the merge request `merge_status`
is either `:unchecked`, `:cannot_be_merged_recheck`, `:checking`, or `:cannot_be_merged_rechecking`,
we attempt to merge the changes from the source branch to the target branch and write the result to a ref.
If it's successful (meaning, no conflict), we generate a diff based on the
generated commit and show it as the `HEAD` diff.
The flow differs from the base diff generation as it has a different entry point.
This flowchart shows a basic explanation of how each component is used when generating
a `HEAD` diff.
```mermaid
%%{init: { "fontFamily": "GitLab Sans" }}%%
flowchart TD
accTitle: Generating a HEAD diff (high-level view)
accDescr: High-level flowchart of components used when generating a HEAD diff
A[MergeRequestMergeabilityCheckWorker] --> B[MergeRequests::MergeabilityCheckService]
B --> C[Merge changes to ref]
C --> L[Gitaly]
C --> D[Recreate merge request HEAD diff]
D --> K[(Database)]
D --> E[Ensure commit SHAs]
E --> L[Gitaly]
E --> F[Set patch-id]
F --> L[Gitaly]
F --> G[Save commits]
G --> L[Gitaly]
G --> K[(Database)]
G --> H[Save diffs]
H --> L[Gitaly]
H --> K[(Database)]
H --> M[(Object Storage)]
H --> I[Keep around commits]
I --> L[Gitaly]
```
This sequence diagram shows a more detailed explanation of this flow.
```mermaid
%%{init: { "fontFamily": "GitLab Sans" }}%%
sequenceDiagram
accTitle: Generating a HEAD diff (detail view)
accDescr: Detailed sequence diagram of generating a new HEAD diff
MergeRequestMergeabilityCheckWorker-->>+MergeRequests_MergeabilityCheckService: execute()
Note over MergeRequests_MergeabilityCheckService: Merge changes to ref
MergeRequests_MergeabilityCheckService-->>+MergeRequests_MergeToRefService: execute()
MergeRequests_MergeToRefService-->>+Repository: merge_to_ref()
Repository-->>+Gitaly: UserMergeBranch RPC
Gitaly-->>-Repository: Commit SHA
MergeRequests_MergeToRefService-->>+Repository: commit()
Repository-->>+Gitaly: FindCommit RPC
Gitaly-->>-Repository: Gitlab::Git::Commit
Repository-->>+Commit: new()
Commit-->>-Repository: Commit
Repository-->>-MergeRequests_MergeToRefService: Commit
Note over MergeRequests_MergeabilityCheckService: Recreate merge request HEAD diff
MergeRequests_MergeabilityCheckService-->>+MergeRequests_ReloadMergeHeadDiffService: execute()
MergeRequests_ReloadMergeHeadDiffService-->>+MergeRequest: create_merge_request_diff()
MergeRequest-->>+MergeRequestDiff: create()
Note over MergeRequestDiff: Ensure commit SHAs
MergeRequestDiff-->>+MergeRequest: merge_ref_head()
MergeRequest-->>+Repository: commit()
Repository-->>+Gitaly: FindCommit RPC
Gitaly-->>-Repository: Gitlab::Git::Commit
Repository-->>+Commit: new()
Commit-->>-Repository: Commit
Repository-->>-MergeRequest: Commit
MergeRequest-->>-MergeRequestDiff: Commit SHA
Note over MergeRequestDiff: Set patch-id
MergeRequestDiff-->>+Repository: get_patch_id()
Repository-->>+Gitaly: GetPatchID RPC
Gitaly-->>-Repository: Patch ID
Repository-->>-MergeRequestDiff: Patch ID
Note over MergeRequestDiff: Save commits
MergeRequestDiff-->>+Gitaly: ListCommits RPC
Gitaly-->>-MergeRequestDiff: Commits
MergeRequestDiff-->>+MergeRequestDiffCommit: create_bulk()
Note over MergeRequestDiff: Save diffs
MergeRequestDiff-->>+Gitaly: ListCommits RPC
Gitaly-->>-MergeRequestDiff: Commits
opt When external diffs is enabled
MergeRequestDiff-->>+ObjectStorage: upload diffs
end
MergeRequestDiff-->>+MergeRequestDiffFile: legacy_bulk_insert()
Note over MergeRequestDiff: Keep around commits
MergeRequestDiff-->>+Repository: keep_around()
Repository-->>+Gitaly: WriteRef RPC
```
### `diffs_batch.json`
The most common avenue for viewing diffs is the **Changes**
tab at the top of merge request pages in the GitLab UI. When selected, the
diffs themselves are loaded via a paginated request to `/-/merge_requests/:id/diffs_batch.json`,
which is served by [`Projects::MergeRequests::DiffsController#diffs_batch`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/controllers/projects/merge_requests/diffs_controller.rb).
This flowchart shows a basic explanation of how each component is used in a
`diffs_batch.json` request.
```mermaid
%%{init: { "fontFamily": "GitLab Sans" }}%%
flowchart TD
accTitle: Viewing a diff
accDescr: High-level flowchart a diffs_batch request, which renders diffs for browser display
A[Frontend] --> B[diffs_batch.json]
B --> C[Preload diffs and ivars]
C -->D[Gitaly]
C -->E[(Database)]
C --> F[Getting diff file collection]
F --> G[Calculate unfoldable diff lines]
G --> E
G --> H{ETag header is not stale}
H --> |Yes| I[Return 304]
H --> |No| J[Serialize diffs]
J --> D
J --> E
J --> K[(Redis)]
J --> L[Return 200 with JSON]
```
Different cases exist when viewing diffs, though, and the flow for each case differs.
#### Viewing HEAD, latest or specific diff version
The HEAD diff is viewed by default, if it is available. If not, it falls back to the
latest diff version. It's also possible to view a specific diff version. These cases
have the same flow.
```mermaid
%%{init: { "fontFamily": "GitLab Sans" }}%%
sequenceDiagram
accTitle: Viewing the most recent diff
accDescr: Sequence diagram showing how a particular diff is chosen for display, first with the HEAD diff, then the latest diff, followed by a specific version if it's requested
Frontend-->>+.#diffs_batch: API call
Note over .#diffs_batch: Preload diffs and ivars
.#diffs_batch-->>+.#define_diff_vars: before_action
.#define_diff_vars-->>+MergeRequest: merge_request_head_diff() or merge_request_diff()
MergeRequest-->>+MergeRequestDiff: find()
MergeRequestDiff-->>-MergeRequest: MergeRequestDiff
MergeRequest-->>-.#define_diff_vars: MergeRequestDiff
.#define_diff_vars-->>-.#diffs_batch: @compare
Note over .#diffs_batch: Getting diff file collection
.#diffs_batch-->>+MergeRequestDiff: diffs_in_batch()
MergeRequestDiff-->>+Gitlab_Diff_FileCollection_MergeRequestDiffBatch: new()
Gitlab_Diff_FileCollection_MergeRequestDiffBatch-->>-MergeRequestDiff: diff file collection
MergeRequestDiff-->>-.#diffs_batch: diff file collection
Note over .#diffs_batch: Calculate unfoldable diff lines
.#diffs_batch-->>+MergeRequest: note_positions_for_paths
MergeRequest-->>+Gitlab_Diff_PositionCollection: new() then unfoldable()
Gitlab_Diff_PositionCollection-->>-MergeRequest: position collection
MergeRequest-->>-.#diffs_batch: unfoldable_positions
break when ETag header is present and is not stale
.#diffs_batch-->>+Frontend: return 304 HTTP
end
.#diffs_batch->>+Gitlab_Diff_FileCollection_MergeRequestDiffBatch: write_cache()
Gitlab_Diff_FileCollection_MergeRequestDiffBatch->>+Gitlab_Diff_HighlightCache: write_if_empty()
Gitlab_Diff_FileCollection_MergeRequestDiffBatch->>+Gitlab_Diff_StatsCache: write_if_empty()
Gitlab_Diff_HighlightCache-->>+Redis: cache
Gitlab_Diff_StatsCache-->>+Redis: cache
Note over .#diffs_batch: Serialize diffs and render JSON
.#diffs_batch-->>+PaginatedDiffSerializer: represent()
PaginatedDiffSerializer-->>+Gitlab_Diff_FileCollection_MergeRequestDiffBatch: diff_files()
Gitlab_Diff_FileCollection_MergeRequestDiffBatch-->>+MergeRequestDiff: raw_diffs()
MergeRequestDiff-->>+MergeRequestDiffFile: Get all associated records
MergeRequestDiffFile-->>-MergeRequestDiff: Gitlab::Git::DiffCollection
MergeRequestDiff-->>-Gitlab_Diff_FileCollection_MergeRequestDiffBatch: diff files
Gitlab_Diff_FileCollection_MergeRequestDiffBatch-->>+Gitlab_Diff_StatsCache: find_by_path()
Gitlab_Diff_StatsCache-->>+Redis: Read data from cache
Gitlab_Diff_FileCollection_MergeRequestDiffBatch-->>+Gitlab_Diff_HighlightCache: decorate()
Gitlab_Diff_HighlightCache-->>+Redis: Read data from cache
Gitlab_Diff_FileCollection_MergeRequestDiffBatch-->>-PaginatedDiffSerializer: diff files
PaginatedDiffSerializer-->>-.#diffs_batch: JSON
.#diffs_batch-->>+Frontend: return 200 HTTP with JSON
```
However, if **Show whitespace changes** is not selected when viewing diffs:
- Whitespace changes are ignored.
- The flow changes, and now involves Gitaly.
```mermaid
%%{init: { "fontFamily": "GitLab Sans" }}%%
sequenceDiagram
accTitle: Viewing diffs without whitespace changes
accDescr: Sequence diagram showing how a particular diff is chosen for display, if whitespace changes are not requested - first with the HEAD diff, then the latest diff, followed by a specific version if it's requested
Frontend-->>+.#diffs_batch: API call
Note over .#diffs_batch: Preload diffs and ivars
.#diffs_batch-->>+.#define_diff_vars: before_action
.#define_diff_vars-->>+MergeRequest: merge_request_head_diff() or merge_request_diff()
MergeRequest-->>+MergeRequestDiff: find()
MergeRequestDiff-->>-MergeRequest: MergeRequestDiff
MergeRequest-->>-.#define_diff_vars: MergeRequestDiff
.#define_diff_vars-->>-.#diffs_batch: @compare
Note over .#diffs_batch: Getting diff file collection
.#diffs_batch-->>+MergeRequestDiff: diffs_in_batch()
MergeRequestDiff-->>+Gitlab_Diff_FileCollection_Compare: new()
Gitlab_Diff_FileCollection_Compare-->>-MergeRequestDiff: diff file collection
MergeRequestDiff-->>-.#diffs_batch: diff file collection
Note over .#diffs_batch: Calculate unfoldable diff lines
.#diffs_batch-->>+MergeRequest: note_positions_for_paths
MergeRequest-->>+Gitlab_Diff_PositionCollection: new() then unfoldable()
Gitlab_Diff_PositionCollection-->>-MergeRequest: position collection
MergeRequest-->>-.#diffs_batch: unfoldable_positions
break when ETag header is present and is not stale
.#diffs_batch-->>+Frontend: return 304 HTTP
end
opt Cache highlights and stats when viewing HEAD, latest or specific version
.#diffs_batch->>+Gitlab_Diff_FileCollection_MergeRequestDiffBatch: write_cache()
Gitlab_Diff_FileCollection_MergeRequestDiffBatch->>+Gitlab_Diff_HighlightCache: write_if_empty()
Gitlab_Diff_FileCollection_MergeRequestDiffBatch->>+Gitlab_Diff_StatsCache: write_if_empty()
Gitlab_Diff_HighlightCache-->>+Redis: cache
Gitlab_Diff_StatsCache-->>+Redis: cache
end
Note over .#diffs_batch: Serialize diffs and render JSON
.#diffs_batch-->>+PaginatedDiffSerializer: represent()
PaginatedDiffSerializer-->>+Gitlab_Diff_FileCollection_MergeRequestDiffBatch: diff_files()
Gitlab_Diff_FileCollection_MergeRequestDiffBatch-->>+MergeRequestDiff: raw_diffs()
MergeRequestDiff-->>+Repository: diff()
Repository-->>+Gitaly: CommitDiff RPC
Gitaly-->>-Repository: GitalyClient::DiffStitcher
Repository-->>-MergeRequestDiff: Gitlab::Git::DiffCollection
MergeRequestDiff-->>-Gitlab_Diff_FileCollection_MergeRequestDiffBatch: diff files
Gitlab_Diff_FileCollection_MergeRequestDiffBatch-->>+Gitlab_Diff_StatsCache: find_by_path()
Gitlab_Diff_StatsCache-->>+Redis: Read data from cache
Gitlab_Diff_FileCollection_MergeRequestDiffBatch-->>+Gitlab_Diff_HighlightCache: decorate()
Gitlab_Diff_HighlightCache-->>+Redis: Read data from cache
Gitlab_Diff_FileCollection_MergeRequestDiffBatch-->>-PaginatedDiffSerializer: diff files
PaginatedDiffSerializer-->>-.#diffs_batch: JSON
.#diffs_batch-->>+Frontend: return 200 HTTP with JSON
```
#### Compare between merge request diff versions
You can also compare different diff versions when viewing diffs. The flow is different
from the default flow, as it makes requests to Gitaly to generate a comparison between two
diff versions. It also doesn't use Redis for highlight and stats caches.
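In code terms, `@compare` ends up holding a `Compare` built from the two versions instead of a `MergeRequestDiff`. A minimal sketch, assuming simplified arguments:
```ruby
# Sketch only: compare the selected diff version against an older version
# identified by start_sha. The resulting Compare is what gets serialized.
compare = merge_request_diff.compare_with(start_sha)
diffs   = compare.diffs_in_batch(batch_page, batch_size, diff_options: diff_options)
```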
```mermaid
%%{init: { "fontFamily": "GitLab Sans" }}%%
sequenceDiagram
accTitle: Comparing diffs
accDescr: Sequence diagram of how diffs are compared against each other
Frontend-->>+.#diffs_batch: API call
Note over .#diffs_batch: Preload diffs and ivars
.#diffs_batch-->>+.#define_diff_vars: before_action
.#define_diff_vars-->>+MergeRequestDiff: compare_with(start_sha)
MergeRequestDiff-->>+Compare: new()
Compare-->>-MergeRequestDiff: Compare
MergeRequestDiff-->>-.#define_diff_vars: Compare
.#define_diff_vars-->>-.#diffs_batch: @compare
Note over .#diffs_batch: Getting diff file collection
.#define_diff_vars-->>+Compare: diffs_in_batch()
Compare-->>+Gitlab_Diff_FileCollection_Compare: new()
Gitlab_Diff_FileCollection_Compare-->>-Compare: diff file collection
Compare-->>-.#define_diff_vars: diff file collection
Note over .#diffs_batch: Calculate unfoldable diff lines
.#diffs_batch-->>+MergeRequest: note_positions_for_paths
MergeRequest-->>+Gitlab_Diff_PositionCollection: new() then unfoldable()
Gitlab_Diff_PositionCollection-->>-MergeRequest: position collection
MergeRequest-->>-.#diffs_batch: unfoldable_positions
break when ETag header is present and is not stale
.#diffs_batch-->>+Frontend: return 304 HTTP
end
Note over .#diffs_batch: Serialize diffs and render JSON
.#diffs_batch-->>+PaginatedDiffSerializer: represent()
PaginatedDiffSerializer-->>+Gitlab_Diff_FileCollection_Compare: diff_files()
Gitlab_Diff_FileCollection_Compare-->>+Compare: raw_diffs()
Compare-->>+Repository: diff()
Repository-->>+Gitaly: CommitDiff RPC
Gitaly-->>-Repository: GitalyClient::DiffStitcher
Repository-->>-Compare: Gitlab::Git::DiffCollection
Compare-->>-Gitlab_Diff_FileCollection_Compare: diff files
Gitlab_Diff_FileCollection_Compare-->>-PaginatedDiffSerializer: diff files
PaginatedDiffSerializer-->>-.#diffs_batch: JSON
.#diffs_batch-->>+Frontend: return 200 HTTP with JSON
```
#### Viewing commit diff
You can also view the diff of a specific commit in a merge request. This differs from the default flow:
Gitaly is required to get the diff of that commit, and Redis is not used for the highlight and stats caches.
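In this case, `@compare` holds a `Commit`, which also responds to `diffs_in_batch()`. A minimal sketch, assuming a `commit_id` parameter and simplified arguments (both assumptions, not the exact controller code):
```ruby
# Sketch only: when a specific commit is requested, @compare is the Commit itself,
# and its diff files come from Gitaly (CommitDiff RPC) rather than from the database.
commit = project.repository.commit(params[:commit_id])
diffs  = commit.diffs_in_batch(batch_page, batch_size, diff_options: diff_options)
```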
```mermaid
%%{init: { "fontFamily": "GitLab Sans" }}%%
sequenceDiagram
accTitle: Viewing commit diff
accDescr: Sequence diagram showing how viewing the diff of a specific commit is different from the default diff view flow
Frontend-->>+.#diffs_batch: API call
Note over .#diffs_batch: Preload diffs and ivars
.#diffs_batch-->>+.#define_diff_vars: before_action
.#define_diff_vars-->>+Repository: commit()
Repository-->>+Gitaly: FindCommit RPC
Gitaly-->>-Repository: Gitlab::Git::Commit
Repository-->>+Commit: new()
Commit-->>-Repository: Commit
Repository-->>-.#define_diff_vars: Commit
.#define_diff_vars-->>-.#diffs_batch: @compare
Note over .#diffs_batch: Getting diff file collection
.#define_diff_vars-->>+Commit: diffs_in_batch()
Commit-->>+Gitlab_Diff_FileCollection_Commit: new()
Gitlab_Diff_FileCollection_Commit-->>-Commit: diff file collection
Commit-->>-.#define_diff_vars: diff file collection
Note over .#diffs_batch: Calculate unfoldable diff lines
.#diffs_batch-->>+MergeRequest: note_positions_for_paths
MergeRequest-->>+Gitlab_Diff_PositionCollection: new() then unfoldable()
Gitlab_Diff_PositionCollection-->>-MergeRequest: position collection
MergeRequest-->>-.#diffs_batch: unfoldable_positions
break when ETag header is present and is not stale
.#diffs_batch-->>+Frontend: return 304 HTTP
end
Note over .#diffs_batch: Serialize diffs and render JSON
.#diffs_batch-->>+PaginatedDiffSerializer: represent()
PaginatedDiffSerializer-->>+Gitlab_Diff_FileCollection_Commit: diff_files()
Gitlab_Diff_FileCollection_Commit-->>+Commit: raw_diffs()
Commit-->>+Gitaly: CommitDiff RPC
Gitaly-->>-Commit: GitalyClient::DiffStitcher
Commit-->>-Gitlab_Diff_FileCollection_Commit: Gitlab::Git::DiffCollection
Gitlab_Diff_FileCollection_Commit-->>-PaginatedDiffSerializer: diff files
PaginatedDiffSerializer-->>-.#diffs_batch: JSON
.#diffs_batch-->>+Frontend: return 200 HTTP with JSON
```
### `diffs.json`
You can also view diffs while creating a merge request, by scrolling to the bottom of the
new merge request page and selecting the **Changes** tab.
This view doesn't use the `diffs_batch.json` endpoint, because the merge request record
isn't created yet at that point. It uses the `diffs.json` endpoint instead.
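A rough sketch of what that action does, using the class and method names from the diagrams below with simplified arguments (the keyword arguments are assumptions):
```ruby
# Sketch only: build an unsaved merge request from the compare parameters,
# fetch its diffs, then render the diffs partial into the JSON response.
merge_request = MergeRequests::BuildService
  .new(project: project, current_user: current_user, params: merge_request_params)
  .execute

diffs = merge_request.diffs(diff_options)

render json: { html: view_to_html_string('projects/merge_requests/creations/_diffs', diffs: diffs) }
```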
This flowchart shows a basic explanation of how each component is used in a
`diffs.json` request.
```mermaid
%%{init: { "fontFamily": "GitLab Sans" }}%%
flowchart TD
accTitle: Diff request flow (high level)
accDescr: High-level flowchart of the components used in a diffs request
A[Frontend] --> B[diffs.json]
B --> C[Build merge request]
C --> D[Get diffs]
D --> E[Render view with diffs]
E --> G[Gitaly]
E --> F[Respond with JSON with the rendered view]
```
This sequence diagram shows a more detailed explanation of this flow.
```mermaid
%%{init: { "fontFamily": "GitLab Sans" }}%%
sequenceDiagram
accTitle: Diff request flow (low level)
accDescr: Sequence diagram with a deeper view of the components used in a diffs request
Frontend-->>+.#diffs: API call
Note over .#diffs: Build merge request
.#diffs-->>+MergeRequests_BuildService: execute
MergeRequests_BuildService-->>+Compare: new()
Compare-->>-MergeRequests_BuildService: Compare
MergeRequests_BuildService-->>+Compare: commits()
Compare-->>+Gitaly: ListCommits RPC
Gitaly-->>-Compare: Commits
Compare-->>-MergeRequests_BuildService: Commits
MergeRequests_BuildService-->>-.#diffs: MergeRequest
Note over .#diffs: Get diffs
.#diffs-->>+MergeRequest: diffs()
MergeRequest-->>+Compare: diffs()
Compare-->>+Gitlab_Diff_FileCollection_Compare: new()
Gitlab_Diff_FileCollection_Compare-->>-Compare: diff file collection
Compare-->>-MergeRequest: diff file collection
MergeRequest-->>-.#diffs: @diffs =
Note over .#diffs: Render view with diffs
.#diffs-->>+HAML: view_to_html_string('projects/merge_requests/creations/_diffs', diffs: @diffs)
HAML-->>+Gitlab_Diff_FileCollection_Compare: diff_files()
Gitlab_Diff_FileCollection_Compare-->>+Compare: raw_diffs()
Compare-->>+Repository: diff()
Repository-->>+Gitaly: CommitDiff RPC
Gitaly-->>-Repository: GitalyClient::DiffStitcher
Repository-->>-Compare: Gitlab::Git::DiffCollection
Compare-->>-Gitlab_Diff_FileCollection_Compare: diff files
Gitlab_Diff_FileCollection_Compare-->>-HAML: diff files
HAML-->>-.#diffs: rendered view
.#diffs-->>-Frontend: Respond with JSON with rendered view
```
|
|
https://docs.gitlab.com/development/merge_request_concepts/diffs
|
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/development/merge_request_concepts/_index.md
|
2025-08-13
|
doc/development/merge_request_concepts/diffs
|
[
"doc",
"development",
"merge_request_concepts",
"diffs"
] |
_index.md
|
Create
|
Code Review
|
Any user with at least the Maintainer role can merge updates to this content. For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
|
Working with diffs
|
Developer documentation for how diffs are generated and rendered in GitLab.
|
This page contains developer documentation for diffs. For the user documentation,
see [Diffs in merge requests](../../../user/project/merge_requests/versions.md).
We rely on different sources to present diffs. These include:
- Gitaly service
- Database (through `merge_request_diff_files`)
- Redis (cached highlighted diffs)
## Architecture overview
### Merge request diffs
When refreshing a merge request (pushing to a source branch, force-pushing to the target branch, or when the target branch now contains any commits from the MR),
we fetch the comparison information using `Gitlab::Git::Compare`, which fetches `base` and `head` data using Gitaly and diffs between them through
`Gitlab::Git::Diff.between`.
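A minimal sketch of that comparison (variable and argument names are assumptions, not the actual call sites):
```ruby
# Sketch only: base_sha and head_sha are the resolved comparison points.
compare = Gitlab::Git::Compare.new(project.repository.raw, base_sha, head_sha)
compare.diffs # computed through Gitlab::Git::Diff.between under the hood
```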
The diff-fetching process _limits_ the size of each single file diff and the overall size of the whole diff through a series of constant values. Raw diff files are
then persisted in the `merge_request_diff_files` table.
Even though diffs larger than 10% of the value of `ApplicationSettings#diff_max_patch_bytes` are collapsed,
we still keep them on PostgreSQL. However, diff files larger than defined _safety limits_
(see the [Diff limits section](#diff-limits)) are _not_ persisted in the database.
To present diff information on the merge request diffs page, we:
1. Fetch all diff files from the `merge_request_diff_files` database table
1. Fetch the _old_ and _new_ file blobs in batch to:
- Highlight old and new file content
- Know which viewer it should use for each file (text, image, deleted, etc)
- Know if the file content changed
- Know if it was stored externally
- Know if it had storage errors
1. If the diff file is cacheable (text-based), it's cached on Redis
using `Gitlab::Diff::FileCollection::MergeRequestDiff`
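A minimal sketch of that read path (constructor arguments are simplified, not the exact code):
```ruby
# Sketch only: wrap the persisted diff in a file collection; cacheable (text-based)
# files are decorated from the Redis highlight cache, and missing entries are
# written back so the next request can skip re-highlighting.
collection = Gitlab::Diff::FileCollection::MergeRequestDiff.new(merge_request.merge_request_diff, diff_options: diff_options)
collection.diff_files # highlighted diff files, served from the cache where possible
collection.write_cache
```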
### Note diffs
When commenting on a diff (any comparison), we persist a truncated diff version
on `NoteDiffFile` (which is associated with the actual `DiffNote`). So instead
of hitting the repository every time we need the diff of the file, we:
1. Check whether we have the `NoteDiffFile#diff` persisted and use it
1. Otherwise, if it's a current MR revision, use the persisted
`MergeRequestDiffFile#diff`
1. In the last scenario, go to the repository and fetch the diff
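A minimal sketch of that lookup order (the helper methods here are hypothetical, not the actual implementation):
```ruby
# Hypothetical illustration of the fallback order described above.
def diff_for(diff_note)
  note_diff = diff_note.note_diff_file&.diff
  return note_diff if note_diff.present?            # 1. persisted NoteDiffFile#diff

  if current_mr_revision?(diff_note)                # hypothetical check
    return persisted_diff_file_for(diff_note).diff  # 2. persisted MergeRequestDiffFile#diff
  end

  fetch_diff_from_repository(diff_note)             # 3. last resort: hit the repository
end
```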
## Diff limits
As explained above, we limit single diff files and the size of the whole diff. There are scenarios where we collapse the diff file,
and cases where the diff file is not presented at all, and the user is guided to the Blob view.
### Diff collection limits
These limits apply to the whole collection of diff files. The number of files, the number of lines, and the file sizes are considered.
```ruby
Gitlab::Git::DiffCollection.collection_limits[:safe_max_files] = Gitlab::Git::DiffCollection::DEFAULT_LIMITS[:max_files] = 100
```
File diffs are collapsed (but are expandable) if 100 files have already been rendered.
```ruby
Gitlab::Git::DiffCollection.collection_limits[:safe_max_lines] = Gitlab::Git::DiffCollection::DEFAULT_LIMITS[:max_lines] = 5000
```
File diffs are collapsed (but are expandable) if 5000 lines have already been rendered.
```ruby
Gitlab::Git::DiffCollection.collection_limits[:safe_max_bytes] = Gitlab::Git::DiffCollection.collection_limits[:safe_max_files] * 5.kilobytes = 500.kilobytes
```
File diffs are collapsed (but are expandable) if 500 kilobytes have already been rendered.
```ruby
Gitlab::Git::DiffCollection.collection_limits[:max_files] = Commit::DIFF_HARD_LIMIT_FILES = 1000
```
No more files are rendered at all if 1000 files have already been rendered.
```ruby
Gitlab::Git::DiffCollection.collection_limits[:max_lines] = Commit::DIFF_HARD_LIMIT_LINES = 50000
```
No more files are rendered at all if 50,000 lines have already been rendered.
```ruby
Gitlab::Git::DiffCollection.collection_limits[:max_bytes] = Gitlab::Git::DiffCollection.collection_limits[:max_files] * 5.kilobytes = 5000.kilobytes
```
No more files are rendered at all if 5 megabytes have already been rendered.
All collection limit parameters are sent and applied on Gitaly. That is, after the limit is surpassed,
Gitaly only returns the safe amount of data to be persisted on `merge_request_diff_files`.
### Individual diff file limits
These limits apply to each diff file in a collection. The number of files, the number of lines, and the file sizes are considered.
#### Expandable patches (collapsed)
Diff patches are collapsed when they surpass 10% of the value set in `ApplicationSettings#diff_max_patch_bytes`.
That is, the collapse threshold is 10 KB if the maximum allowed value is 100 KB.
The diff is persisted and expandable if the patch size doesn't
surpass `ApplicationSettings#diff_max_patch_bytes`.
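For instance, with a maximum of 100 KB the thresholds work out as follows (illustrative arithmetic only):
```ruby
diff_max_patch_bytes = 100.kilobytes             # ApplicationSettings#diff_max_patch_bytes
collapse_threshold   = diff_max_patch_bytes / 10 # => 10240 bytes: larger patches are collapsed but stay expandable
# Patches larger than diff_max_patch_bytes itself are not expandable at all (see the next section).
```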
Although this nomenclature (collapsing) is also used by Gitaly, this limit is only applied by GitLab (it's hardcoded and not sent to Gitaly).
Gitaly only returns `Diff.Collapsed` (RPC) when collection limits are surpassed.
#### Not expandable patches (too large)
The patch is not rendered if it's larger than `ApplicationSettings#diff_max_patch_bytes`.
Users see a `Changes are too large to be shown.` message and a button to view only that file in that commit.
```ruby
Commit::DIFF_SAFE_LINES = Gitlab::Git::DiffCollection::DEFAULT_LIMITS[:max_lines] = 5000
```
A file diff is suppressed (technically different from collapsed, but it behaves the same and is expandable) if it has more than 5000 lines.
This limit is hardcoded and only applied by GitLab.
## Viewers
Diff viewers, which can be found in `models/diff_viewer/*`, are classes used to map metadata about each type of diff file. They hold information
such as whether the file is binary, which partial should be used to render it, and which file extensions the class accounts for.
`DiffViewer::Base` validates the _blobs_ (old and new versions) content, extension, and file type to check whether the file can be rendered.
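As a purely illustrative sketch of the kind of metadata a viewer maps (the class name and attribute assignments here are hypothetical, not the actual DSL):
```ruby
# Hypothetical example only - not a real viewer class.
module DiffViewer
  class ExampleImage < Base
    self.partial_name = 'image'          # which partial renders this file type
    self.extensions   = %w[png jpg jpeg] # which file extensions this viewer accounts for
    self.binary       = true             # whether the content is treated as binary
  end
end
```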
## Merge request diffs against the `HEAD` of the target branch
Historically, merge request diffs have been calculated by `git diff target...source`, which compares the
`HEAD` of the source branch with the merge base (a common ancestor) of the target and source branches.
This solution works well until the target branch starts containing some of the
changes introduced by the source branch. Consider the following case, in which the source branch
is `feature_a` and the target is `main`:
1. Check out a new branch `feature_a` from `main` and remove `file_a` and `file_b` in it.
1. Add a commit that removes `file_a` to `main`.
The merge request diff still contains the `file_a` removal, while the actual diff compared to
`main`'s `HEAD` contains only the `file_b` removal. A diff with such redundant
changes is harder to review.
To display an up-to-date diff we
[introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/27008) merge request
diffs compared against `HEAD` of the target branch: the
target branch is artificially merged into the source branch, then the resulting
merge ref is compared to the source branch to calculate an accurate
diff.
To support comments for both options, diff note positions are stored for
both the `main (base)` and `main (HEAD)` versions ([introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/198457) in GitLab 12.10).
The position for the `main (base)` version is stored in the `Note#position` and
`Note#original_position` columns; for the `main (HEAD)` version, `DiffNotePosition`
was introduced.
|
|
https://docs.gitlab.com/development/distribution
|
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/development/_index.md
|
2025-08-13
|
doc/development/distribution
|
[
"doc",
"development",
"distribution"
] |
_index.md
|
GitLab Delivery
|
Build
|
Any user with at least the Maintainer role can merge updates to this content. For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
|
Contribute to GitLab Distribution
|
Package methods and components for the GitLab application.
|
Learn how to add new components and services to the GitLab application.
## Support all package methods
Additions must support both Omnibus GitLab and Cloud Native GitLab. Changes
to one must be made to the other to retain feature parity.
## Contributing
The primary projects handled by Distribution are listed below. For more
information, visit the [Distribution team engineering handbook page](https://handbook.gitlab.com/handbook/engineering/infrastructure-platforms/gitlab-delivery/distribution/)
or select one of the subsections in the navigation bar.
### GitLab application
- [Omnibus GitLab](https://gitlab.com/gitlab-org/omnibus-gitlab)
- [Cloud Native GitLab (CNG)](https://gitlab.com/gitlab-org/build/CNG)
- [GitLab Operator](https://gitlab.com/gitlab-org/cloud-native/gitlab-operator)
- [GitLab Chart](https://gitlab.com/gitlab-org/charts/gitlab)
### Components and tools
- [Omnibus GitLab Builder](https://gitlab.com/gitlab-org/gitlab-omnibus-builder)
- [Omnibus Fork](https://gitlab.com/gitlab-org/omnibus)
- [GitLab Logger](https://gitlab.com/gitlab-org/cloud-native/gitlab-logger)
- [Issue Bot](https://gitlab.com/gitlab-org/distribution/issue-bot)
|
|
https://docs.gitlab.com/development/product_analytics
|
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/development/product_analytics.md
|
2025-08-13
|
doc/development/internal_analytics
|
[
"doc",
"development",
"internal_analytics"
] |
product_analytics.md
|
Monitor
|
Platform Insights
|
To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
|
Product analytics
| null |
{{< details >}}
- Tier: Ultimate
- Offering: GitLab.com, GitLab Self-Managed
- Status: Beta
{{< /details >}}
{{< history >}}
- Introduced in GitLab 15.4 as an [experiment](../../policy/development_stages_support.md#experiment) feature [with a flag](../../administration/feature_flags/_index.md) named `cube_api_proxy`. Disabled by default.
- `cube_api_proxy` changed to reference only the [product analytics API](../../api/product_analytics.md) in GitLab 15.6.
- `cube_api_proxy` removed and replaced with `product_analytics_internal_preview` in GitLab 15.10.
- `product_analytics_internal_preview` replaced with `product_analytics_dashboards` in GitLab 15.11.
- Snowplow integration [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/398253) in GitLab 15.11 [with a flag](../../administration/feature_flags/_index.md) named `product_analytics_snowplow_support`. Disabled by default.
- Snowplow integration feature flag `product_analytics_snowplow_support` [removed](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/130228) in GitLab 16.4.
- [Moved](https://gitlab.com/gitlab-org/gitlab/-/issues/414865) from GitLab Self-Managed to GitLab.com in 16.7.
- Enabled in GitLab 16.7 as a [beta](../../policy/development_stages_support.md#beta) feature.
- `product_analytics_dashboards` [enabled](https://gitlab.com/gitlab-org/gitlab/-/issues/398653) by default in GitLab 16.11.
- Feature flag `product_analytics_dashboards` [removed](https://gitlab.com/gitlab-org/gitlab/-/issues/454059) in GitLab 17.1.
- Funnels support removed in GitLab 17.4.
- [Changed](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/167192) to beta and feature flags `product_analytics_admin_settings` and [`product_analytics_features`](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/167296) added in GitLab 17.5. Disabled by default.
{{< /history >}}
{{< alert type="flag" >}}
The availability of this feature is controlled by a feature flag.
For more information, see the history.
This feature is not ready for production use.
{{< /alert >}}
The product analytics feature empowers you to track user behavior and gain insights into how your
applications are used and how users interact with your product.
By using the data collected with product analytics in GitLab, you can better understand your users,
identify friction points in funnels, make data-driven product decisions, and ultimately build better
products that drive user engagement and business growth.
<i class="fa fa-youtube-play youtube" aria-hidden="true"></i>
For an overview of the product analytics setup and functionality,
watch the [Product Analytics walkthrough videos](https://www.youtube.com/playlist?list=PL05JrBw4t0Kqfb4oLOFKkXxNrBJzDQ3sL&feature=shared).
For more information about the vision and development of product analytics, see the [group direction page](https://about.gitlab.com/direction/monitor/platform-insights/product-analytics/).
To leave feedback about product analytics bugs or functionality:
- Comment on [issue 391970](https://gitlab.com/gitlab-org/gitlab/-/issues/391970).
- Create an issue with the `group::platform insights` label.
## How product analytics works
Product analytics uses the following tools:
- [**Snowplow**](https://docs.snowplow.io/docs/) - A developer-first engine for collecting behavioral data and passing it through to ClickHouse.
- [**ClickHouse**](../../integration/clickhouse.md) - A database suited to store, query, and retrieve analytical data.
- [**Cube**](https://cube.dev/docs/product/introduction) - A universal semantic layer that provides an API to run queries against the data stored in ClickHouse.
The following diagram illustrates the product analytics flow:
```mermaid
%%{init: { "fontFamily": "GitLab Sans" }}%%
flowchart TB
accTitle: Product Analytics flow
accDescr: How data is collected, processed, and visualized in dashboards.
subgraph Event collection
A([SDK]) --Send user data--> B[Snowplow Collector]
B --Pass data--> C[Snowplow Enricher]
end
subgraph Data warehouse
C --Transform and enrich data--> D([ClickHouse])
end
subgraph Data visualization
F([Dashboards with panels/visualizations])
F --Request data--> G[Product Analytics API]
G --Run Cube queries with pre-aggregations--> H[Cube]
H --Get data--> D
D --Return results--> H
H --Transform data to be rendered--> G
G --Return data--> F
end
```
## Enable product analytics
{{< history >}}
- Introduced in GitLab 15.6 [with a flag](../../administration/feature_flags/_index.md) named `cube_api_proxy`. Disabled by default.
- Moved behind a [flag](../../administration/feature_flags/_index.md) named `product_analytics_admin_settings` in GitLab 15.7. Disabled by default.
- Feature flag `cube_api_proxy` removed and replaced with `product_analytics_internal_preview` in GitLab 15.10.
- Feature flag `product_analytics_internal_preview` replaced with `product_analytics_dashboards` in GitLab 15.11.
- Feature flag `product_analytics_admin_settings` [enabled](https://gitlab.com/gitlab-org/gitlab/-/issues/385602) by default in GitLab 16.11.
- Feature flag `product_analytics_admin_settings` [removed](https://gitlab.com/gitlab-org/gitlab/-/issues/454342) in GitLab 17.1.
{{< /history >}}
To track events in your project's applications,
you must enable and configure product analytics.
### Product analytics provider
{{< history >}}
- Self-managed provider [introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/117804) in GitLab 16.0.
{{< /history >}}
Your GitLab instance connects to a product analytics provider.
A product analytics provider is the collection of services required to receive,
process, store, and query your analytics data.
{{< tabs >}}
{{< tab title="GitLab-managed provider" >}}
On GitLab.com, you can use a GitLab-managed provider, which is offered only in the Google Cloud Platform zone `us-central-1`.
If GitLab manages your product analytics provider, then your analytics data is retained for one year.
You can request to delete your data at any time by [contacting support](https://about.gitlab.com/support/#contact-support).
{{< /tab >}}
{{< tab title="Self-managed provider" >}}
A self-managed product analytics provider is a deployed instance of the
[product analytics Helm charts](https://gitlab.com/gitlab-org/analytics-section/product-analytics/helm-charts).
On GitLab.com, the self-managed provider details are defined in [project-level settings](#project-level-settings).
On GitLab Self-Managed, you must define the self-managed analytics provider in [instance-level settings](#instance-level-settings).
If you need different providers for different projects, you can define additional analytics providers in [project-level settings](#project-level-settings).
{{< /tab >}}
{{< /tabs >}}
### Instance-level settings
{{< details >}}
- Offering: GitLab Self-Managed
{{< /details >}}
Prerequisites:
- You must have administrator access for the instance.
{{< alert type="note" >}}
These instance-level settings are required to enable product analytics on GitLab Self-Managed,
and cascade to all projects by default.
{{< /alert >}}
To enable product analytics on your instance:
1. On the left sidebar, at the bottom, select **Admin**.
1. Select **Settings > Analytics**.
1. Enter the configuration values.
1. Select **Save changes**.
### Project-level settings
If you want to have a product analytics instance with a different configuration for your project,
you can override the instance-level settings defined by the administrator on a per-project basis.
Prerequisites:
- You must have at least the Maintainer role for the project or group the project belongs to.
- The project must be in a group namespace.
To configure project-level settings:
1. On the left sidebar, select **Search or go to** and find your project.
1. Select **Settings > Analytics**.
1. Expand **Data sources** and enter the configuration values.
1. Select **Save changes**.
## Onboard a GitLab project
{{< history >}}
- Minimum required role [changed](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/154089/) in GitLab 17.1.
{{< /history >}}
Prerequisites:
- You must have at least the Maintainer role for the project or group the project belongs to.
Onboarding a GitLab project means preparing it to receive events that are used for product analytics.
To onboard a project:
1. On the left sidebar, select **Search or go to** and find your project.
1. Select **Analyze > Analytics dashboards**.
1. Under **Product analytics**, select **Set up**.
Then continue with the setup depending on the provider type.
{{< tabs >}}
{{< tab title="GitLab-managed provider" >}}
Prerequisites:
- You must have access to the [GitLab-managed provider](#product-analytics-provider).
1. Select the **I agree to event collection and processing in this region** checkbox.
1. Select **Connect GitLab-managed provider**.
1. If project-level settings for a self-managed provider are already configured, remove them:
1. Select **Go to analytics settings**.
1. Expand **Data sources** and remove the configuration values.
1. Select **Save changes**.
1. Select **Analyze > Analytics dashboards**.
1. Under **Product analytics**, select **Set up**.
1. Select **Connect GitLab-managed provider**.
Your instance is being created, and the project onboarded.
{{< /tab >}}
{{< tab title="Self-managed provider" >}}
1. Select **Connect your own provider**.
1. Configure project-level settings for your self-managed provider:
1. Select **Go to analytics settings**.
1. Expand **Data sources** and enter the configuration values.
1. Select **Save changes**.
1. Select **Analyze > Analytics dashboards**.
1. Under **Product analytics**, select **Set up**.
1. Select **Connect your own provider**.
Your instance is being created, and the project onboarded.
{{< /tab >}}
{{< /tabs >}}
## Instrument your application
You can instrument code to collect data by using [tracking SDKs](../_index.md).
## Product analytics dashboards
{{< history >}}
- Introduced in GitLab 15.5 [with a flag](../../administration/feature_flags/_index.md) named `product_analytics_internal_preview`. Disabled by default.
{{< /history >}}
Product analytics dashboards are a subset of dashboards under [Analytics dashboards](../../user/analytics/analytics_dashboards.md).
Specifically, product analytics dashboards and visualizations use the `cube_analytics` data type.
The `cube_analytics` data type connects to the Cube instance defined when [product analytics was enabled](#enable-product-analytics).
All filters and queries are sent to the Cube instance, and the returned data is processed by the
product analytics data source to be rendered by the appropriate visualizations.
Data table visualizations from `cube_analytics` have an additional configuration option for rendering `links`.
This option is an array of objects, each with `text` and `href` properties to specify the dimensions to be used in links.
If `href` contains multiple dimensions, values are joined into a single URL.
View an [example](https://gitlab.com/gitlab-org/gitlab/-/blob/master/ee/app/validators/json_schemas/analytics_visualization.json?ref_type=heads#L112).
When product analytics is enabled and onboarded, two built-in dashboards are available:
- **Audience** displays metrics related to traffic, such as the number of users and sessions.
- **Behavior** displays metrics related to user activity, such as the number of page views and events.
### Filling missing data
{{< history >}}
- Introduced in GitLab 16.3 [with a flag](../../administration/feature_flags/_index.md) named `product_analytics_dashboards`. Disabled by default.
{{< /history >}}
When [exporting data](#raw-data-export) or [viewing dashboards](../../user/analytics/analytics_dashboards.md#view-project-dashboards),
if there is no data for a given day, the missing data is autofilled with `0`.
The autofill approach has both benefits and limitations.
- Benefits:
- The visualization's day axis matches the selected date range, removing ambiguity about missing data.
- Data exports have rows for the entire date range, making data analysis easier.
- Limitations:
- The `day` [granularity](https://cube.dev/docs/product/apis-integrations/rest-api/query-format) must be used. No other granularities are supported.
- Only date ranges defined by the [`inDateRange`](https://cube.dev/docs/product/apis-integrations/rest-api/query-format#indaterange) filter are filled.
- The date selector in the UI already uses this filter.
- Data filling ignores the query-defined limit. If you set a limit of 10 data points over 20 days, the query
returns 20 data points, with the missing data filled with `0`. [Issue 417231](https://gitlab.com/gitlab-org/gitlab/-/issues/417231) proposes a solution to this limitation.
## Raw data export
Exporting the raw event data from the underlying storage engine can help you debug and create datasets for data analysis.
Because Cube acts as an abstraction layer between the raw data and the API, the exported raw data has some caveats:
- Data is grouped by the selected dimensions. Therefore, the exported data might be incomplete unless you include both `utcTime` and `userAnonymousId` in the dimensions.
- Data is limited to 10,000 rows by default, but you can increase the limit to a maximum of 50,000 rows. If your dataset has more than 50,000 rows, you must paginate through the results by using the `limit` and `offset` parameters.
- Data is always returned in JSON format. If you need it in a different format, convert the JSON by using a scripting language of your choice.
[Issue 391683](https://gitlab.com/gitlab-org/gitlab/-/issues/391683) tracks efforts to implement a more scalable export solution.
### Export raw data with Cube queries
You can [query the raw data with the REST API](../../api/product_analytics.md#send-query-request-to-cube),
and convert the JSON output to any required format.
To export the raw data for a specific dimension, pass a list of dimensions to the `dimensions` key.
For example, the following query outputs the raw data for the attributes listed:
```json
POST /api/v4/projects/PROJECT_ID/product_analytics/request/load?queryType=multi
{
"query":{
"dimensions": [
"TrackedEvents.docEncoding",
"TrackedEvents.docHost",
"TrackedEvents.docPath",
"TrackedEvents.docSearch",
"TrackedEvents.eventType",
"TrackedEvents.localTzOffset",
"TrackedEvents.pageTitle",
"TrackedEvents.src",
"TrackedEvents.utcTime",
"TrackedEvents.vpSize"
],
"order": {
"TrackedEvents.apiKey": "asc"
}
}
}
```
If the request is successful, the returned JSON includes an array of rows of results.
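If you need the export in CSV, for example, you can send the query from a small script and flatten the JSON response yourself. The following is a minimal sketch, not an official tool: it assumes Node.js 18 or later for the built-in `fetch`, placeholder project ID and hostname values, a personal access token in a `GITLAB_TOKEN` environment variable, and that rows are nested under `results[].data` in the multi-query response. Adjust the dimensions, `limit`, and `offset` to paginate through larger datasets.
```javascript
// Minimal sketch: send a Cube query to the product analytics API and print CSV.
// PROJECT_ID, the hostname, and the token are placeholders you must replace.
const PROJECT_ID = 12345;
const query = {
  dimensions: [
    'TrackedEvents.eventType',
    'TrackedEvents.pageTitle',
    'TrackedEvents.utcTime',
  ],
  order: { 'TrackedEvents.utcTime': 'asc' },
  limit: 10000, // raise up to 50,000, or page through results with `offset`
  offset: 0,
};

async function exportCsv() {
  const response = await fetch(
    `https://gitlab.example.com/api/v4/projects/${PROJECT_ID}/product_analytics/request/load?queryType=multi`,
    {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        'PRIVATE-TOKEN': process.env.GITLAB_TOKEN,
      },
      body: JSON.stringify({ query }),
    },
  );

  // With `queryType=multi`, row objects are assumed to be nested under
  // `results[].data`; inspect the payload if your response shape differs.
  const payload = await response.json();
  const rows = (payload.results ?? []).flatMap((result) => result.data ?? []);

  const header = Object.keys(rows[0] ?? {});
  const lines = rows.map((row) =>
    header.map((key) => JSON.stringify(row[key] ?? '')).join(','),
  );
  console.log([header.join(','), ...lines].join('\n'));
}

exportCsv();
```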
## View product analytics usage quota
{{< history >}}
- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/424153) in GitLab 16.6 [with a flag](../../administration/feature_flags/_index.md) named `product_analytics_usage_quota`. Disabled by default.
- [Generally available](https://gitlab.com/gitlab-org/gitlab/-/issues/427838) in GitLab 16.7. Feature flag `product_analytics_usage_quota` removed.
{{< /history >}}
Product analytics usage quota is calculated from the number of events received from instrumented applications.
To view product analytics usage quota:
1. On the left sidebar, select **Search or go to** and find your group.
1. Select **Settings > Usage quota**.
1. Select the **Product analytics** tab.
The tab displays the monthly totals for the group and a breakdown of usage per project.
The current month displays events counted to date.
The usage quota excludes projects that are not onboarded with product analytics.
## Best practices
- Define key metrics and goals from the start. Decide what questions you want to answer so you know how to use collected data.
- Use event data from all stages of the user journey. This data provides a comprehensive view of the user experience.
- Build dashboards aligned with team needs. Different teams need different data insights.
- Review dashboards regularly. This way, you can verify customer outcomes, identify trends in data, and update visualizations.
- Export raw data periodically. Dashboards provide only an overview of a subset of data, so you should export the data for a deeper analysis.
## Troubleshooting
### No events are collected
Check your [instrumentation details](#enable-product-analytics),
and make sure product analytics is enabled and set up correctly.
### Access to product analytics is restricted
Check that you are connected to a [product analytics provider](#product-analytics-provider).
|
https://docs.gitlab.com/development/cli_contribution_guidelines
|
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/development/cli_contribution_guidelines.md
|
2025-08-13
|
doc/development/internal_analytics
|
[
"doc",
"development",
"internal_analytics"
] |
cli_contribution_guidelines.md
|
Monitor
|
Analytics Instrumentation
|
Any user with at least the Maintainer role can merge updates to this content. For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
|
Contributing to the Internal Events CLI
| null |
## Priorities of the CLI
1. Feature parity with the instrumentation capabilities as the CLI is the intended entrypoint for all instrumentation tasks
1. Performance and manual testing are top priorities, as the CLI is primarily responsible for giving users a clean & clear UX
1. If a user opts not to use the CLI, danger/specs/pipelines still ensure definition validity/data integrity/functionality/etc
## UX Style Guide & Principles
### When the generator should be used
The internal events generator should:
- be a one-stop-shop for any engineering tasks related to instrumenting metrics
The internal events generator _should not_:
- be required; users should be able to perform the same tasks manually
### What we expect of users
The internal events generator should:
- protect users from making mistakes
- communicate which tasks still need to be completed to achieve their goal at any given time
- communicate the consequences of selecting a particular option or inputting any text based on only the information they see on the screen
The internal events generator _should not_:
- require users to know anything about instrumentation before running the generator
- require the user to switch screens if certain context is needed in order to complete a given task
- block users from proceeding without offering an alternate path forward
### What we expect of the development environment
The internal events generator should:
- be faster than manually performing the same tasks
- leave the user's environment in a clean & valid state if force-exited
The internal events generator _should not_:
- break when invalid user-generated content exists
- require Rails to be running
- require a functioning GDK for usage
### Setting expectations with the user
The internal events generator should:
- show a progress bar and detail the required steps at the top of each screen
- have outcome-based entrypoints defining each flow
- use a casual and enthusiastic tone
### Communicating information to the user
The internal events generator should:
- provide textual labels and explanations for everything
- always print the `InternalEventsCli::Text::FEEDBACK_NOTICE` when a user exits the CLI
- use examples to illustrate outcomes
The internal events generator _should not_:
- use color & formatting as the exclusive mechanism to communicate information or context
### Collecting information from the user
The internal events generator should:
- prefer using select menus to plain text inputs
- auto-fill with defaults where possible or use previous selections to infer information
- select the most common use-case as the first/easiest/default option
- always allow any valid option; the CLI should never assume the most common use-case is always used
The internal events generator _should not_:
- require the user to re-enter the same information multiple times
- have interactions extending "past the fold" of the screen when using the CLI full-screen (where possible)
## Design Tips
- Refer to `scripts/internal_events/cli/helpers/formatting.rb` for formatting different types of information and inputs.
- Adding or removing content can change how well a flow works. Always consider the wider context & don't be afraid to make other modifications to improve UX.
- Instead of a multi-select menu with dependencies & validations, consider using a single-select menu listing each allowable combination. This may not always work well, but it is a quicker interaction and makes the outcome of the selection clearer to the user.
- When adding to an existing flow, match the formatting and structure of the existing screens as closely as possible. Think about the function each piece of text is serving, and either a) group related text by its function, or b) group related text by subject and use the same functional order for each subject.
## Development Practices
- Feature documentation: Co-release documentation updates with CLI updates
- If the CLI is our recommended entrypoint for all instrumentation, it must always be feature-complete. It should
not lag behind the documentation or the features we announce to other teams.
- CLI documentation: Rely on inline or co-located documentation of CLI code as much as possible
- The more likely we are to stumble upon context/explanation while working on the CLI, the more likely we are to a) reduce the likelihood of unused/duplicate code and b) increase code navigability and speed of re-familiarization.
- Testing: Approach tests the same as you would for a frontend application
- Automated tests should be primarily UX-oriented E2E tests, with supplementary edge case testing and unit tests on an as-needed basis.
- Apply unit tests in places where they are absolutely necessary to guard against regressions.
- Verification: Always run the CLI directly when adding feature support
- We don't want to rely only on automated tests. If our goal is great user-experience, then we as users are a critical tool in making sure everything we merge serves that goal. If it's cumbersome & annoying to manually test, then it's probably also cumbersome and annoying to use.
## FAQ
**Q**: Why don't `InternalEventsCli::Event` & `InternalEventsCli::Metric` use `Gitlab::Tracking::EventDefinition` & `Gitlab::Usage::MetricDefinition`?
**A**: Using the `EventDefinition` & `MetricDefinition` classes would require GDK to be running and the Rails app to be loaded. The performance of the CLI is critical to its usability, so maintaining separate classes is worth it for the snappy startup times they provide. Ideally, this will be refactored over time so that the same classes can be used for both the CLI & the Rails app. For now, the Rails app and the CLI share the `json-schemas` for the definitions as a single source of truth.
|
https://docs.gitlab.com/development/browser_sdk
|
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/development/browser_sdk.md
|
2025-08-13
|
doc/development/internal_analytics
|
[
"doc",
"development",
"internal_analytics"
] |
browser_sdk.md
|
Monitor
|
Analytics Instrumentation
|
To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
|
Browser SDK
| null |
This SDK is for instrumenting websites and applications to send data for the GitLab [product analytics functionality](../_index.md).
## How to use the Browser SDK
### Using the NPM package
Add the NPM package to your `package.json` by using your preferred package manager:
{{< tabs >}}
{{< tab title="yarn" >}}
```shell
yarn add @gitlab/application-sdk-browser
```
{{< /tab >}}
{{< tab title="npm" >}}
```shell
npm i @gitlab/application-sdk-browser
```
{{< /tab >}}
{{< /tabs >}}
Then, for browser usage, import the client SDK:
```javascript
import { glClientSDK } from '@gitlab/application-sdk-browser';
this.glClient = glClientSDK({ appId, host });
```
### Using the script directly
Add the script to the page and assign the client SDK to `window`:
```html
<script src="https://unpkg.com/@gitlab/application-sdk-browser/dist/gl-sdk.min.js"></script>
<script>
window.glClient = window.glSDK.glClientSDK({
appId: 'YOUR_APP_ID',
host: 'YOUR_HOST',
});
</script>
```
You can use a specific version of the SDK like this:
```html
<script src="https://unpkg.com/@gitlab/application-sdk-browser@0.2.5/dist/gl-sdk.min.js"></script>
```
## Browser SDK initialization options
Apart from `appId` and `host`, you can configure the Browser SDK with the following options:
```typescript
interface GitLabClientSDKOptions {
appId: string;
host: string;
hasCookieConsent?: boolean;
trackerId?: string;
pagePingTracking?:
| boolean
| {
minimumVisitLength?: number;
heartbeatDelay?: number;
};
plugins?: AllowedPlugins;
}
```
| Option | Description |
| :---------------------------- | :---------- |
| `appId` | The ID provided by the GitLab Project Analytics setup guide. This ID ensures your data is sent to your analytics instance. |
| `host` | The GitLab Project Analytics instance provided by the setup guide. |
| `hasCookieConsent` | Whether to use cookies to identify unique users. Set to `false` by default. When `false`, users are considered anonymous users. No cookies or other storage mechanisms are used to identify users. |
| `trackerId` | Used to differentiate between multiple trackers running on the same page or application, because each tracker instance can be configured differently to capture different sets of data. This identifier helps ensure that the data sent to the collector is correctly associated with the correct tracker configuration. Default value is `gitlab`. |
| `pagePingTracking` | Option to track user engagement on your website or application by sending periodic events while a user is actively browsing a page. Page pings provide valuable insight into how users interact with your content, such as how long they spend on a page, which sections they are viewing, and whether they are scrolling. `pagePingTracking` can be boolean or an object. As a boolean, set to `true` it enables page ping with default options, and set to `false` it disables page ping tracking. As an object, it has two options: `minimumVisitLength` (the minimum time that must have elapsed before the first heartbeat) and `heartbeatDelay` (the interval at which the callback is fired). |
| `plugins` | Specify which plugins to enable or disable. By default all plugins are enabled. |
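For illustration, a minimal initialization sketch that sets several of these options might look like the following. The `appId` and `host` values are placeholders, and the page ping numbers are arbitrary example values, not recommended defaults:
```javascript
import { glClientSDK } from '@gitlab/application-sdk-browser';

// Placeholder appId/host values; the page ping numbers are arbitrary examples.
const glClient = glClientSDK({
  appId: 'YOUR_APP_ID',
  host: 'YOUR_HOST',
  hasCookieConsent: false, // keep users anonymous until consent is given
  trackerId: 'gitlab',
  pagePingTracking: {
    minimumVisitLength: 10,
    heartbeatDelay: 30,
  },
});
```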
### Plugins
- `Client Hints`: An alternative to tracking the User Agent, which is particularly useful in browsers that are freezing the User Agent string.
Enabling this plugin automatically captures the client hints context.
For example, the context defined by
[iglu:org.ietf/http_client_hints/jsonschema/1-0-0](https://github.com/snowplow/iglu-central/blob/master/schemas/org.ietf/http_client_hints/jsonschema/1-0-0)
has the following structure:
```json
{
"isMobile":false,
"brands":[
{
"brand":"Google Chrome",
"version":"89"
},
{
"brand":"Chromium",
"version":"89"
}
]
}
```
- `Link Click Tracking`: With this plugin, the tracker adds click event listeners to all link elements. Link clicks are tracked as self-describing events. Each link-click event captures the link's `href` attribute. The event also has fields for the link's ID, classes, and target (where the linked document is opened, such as a new tab or new window).
- `Performance Timing`: This plugin collects performance-related data from a user's browser using the `Navigation Timing API`. This API provides detailed information about the various stages of loading a web page, such as domain lookup, connection time, content download, and rendering times. This plugin helps to gather insights into how well a website performs for users, identify potential performance bottlenecks, and improve the overall user experience.
- `Error Tracking`: This plugin helps to capture and track errors that occur on a website or application. By monitoring these errors, you can gain insights into potential issues with code or third-party libraries, which can help to improve the overall user experience and maintain the quality of the website or application.
By default all plugins are enabled. You can disable or enable these plugins through the `plugins` object:
```typescript
const tracker = glClientSDK({
...options,
plugins: {
clientHints: true,
linkTracking: true,
performanceTiming: true,
errorTracking: true,
},
});
```
## Methods
### `identify`
Used to associate a user and their attributes with the session and tracking events.
```javascript
glClient.identify(userId, userAttributes);
```
| Property | Type | Description |
| :--------------- | :-------------------------- | :---------------------------------------------------------------------------- |
| `userId` | `String` | The user identifier your application uses to identify individual users. |
| `userAttributes` | `Object`/`Null`/`undefined` | The user attributes that need to be added to the session and tracking events. |
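For example, with a hypothetical user ID and attributes:
```javascript
// Hypothetical user ID and attributes.
glClient.identify('user-12345', {
  user_plan: 'ultimate',
  user_role: 'maintainer',
});
```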
### `page`
Used to trigger a pageview event.
```javascript
glClient.page(eventAttributes);
```
| Property | Type | Description |
| :---------------- | :-------------------------- | :---------------------------------------------------------------- |
| `eventAttributes` | `Object`/`Null`/`undefined` | The event attributes that need to be added to the pageview event. |
The `eventAttributes` object supports the following optional properties:
| Property | Type | Description |
|:------------------|:------------|:------------|
| `title` | `String` | Override the default page title. |
| `contextCallback` | `Function` | A callback that fires on the page view. |
| `context` | `Object` | Add context (additional information) on the page view. |
| `timestamp` | `timestamp` | Set the true timestamp or overwrite the device-sent timestamp on an event. |
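For example, with a hypothetical title and timestamp override:
```javascript
// Hypothetical values: override the page title and the device-sent timestamp.
glClient.page({
  title: 'Checkout - Step 2',
  timestamp: 1717000000000,
});
```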
### `track`
Used to trigger a custom event.
```javascript
glClient.track(eventName, eventAttributes);
```
| Property | Type | Description |
| :---------------- | :-------------------------- | :--------------------------------------------------------------- |
| `eventName` | `String` | The name of the custom event. |
| `eventAttributes` | `Object`/`Null`/`undefined` | The event attributes that need to be added to the tracked event. |
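For example, to record a hypothetical `cta_clicked` event with custom attributes:
```javascript
// Hypothetical event name and attributes.
glClient.track('cta_clicked', {
  button_id: 'signup-hero',
  section: 'pricing',
});
```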
### `refreshLinkClickTracking`
`enableLinkClickTracking` tracks only clicks on links that exist when the page has loaded. To track new links added to the page after it has been loaded, use `refreshLinkClickTracking`.
```javascript
glClient.refreshLinkClickTracking();
```
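For example, after adding a link to the DOM dynamically, call the method so clicks on the new link are tracked as well (a minimal sketch with a placeholder URL):
```javascript
// Add a link dynamically, then refresh link click tracking to include it.
const link = document.createElement('a');
link.href = 'https://example.com/docs';
link.textContent = 'Read the docs';
document.body.appendChild(link);

glClient.refreshLinkClickTracking();
```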
### `trackError`
{{< alert type="note" >}}
`trackError` is supported on the Browser SDK, but the resulting events are not used or available.
{{< /alert >}}
Used to capture errors. This works only when the `errorTracking` plugin is enabled. The [plugin](#plugins) is enabled by default.
```javascript
glClient.trackError(eventAttributes);
```
For example, `trackError` can be used in `try...catch` like below:
```javascript
try {
// Call the function that throws an error
throwError();
} catch (error) {
glClient.trackError({
message: error.message, // "This is a custom error"
filename: error.fileName || 'unknown', // The file in which the error occurred (for example, "index.html")
lineno: error.lineNumber || 0, // The line number where the error occurred (for example, 2)
colno: error.columnNumber || 0, // The column number where the error occurred (for example, 6)
error: error, // The Error object itself
});
}
```
| Property | Type | Description |
| :---------------- | :------- | :------------------------------------------------------------------------------------------------------------------- |
| `eventAttributes` | `Object` | The event attributes that need to be added to the tracked event. `message` is a mandatory key in `eventAttributes`. |
### `addCookieConsent`
`addCookieConsent` is used to allow tracking of user identifiers by using cookies. By default, `hasCookieConsent` is `false`, and no user identifiers are passed. To enable tracking of user identifiers, call the `addCookieConsent` method. This step is not needed if you initialized the Browser SDK with `hasCookieConsent` set to `true`.
```javascript
glClient.addCookieConsent();
```
### `setCustomUrl`
Used to set a custom URL for tracking.
```javascript
glClient.setCustomUrl(url);
```
| Property | Type | Description |
| :------- | :------- | :------------------------------------------------ |
| `url` | `String` | The custom URL that you want to set for tracking. |
### `setReferrerUrl`
Used to set a referrer URL for tracking.
```javascript
glClient.setReferrerUrl(url);
```
| Property | Type | Description |
| :------- | :------- | :-------------------------------------------------- |
| `url` | `String` | The referrer URL that you want to set for tracking. |
### `setDocumentTitle`
Used to override the document title.
```javascript
glClient.setDocumentTitle(title);
```
| Property | Type | Description |
| :------- | :------- | :--------------------------------- |
| `title` | `String` | The document title you want to set. |
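For example, a single-page application might combine these setters (with placeholder values) before triggering a pageview for a virtual page:
```javascript
// Placeholder values for a virtual page in a single-page application.
glClient.setCustomUrl('https://example.com/app/checkout/step-2');
glClient.setReferrerUrl('https://example.com/app/checkout/step-1');
glClient.setDocumentTitle('Checkout - Step 2');
glClient.page();
```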
## Contribute
If you would like to contribute to Browser SDK, follow the [contributing guide](https://gitlab.com/gitlab-org/analytics-section/product-analytics/gl-application-sdk-js/-/blob/main/docs/Contributing.md).
## Troubleshooting
If the Browser SDK is not sending events or is behaving in an unexpected way, take the following actions:
1. Verify that the `appId` and host values in the options object are correct.
1. Check if any browser privacy settings, extensions, or ad blockers are interfering with the Browser SDK.
For more information and assistance, see the [Snowplow documentation](https://docs.snowplow.io/docs/collecting-data/collecting-from-own-applications/javascript-trackers/web-tracker/)
or contact the [Analytics Instrumentation team](https://handbook.gitlab.com/handbook/engineering/development/analytics/analytics-instrumentation/#team-members).
|
---
stage: Monitor
group: Analytics Instrumentation
info: To determine the technical writer assigned to the Stage/Group associated with
this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Browser SDK
breadcrumbs:
- doc
- development
- internal_analytics
---
This SDK is for instrumenting web sites and applications to send data for the GitLab [product analytics functionality](../_index.md).
## How to use the Browser SDK
### Using the NPM package
Add the NPM package to your package JSON using your preferred package manager:
{{< tabs >}}
{{< tab title="yarn" >}}
```shell
yarn add @gitlab/application-sdk-browser
```
{{< /tab >}}
{{< tab title="npm" >}}
```shell
npm i @gitlab/application-sdk-browser
```
{{< /tab >}}
{{< /tabs >}}
Then, for browser usage import the client SDK:
```javascript
import { glClientSDK } from '@gitlab/application-sdk-browser';
this.glClient = glClientSDK({ appId, host });
```
### Using the script directly
Add the script to the page and assign the client SDK to `window`:
```html
<script src="https://unpkg.com/@gitlab/application-sdk-browser/dist/gl-sdk.min.js"></script>
<script>
window.glClient = window.glSDK.glClientSDK({
appId: 'YOUR_APP_ID',
host: 'YOUR_HOST',
});
</script>
```
You can use a specific version of the SDK like this:
```html
<script src="https://unpkg.com/@gitlab/application-sdk-browser@0.2.5/dist/gl-sdk.min.js"></script>
```
## Browser SDK initialization options
Apart from `appId` and `host`, you can configure the Browser SDK with the following options:
```typescript
interface GitLabClientSDKOptions {
appId: string;
host: string;
hasCookieConsent?: boolean;
trackerId?: string;
pagePingTracking?:
| boolean
| {
minimumVisitLength?: number;
heartbeatDelay?: number;
};
plugins?: AllowedPlugins;
}
```
| Option | Description |
| :---------------------------- | :---------- |
| `appId` | The ID provided by the GitLab Project Analytics setup guide. This ID ensures your data is sent to your analytics instance. |
| `host` | The GitLab Project Analytics instance provided by the setup guide. |
| `hasCookieConsent` | Whether to use cookies to identify unique users. Set to `false` by default. When `false`, users are considered anonymous users. No cookies or other storage mechanisms are used to identify users. |
| `trackerId` | Used to differentiate between multiple trackers running on the same page or application, because each tracker instance can be configured differently to capture different sets of data. This identifier helps ensure that the data sent to the collector is correctly associated with the correct tracker configuration. Default value is `gitlab`. |
| `pagePingTracking` | Option to track user engagement on your website or application by sending periodic events while a user is actively browsing a page. Page pings provide valuable insight into how users interact with your content, such as how long they spend on a page, which sections they are viewing, and whether they are scrolling. `pagePingTracking` can be boolean or an object. As a boolean, set to `true` it enables page ping with default options, and set to `false` it disables page ping tracking. As an object, it has two options: `minimumVisitLength` (the minimum time that must have elapsed before the first heartbeat) and `heartbeatDelay` (the interval at which the callback is fired). |
| `plugins` | Specify which plugins to enable or disable. By default all plugins are enabled. |
### Plugins
- `Client Hints`: An alternative to tracking the User Agent, which is particularly useful in browsers that are freezing the User Agent string.
Enabling this plugin automatically captures the
[iglu:org.ietf/http_client_hints/jsonschema/1-0-0](https://github.com/snowplow/iglu-central/blob/master/schemas/org.ietf/http_client_hints/jsonschema/1-0-0)
context. For example, the captured context can have the following configuration:
```json
{
"isMobile":false,
"brands":[
{
"brand":"Google Chrome",
"version":"89"
},
{
"brand":"Chromium",
"version":"89"
}
]
}
```
- `Link Click Tracking`: With this plugin, the tracker adds click event listeners to all link elements. Link clicks are tracked as self-describing events. Each link-click event captures the link's `href` attribute. The event also has fields for the link's ID, classes, and target (where the linked document is opened, such as a new tab or new window).
- `Performance Timing`: Collects performance-related data from the user's browser using the `Navigation Timing API`. This API provides detailed information about the various stages of loading a web page, such as domain lookup, connection time, content download, and rendering times. This plugin helps you gather insights into how well a website performs for users, identify potential performance bottlenecks, and improve the overall user experience.
- `Error Tracking`: Captures and tracks errors that occur on a website or application. By monitoring these errors, you can gain insights into potential issues with code or third-party libraries, which helps you improve the overall user experience and maintain the quality of the website or application.
By default all plugins are enabled. You can disable or enable these plugins through the `plugins` object:
```typescript
const tracker = glClientSDK({
...options,
plugins: {
clientHints: true,
linkTracking: true,
performanceTiming: true,
errorTracking: true,
},
});
```
## Methods
### `identify`
Used to associate a user and their attributes with the session and tracking events.
```javascript
glClient.identify(userId, userAttributes);
```
| Property | Type | Description |
| :--------------- | :-------------------------- | :---------------------------------------------------------------------------- |
| `userId` | `String` | The user identifier your application uses to identify individual users. |
| `userAttributes` | `Object`/`Null`/`undefined` | The user attributes that need to be added to the session and tracking events. |
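For example (the user identifier and attribute keys below are illustrative):

```javascript
// Associate subsequent events with a known user and optional custom attributes.
glClient.identify('user-123', { plan: 'premium', role: 'developer' });

// Identify a user without additional attributes.
glClient.identify('user-123');
```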
### `page`
Used to trigger a pageview event.
```javascript
glClient.page(eventAttributes);
```
| Property | Type | Description |
| :---------------- | :-------------------------- | :---------------------------------------------------------------- |
| `eventAttributes` | `Object`/`Null`/`undefined` | The event attributes that need to be added to the pageview event. |
The `eventAttributes` object supports the following optional properties:
| Property | Type | Description |
|:------------------|:------------|:------------|
| `title` | `String` | Override the default page title. |
| `contextCallback` | `Function` | A callback that fires on the page view. |
| `context` | `Object` | Add context (additional information) on the page view. |
| `timestamp` | `timestamp` | Set the true timestamp or overwrite the device-sent timestamp on an event. |
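For example, to record a page view with an overridden title (the title shown is illustrative):

```javascript
// Track a page view, overriding the default document title.
glClient.page({ title: 'Checkout - Step 2' });

// Track a page view with default attributes.
glClient.page();
```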
### `track`
Used to trigger a custom event.
```javascript
glClient.track(eventName, eventAttributes);
```
| Property | Type | Description |
| :---------------- | :-------------------------- | :--------------------------------------------------------------- |
| `eventName` | `String` | The name of the custom event. |
| `eventAttributes` | `Object`/`Null`/`undefined` | The event attributes that need to be added to the tracked event. |
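For example (the event name and attribute keys below are illustrative and depend on how your events are defined):

```javascript
// Track a custom event with additional attributes.
glClient.track('feature_used', { label: 'sidebar_toggle' });

// Track a custom event without attributes.
glClient.track('feature_used');
```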
### `refreshLinkClickTracking`
`enableLinkClickTracking` tracks only clicks on links that exist when the page has loaded. To track new links added to the page after it has been loaded, use `refreshLinkClickTracking`.
```javascript
glClient.refreshLinkClickTracking();
```
### `trackError`
{{< alert type="note" >}}
`trackError` is supported on the Browser SDK, but the resulting events are not used or available.
{{< /alert >}}
Used to capture errors. This works only when the `errorTracking` plugin is enabled. The [plugin](#plugins) is enabled by default.
```javascript
glClient.trackError(eventAttributes);
```
For example, `trackError` can be used in `try...catch` like below:
```javascript
try {
// Call the function that throws an error
throwError();
} catch (error) {
glClient.trackError({
message: error.message, // "This is a custom error"
filename: error.fileName || 'unknown', // The file in which the error occurred (for example, "index.html")
lineno: error.lineNumber || 0, // The line number where the error occurred (for example, 2)
colno: error.columnNumber || 0, // The column number where the error occurred (for example, 6)
error: error, // The Error object itself
});
}
```
| Property | Type | Description |
| :---------------- | :------- | :------------------------------------------------------------------------------------------------------------------- |
| `eventAttributes` | `Object` | The event attributes that need to be added to the tracked event. `message` is a mandatory key in `eventAttributes`. |
### `addCookieConsent`
`addCookieConsent` is used to allow tracking of user identifiers via cookies. By default `hasCookieConsent` is false, and no user identifiers are passed. To enable tracking of user identifiers, call the `addCookieConsent` method. This step is not needed if you initialized the Browser SDK with `hasCookieConsent` set to true.
```javascript
glClient.addCookieConsent();
```
### `setCustomUrl`
Used to set a custom URL for tracking.
```javascript
glClient.setCustomUrl(url);
```
| Property | Type | Description |
| :------- | :------- | :------------------------------------------------ |
| `url` | `String` | The custom URL that you want to set for tracking. |
### `setReferrerUrl`
Used to set a referrer URL for tracking.
```javascript
glClient.setReferrerUrl(url);
```
| Property | Type | Description |
| :------- | :------- | :-------------------------------------------------- |
| `url` | `String` | The referrer URL that you want to set for tracking. |
### `setDocumentTitle`
Used to override the document title.
```javascript
glClient.setDocumentTitle(title);
```
| Property | Type | Description |
| :------- | :------- | :--------------------------------- |
| `title` | `String` | The document title you want to set. |
## Contribute
If you would like to contribute to Browser SDK, follow the [contributing guide](https://gitlab.com/gitlab-org/analytics-section/product-analytics/gl-application-sdk-js/-/blob/main/docs/Contributing.md).
## Troubleshooting
If the Browser SDK is not sending events, or behaving in an unexpected way, take the following actions:
1. Verify that the `appId` and host values in the options object are correct.
1. Check if any browser privacy settings, extensions, or ad blockers are interfering with the Browser SDK.
For more information and assistance, see the [Snowplow documentation](https://docs.snowplow.io/docs/collecting-data/collecting-from-own-applications/javascript-trackers/web-tracker/)
or contact the [Analytics Instrumentation team](https://handbook.gitlab.com/handbook/engineering/development/analytics/analytics-instrumentation/#team-members).
---
stage: Monitor
group: Analytics Instrumentation
info: Any user with at least the Maintainer role can merge updates to this content. For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Internal Analytics review guidelines
breadcrumbs:
- doc
- development
- internal_analytics
---
This page includes introductory material for an
[Analytics Instrumentation](https://handbook.gitlab.com/handbook/engineering/development/analytics/analytics-instrumentation/)
review. For broader advice and general best practices for code reviews, refer to our [code review guide](../code_review.md).
## Review process
We mandate an Analytics Instrumentation review when a merge request (MR) touches or uses internal analytics code.
This includes but is not limited to:
- Metrics, for example:
- files in [`config/metrics`](https://gitlab.com/gitlab-org/gitlab/-/tree/master/config/metrics).
- files in [`ee/config/metrics`](https://gitlab.com/gitlab-org/gitlab/-/tree/master/ee/config/metrics).
- [`schema.json`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/config/metrics/schema.json).
- Internal events, for example files in [`config/events`](https://gitlab.com/gitlab-org/gitlab/-/tree/master/config/events).
- Analytics Instrumentation tooling, for example [`Internal events CLI`](https://gitlab.com/gitlab-org/gitlab/blob/master/scripts/internal_events/cli.rb).
In most cases, an Analytics Instrumentation review is automatically added, but it can also be requested manually if the automations miss the relevant change.
### Roles and process
#### The merge request **author** should
- Decide whether an Analytics Instrumentation review is needed. You can skip the Analytics Instrumentation
review and remove the labels if the changes are not related to the Analytics Instrumentation domain.
- If an Analytics Instrumentation review is needed and was not assigned automatically, add the labels
`~analytics instrumentation` and `~analytics instrumentation::review pending`.
- If a change to an event is a part of the MR:
- Check that the events are firing locally using one of the [testing tools](internal_event_instrumentation/local_setup_and_debugging.md) available.
- If a change to a metric is a part of the MR:
- Make sure that the new metric is available and reporting data in the Service Ping payload by running `require_relative 'spec/support/helpers/service_ping_helpers.rb'; ServicePingHelpers.get_current_usage_metric_value(key_path)` with `key_path` replaced by the new metric's `key_path`, as shown in the example after this list.
- Use reviewer roulette to assign an [Analytics Instrumentation reviewer](https://gitlab-org.gitlab.io/gitlab-roulette/?hourFormat24=true&visible=reviewer%7Canalytics+instrumentation) who is not the author.
- Assign any other reviews as appropriate.
- `~analytics instrumentation` review does not require a maintainer review.
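A minimal sketch of the metric check in a Rails console, assuming a hypothetical `key_path`:

```ruby
# Run in a Rails console. 'counts.boards' is a hypothetical key_path;
# substitute the key_path of the new metric.
require_relative 'spec/support/helpers/service_ping_helpers.rb'

ServicePingHelpers.get_current_usage_metric_value('counts.boards')
```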
#### The Analytics Instrumentation **reviewer** should
- Perform a first-pass review on the merge request and suggest improvements to the author.
- Make sure that no deprecated analytics methods are used.
- If a change to an event is a part of the review:
- Check that the event(s) being fired have corresponding definition files.
- Check that the [event definition file](internal_event_instrumentation/event_definition_guide.md) is correct.
- Check that the tracking parameters don't contain any [sensitive information](https://handbook.gitlab.com/handbook/security/data-classification-standard/).
- If a change to a metric is a part of the review:
- Add the `~database` label and ask for a [database review](../database_review.md) for
metrics that are based on database queries.
- For a metric's YAML definition:
- Check the metric's `description`.
- Check the metric's `key_path`.
- Check the `product_group` field.
They should correspond to the [stages file](https://gitlab.com/gitlab-com/www-gitlab-com/blob/master/data/stages.yml).
- Check the file location. Consider the time frame, and if the file should be under `ee`.
- Check the tiers.
- If a metric was changed or removed: Make sure the MR author notified the Customer Success Ops team (`@csops-team`), Analytics Engineers (`@gitlab-data/analytics-engineers`), and Product Analysts (`@gitlab-data/product-analysts`) by `@` mentioning those groups in a comment on the issue for the MR, and that all of these groups have acknowledged the removal.
- If a change to the Internal Events CLI is a part of the review:
- Check the changes follow the [CLI style guide](cli_contribution_guidelines.md).
- Run the CLI & check the UX of the changes:
- Is the content easy to skim?
- Would this content make sense to people outside the team?
- Is this information necessary? Helpful?
- What reservations would I have if I'd never gone through this flow before?
- Is the meaning or effect of every input clear?
- If we describe edge cases or caveats, are there instructions to validate whether the user needs to worry about it?
- Approve the MR, and relabel the MR with `~"analytics instrumentation::approved"`.
---
stage: Monitor
group: Analytics Instrumentation
info: Any user with at least the Maintainer role can merge updates to this content. For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Single Instrumentation Layer
breadcrumbs:
- doc
- development
- internal_analytics
---
## Single Instrumentation Layer
The Single Instrumentation Layer is an event tracking abstraction that allows tracking any event in GitLab through a single interface. It
uses event definitions from the [Internal Event framework](internal_event_instrumentation/event_definition_guide.md) to declare event processing logic.
## Why a Single Instrumentation Layer?
The Single Instrumentation Layer allows you to:
- Instrument events and processing logic in a single place
- Use the same event definitions for both instrumentation and processing
- Eliminate the need to write duplicate tracking code for the same event
## How a Single Instrumentation Layer works
[See example MR](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/167415/diffs).
[Event definitions](internal_event_instrumentation/event_definition_guide.md) are used as a declarative specification for processing logic and are the single source of truth for event properties, tracking parameters, and other metadata.
### Additional tracking systems
When an event is intended to be processed by tracking systems (for example, [AiTracking](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/tracking/ai_tracking.rb)), the event definition is extended to
include the additional processing logic ([example](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/167415/diffs#a77ac5c62df6c489c00e9c5dd46960f390c951d0_17_17)).
This logic is declared through additional processing classes that implement a standard interface.
## How to implement it for your tracking system
To implement it for your tracking system, you need to:
1. Add a [new event definition](internal_event_instrumentation/event_definition_guide.md) or use an existing one ([see the events dictionary](https://metrics.gitlab.com/events)).
1. Implement the processing logic in a new tracking class. The class should have a class method `track_event` that accepts
an event name and additional named parameters:
```ruby
module Gitlab
module Tracking
class NewTrackingSystemProcessor
def self.track_event(event_name, **kwargs)
# add your tracking logic here
end
end
end
end
```
1. Extend the event definition by adding the new tracking class to the `extra_trackers:` property:
```yaml
extra_trackers:
- tracking_class: Gitlab::Tracking::NewTrackingSystemProcessor
protected_properties:
processor_type:
description: The type of the processor
```
`protected_properties` contains properties to be sent exclusively to the specified tracking class.
1. [Trigger the event](internal_event_instrumentation/quick_start.md#trigger-events) in your code using the Internal Events framework.
`**kwargs` is used to pass additional parameters to the tracking class from the Internal Events framework.
The actual parameters depend on the tracking parameters passed to the event invocation above.
Usually, they include `user`, `namespace`, and `project`, along with `protected_properties`, which can be used to pass any additional data.
The tracking systems are triggered in the order in which they are listed in the `extra_trackers:` property.
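Putting it together, a sketch of triggering such an event (the event name and property values are hypothetical) could look like the following; the framework then calls `track_event` on each class listed in `extra_trackers:` with the corresponding named parameters:

```ruby
# Hypothetical event name and property values. The event must have a matching
# definition file, with the extra tracker listed under `extra_trackers:`.
Gitlab::InternalEvents.track_event(
  'use_new_feature',
  user: current_user,
  project: project,
  additional_properties: { processor_type: 'batch' }
)
```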
## Systems that use the Single Instrumentation Layer
1. [Internal Event](internal_event_instrumentation/quick_start.md): the main system that implements the tracking layer.
1. [AiTracking](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/tracking/ai_tracking.rb?ref_type=heads): migration to the new layer is in progress.
---
stage: Monitor
group: Analytics Instrumentation
info: Any user with at least the Maintainer role can merge updates to this content. For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Internal analytics
breadcrumbs:
- doc
- development
- internal_analytics
---
The internal analytics system provides the ability to track user behavior and system status for a GitLab instance
to inform customer success services and further product development.
These doc pages provide guides and information on how to leverage internal analytics capabilities of GitLab
when developing new features or instrumenting existing ones.
## Fundamental concepts
<div class="video-fallback">
See the video about <a href="https://www.youtube.com/watch?v=GtFNXbjygWo">the concepts of events and metrics.</a>
</div>
<figure class="video-container">
<iframe src="https://www.youtube-nocookie.com/embed/GtFNXbjygWo" frameborder="0" allowfullscreen> </iframe>
</figure>
Events and metrics are the foundation of the internal analytics system.
Understanding the difference between the two concepts is vital to using the system.
### Event
An event is a record of an action that happened within the GitLab instance.
An example action would be a user interaction like visiting the issue page or hovering the mouse cursor over the top navigation search.
Other actions can result from background system processing, like a scheduled pipeline succeeding or receiving API calls from a third-party system.
Not every action is tracked and thereby turned into a recorded event automatically.
Instead, if an action helps draw out product insights and helps to make more educated business decisions, we can track an event when the action happens.
The produced event record, at the minimum, holds information that the action occurred,
but it can also contain additional details about the context that accompanied this action.
An example of context can be information about who performed the action or the state of the system at the time of the action.
### Metric
A single event record is not informative enough and might be caused by a coincidence.
We need to look for sets of events sharing common traits to have a foundation for analysis.
This is where metrics come into play. A metric is a calculation performed on pieces of information.
For example, a single event documenting a paid user visiting the feature's page after a new feature was released tells us nothing about the success of this new feature.
However, if we count the number of page view events happening in the week before the new feature release
and then compare it with the number of events for the week following the feature release,
we can derive insights about the increase in interest due to the release of the new feature.
This process leads to what we call a metric. An event-based metric counts the number of times an event occurred overall or in a specified time frame.
The same event can be used across different metrics and a metric can count either one or multiple events.
The count can but does not have to be based on a uniqueness criterion, such as only counting distinct users who performed an event.
Metrics do not have to be based on events. Metrics can also be observations about the state of a GitLab instance itself,
such as the value of a setting or the count of rows in a database table.
## Instrumentation
- To create an instrumentation plan, use this [template](https://gitlab.com/gitlab-org/gitlab/-/issues/new?issuable_template=Usage+Data+Instrumentation).
- To instrument an event-based metric, see the [internal event tracking quick start guide](internal_event_instrumentation/quick_start.md).
- To instrument a metric that observes the GitLab instance's state, see [the metrics instrumentation](metrics/metrics_instrumentation.md).
## Data discovery
Event and metrics data is ultimately stored in our [Snowflake data warehouse](https://handbook.gitlab.com/handbook/business-technology/data-team/platform/snowflake/).
It can either be accessed directly via SQL in Snowflake for [ad-hoc analyses](https://handbook.gitlab.com/handbook/business-technology/data-team/platform/#snowflake-analyst) or visualized in our data visualization tool
[Tableau](https://handbook.gitlab.com/handbook/business-technology/data-team/platform/tableau/), which has access to Snowflake.
Both platforms need an access request ([Snowflake](https://handbook.gitlab.com/handbook/business-technology/data-team/platform/#warehouse-access), [Tableau](https://handbook.gitlab.com/handbook/business-technology/data-team/platform/tableau/#tableau-online-access)).
{{< alert type="note" >}}
To track user interactions in the browser, Do-Not-Track (DNT) needs to be disabled. DNT is disabled by default for most browsers.
{{< /alert >}}
### Tableau
Tableau is a data visualization platform and allows building dashboards and GUI based discovery of events and metrics.
This method of discovery is most suited for users who are familiar with business intelligence tooling, basic verifications
and for creating persisted, shareable dashboards and visualizations.
Access to Tableau requires an [access request](https://handbook.gitlab.com/handbook/security/corporate/end-user-services/access-requests).
#### Checking events
Visit the [Snowplow event exploration dashboard](https://10az.online.tableau.com/#/site/gitlab/views/SnowplowEventExplorationLast30Days/SnowplowEventExplorationLast30D?:iid=1).
This dashboard shows you event counts as well as the most fired events.
You can scroll down to the "Structured Events Firing in Production Last 30 Days" chart and filter for your specific event action. The filter only works with exact names.
#### Checking metrics
You can visit the [Metrics exploration dashboard](https://10az.online.tableau.com/#/site/gitlab/views/PDServicePingExplorationDashboard/MetricsExploration).
On the side, there is a filter for the metric path, which is the `key_path` of your metric, and a filter for the installation ID, including guidance on how to filter for GitLab.com.
#### Custom charts and dashboards
Within Tableau, more advanced charts, such as this [funnel analysis](https://10az.online.tableau.com/#/site/gitlab/views/SaaSRegistrationFunnel/RegistrationFunnelAnalyses), can be built as well.
Custom charts and dashboards can be requested from the Product Data Insights team by creating an [issue in their project](https://gitlab.com/gitlab-data/product-analytics/-/issues/new?issuable_template=Ad%20Hoc%20Request).
### Snowflake
Snowflake allows direct querying of relevant tables in the warehouse within their UI with the [Snowflake SQL dialect](https://docs.snowflake.com/en/sql-reference-commands).
This method of discovery is most suited to users who are familiar with SQL and for quick, flexible checks of whether data is correctly propagated.
Access to Snowflake requires an [access request](https://handbook.gitlab.com/handbook/business-technology/data-team/platform/#warehouse-access).
#### Querying events
The following example query returns the number of daily event occurrences for the `feature_used` event.
```sql
SELECT
behavior_date,
COUNT(*) AS event_occurrences
FROM prod.common_mart.mart_behavior_structured_event
WHERE event_action = 'feature_used'
AND behavior_date > '2023-08-01' --restricted minimum date for performance
AND app_id='gitlab' -- use gitlab for production events and gitlab-staging for events from staging
GROUP BY 1 ORDER BY 1 desc
```
For a list of other metrics tables refer to the [Data Models Cheat Sheet](https://handbook.gitlab.com/handbook/product/groups/product-analysis/data-model-cheat-sheet/#commonly-used-data-models).
#### Querying metrics
The following example query returns all values reported for `count_distinct_user_id_from_feature_used_7d` within the last six months and the corresponding `instance_id`:
```sql
SELECT
date_trunc('week', ping_created_at),
dim_instance_id,
metric_value
FROM prod.common.fct_ping_instance_metric_rolling_6_months --model limited to last 6 months for performance
WHERE metrics_path = 'counts.users_visiting_dashboard_weekly' --set to metric of interest
ORDER BY ping_created_at DESC
```
For a list of other metrics tables refer to the [Data Models Cheat Sheet](https://handbook.gitlab.com/handbook/product/groups/product-analysis/data-model-cheat-sheet/#commonly-used-data-models).
### Product Analytics
Internal Analytics is dogfooding the GitLab [Product Analytics](https://www.youtube.com/watch?v=i8Mze9lRZiY?) functionality, which allows you to visualize events as well.
The [Analytics Dashboards documentation](../../user/analytics/analytics_dashboards.md#create-a-dashboard-by-configuration) explains how to build custom visualizations and dashboards.
The custom dashboards accessible [within the GitLab project](https://gitlab.com/gitlab-org/gitlab/-/analytics/dashboards) are defined in a [separate repository](https://gitlab.com/gitlab-org/analytics-section/gitlab-com-dashboards).
It is possible to build dashboards based on events instrumented via the Internal Events system. Only events emitted by the GitLab.com installation are counted in those visualizations.
The [Product Analytics group's dashboard](https://gitlab.com/gitlab-org/analytics-section/gitlab-com-dashboards/-/blob/main/.gitlab/analytics/dashboards/product_analytics/product_analytics.yaml) can serve as inspiration on how to build charts based on individual events.
## Data availability
For GitLab there is an essential difference in analytics setup between GitLab.com and GitLab Self-Managed or GitLab Dedicated instances.
### Self-Managed and Dedicated
For Self-Managed and Dedicated instances only pre-computed metrics are available. These are computed once per week on a randomly chosen day and forwarded to our [version app](https://version.gitlab.com) via a process called Service Ping.
Only the metrics that were instrumented in or before the version the instance is running are available. For example, if a metric is instrumented during the development of version 16.9, it is available on instances running version 16.9 or later, but not on instances running earlier versions such as 16.8.
The received payloads are imported into our Data Warehouse once per day.
### GitLab.com
On our GitLab.com instance both individual events and pre-computed metrics are available for analysis. Additionally, page views are automatically instrumented.
#### Individual events & page views
Individual events and page views are forwarded directly to our collection infrastructure and from there into our data warehouse.
However, at this stage the data is in a raw format that is difficult to query. For this reason the data is cleaned and propagated through the warehouse until it is available in the tables and diagrams pointed out in the [data discovery section](#data-discovery).
The propagation process takes multiple hours to complete. The following diagram illustrates the availability of events:

[Source](https://lucid.app/lucidchart/fec2d72c-89d9-45a0-b40c-1d81ca13f671/edit?page=OCha14OI0mRw)
#### Pre-computed metrics
Metrics are computed once per week like on Self-Managed, with the only difference being that most of the computation takes place within the Warehouse rather than within the instance.
For GitLab.com, this process starts on Monday morning and computes metrics for the time frame from the Sunday of the previous week at 23:59 UTC to the most recent Sunday at 23:59 UTC.
The following diagram illustrates the process:

[Source](https://lucid.app/lucidchart/fec2d72c-89d9-45a0-b40c-1d81ca13f671/edit?page=0_0)
## Data flow
On SaaS, event records are sent directly to a collection system, called Snowplow, and imported into our data warehouse.
GitLab Self-Managed and GitLab Dedicated instances record event counts locally. Every week, a process called Service Ping sends the current
values for all pre-defined and active metrics to our data warehouse. For GitLab.com, metrics are calculated directly in the data warehouse.
The following chart aims to illustrate this data flow:
```mermaid
flowchart LR;
feature-->track
track-->|send event record - only on gitlab.com|snowplow
track-->|increase metric counts|redis
database-->service_ping
redis-->service_ping
service_ping-->|json with metric values - weekly export|snowflake
snowplow-->|event records - continuous import|snowflake
snowflake-->vis
subgraph glb[Gitlab Application]
feature[Feature Code]
subgraph events[Internal Analytics Code]
track[track_event / trackEvent]
redis[(Redis)]
database[(Database)]
service_ping[\Service Ping Process\]
end
end
snowplow[\Snowplow Pipeline\]
snowflake[(Snowflake Data Warehouse)]
vis[Dashboards in Tableau]
```
## Data Privacy
GitLab only receives event counts or similarly aggregated information from GitLab Self-Managed instances. User identifiers for individual events on the SaaS version of GitLab are [pseudonymized](https://metrics.gitlab.com/identifiers/).
An exact description of what kind of data is collected through the Internal Analytics system is given in our [handbook](https://handbook.gitlab.com/handbook/legal/privacy/customer-product-usage-information/).
## Contribution guidelines
- [Instrumenting features with internal analytics](review_guidelines.md)
- [Reviewing internal analytics contributions](review_guidelines.md#the-analytics-instrumentation-reviewer-should)
- [Contributing to the Internal Events CLI](cli_contribution_guidelines.md)
---
stage: Monitor
group: Analytics Instrumentation
info: Any user with at least the Maintainer role can merge updates to this content. For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Metrics instrumentation guide
breadcrumbs:
- doc
- development
- internal_analytics
- metrics
---
This guide describes how to develop Service Ping metrics using metrics instrumentation.
<i class="fa fa-youtube-play youtube" aria-hidden="true"></i>
For a video tutorial, see the [Adding Service Ping metric via instrumentation class](https://youtu.be/p2ivXhNxUoY).
## Nomenclature
- **Instrumentation class**:
- Inherits one of the metric classes: `DatabaseMetric`, `NumbersMetric` or `GenericMetric`.
- Implements the logic that calculates the value for a Service Ping metric.
- **Metric definition**:
The Service Data metric YAML definition.
- **Hardening**:
Hardening a method is the process that ensures the method fails safe, returning a fallback value like -1.
## How metrics instrumentation works
All metrics must have a [corresponding metric definition](metrics_dictionary.md) to be included in the [service ping](../service_ping/_index.md#how-service-ping-works) payload.
A metric definition may have the [`instrumentation_class`](metrics_dictionary.md) field, which can be set to a class.
The defined instrumentation class should inherit one of the existing metric classes: `DatabaseMetric`, `NumbersMetric` or `GenericMetric`.
The current convention is that a single instrumentation class corresponds to a single metric.
Using an instrumentation class ensures that metrics can fail safe individually, without breaking the entire process of Service Ping generation.
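For illustration, a metric definition that uses an instrumentation class could look roughly like the following. The fields shown are only a subset and the values are illustrative; see the [metric definition documentation](metrics_dictionary.md) for the full set of required fields:

```yaml
key_path: counts.issues
description: Count of issues
product_group: project_management
value_type: number
status: active
time_frame: all
data_source: database
instrumentation_class: CountIssuesMetric
```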
## Database metrics
{{< alert type="note" >}}
Whenever possible we recommend using [internal event tracking](../internal_event_instrumentation/quick_start.md) instead of database metrics.
Database metrics can create unnecessary load on the database of larger GitLab instances, and potential optimizations can affect instance performance.
{{< /alert >}}
You can use database metrics to track data kept in the database, for example, a count of issues that exist on a given instance.
- `operation`: Operations for the given `relation`, one of `count`, `distinct_count`, `sum`, and `average`.
- `relation`: Assigns lambda that returns the `ActiveRecord::Relation` for the objects we want to perform the `operation`. The assigned lambda can accept up to one parameter. The parameter is hashed and stored under the `options` key in the metric definition.
- `start`: Specifies the start value of the batch counting, by default is `relation.minimum(:id)`.
- `finish`: Specifies the end value of the batch counting, by default is `relation.maximum(:id)`.
- `cache_start_and_finish_as`: Specifies the cache key for `start` and `finish` values and sets up caching them. Use this call when `start` and `finish` are expensive queries that should be reused between different metric calculations.
- `available?`: Specifies whether the metric should be reported. The default is `true`.
- `timestamp_column`: Optionally specifies the timestamp column used to filter records for time-constrained metrics. The default is `created_at`.
[Example of a merge request that adds a database metric](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/60022).
### Optimization recommendations and examples
Any single query for a Service Ping metric must stay below the [1 second execution time](../../database/query_performance.md#timing-guidelines-for-queries) with cold caches.
- Use specialized indexes. For examples, see these merge requests:
- [Example 1](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/26871)
- [Example 2](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/26445)
- Use defined `start` and `finish`. These values can be memoized and reused, as in this
[example merge request](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/37155).
- Avoid joins and unnecessary complexity in your queries. See this
[example merge request](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/36316) as an example.
- Set a custom `batch_size` for `distinct_count`, as in this [example merge request](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/38000).
### Database metric Examples
#### Count Example
```ruby
module Gitlab
module Usage
module Metrics
module Instrumentations
class CountIssuesMetric < DatabaseMetric
operation :count
relation ->(options) { Issue.where(confidential: options[:confidential]) }
end
end
end
end
end
```
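In the count example above, the lambda's `options` argument comes from the `options` key of the metric's YAML definition. A simplified, hypothetical definition (showing only the relevant fields; the key path and description are illustrative) might look like this:
```yaml
key_path: counts.confidential_issues
description: Count of confidential issues
data_source: database
time_frame: all
instrumentation_class: CountIssuesMetric
options:
  confidential: true
```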
#### Batch counters Example
```ruby
module Gitlab
module Usage
module Metrics
module Instrumentations
class CountIssuesMetric < DatabaseMetric
operation :count
start { Issue.minimum(:id) }
finish { Issue.maximum(:id) }
relation { Issue }
end
end
end
end
end
```
#### Distinct batch counters Example
```ruby
# frozen_string_literal: true
module Gitlab
module Usage
module Metrics
module Instrumentations
class CountUsersAssociatingMilestonesToReleasesMetric < DatabaseMetric
operation :distinct_count, column: :author_id
relation { Release.with_milestones }
start { Release.minimum(:author_id) }
finish { Release.maximum(:author_id) }
end
end
end
end
end
```
#### Sum Example
```ruby
# frozen_string_literal: true
module Gitlab
module Usage
module Metrics
module Instrumentations
class JiraImportsTotalImportedIssuesCountMetric < DatabaseMetric
operation :sum, column: :imported_issues_count
relation { JiraImportState.finished }
end
end
end
end
end
```
#### Average Example
```ruby
# frozen_string_literal: true
module Gitlab
module Usage
module Metrics
module Instrumentations
class CountIssuesWeightAverageMetric < DatabaseMetric
operation :average, column: :weight
relation { Issue }
end
end
end
end
end
```
#### Estimated batch counters
The estimated batch counter functionality handles `ActiveRecord::StatementInvalid` errors
when used through the provided `estimate_batch_distinct_count` method.
When an error occurs, the method returns a value of `-1`.
{{< alert type="warning" >}}
This functionality estimates a distinct count of a specific ActiveRecord_Relation in a given column,
which uses the [HyperLogLog](https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/40671.pdf) algorithm.
As the HyperLogLog algorithm is probabilistic, the **results always include error**.
The highest encountered error rate is 4.9%.
{{< /alert >}}
When used correctly, the `estimate_batch_distinct_count` method enables efficient counting over
columns that contain non-unique values, which other counters cannot do efficiently.
##### `estimate_batch_distinct_count` method
Method:
```ruby
estimate_batch_distinct_count(relation, column = nil, batch_size: nil, start: nil, finish: nil)
```
The [method](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/utils/usage_data.rb#L63)
includes the following arguments:
- `relation`: The ActiveRecord_Relation on which to perform the count.
- `column`: The column on which to perform the distinct count. The default is the primary key.
- `batch_size`: From `Gitlab::Database::PostgresHll::BatchDistinctCounter::DEFAULT_BATCH_SIZE`. Default value: 10,000.
- `start`: The custom start of the batch count, to avoid complex minimum calculations.
- `finish`: The custom end of the batch count to avoid complex maximum calculations.
The method includes the following prerequisites:
- The supplied `relation` must include the primary key defined as a numeric column.
  For example: `id bigint NOT NULL`.
- The `estimate_batch_distinct_count` method can handle a joined relation. To use its ability to
  count non-unique columns, the joined relation **must not** have a one-to-many relationship,
  such as `has_many :boards`.
- Both `start` and `finish` arguments should always represent primary key relationship values,
even if the estimated count refers to another column, for example:
```ruby
estimate_batch_distinct_count(::Note, :author_id, start: ::Note.minimum(:id), finish: ::Note.maximum(:id))
```
Examples:
1. Simple execution of the estimated batch counter with only the relation provided. The returned
   value represents the estimated number of unique values in the `id` column (which is the primary
   key) of the `Project` relation:
```ruby
estimate_batch_distinct_count(::Project)
```
1. Execution of the estimated batch counter where the provided relation has an additional filter
   applied (`.where(time_period)`), the number of unique values is estimated for a custom column
   (`:author_id`), and the `start` and `finish` parameters together define the range of the
   relation to analyze:
```ruby
estimate_batch_distinct_count(::Note.with_suggestions.where(time_period), :author_id, start: ::Note.minimum(:id), finish: ::Note.maximum(:id))
```
## Numbers metrics
- `operation`: The operation to perform on the given `data` block. Currently, only the `add` operation is supported.
- `data`: A `block` that returns an array of numbers.
- `available?`: Specifies whether the metric should be reported. The default is `true`.
```ruby
# frozen_string_literal: true
module Gitlab
module Usage
module Metrics
module Instrumentations
class IssuesBoardsCountMetric < NumbersMetric
operation :add
data do |time_frame|
[
CountIssuesMetric.new(time_frame: time_frame).value,
CountBoardsMetric.new(time_frame: time_frame).value
]
end
end
end
end
end
end
end
```
You must also include the instrumentation class name in the YAML setup.
```yaml
time_frame: 28d
instrumentation_class: IssuesBoardsCountMetric
```
## Generic metrics
You can use generic metrics for other metrics, for example, an instance's database version.
- `value`: Specifies the value of the metric.
- `available?`: Specifies whether the metric should be reported. The default is `true`.
[Example of a merge request that adds a generic metric](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/60256).
```ruby
module Gitlab
module Usage
module Metrics
module Instrumentations
class UuidMetric < GenericMetric
value do
Gitlab::CurrentSettings.uuid
end
end
end
end
end
end
```
## Prometheus metrics
This instrumentation class lets you handle Prometheus queries by passing a Prometheus client object as an argument to the `value` block.
Any Prometheus error handling should be done in the block itself.
- `value`: Specifies the value of the metric. A Prometheus client object is passed as the first argument.
- `available?`: Specifies whether the metric should be reported. The default is `true`.
[Example of a merge request that adds a Prometheus metric](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/122400).
```ruby
module Gitlab
module Usage
module Metrics
module Instrumentations
class GitalyApdexMetric < PrometheusMetric
value do |client|
result = client.query('avg_over_time(gitlab_usage_ping:gitaly_apdex:ratio_avg_over_time_5m[1w])').first
break FALLBACK unless result
result['value'].last.to_f
end
end
end
end
end
end
```
## Create a new metric instrumentation class
To create a stub instrumentation for a Service Ping metric, you can use a dedicated [generator](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/generators/gitlab/usage_metric_generator.rb).
The generator takes the class name as an argument and the following options:
- `--type=TYPE` Required. Indicates the metric type. It must be one of: `database`, `generic`, `redis`, `numbers`.
- `--operation` Required for the `database` and `numbers` types.
- For `database` it must be one of: `count`, `distinct_count`, `estimate_batch_distinct_count`, `sum`, `average`.
- For `numbers` it must be: `add`.
- `--ee` Indicates if the metric is for EE.
```shell
rails generate gitlab:usage_metric CountIssues --type database --operation distinct_count
create lib/gitlab/usage/metrics/instrumentations/count_issues_metric.rb
create spec/lib/gitlab/usage/metrics/instrumentations/count_issues_metric_spec.rb
```
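The generated instrumentation class is only a stub that you then complete. It looks roughly like the following sketch; the exact template may differ between GitLab versions:
```ruby
# frozen_string_literal: true
module Gitlab
  module Usage
    module Metrics
      module Instrumentations
        class CountIssuesMetric < DatabaseMetric
          operation :distinct_count
          relation do
            # Add the ActiveRecord relation to count here, for example:
            # Issue.where.not(author_id: nil)
          end
        end
      end
    end
  end
end
```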
After implementation, you should [run service ping locally](../service_ping/troubleshooting.md#generate-service-ping) to verify that the metric is included and functioning as expected.
## Migrate Service Ping metrics to instrumentation classes
This guide describes how to migrate a Service Ping metric from [`lib/gitlab/usage_data.rb`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/usage_data.rb) or [`ee/lib/ee/gitlab/usage_data.rb`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/ee/lib/ee/gitlab/usage_data.rb) to instrumentation classes.
1. Choose the metric type:
- [Database metric](#database-metrics)
- [Numbers metric](#numbers-metrics)
- [Generic metric](#generic-metrics)
1. Determine the location of the instrumentation class: either under `ee` or outside `ee`.
1. [Generate the instrumentation class file](#create-a-new-metric-instrumentation-class).
1. Fill the instrumentation class body:
- Add code logic for the metric. This might be similar to the metric implementation in `usage_data.rb`.
- Add tests for the individual metric in [`spec/lib/gitlab/usage/metrics/instrumentations/`](https://gitlab.com/gitlab-org/gitlab/-/tree/master/spec/lib/gitlab/usage/metrics/instrumentations).
- Add tests for Service Ping.
1. [Generate the metric definition file](metrics_dictionary.md#create-a-new-metric-definition).
1. Remove the code from [`lib/gitlab/usage_data.rb`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/usage_data.rb) or [`ee/lib/ee/gitlab/usage_data.rb`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/ee/lib/ee/gitlab/usage_data.rb).
1. Remove the tests from [`spec/lib/gitlab/usage_data_spec.rb`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/spec/lib/gitlab/usage_data_spec.rb) or [`ee/spec/lib/ee/gitlab/usage_data_spec.rb`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/ee/spec/lib/ee/gitlab/usage_data_spec.rb).
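As an illustrative sketch of the migration (the metric and code below are hypothetical and simplified), an inline count in `usage_data.rb` becomes a dedicated instrumentation class, and the metric's YAML definition is updated to reference it through `instrumentation_class`:
```ruby
# frozen_string_literal: true
# Before (simplified entry inside lib/gitlab/usage_data.rb):
#   issues: count(Issue),
#
# After: the same logic lives in its own instrumentation class, and the
# metric definition sets `instrumentation_class: CountIssuesMetric`.
module Gitlab
  module Usage
    module Metrics
      module Instrumentations
        class CountIssuesMetric < DatabaseMetric
          operation :count
          relation { Issue }
        end
      end
    end
  end
end
```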
## Troubleshoot metrics
Sometimes metrics fail for reasons that are not immediately clear. The failures can be related to performance issues or other problems.
The following pairing session video gives you an example of an investigation into a real-world failing metric.
<div class="video-fallback">
See the video from: <a href="https://www.youtube.com/watch?v=y_6m2POx2ug">Product Intelligence Office Hours Oct 27th</a> to learn more about the metrics troubleshooting process.
</div>
<figure class="video-container">
<iframe src="https://www.youtube-nocookie.com/embed/y_6m2POx2ug" frameborder="0" allowfullscreen> </iframe>
</figure>
# Metric lifecycle
The following guidelines explain the steps to follow at each stage of a metric's lifecycle.
## Add a new metric
Follow the [metrics instrumentation](metrics_instrumentation.md) guide.
## Change an existing metric
{{< alert type="warning" >}}
We want to **PREVENT** changes to the calculation logic or important attributes on any metric, as this invalidates comparisons of the same metric across different versions of GitLab.
{{< /alert >}}
If you change a metric, you have to consider that not all instances of GitLab are running on the newest version. Old instances will still report the old version of the metric.
Additionally, a metric's reported numbers are primarily interesting compared to previously reported numbers.
As a result, if you need to change one of the following parts of a metric, you need to add a new metric instead. It's your choice whether to keep the old metric alongside the new one or [remove it](#remove-a-metric).
- **calculation logic**: This means any changes that can produce a different value than the previous implementation.
- **YAML attributes**: The following attributes are directly used for analysis or calculation: `key_path`, `time_frame`, `value_type`, `data_source`.
If you change the `performance_indicator_type` attribute of a metric or think your case needs an exception from the outlined rules then notify the Customer Success Ops team (`@csops-team`), Analytics Engineers (`@gitlab-data/analytics-engineers`), and Product Analysts (`@gitlab-data/product-analysts`) teams by `@` mentioning those groups in a comment on the merge request or issue.
You can change any other attributes without impacting the calculation or analysis. See [this video tutorial](https://youtu.be/bYf3c01KCls) for help updating metric attributes.
Currently, the [Metrics Dictionary](https://metrics.gitlab.com/) is built automatically once a day. You can see the change in the dictionary within 24 hours when you change the metric's YAML file.
## Remove a metric
1. Create an issue for removing the metric if none exists yet. The issue needs to outline why the metric should be removed. You can use this issue to document the removal process.
- **If the metric has at least one `performance_indicator_type` of the `[x]mau` or `customer_health_score` kind**:
Notify the Customer Success Ops team (`@csops-team`), Analytics Engineers (`@gitlab-data/analytics-engineers`), and Product Analysts (`@gitlab-data/product-analysts`) by `@` mentioning the groups in a comment in the issue. Unexpected changes to these metrics could break reporting.
- **If the metric is owned by a different group than the one doing the removal**:
Tag the PM and EM of the owning group according to the [stages file](https://gitlab.com/gitlab-com/www-gitlab-com/blob/master/data/stages.yml).
1. Remove the metric instrumentation code, depending on `data_source`:
- **`database/system`**: If the metric has an `instrumentation_class` and the assigned class is no longer used by any other metric, you can remove the class and its specs.
If the metric is instrumented within [`lib/gitlab/usage_data.rb`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/usage_data.rb)
or [`ee/lib/ee/gitlab/usage_data.rb`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/ee/lib/ee/gitlab/usage_data.rb) then remove the associated code and specs
([example](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/60149/diffs#6335dc533bd21df26db9de90a02dd66278c2390d_167_167)).
- **`redis_hll/redis/internal_events`**: Remove the tracking code, for example, `track_internal_event` and associated specs.
1. Update the attributes of the metric's YAML definition:
- Set the `status:` to `removed`.
- Set `removed_by_url:` to the URL of the MR that removed the metric.
- Set `milestone_removed:` to the number of the
milestone in which the metric was removed.
Do not remove the metric's YAML definition altogether. Some GitLab Self-Managed instances might not immediately update to the latest version of GitLab, and
therefore continue to report the removed metric. The Analytics Instrumentation team requires a record of all removed metrics to identify and filter them.
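For example, after removal the relevant attributes of a hypothetical metric's YAML definition could look like this; the MR URL and milestone below are placeholders:
```yaml
status: removed
removed_by_url: https://gitlab.com/gitlab-org/gitlab/-/merge_requests/123456  # placeholder
milestone_removed: '16.0'  # placeholder
```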
## Group name changes
When the name of a group that owns events or metrics is changed, the `product_group` property should be updated in all metric and event definitions belonging to that group.
The `product_group_renamer` script can update all the definitions so you do not have to do it manually.
For example, if the group `5-min-app` was renamed to `2-min-app`, you can update the relevant files like this:
```shell
$ scripts/internal_events/product_group_renamer.rb 5-min-app 2-min-app
Updated '5-min-app' to '2-min-app' in 3 files
Updated files:
config/metrics/schema/product_groups.json
config/metrics/counts_28d/20210216184517_p_ci_templates_5_min_production_app_monthly.yml
config/metrics/counts_7d/20210216184515_p_ci_templates_5_min_production_app_weekly.yml
```
After running the script, you must commit all the modified files to Git and create a merge request.
The script is part of GDK and a frontend or backend developer can run the script and prepare the merge request.
If a group is split into multiple groups, you need to manually update the `product_group` property.
# Metrics Dictionary Guide
[Service Ping](../service_ping/_index.md) metrics are defined in individual YAML definition files, from which the
[Metrics Dictionary](https://metrics.gitlab.com/) is built. Currently, the metrics dictionary is built automatically once an hour.
- When a change to a metric is made in a YAML file, you can see the change in the dictionary within 1 hour of the change getting deployed to production.
- When a change to an event is made in a YAML file, you can see the change in the dictionary within 1 hour of the change getting merged to the master branch.
This guide describes the dictionary and how it's implemented.
## Metrics Definition and validation
We are using [JSON Schema](https://gitlab.com/gitlab-org/gitlab/-/blob/master/config/metrics/schema.json) to validate the metric definitions.
This process is meant to ensure consistent and valid metrics defined for Service Ping. All metrics must:
- Comply with the defined [JSON schema](https://gitlab.com/gitlab-org/gitlab/-/blob/master/config/metrics/schema.json).
- Have a unique `key_path`.
- Have an owner.
We currently have `tier` as one of the required fields for a metric definition file. However, we are moving towards replacing `tier` with `tiers`, so it is also valid to add `tiers` as a field in metric definition files. Until the replacement process is complete, both `tier` and `tiers` are valid fields in metric definition files.
All metrics are stored in YAML files:
- [`config/metrics`](https://gitlab.com/gitlab-org/gitlab/-/tree/master/config/metrics)
{{< alert type="warning" >}}
Only metrics with a metric definition YAML and whose status is not `removed` are added to the Service Ping JSON payload.
{{< /alert >}}
Each metric is defined in a YAML file consisting of a number of fields:
| Field | Required | Additional information |
|------------------------------|----------|------------------------|
| `key_path` | yes | JSON key path for the metric, location in Service Ping payload. |
| `description` | yes | |
| `product_group` | yes | The [group](https://gitlab.com/gitlab-com/www-gitlab-com/blob/master/data/stages.yml) that owns the metric. |
| `product_categories` | no | `array`; The [feature categories](https://gitlab.com/gitlab-org/gitlab/-/blob/master/config/feature_categories.yml) that the metric represents usage of. Some metrics may correspond to multiple categories or no category. |
| `value_type` | yes | `string`; one of [`string`, `number`, `boolean`, `object`](https://json-schema.org/understanding-json-schema/reference/type). |
| `status` | yes | `string`; [status](#metric-statuses) of the metric, may be set to `active`, `removed`, `broken`. |
| `time_frame` | yes | `string` or `array`; may be set to `7d`, `28d`, `all`, `none` or an array including any of these values except for `none`. |
| `data_source` | yes | `string`; may be set to a value like `database`, `redis`, `redis_hll`, `prometheus`, `system`, `license`, `internal_events`. |
| `data_category` | yes | `string`; [categories](#data-category) of the metric, may be set to `operational`, `optional`, `subscription`, `standard`. The default value is `optional`. |
| `instrumentation_class` | no | `string`; used for metrics with `data_source` other than `internal_events`. See [the class that implements the metric](metrics_instrumentation.md). |
| `performance_indicator_type` | no | `array`; may be set to one of [`gmau`, `smau`, `paid_gmau`, `umau`, `customer_health_score`, `devops_report`, `lighthouse`, or `leading_indicator`](https://handbook.gitlab.com/handbook/business-technology/data-team/data-catalog/). |
| `tiers` | yes | `array`; may contain one or a combination of `free`, `premium` or `ultimate`. The [tiers](https://handbook.gitlab.com/handbook/marketing/brand-and-product-marketing/product-and-solution-marketing/tiers/#definitions) where the tracked feature is available. This should be verbose and contain all tiers where a metric is available. |
| `milestone` | yes | The milestone when the metric is introduced and when it's available to GitLab Self-Managed instances with the official GitLab release. |
| `milestone_removed` | no | The milestone when the metric is removed. Required for removed metrics. |
| `introduced_by_url` | yes | The URL to the merge request that introduced the metric to be available for GitLab Self-Managed instances. |
| `removed_by_url` | no | The URL to the merge request that removed the metric. Required for removed metrics. |
| `repair_issue_url` | no | The URL of the issue that was created to repair a metric with a `broken` status. |
| `options` | no | `object`: options information needed to calculate the metric value. |
### Metric `key_path`
The `key_path` of the metric is the location in the JSON Service Ping payload.
The `key_path` can be composed of multiple parts separated by `.`, and it must be unique.
If a metric definition has an array `time_frame`, the `key_path` defined in the YAML file will have a suffix automatically added for each of the included time frames:
| time_frame | `key_path` suffix|
|------------|------------------|
| `all` | no suffix |
| `7d` | `_weekly` |
| `28d` | `_monthly` |
The `key_path`s shown in the [Metrics Dictionary](https://metrics.gitlab.com/) include those suffixes.
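For example, a hypothetical definition that uses an array `time_frame` (the key path and values below are illustrative only):
```yaml
key_path: counts.issues_created
time_frame:
  - 7d
  - 28d
# Reported in the Service Ping payload and shown in the Metrics Dictionary as:
#   counts.issues_created_weekly
#   counts.issues_created_monthly
```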
### Metric statuses
Metric definitions can have one of the following statuses:
- `active`: Metric is used and reports data.
- `broken`: Metric reports broken data (for example, -1 fallback), or does not report data at all. A metric marked as `broken` must also have the `repair_issue_url` attribute.
- `removed`: Metric was removed, but it may appear in Service Ping payloads sent from instances running on older versions of GitLab.
### Metric `value_type`
Metric definitions can have one of the following values for `value_type`:
- `boolean`
- `number`
- `string`
- `object`: A metric with `value_type: object` must have `value_json_schema` with a link to the JSON schema for the object.
In general, we avoid complex objects and prefer one of the `boolean`, `number`, or `string` value types.
An example of a metric that uses `value_type: object` is `topology` (`/config/metrics/settings/20210323120839_topology.yml`),
which has a related schema in `/config/metrics/objects_schemas/topology_schema.json`.
### Metric `time_frame`
A metric's time frame is calculated based on the `time_frame` field and the `data_source` of the metric. When `time_frame` is an array, the metric's values are calculated for each of the included time frames.
| data_source | time_frame | Description |
|------------------------|------------|-------------------------------------------------|
| any | `none` | A type of data that's not tracked over time, such as settings and configuration information |
| `database` | `all` | The whole time the metric has been active (all-time interval) |
| `database` | `7d` | 9 days ago to 2 days ago |
| `database` | `28d` | 30 days ago to 2 days ago |
| `internal_events` | `all` | The whole time the metric has been active (all-time interval) |
| `internal_events` | `7d` | Most recent complete week |
| `internal_events` | `28d` | Most recent 4 complete weeks |
### Data category
We use the following categories to classify a metric:
- `operational`: Required data for operational purposes.
- `optional`: Default value for a metric. Data that is optional to collect. This can be [enabled or disabled](../../../administration/settings/usage_statistics.md#enable-or-disable-service-ping) in the **Admin** area.
- `subscription`: Data related to licensing.
- `standard`: Standard set of identifiers that are included when collecting data.
### Example YAML metric definition
The linked [`uuid`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/config/metrics/license/uuid.yml)
YAML file includes an example metric definition, where the `uuid` metric is the GitLab
instance unique identifier.
```yaml
key_path: uuid
description: GitLab instance unique identifier
product_group: analytics_instrumentation
value_type: string
status: active
milestone: 9.1
instrumentation_class: UuidMetric
introduced_by_url: https://gitlab.com/gitlab-org/gitlab/-/merge_requests/1521
time_frame: none
data_source: database
tier:
- free
- premium
- ultimate
tiers:
- free
- premium
- ultimate
```
### Create a new metric definition
The GitLab codebase provides dedicated generators to create new metrics, which also create valid metric definition files:
- [internal events generator](../internal_event_instrumentation/quick_start.md)
- [metric instrumentation class generator](metrics_instrumentation.md#create-a-new-metric-instrumentation-class)
For uniqueness, the generated files include a timestamp prefix in ISO 8601 format.
### Performance Indicator Metrics
To use a metric definition to manage a [performance indicator](https://handbook.gitlab.com/handbook/product/analytics-instrumentation-guide/#instrumenting-metrics-and-events):
1. Create a merge request that includes related changes.
1. Use the labels `~"analytics instrumentation"` and `~"Data Warehouse::Impact Check"`.
1. Update the metric definition `performance_indicator_type` [field](metrics_dictionary.md#metrics-definition-and-validation).
1. Create an issue in GitLab Product Data Insights project with the [PI Chart Help template](https://gitlab.com/gitlab-data/product-analytics/-/issues/new?issuable_template=PI%20Chart%20Help) to have the new metric visualized.
## Metrics Dictionary
[Metrics Dictionary is a separate application](https://gitlab.com/gitlab-org/analytics-section/analytics-instrumentation/metric-dictionary).
All metrics available in Service Ping are in the [Metrics Dictionary](https://metrics.gitlab.com/).
# Metrics
This page provides an overview of pages related to metrics in internal analytics at GitLab.
This page is a work in progress. If you have access to the GitLab Slack workspace, use the
`#g_analyze_analytics_instrumentation` channel for any questions or clarifications.
- [Metrics Dictionary Guide](metrics_dictionary.md)
- [Metrics Lifecycle](metrics_lifecycle.md)
# Internal Event Tracking
This page provides detailed guidelines on using the Internal Event Tracking system to instrument features on GitLab.
Internal Event Tracking is currently consolidating the following systems:
- [Service Ping](../service_ping/_index.md)
- Snowplow
- AiTracking (Duo Chat) WIP
Internal Events is a unified interface to track events in GitLab. Each tracking call represents a user action and the
associated properties. Internal Events then provides the underlying systems with the properties they require for their specific
analytics needs.
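For example, a minimal backend tracking call, covered in detail in the [quick start](quick_start.md), looks like this:
```ruby
include Gitlab::InternalEventsTracking

# Record that the current user created a CI build in the given project
track_internal_event(
  "create_ci_build",
  user: user,
  project: project,
  namespace: namespace
)
```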
Analytics systems summary:
| Function\System | Service Ping | Snowplow |
| --- | --- | --- |
| Primary function | Provide aggregated analytics data | Track raw events (user interactions with the service) |
| Data storage | Local instance (Redis, PostgreSQL, and so on) | Snowflake |
| Data granularity | None (data is aggregated) | Per event |
| Extra parameters | None | Any amount of custom data |
| Receiving delay | Up to 1 week | A few minutes |
| Implementation | Uses Internal Events, database records, and system settings | Internal Events plus custom tracking context |
This page is a work in progress. If you have access to the GitLab Slack workspace, use the
`#g_monitor_analytics_instrumentation` channel for any questions or clarifications.
- [Quick start for internal event tracking](quick_start.md)
- [Migrating existing tracking to internal event tracking](migration.md)
- [Event definition guide](event_definition_guide.md)
- [Metric definition guide](metric_definition_guide.md)
- [Local setup and debugging](local_setup_and_debugging.md)
- [Internal Events CLI contribution guide](../cli_contribution_guidelines.md)
- [Internal Events Payload Samples](internal_events_payload.md)
- [Standard context fields description](standard_context_fields.md)
# Event lifecycle
The following guidelines explain the steps to follow at each stage of an event's lifecycle.
## Add an event
See the [event definition guide](event_definition_guide.md) for more details.
## Remove an event
To remove an event:
1. Move the event definition file to the `/removed` subfolder.
1. Update the event definition file to set the `status` field to `removed`.
1. Update the event definition file to set the `milestone_removed` field to the milestone when the event was removed.
1. Update the event definition file to set the `removed_by_url` field to the URL of the merge request that removed the event, as shown in the example after this list.
1. Remove the event tracking from the codebase.
1. Remove the event tracking tests.
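For example, the updated fields of a removed event definition might look like the following sketch (the path, milestone, and merge request URL are placeholders):
```yaml
# config/events/removed/<event_name>.yml (hypothetical path)
# ... all other existing fields remain unchanged
status: removed
milestone_removed: '17.0'
removed_by_url: https://gitlab.com/gitlab-org/gitlab/-/merge_requests/<merge_request_id>
```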
# Quick start for Internal Event Tracking
In an effort to provide a more efficient, scalable, and unified tracking API, GitLab is deprecating the existing RedisHLL and Snowplow tracking. Instead, we're implementing the new `track_event` (backend) and `trackEvent` (frontend) methods.
With this approach, we can update both RedisHLL counters and send Snowplow events without worrying about the underlying implementation.
To instrument your code with Internal Events Tracking, you need to do three things:
1. Define an event
1. Define one or more metrics
1. Trigger the event
## Defining event and metrics
To create event and/or metric definitions, use the `internal_events` generator from the `gitlab` directory:
```shell
scripts/internal_events/cli.rb
```
This CLI helps you create the correct definition files based on your specific use case, and then provides code examples for instrumentation and testing.
Events should be named in the format `<action>_<target_of_action>_<where/when>`; valid examples are `create_ci_build` and `click_previous_blame_on_blob_page`.
## Trigger events
Triggering an event, and thereby updating a metric, is slightly different on the backend and the frontend. Refer to the relevant section below.
### Backend tracking
<div class="video-fallback">
Watch the video about <a href="https://www.youtube.com/watch?v=Teid7o_2Mmg">Backend instrumentation using Internal Events</a>
</div>
<figure class="video-container">
<iframe src="https://www.youtube-nocookie.com/embed/Teid7o_2Mmg" frameborder="0" allowfullscreen> </iframe>
</figure>
To trigger an event, call the `track_internal_event` method from the `Gitlab::InternalEventsTracking` module with the desired arguments:
```ruby
include Gitlab::InternalEventsTracking
track_internal_event(
"create_ci_build",
user: user,
namespace: namespace,
project: project
)
```
This method automatically increments all RedisHLL metrics relating to the event `create_ci_build`, and sends a corresponding Snowplow event with all named arguments and standard context (SaaS only).
In addition, the name of the class triggering the event is saved in the `category` property of the Snowplow event.
If you have defined a metric with a `unique` property such as `unique: project.id`, you must provide the `project` argument.
You are encouraged to provide as many of `user`, `namespace`, and `project` as possible, because doing so increases the data quality and makes it easier to define metrics in the future.
If a `project` but no `namespace` is provided, the `project.namespace` is used as the `namespace` for the event.
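For illustration, a metric counting unique projects might reference the event with a fragment like this in its definition file (a sketch only; the generator produces the complete file):
```yaml
# Hypothetical fragment of a metric definition file
events:
- name: create_ci_build
  unique: project.id
```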
In some cases, you might want to specify the `category` manually or provide none at all. To do that, call the `InternalEvents.track_event` method directly instead of using the module, as in the sketch below.
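A minimal sketch, assuming `Gitlab::InternalEvents.track_event` accepts the same named arguments as `track_internal_event` plus a `category:` keyword (check the current method signature before relying on it):
```ruby
Gitlab::InternalEvents.track_event(
  "create_ci_build",
  category: "MyDomainCategory", # hypothetical category value
  user: user,
  project: project,
  namespace: namespace
)
```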
If a feature is enabled through multiple namespaces and you need to track why the feature is enabled, you can
pass an optional `feature_enabled_by_namespace_ids` parameter with an array of namespace IDs.
```ruby
track_internal_event(
...
feature_enabled_by_namespace_ids: [namespace_one.id, namespace_two.id]
)
```
#### Additional properties
Additional properties can be passed when tracking events. They can be used to save additional data related to a given event.
Tracking classes already have three built-in properties:
- `label` (string)
- `property` (string)
- `value` (numeric)
The arbitrary naming and typing of these three properties is due to constraints from the data extraction process.
It's recommended to use these properties first, even if their names do not match the data you want to track. You can further describe the actual data being tracked by using the `description` property in the YAML definition of the event. For an example, see
[`create_ci_internal_pipeline.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/537ea367dab731e886e6040d8399c430fdb67ab7/config/events/create_ci_internal_pipeline.yml):
```yaml
additional_properties:
label:
description: The source of the pipeline, e.g. a push, a schedule or similar.
property:
description: The source of the config, e.g. the repository, auto_devops or similar.
```
Additional properties are passed by including the `additional_properties` hash in the `#track_event` call:
```ruby
track_internal_event(
"create_ci_build",
user: user,
additional_properties: {
label: source, # The label is tracking the source of the pipeline
property: config_source # The property is tracking the source of the configuration
}
)
```
If you need to pass more than the three built-in additional properties, you can use the `additional_properties` hash with your custom keys:
```ruby
track_internal_event(
"code_suggestion_accepted",
user: user,
additional_properties: {
# Built-in properties
label: editor_name,
property: suggestion_type,
value: suggestion_shown_duration,
# Your custom properties
lang: 'ruby',
custom_key: 'custom_value'
}
)
```
Add custom properties only in addition to the built-in properties. Additional properties can only have string or numeric values.
{{< alert type="warning" >}}
Make sure the additional properties don't contain any sensitive information. For more information, see the [Data Classification Standard](https://about.gitlab.com/handbook/security/data-classification-standard/).
{{< /alert >}}
#### Controller and API helpers
For controllers, there is a helper module `ProductAnalyticsTracking` that you can use to track internal events for particular controller actions by calling `track_internal_event`:
```ruby
class Projects::PipelinesController < Projects::ApplicationController
include ProductAnalyticsTracking
track_internal_event :charts, name: 'visit_charts_on_ci_cd_pipelines', conditions: -> { should_track_ci_cd_pipelines? }
def charts
...
end
private
def should_track_ci_cd_pipelines?
params[:chart].blank? || params[:chart] == 'pipelines'
end
end
```
You need to add these two methods to the controller body, so that the helper can get the current project and namespace for the event:
```ruby
private
def tracking_namespace_source
project.namespace
end
def tracking_project_source
project
end
```
Also, there is an API helper:
```ruby
track_event(
event_name,
user: current_user,
namespace_id: namespace_id,
project_id: project_id
)
```
#### Batching
When multiple events are emitted at once, use `with_batched_redis_writes` to batch all of them
in a single Redis call.
```ruby
Gitlab::InternalEvents.with_batched_redis_writes do
incr.times { Gitlab::InternalEvents.track_event(event) }
end
```
Note that only updates to total counters are batched. If `n` unique metrics and `m` total counter metrics are defined for the event, the block above results in `incr * n + m` Redis writes. For example, with 2 unique metrics and 3 total counter metrics, calling `track_event` 10 times results in `10 * 2 + 3 = 23` Redis writes.
### Backend testing
When testing code that triggers internal events or increments metrics, you can use the `trigger_internal_events` and `increment_usage_metrics` matchers on a block argument.
```ruby
expect { subject }
.to trigger_internal_events('web_ide_viewed')
.with(user: user, project: project, namespace: namespace)
.and increment_usage_metrics('counts.web_views')
```
The `trigger_internal_events` matcher accepts the same chain methods as the [`receive`](https://rubydoc.info/github/rspec/rspec-mocks/RSpec/Mocks/ExampleMethods#receive-instance_method) matcher (`#once`, `#at_most`, etc). By default, it expects the provided events to be triggered only once.
The chain method `#with` accepts the following parameters:
- `user` - User object
- `project` - Project object
- `namespace` - Namespace object. If not provided, it will be set to `project.namespace`
- `additional_properties` - Hash. Additional properties to be sent with the event. For example: `{ label: 'scheduled', value: 20 }`
- `category` - String. If not provided, it will be set to the class name of the object that triggers the event
The `increment_usage_metrics` matcher accepts the same chain methods as the [`change`](https://rubydoc.info/gems/rspec-expectations/RSpec%2FMatchers:change) matcher (`#by`, `#from`, `#to`, etc). By default, it expects the provided metrics to be incremented by one.
```ruby
expect { subject }
.to trigger_internal_events('web_ide_viewed')
.with(user: user, project: project, namespace: namespace)
.exactly(3).times
```
Both matchers are composable with other matchers that act on a block (like `change` matcher).
```ruby
expect { subject }
.to trigger_internal_events('mr_created')
.with(user: user, project: project, category: category, additional_properties: { label: label } )
.and increment_usage_metrics('counts.deployments')
.at_least(:once)
.and change { mr.notes.count }.by(1)
```
{{< alert type="note" >}}
Debugging tip: If your new tests are failing due to metrics not being incremented when you expect them to be,
you may need to apply the `:clean_gitlab_redis_shared_state` trait to clear the Redis cache between examples.
{{< /alert >}}
To test that an event was not triggered, you can use the `not_trigger_internal_events` matcher. It does not accept message chains.
```ruby
expect { subject }.to trigger_internal_events('mr_created')
.with(user: user, project: project, namespace: namespace)
.and increment_usage_metrics('counts.deployments')
.and not_trigger_internal_events('pipeline_started')
```
Or you can use the `not_to` syntax:
```ruby
expect { subject }.not_to trigger_internal_events('mr_created', 'member_role_created')
```
The `trigger_internal_events` matcher can also be used for testing [Haml with data attributes](#haml-with-data-attributes).
### Frontend tracking
Any frontend tracking call automatically passes the values `user.id`, `namespace.id`, and `project.id` from the current context of the page.
#### Vue components
In Vue components, tracking can be done with [Vue mixin](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/assets/javascripts/tracking/internal_events.js#L29).
To implement Vue component tracking:
1. Import the `InternalEvents` library and call the `mixin` method:
```javascript
import { InternalEvents } from '~/tracking';
const trackingMixin = InternalEvents.mixin();
```
1. Use the mixin in the component:
```javascript
export default {
mixins: [trackingMixin],
data() {
return {
expanded: false,
};
},
};
```
1. Call the `trackEvent` method. Tracking options can be passed as the second parameter:
```javascript
this.trackEvent('click_previous_blame_on_blob_page');
```
Or use the `trackEvent` method in the template:
```html
<template>
<div>
<button data-testid="toggle" @click="toggle">Toggle</button>
<div v-if="expanded">
<p>Hello world!</p>
<button @click="trackEvent('click_previous_blame_on_blob_page')">Track another event</button>
</div>
</div>
</template>
```
#### Raw JavaScript
For tracking events directly from arbitrary frontend JavaScript code, a module for raw JavaScript is provided. This can be used outside of a component context where the mixin cannot be used.
```javascript
import { InternalEvents } from '~/tracking';
InternalEvents.trackEvent('click_previous_blame_on_blob_page');
```
#### Data-event attribute
With this attribute, you can track GitLab internal events for a button without writing JavaScript in a click handler. Instead, add a `data-event-tracking` attribute with the event name. This attribute can also be used in HAML views.
```html
<gl-button
data-event-tracking="click_previous_blame_on_blob_page"
>
Click Me
</gl-button>
```
#### Haml
```ruby
= render Pajamas::ButtonComponent.new(button_options: { class: 'js-settings-toggle', data: { event_tracking: 'click_previous_blame_on_blob_page' }}) do
```
#### Internal events on render
Sometimes we want to send internal events when the component is rendered or loaded. In these cases, we can add the `data-event-tracking-load="true"` attribute:
```ruby
= render Pajamas::ButtonComponent.new(button_options: { data: { event_tracking_load: 'true', event_tracking: 'click_previous_blame_on_blob_page' } }) do
= _("New project")
```
#### Additional properties
You can include additional properties with events to save additional data. When you do, you must define each additional property in the `additional_properties` field of the event definition. You can send the three built-in additional properties with the keys `label` (string), `property` (string), and `value` (numeric), and [custom additional properties](quick_start.md#additional-properties) if the built-in properties are not sufficient.
{{< alert type="note" >}}
Do not pass the page URL or page path as an additional property because we already track the pseudonymized page URL for each event.
Getting the URL from `window.location` does not pseudonymize project and namespace information [as documented](https://metrics.gitlab.com/identifiers).
{{< /alert >}}
For Vue Mixin:
```javascript
this.trackEvent('click_view_runners_button', {
label: 'group_runner_form',
property: dynamicPropertyVar,
value: 20
});
```
For raw JavaScript:
```javascript
InternalEvents.trackEvent('click_view_runners_button', {
label: 'group_runner_form',
property: dynamicPropertyVar,
value: 20
});
```
For data-event attributes:
```html
<gl-button
data-event-tracking="click_view_runners_button"
data-event-label="group_runner_form"
:data-event-property=dynamicPropertyVar
data-event-additional='{"key1": "value1", "key2": "value2"}'
>
Click Me
</gl-button>
```
For Haml:
```ruby
= render Pajamas::ButtonComponent.new(button_options: { class: 'js-settings-toggle', data: { event_tracking: 'action', event_label: 'group_runner_form', event_property: dynamic_property_var, event_value: 2, event_additional: '{"key1": "value1", "key2": "value2"}' }}) do
```
#### Frontend testing
##### JavaScript/Vue
If you are using the `trackEvent` method in any of your code, whether it is in raw JavaScript or a Vue component, you can use the `useMockInternalEventsTracking` helper method to assert if `trackEvent` is called.
For example, if we need to test the below Vue component,
```vue
<script>
import { GlButton } from '@gitlab/ui';
import { InternalEvents } from '~/tracking';
import { __ } from '~/locale';
export default {
components: {
GlButton,
},
mixins: [InternalEvents.mixin()],
methods: {
handleButtonClick() {
// some application logic
// when some event happens fire tracking call
this.trackEvent('click_view_runners_button', {
label: 'group_runner_form',
property: 'property_value',
value: 3,
});
},
},
i18n: {
button1: __('Sample Button'),
},
};
</script>
<template>
<div style="display: flex; height: 90vh; align-items: center; justify-content: center">
<gl-button class="sample-button" @click="handleButtonClick">
{{ $options.i18n.button1 }}
</gl-button>
</div>
</template>
```
Below is the test case for the above component.
```javascript
import { shallowMountExtended } from 'helpers/vue_test_utils_helper';
import DeleteApplication from '~/admin/applications/components/delete_application.vue';
import { useMockInternalEventsTracking } from 'helpers/tracking_internal_events_helper';
describe('DeleteApplication', () => {
/** @type {import('helpers/vue_test_utils_helper').ExtendedWrapper} */
let wrapper;
const createComponent = () => {
wrapper = shallowMountExtended(DeleteApplication);
};
beforeEach(() => {
createComponent();
});
describe('sample button 1', () => {
const { bindInternalEventDocument } = useMockInternalEventsTracking();
it('should call trackEvent method when clicked on sample button', async () => {
const { trackEventSpy } = bindInternalEventDocument(wrapper.element);
await wrapper.find('.sample-button').vm.$emit('click');
expect(trackEventSpy).toHaveBeenCalledWith(
'click_view_runners_button',
{
label: 'group_runner_form',
property: 'property_value',
value: 3,
},
undefined,
);
});
});
});
```
If you are using tracking attributes in Vue or view templates like below,
```vue
<script>
import { GlButton } from '@gitlab/ui';
import { InternalEvents } from '~/tracking';
import { __ } from '~/locale';
export default {
components: {
GlButton,
},
mixins: [InternalEvents.mixin()],
i18n: {
button1: __('Sample Button'),
},
};
</script>
<template>
<div style="display: flex; height: 90vh; align-items: center; justify-content: center">
<gl-button
class="sample-button"
data-event-tracking="click_view_runners_button"
data-event-label="group_runner_form"
>
{{ $options.i18n.button1 }}
</gl-button>
</div>
</template>
```
Below is the test case for the above component.
```javascript
import { shallowMountExtended } from 'helpers/vue_test_utils_helper';
import DeleteApplication from '~/admin/applications/components/delete_application.vue';
import { useMockInternalEventsTracking } from 'helpers/tracking_internal_events_helper';
describe('DeleteApplication', () => {
/** @type {import('helpers/vue_test_utils_helper').ExtendedWrapper} */
let wrapper;
const createComponent = () => {
wrapper = shallowMountExtended(DeleteApplication);
};
beforeEach(() => {
createComponent();
});
describe('sample button', () => {
const { bindInternalEventDocument } = useMockInternalEventsTracking();
it('should call trackEvent method when clicked on sample button', () => {
const { triggerEvent, trackEventSpy } = bindInternalEventDocument(wrapper.element);
triggerEvent('.sample-button');
expect(trackEventSpy).toHaveBeenCalledWith('click_view_runners_button', {
label: 'group_runner_form',
});
});
});
});
```
#### Haml with data attributes
If you are using [data attributes](#data-event-attribute) to track internal events at the Haml layer,
you can use the [`trigger_internal_events` matcher](#backend-testing) to assert that the expected properties are present.
For example, if you need to test the below Haml,
```ruby
%div{ data: { testid: '_testid_', event_tracking: 'some_event', event_label: 'some_label' } }
```
You can call assertions on any rendered HTML compatible with the `have_css` matcher.
Use the `:on_click` and `:on_load` chain methods to indicate when you expect the event to trigger.
Below are the test cases for the above Haml, depending on how the HTML is rendered:
- rendered HTML is a `String` ([RSpec views](https://rspec.info/features/6-0/rspec-rails/view-specs/view-spec/))
```ruby
it 'assigns the tracking items' do
render
expect(rendered).to trigger_internal_events('some_event').on_click
.with(additional_properties: { label: 'some_label' })
end
```
- rendered HTML is a `Capybara::Node::Simple` ([ViewComponent](https://viewcomponent.org/))
```ruby
it 'assigns the tracking items' do
render_inline(component)
expect(page.find_by_testid('_testid_'))
.to trigger_internal_events('some_event').on_click
.with(additional_properties: { label: 'some_label' })
end
```
- rendered HTML is a `Nokogiri::HTML4::DocumentFragment` ([ViewComponent](https://viewcomponent.org/))
```ruby
it 'assigns the tracking items' do
expect(render_inline(component))
.to trigger_internal_events('some_event').on_click
.with(additional_properties: { label: 'some_label' })
end
```
Or you can use the `not_to` syntax:
```ruby
it 'assigns the tracking items' do
render_inline(component)
expect(page).not_to trigger_internal_events
end
```
When negated, the matcher accepts no additional chain methods or arguments.
This asserts that no tracking attributes are in use.
### Using Internal Events API
You can also use our API to track events from other systems connected to a GitLab instance.
See the [Usage Data API documentation](../../../api/usage_data.md#events-tracking-api) for more information.
### Internal Events on other systems
Apart from the GitLab codebase, we are using Internal Events for the systems listed below.
1. [AI gateway](https://gitlab.com/gitlab-org/modelops/applied-ml/code-suggestions/ai-assist/-/blob/main/docs/internal_events.md?ref_type=heads)
1. [Switchboard](https://gitlab.com/gitlab-com/gl-infra/gitlab-dedicated/switchboard/-/blob/main/docs/internal_events.md)
# Local setup and debugging
{{< alert type="note" >}}
Tracking of user interactions in the browser can be blocked by browser settings, such as privacy filters (for example,
AdBlock or uBlock) and Do-Not-Track (DNT). For more information, see [settings that affect tracking](https://snowplow.io/blog/how-many-of-your-visitors-block-your-snowplow-tracking).
{{< /alert >}}
Internal events use a tool called Snowplow under the hood. To develop and test internal events, you can use several tools to test frontend and backend events:
| Testing Tool | Frontend Tracking | Backend Tracking | Local Development Environment | Production Environment | Shows individual events |
|----------------------------------------------|--------------------|---------------------|-------------------------------|------------------------|------------------------|
| [Internal Events Monitor](#internal-events-monitor) | Yes | Yes | Yes | Yes | Yes |
| [Snowplow Micro](#snowplow-micro) | Yes | Yes | Yes | No | Yes |
| [Manual check in GDK](#manual-check-in-gdk) | Yes | Yes | Yes | Yes | No |
| [Snowplow Analytics Debugger Chrome Extension](#snowplow-analytics-debugger-chrome-extension) | Yes | No | Yes | Yes | Yes |
| [Remote event collector](#remote-event-collector) | Yes | No | Yes | No | Yes |
For local development we recommend using the [internal events monitor](#internal-events-monitor) when actively developing new events.
## Internal Events Monitor
<div class="video-fallback">
Watch the demo video about the <a href="https://www.youtube.com/watch?v=R7vT-VEzZOI">Internal Events Tracking Monitor</a>
</div>
<figure class="video-container">
<iframe src="https://www.youtube-nocookie.com/embed/R7vT-VEzZOI" frameborder="0" allowfullscreen> </iframe>
</figure>
To understand how events are triggered and metrics are updated while you use the GitLab application locally or in the `rails console`,
you can use the monitor.
Start the monitor and list one or more events that you would like to monitor. In this example, we monitor `i_code_review_user_create_mr`.
```shell
rails runner scripts/internal_events/monitor.rb i_code_review_user_create_mr
```
The monitor can show two tables:
- The `RELEVANT METRICS` table lists all the metrics that are defined on the `i_code_review_user_create_mr` event.
The second-to-last column shows the value of each metric when the monitor was started, and the last column shows the current value of each metric.
- The `SNOWPLOW EVENTS` table lists a selection of properties from only those Snowplow events that were fired after the monitor was started and that match the monitored event names. It is no longer a requirement to set up [Snowplow Micro](#snowplow-micro) for this table to be visible.
If a new `i_code_review_user_create_mr` event is fired, the metrics values get updated and a new event appears in the `SNOWPLOW EVENTS` table.
The monitor output looks like this:
```plaintext
Updated at 2023-10-11 10:17:59 UTC
Monitored events: i_code_review_user_create_mr
+--------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| RELEVANT METRICS |
+-----------------------------------------------------------------------------+------------------------------+-----------------------+---------------+---------------+
| Key Path | Monitored Events | Instrumentation Class | Initial Value | Current Value |
+-----------------------------------------------------------------------------+------------------------------+-----------------------+---------------+---------------+
| counts_monthly.aggregated_metrics.code_review_category_monthly_active_users | i_code_review_user_create_mr | RedisHLLMetric | 13 | 14 |
| counts_monthly.aggregated_metrics.code_review_group_monthly_active_users | i_code_review_user_create_mr | RedisHLLMetric | 13 | 14 |
| counts_weekly.aggregated_metrics.code_review_category_monthly_active_users | i_code_review_user_create_mr | RedisHLLMetric | 0 | 1 |
| counts_weekly.aggregated_metrics.code_review_group_monthly_active_users | i_code_review_user_create_mr | RedisHLLMetric | 0 | 1 |
| redis_hll_counters.code_review.i_code_review_user_create_mr_monthly | i_code_review_user_create_mr | RedisHLLMetric | 8 | 9 |
| redis_hll_counters.code_review.i_code_review_user_create_mr_weekly | i_code_review_user_create_mr | RedisHLLMetric | 0 | 1 |
+-----------------------------------------------------------------------------+------------------------------+-----------------------+---------------+---------------+
+---------------------------------------------------------------------------------------------------------+
| SNOWPLOW EVENTS |
+------------------------------+--------------------------+---------+--------------+------------+---------+
| Event Name | Collector Timestamp | user_id | namespace_id | project_id | plan |
+------------------------------+--------------------------+---------+--------------+------------+---------+
| i_code_review_user_create_mr | 2023-10-11T10:17:15.504Z | 29 | 93 | | default |
+------------------------------+--------------------------+---------+--------------+------------+---------+
```
The monitor's keyboard commands are:
- The `p` key acts as a toggle to pause and start the monitor. It makes it easier to select and copy the tables.
- The `r` key resets the monitor's internal state and removes any previously fired events from the display.
- The `q` key quits the monitor.
## Snowplow Micro
By default, GitLab Self-Managed instances do not collect event data through Snowplow. You can use [Snowplow Micro](https://docs.snowplow.io/docs/testing-debugging/snowplow-micro/what-is-micro/), a Docker-based Snowplow collector, to test events locally:
1. Ensure [Docker is installed and working](https://www.docker.com/get-started/).
1. Enable Snowplow Micro:
```shell
gdk config set snowplow_micro.enabled true
```
1. Optional. Snowplow Micro runs on port `9091` by default. You can change it to `9092` by running:
```shell
gdk config set snowplow_micro.port 9092
```
1. Regenerate your Procfile and YAML configuration by reconfiguring GDK:
```shell
gdk reconfigure
```
1. Restart the GDK:
```shell
gdk restart
```
1. You can now see all events being sent by your local instance in the Snowplow Micro UI and can filter for specific events. The Snowplow Micro UI can be found under the `/micro/ui` path, for example `http://localhost:9092/micro/ui`.
### Introduction to Snowplow Micro UI and API
<div class="video-fallback">
Watch the video about <a href="https://www.youtube.com/watch?v=netZ0TogNcA">Snowplow Micro</a>
</div>
<figure class="video-container">
<iframe src="https://www.youtube-nocookie.com/embed/netZ0TogNcA" frameborder="0" allowfullscreen> </iframe>
</figure>
## Manual check in GDK
As a quick test of whether an event is triggered and a metric is updated, you can check the latest values in the Rails console.
Make sure to load the helpers below so that the most recent events and records are included in the output.
To view the entire service ping payload:
```ruby
require_relative 'spec/support/helpers/service_ping_helpers.rb'
ServicePingHelpers.get_current_service_ping_payload
```
To view the current value for a specific metric:
```ruby
require_relative 'spec/support/helpers/service_ping_helpers.rb'
ServicePingHelpers.get_current_usage_metric_value(key_path)
```
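For example, to check one of the metric key paths shown in the monitor output earlier on this page:
```ruby
require_relative 'spec/support/helpers/service_ping_helpers.rb'
ServicePingHelpers.get_current_usage_metric_value('redis_hll_counters.code_review.i_code_review_user_create_mr_monthly')
```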
## Snowplow Analytics Debugger Chrome Extension
[Snowplow Analytics Debugger](https://chromewebstore.google.com/detail/snowplow-analytics-debugg/jbnlcgeengmijcghameodeaenefieedm) is a browser extension for testing frontend events.
It works in production, staging, and local development environments. It is especially suited to verifying that the correct events are sent in a deployed environment.
1. Install the [Snowplow Analytics Debugger](https://chromewebstore.google.com/detail/snowplow-analytics-debugg/jbnlcgeengmijcghameodeaenefieedm) Chrome browser extension.
1. Open Chrome DevTools to the Snowplow Debugger tab.
1. Any event triggered on a GitLab page should appear in the Snowplow Debugger tab.
## Remote event collector
On GitLab.com events are sent to a collector configured by GitLab. By default, GitLab Self-Managed instances do not have a collector configured and do not collect data with Snowplow.
You can configure your instance to use a custom Snowplow collector.
1. On the left sidebar, at the bottom, select **Admin**.
1. Select **Settings > General**.
1. Expand **Snowplow**.
1. Select **Enable Snowplow tracking** and enter your Snowplow configuration information. For example, if your custom Snowplow collector is available at `your-snowplow-collector.net`:
| Name | Value |
|--------------------|-------------------------------|
| Collector hostname | `your-snowplow-collector.net` |
| App ID | `gitlab` |
| Cookie domain | `.your-gitlab-instance.com` |
1. Select **Save changes**.
---
stage: Monitor
group: Analytics Instrumentation
info: Any user with at least the Maintainer role can merge updates to this content. For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Metrics definitions
---
Metrics are defined in YAML files located in subfolders of `config/metrics` and `ee/config/metrics`.
The YAML files are called metrics definitions.
This page describes the subsection of metric definitions with `data_source: internal_events`.
You can find a general overview of metric definition files in the [Metric Dictionary Guide](../metrics/metrics_dictionary.md).
## Supported metric types
Internal events support three different metric types, which are grouped like this:
1. All time total counters
1. Time framed total counters
1. Time framed unique counters
| Count type / Time frame | `7d` / `28d` | `all` |
|-------------------------|-----------------------------|-------------------------|
| **Total count** | Time framed total counters | All time total counters |
| **Unique count** | Time framed unique counters | |
You can tell whether a metric counts unique values or total values by looking at its [event selection rules](#event-selection-rules).
A snippet from a unique metric could look like the example below. Notice the `unique` property, which defines which [identifier](event_definition_guide.md#event-definition-and-validation) of the `create_merge_request` event is used for counting the unique values.
```yaml
events:
- name: create_merge_request
unique: user.id
```
Similarly, a snippet from a total count metric can look like the example below. Notice that there is no `unique` property.
```yaml
events:
- name: create_merge_request
```
We can track multiple events within one metric via [aggregated metrics](#aggregated-metrics).
### All time total counters
Example: Total visits to `/groups/:group/-/analytics/productivity_analytics` all time
```yaml
data_category: optional
key_path: counts.productivity_analytics_views
description: Total visits to /groups/:group/-/analytics/productivity_analytics all time
product_group: optimize
value_type: number
status: active
time_frame: all
data_source: internal_events
events:
- name: view_productivity_analytics
tiers:
- premium
- ultimate
performance_indicator_type: []
milestone: "<13.9"
```
The combination of `time_frame: all` and the event selection rule under `events` referring to the
`view_productivity_analytics` event means that this is an "all time total count" metric.
### Time framed total counters
An example is: Weekly count of Runner usage CSV report exports
```yaml
key_path: counts.count_total_export_runner_usage_by_project_as_csv_weekly
description: Weekly count of Runner usage CSV report exports
product_group: runner
performance_indicator_type: []
value_type: number
status: active
milestone: '16.9'
introduced_by_url: https://gitlab.com/gitlab-org/gitlab/-/merge_requests/142328
data_source: internal_events
data_category: optional
tiers:
- ultimate
time_frame: 7d
events:
- name: export_runner_usage_by_project_as_csv
```
The combination of `time_frame: 7d` and the event selection rule under `events` referring to the
`export_runner_usage_by_project_as_csv` event means that this is a "time framed total count" metric.
### Time framed unique counters
Example: Count of distinct users who opted to filter out anonymous users on the analytics dashboard view in the last 28 days.
```yaml
key_path: count_distinct_user_id_from_exclude_anonymised_users_28d
description: Count of distinct users who opted to filter out anonymous users on the analytics dashboard view in the last 28 days.
product_group: platform_insights
performance_indicator_type: []
value_type: number
status: active
milestone: '16.7'
introduced_by_url: https://gitlab.com/gitlab-org/gitlab/-/merge_requests/138150
time_frame: 28d
data_source: internal_events
data_category: optional
tiers:
- ultimate
events:
- name: exclude_anonymised_users
unique: user.id
```
The combination of `time_frame: 28d`, the event selection rule under `events` referring to the
`exclude_anonymised_users` event, and the unique value (`unique: user.id`) means that this is a "time framed unique count" metric.
## Event Selection Rules
Event selection rules are the parts that connect metric definitions and event definitions.
They are needed to know which metrics should be updated when an event is triggered.
Each internal event-based metric should have at least one event selection rule with the following properties.
| Property | Required | Additional information |
|--------------------|----------|---------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `name` | yes | Name of the event |
| `unique` | no | Used if the metric should count the distinct number of users, projects, namespaces, or count the unique values for additional properties present in the event. Valid values are `user.id`, `project.id` and `namespace.id`. Additionally `label`, `property`, and `value` may also be used in reference to any [additional properties](quick_start.md#additional-properties) included with the event. |
| `filter` | no | Used when only a subset of events should be included in the metric. Only additional properties can be used for filtering. |
An example of a single event selection rule that updates a unique count metric when an event called `pull_package` occurs with the additional property `label` set to `rubygems`:
```yaml
- name: pull_package
unique: user.id
filter:
label: rubygems
```
### Filters
Filters are used to constrain which events cause a metric to increase.
This filter includes only `pull_package` events with `label: rubygems`:
```yaml
- name: pull_package
filter:
label: rubygems
```
This filter is more restrictive, and includes only `pull_package` events with `label: rubygems` and `property: deploy_token`:
```yaml
- name: pull_package
filter:
label: rubygems
property: deploy_token
```
Filters also support [custom additional properties](quick_start.md#additional-properties):
```yaml
- name: pull_package
filter:
custom_key: custom_value
```
Filters only support matching of exact values and not wildcards or regular expressions.
## Aggregated metrics
A metric definition with several event selection rules can be considered an aggregated metric.
If you want to get the total number of `pull_package` and `push_package` events, you have to add two event selection rules:
```yaml
events:
- name: pull_package
- name: push_package
```
To get the number of unique users who have pushed or pulled a package at least once:
```yaml
events:
- name: pull_package
unique: user.id
- name: push_package
unique: user.id
```
Notice that unique metrics and total count metrics cannot be mixed in a single metric.
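For illustration, a complete definition for the unique-count aggregation above could look like the sketch below. The `key_path`, `description`, `product_group`, `milestone`, and `tiers` values are hypothetical and only follow the patterns of the earlier examples:

```yaml
key_path: redis_hll_counters.package.distinct_package_pull_or_push_user_28d
description: Count of distinct users who pulled or pushed a package in the last 28 days
product_group: package_registry
performance_indicator_type: []
value_type: number
status: active
milestone: '17.0'
time_frame: 28d
data_source: internal_events
data_category: optional
tiers:
  - free
  - premium
  - ultimate
events:
  - name: pull_package
    unique: user.id
  - name: push_package
    unique: user.id
```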
|
---
stage: Monitor
group: Analytics Instrumentation
info: Any user with at least the Maintainer role can merge updates to this content.
For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Metrics definitions
breadcrumbs:
- doc
- development
- internal_analytics
- internal_event_instrumentation
---
Metrics are defined in YAML files located in subfolders of `config/metrics` and `ee/config/metrics`.
The YAML files are called metrics definitions.
This page describes the subsection of metric definitions with `data_source: internal_events`.
You can find a general overview of metric definition files in the [Metric Dictionary Guide](../metrics/metrics_dictionary.md)
## Supported metric types
Internal events supports three different metric types which are grouped like this:
1. All time total counters
1. Time framed total counters
1. Time framed unique counters
| Count type / Time frame | `7d` / `28d` | `all` |
|-------------------------|-----------------------------|-------------------------|
| **Total count** | Time framed total counters | All time total counters |
| **Unique count** | Time framed unique counters | |
You can tell if a metric is counting unique values or total values by looking at the [event selection rules](#event-selection-rules).
A snippet from a unique metric could look like below. Notice the `unique` property which defines which [identifier](event_definition_guide.md#event-definition-and-validation) of the `create_merge_request` event is used for counting the unique values.
```yaml
events:
- name: create_merge_request
unique: user.id
```
Similarly, a snippet from a total count metric can look like below. Notice how there is no `unique` property.
```yaml
events:
- name: create_merge_request
```
We can track multiple events within one metric via [aggregated metrics](#aggregated-metrics).
### All time total counters
Example: Total visits to /groups/:group/-/analytics/productivity_analytics all time
```yaml
data_category: optional
key_path: counts.productivity_analytics_views
description: Total visits to /groups/:group/-/analytics/productivity_analytics all time
product_group: optimize
value_type: number
status: active
time_frame: all
data_source: internal_events
events:
- name: view_productivity_analytics
tiers:
- premium
- ultimate
performance_indicator_type: []
milestone: "<13.9"
```
The combination of `time_frame: all` and the event selection rule under `events` referring to the
`view_productivity_analytics` event means that this is an "all time total count" metric.
### Time framed total counters
An example is: Weekly count of Runner usage CSV report exports
```yaml
key_path: counts.count_total_export_runner_usage_by_project_as_csv_weekly
description: Weekly count of Runner usage CSV report exports
product_group: runner
performance_indicator_type: []
value_type: number
status: active
milestone: '16.9'
introduced_by_url: https://gitlab.com/gitlab-org/gitlab/-/merge_requests/142328
data_source: internal_events
data_category: optional
tiers:
- ultimate
time_frame: 7d
events:
- name: export_runner_usage_by_project_as_csv
```
The combination of `time_frame: 7d` and the event selection rule under `events` referring to the
`export_runner_usage_by_project_as_csv` event means that this is a "timed framed total count" metric.
### Time framed unique counters
Example: Count of distinct users who opted to filter out anonymous users on the analytics dashboard view in the last 28 days.
```yaml
key_path: count_distinct_user_id_from_exclude_anonymised_users_28d
description: Count of distinct users who opted to filter out anonymous users on the analytics dashboard view in the last 28 days.
product_group: platform_insights
performance_indicator_type: []
value_type: number
status: active
milestone: '16.7'
introduced_by_url: https://gitlab.com/gitlab-org/gitlab/-/merge_requests/138150
time_frame: 28d
data_source: internal_events
data_category: optional
tiers:
- ultimate
events:
- name: exclude_anonymised_users
unique: user.id
```
The combination of `time_frame: 28d`, the event selection rule under `events` referring to the
`exclude_anonymised_users` event and the unique value (`unique: user.id`) means that this is a "timed framed unique count" metric.
## Event Selection Rules
Event selection rules are the parts which connects metric definitions and event definitions.
They are needed to know which metrics should be updated when an event is triggered.
Each internal event based metric should have a least one event selection rule with the following properties.
| Property | Required | Additional information |
|--------------------|----------|---------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `name` | yes | Name of the event |
| `unique` | no | Used if the metric should count the distinct number of users, projects, namespaces, or count the unique values for additional properties present in the event. Valid values are `user.id`, `project.id` and `namespace.id`. Additionally `label`, `property`, and `value` may also be used in reference to any [additional properties](quick_start.md#additional-properties) included with the event. |
| `filter` | no | Used when only a subset of events should be included in the metric. Only additional properties can be used for filtering. |
An example of a single event selection rule which updates a unique count metric when an event called `pull_package` with additional property `label` with the value `rubygems` occurs:
```yaml
- name: pull_package
unique: user.id
filter:
label: rubygems
```
### Filters
Filters are used to constrain which events cause a metric to increase.
This filter includes only `pull_package` events with `label: rubygems`:
```yaml
- name: pull_package
filter:
label: rubygems
```
Whereas, this filter is even more restricted and only includes `pull_package` events with `label: rubygems` and `property: deploy_token`:
```yaml
- name: pull_package
filter:
label: rubygems
property: deploy_token
```
Filters support also [custom additional properties](quick_start.md#additional-properties):
```yaml
- name: pull_package
filter:
custom_key: custom_value
```
Filters only support matching of exact values and not wildcards or regular expressions.
## Aggregated metrics
A metric definition with several event selection rules can be considered an aggregated metric.
If you want to get total number of `pull_package` and `push_package` events you have to add two event selection rules:
```yaml
events:
- name: pull_package
- name: push_package
```
To get the number of unique users that have at least pushed or pulled a package once:
```yaml
events:
- name: pull_package
unique: user.id
- name: push_package
unique: user.id
```
Notice that unique metrics and total count metrics cannot be mixed in a single metric.
---
stage: Monitor
group: Analytics Instrumentation
info: Any user with at least the Maintainer role can merge updates to this content. For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Internal Events Payload Samples
---
> **Important**: Internal Event Tracking collects data solely for GitLab internal analytics purposes. This data is not shared with any third-party services or tools. GitLab uses components like Snowplow for implementation, but all data is collected, processed, and stored within GitLab infrastructure. User IDs are pseudonymized to protect privacy, and GitLab does not undertake any processes to re-identify users. For more information about data privacy, see [Customer product usage information](https://handbook.gitlab.com/handbook/legal/privacy/customer-product-usage-information/).
## Internal Events Payload
This guide provides payload samples for internal events tracked across frontend and backend services. Each event type includes a detailed breakdown of its fields and descriptions. Internal events use Snowplow to track events. For more information, see [Snowplow event parameters guide](https://docs.snowplow.io/docs/sources/trackers/snowplow-tracker-protocol/going-deeper/event-parameters/).
From GitLab 18.0, Self-Managed and Dedicated instances send structured events, self-describing events, page views, and page pings.
## Event Types
At its core, our Internal Events tracking system is designed for granular tracking of events. Each event is denoted by an `e=...` parameter.
There are three categories of events:
- Standard events, such as page views and page pings
- Custom structured events
- Self-describing events based on a schema
| **Type of tracking** | **Event type (value of e)** |
| ----------------------------------- | --------------------------- |
| Self-describing event | ue |
| Pageview tracking | pv |
| Page pings | pp |
| Custom structured event | se |
## Common Parameters
### Event Parameters
| **Parameter** | **Table Column** | **Type** | **Description** | **Example values** |
| ------------- | ---------------- | -------- | --------------- | ------------------ |
| e | event | text | Event type | pv, pp, ue, se |
| eid | `event_id` | text | Event UUID | 606adff6-9ccc-41f4-8807-db8fdb600df8 |
### Application Parameters
| **Parameter** | **Table Column** | **Type** | **Description** | **Example values** |
| ------------- | ---------------- | -------- | --------------- | ------------------ |
| tna | namespace_tracker | text | The tracker namespace | `gl` |
| aid | `app_id` | text | Unique identifier for the application | `gitlab-sm`|
| p | platform | text | The platform the app runs on | web, srv, app |
| tv | v_tracker | text | Identifier for tracker version | js-3.24.2 |
### Timestamp Parameters
| **Parameter** | **Table Column** | **Type** | **Description** | **Example values** |
| ------------- | --------------------- | -------- | --------------- | ------------------ |
| dtm | dvce_created_tstamp | int | Timestamp when event occurred, as recorded by client device | 1361553733313 |
| stm | dvce_sent_tstamp | int | Timestamp when event was sent by client device to collector | 1361553733371 |
| ttm | true_tstamp | int | User-set exact timestamp | 1361553733371 |
| tz | os_timezone | text | Time zone of the client device's OS | Europe%2FLondon |
> **Note**: The Internal Events Collector also captures `collector_tstamp`, which is the time the event arrived at the collector.
### User-Related Parameters
| **Parameter** | **Table Column** | **Type** | **Description** | **Example values** |
| ------------- | ------------------ | -------- | --------------- | ------------------ |
| duid | `domain_userid` | text | Unique rotating identifier for a user, based on a first-party cookie. | aeb1691c5a0ee5a6 |
| uid | `user_id` | text | `user_id`, which gets pseudonymized in the snowplow [pipeline](https://metrics.gitlab.com/identifiers/) | 1234567890 |
| vid | `domain_sessionidx` | int | Index of number of visits that this user has made to the application | 1 |
| sid | `domain_sessionid` | text | Unique identifier (UUID) generated to track a user's activity during a single visit or session. This identifier resets between sessions. The identifier is not linked to personal information. | 9c65e7f3-8e8e-470d-b243-910b5b300da0 |
| `ip` | `user_ipaddress` (we collect geo information but do not store the IP address in the Snowplow pipeline) | text | IP address override | 37.157.33.178 |
### Platform Parameters
| **Parameter** | **Table Column** | **Type** | **Description** | **Example values** |
| ------------- | ---------------- | -------- | --------------- | ------------------ |
| `url` | `page_url` | text | Page URL. We pseudonymize sensitive data from the URL ([see examples](https://metrics.gitlab.com/identifiers/)). | `https://gitlab.com/dashboard/projects` |
| `ua` | `useragent` | text | Useragent | `Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:105.0) Gecko/20100101 Firefox/105.0` |
| `page` | page_title | text | This value will always be hardcoded to `GitLab` | GitLab |
| refr | page_referrer | text | Referrer URL, similar to `page_url`. We pseudonymize referrer URL. | `https://gitlab.com/group:123/project:356` |
| cookie | br_cookies | boolean | Does the browser permit cookies? | 1 |
| lang | br_lang | text | Browser language | en-US |
| cd | br_colordepth | integer | Browser color depth | 24 |
| cs | doc_charset | text | Web page's character encoding | UTF-8 |
| ds | doc_width and doc_height | text | Web page width and height | 1090x1152 |
| vp | br_viewwidth and br_viewheight | text | Browser viewport width and height | 1105x390 |
| res | dvce_screenwidth and dvce_screenheight | text | Screen/monitor resolution | 1280x1024 |
## Self-describing Events
Self-describing events are the recommended way to track custom events with Internal Events tracking. They allow tracking of events according to a predefined schema.
When tracking a self-describing event:
- The event type is set to `e=ue`.
- The event data is base64 encoded and included in the payload.
## Specific Event Types
### Page Views
Pageview tracking is used to record views of web pages.
Recording a pageview involves recording an event where `e=pv`. All the fields associated with web events can be tracked.
### Page Pings
Page ping events track user engagement by periodically firing while a user remains active on a page. They measure actual time spent on page.
Page pings are identified by `e=pp` and include these additional fields:
| **Parameter** | **Table Column** | **Type** | **Description** |
| ------------- | ---------------- | -------- | --------------- |
| pp_mix | pp_xoffset_min | integer | Minimum page x offset seen in the last ping period |
| pp_max | pp_xoffset_max | integer | Maximum page x offset seen in the last ping period |
| pp_miy | pp_yoffset_min | integer | Minimum page y offset seen in the last ping period |
| pp_may | pp_yoffset_max | integer | Maximum page y offset seen in the last ping period |
### Structured Event Tracking
As well as setting `e=se`, there are five custom event-specific parameters that can be set, plus the base64-encoded `cx` context:
| **Parameter** | **Table Column** | **Type** | **Description** | **Example values** |
| ------------- | ---------------- | -------- | --------------- | ------------------ |
| se_ca | se_category | text | The event category. By default, where the event happened. For frontend events, it is the page name, for backend events it is the controller name. | projects:merge_requests:show |
| se_ac | se_action | text | The action or event name | code_suggestion_accepted |
| se_la | se_label | text | A label often used to refer to the 'object' the action is performed on | `${editor_name}` |
| se_pr | se_property | text | A property associated with either the action or the object | `${suggestion_type}` |
| se_va | se_value | decimal | A value associated with the user action | `${suggestion_shown_duration}` |
| cx | contexts | JSON | It passes base64 encoded context to the event | JSON |
Contexts contain predefined fields that are sent with each event. All the predefined schemas are stored in the [`gitlab-org/iglu`](https://gitlab.com/gitlab-org/iglu) repository.
Most of the self-describing events have `gitlab_standard` context, which is a set of fields that are common to all events. For more information about the `gitlab_standard` context, see [Standard context fields](standard_context_fields.md).
## Internal Events Payload Examples
### Page View
```json
{
"schema": "iglu:com.snowplowanalytics.snowplow/payload_data/jsonschema/1-0-4",
"data": [
{
"e": "pv",
"url": "https://gitlab.com/",
"page": "GitLab",
"refr": "https://gitlab.com/",
"eid": "564f9834-3f98-4d78-a738-b7977d621371",
"tv": "js-3.24.2",
"tna": "gl",
"aid": "gitlab",
"p": "web",
"cookie": "1",
"cs": "UTF-8",
"lang": "en-GB",
"res": "1728x1117",
"cd": "30",
"tz": "Asia/Calcutta",
"dtm": "1742205227525",
"vp": "1920x331",
"ds": "1920x388",
"vid": "720",
"sid": "1574509e-5d6d-43d1-9e76-e42801ae2e55",
"duid": "9e5500ac-3437-4457-a007-351911d54983",
"cx": "eyJzY2hlbWEiOiJpZ2x1OmNvbS5zbm93cGxvd2FuYWx5dGljcy5z...",
"stm": "1742205227528"
}
]
}
```
The `cx` field is base64-encoded and contains the following JSON:
```json
{
"schema": "iglu:com.snowplowanalytics.snowplow/contexts/jsonschema/1-0-0",
"data": [
{
"schema": "iglu:com.gitlab/gitlab_standard/jsonschema/1-1-1",
"data": {
"environment": "production",
"source": "gitlab-javascript",
"correlation_id": "01JPHRC3K30KDDV165EWTCFJ02",
"plan": null,
"extra": {},
"user_id": 11979729,
"global_user_id": "XsZfAb677xjp9zut/lL6X0ZKX5b7pli65uk2wnfu0SY=",
"is_gitlab_team_member": true,
"namespace_id": null,
"project_id": null,
"feature_enabled_by_namespace_ids": null,
"realm": "saas",
"instance_id": "ea8bf810-1d6f-4a6a-b4fd-93e8cbd8b57f",
"host_name": "gitlab-webservice-web-58446c98b5-zprvd",
"instance_version": "17.10.0",
"context_generated_at": "2025-03-17T09:53:46.709Z",
"google_analytics_id": "GA1.1.424273043.1737451027"
}
},
{
"schema": "iglu:com.snowplowanalytics.snowplow/web_page/jsonschema/1-0-0",
"data": {
"id": "90ea98bd-3bdb-48d2-935c-59a4d03a4710"
}
},
{
"schema": "iglu:com.google.analytics/cookies/jsonschema/1-0-0",
"data": {
"_ga": "GA1.1.424273043.1737451027"
}
},
{
"schema": "iglu:com.google.ga4/cookies/jsonschema/1-0-0",
"data": {
"_ga": "GA1.1.424273043.1737451027",
"session_cookies": [
{
"measurement_id": "G-ENFH3X7M5Y",
"session_cookie": "GS1.1.1742200876.45.1.1742202521.0.0.0"
}
]
}
},
{
"schema": "iglu:org.w3/PerformanceTiming/jsonschema/1-0-0",
"data": {
"navigationStart": 1742205226288,
"redirectStart": 0,
"redirectEnd": 0,
"fetchStart": 1742205226289,
"domainLookupStart": 1742205226289,
"domainLookupEnd": 1742205226289,
"connectStart": 1742205226289,
"secureConnectionStart": 0,
"connectEnd": 1742205226289,
"requestStart": 1742205226323,
"responseStart": 1742205226969,
"responseEnd": 1742205226972,
"unloadEventStart": 1742205226975,
"unloadEventEnd": 1742205226975,
"domLoading": 1742205226980,
"domInteractive": 1742205227044,
"domContentLoadedEventStart": 1742205227437,
"domContentLoadedEventEnd": 1742205227437,
"domComplete": 0,
"loadEventStart": 0,
"loadEventEnd": 0
}
},
{
"schema": "iglu:org.ietf/http_client_hints/jsonschema/1-0-0",
"data": {
"isMobile": false,
"brands": [
{
"brand": "Chromium",
"version": "134"
},
{
"brand": "Not:A-Brand",
"version": "24"
},
{
"brand": "Google Chrome",
"version": "134"
}
]
}
}
]
}
```
### Page Ping
```json
{
"schema": "iglu:com.snowplowanalytics.snowplow/payload_data/jsonschema/1-0-4",
"data": [
{
"e": "pp",
"url": "https://gitlab.com/",
"page": "GitLab",
"refr": "https://gitlab.com/",
"eid": "ac958a76-5360-44e1-a9f3-8172d6df0f80",
"tv": "js-3.24.2",
"tna": "gl",
"aid": "gitlab",
"p": "web",
"cookie": "1",
"cs": "UTF-8",
"lang": "en-GB",
"res": "1728x1117",
"cd": "30",
"tz": "Asia/Calcutta",
"dtm": "1742205324496",
"vp": "1920x331",
"ds": "1920x1694",
"vid": "720",
"sid": "1574509e-5d6d-43d1-9e76-e42801ae2e55",
"duid": "9e5500ac-3437-4457-a007-351911d54983",
"cx": "eyJzY2hlbWEiOiJpZ2x1OmNvbS5zbm93cGxvd2FuYWx5dGljcy...",
"stm": "1742205324501"
}
]
}
```
### Self-describing Events
```json
{
"schema": "iglu:com.snowplowanalytics.snowplow/payload_data/jsonschema/1-0-4",
"data": [
{
"e": "ue",
"eid": "67ae8ec1-3ec0-46b7-89e0-fd944d90acc6",
"tv": "js-3.24.2",
"tna": "gl",
"aid": "gitlab",
"p": "web",
"cookie": "1",
"cs": "UTF-8",
"lang": "en-GB",
"res": "1728x1117",
"cd": "30",
"tz": "Asia/Calcutta",
"dtm": "1742205393772",
"vp": "1920x331",
"ds": "1920x1694",
"vid": "720",
"sid": "1574509e-5d6d-43d1-9e76-e42801ae2e55",
"duid": "9e5500ac-3437-4457-a007-351911d54983",
"refr": "https://gitlab.com/",
"url": "https://gitlab.com/",
"ue_px": "eyJzY2hlbWEiOiJpZ2x1OmNvbS5zbm93cGxvd2FuYWx5dGljcy...",
"cx": "eyJzY2hlbWEiOiJpZ2x1OmNvbS5zbm93cGxvd2FuYWx5dGljcy...",
"stm": "1742205393774"
}
]
}
```
This is part of link click tracking. The `ue_px` field is base64-encoded and contains the following JSON:
```json
{
"schema": "iglu:com.snowplowanalytics.snowplow/unstruct_event/jsonschema/1-0-0",
"data": {
"schema": "iglu:com.snowplowanalytics.snowplow/link_click/jsonschema/1-0-1",
"data": {
"targetUrl": "https://gitlab.com/",
"elementId": "",
"elementClasses": [
"brand-logo"
],
"elementTarget": ""
}
}
}
```
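If you want to decode one of these base64-encoded fields (`cx` or `ue_px`) yourself, for example while inspecting captured payloads, a minimal Ruby sketch could look like the following. It assumes the value uses URL-safe base64 with the padding stripped, which is why padding is re-added before decoding:

```ruby
require 'base64'
require 'json'

# Pass the full `cx` or `ue_px` value copied from a captured event payload.
encoded = ARGV.fetch(0)

# Re-add the stripped base64 padding before decoding.
encoded += '=' * ((4 - encoded.length % 4) % 4)

puts JSON.pretty_generate(JSON.parse(Base64.urlsafe_decode64(encoded)))
```

Save it as, for example, `decode_event.rb` and run `ruby decode_event.rb '<encoded value>'`.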
### Structured Events
```json
{
"schema": "iglu:com.snowplowanalytics.snowplow/payload_data/jsonschema/1-0-4",
"data": [
{
"e": "se",
"se_ca": "root:index",
"se_ac": "render_duo_chat_callout",
"eid": "12c18f54-ef65-489e-99f8-00922f9c3249",
"tv": "js-3.24.2",
"tna": "gl",
"aid": "gitlab",
"p": "web",
"cookie": "1",
"cs": "UTF-8",
"lang": "en-GB",
"res": "1728x1117",
"cd": "30",
"tz": "Asia/Calcutta",
"dtm": "1742205394848",
"vp": "1920x331",
"ds": "1920x388",
"vid": "720",
"sid": "1574509e-5d6d-43d1-9e76-e42801ae2e55",
"duid": "9e5500ac-3437-4457-a007-351911d54983",
"refr": "https://gitlab.com/",
"url": "https://gitlab.com/",
"cx": "eyJzY2hlbWEiOiJpZ2x1OmNvbS5zbm93cGxvd2FuYWx5dGljcy...",
"stm": "1742205395080"
}
]
}
```
### Backend Events
```json
{
"e": "se",
"eid": "2e78c447-c18e-4087-a3a8-35723ecfb602",
"aid": "asdfsadf",
"cx": "eyJzY2hlbWEiOiJpZ2x1OmNvbS5zbm93cGxvd2FuYWx5dGljcy...",
"tna": "gl",
"stm": "1742268163018",
"tv": "rb-0.8.0",
"se_ac": "perform_action",
"se_la": "redis_hll_counters.manage.unique_active_users_monthly",
"se_ca": "Users::ActivityService",
"p": "srv",
"dtm": "1742268163016"
}
```
The `cx` field is base64-encoded and contains the following JSON:
```json
{
"schema": "iglu:com.snowplowanalytics.snowplow/contexts/jsonschema/1-0-1",
"data": [
{
"schema": "iglu:com.gitlab/gitlab_standard/jsonschema/1-1-1",
"data": {
"environment": "development",
"source": "gitlab-rails",
"correlation_id": "01JPKMCRCBSMB07DPGVSJJ708F",
"plan": null,
"extra": {},
"user_id": 1,
"global_user_id": "KaAjqePKpCsnc6P40up8ZOi4+BUwEUIyab6W5jWIg5M=",
"is_gitlab_team_member": null,
"namespace_id": null,
"project_id": null,
"feature_enabled_by_namespace_ids": null,
"realm": "self-managed",
"instance_id": "e1baa3de-7e45-4fbc-b17e-95995935cf09",
"host_name": "nbelokolodov--20220811-Y26WJ",
"instance_version": "17.10.0",
"context_generated_at": "2025-03-18 03:22:43 UTC"
}
},
{
"schema": "iglu:com.gitlab/gitlab_service_ping/jsonschema/1-0-1",
"data": {
"data_source": "redis_hll",
"event_name": "unique_active_user"
}
}
]
}
```
|
---
stage: Monitor
group: Analytics Instrumentation
info: Any user with at least the Maintainer role can merge updates to this content.
For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Internal Events Payload Samples
breadcrumbs:
- doc
- development
- internal_analytics
- internal_event_instrumentation
---
> **Important**: Internal Event Tracking collects data solely for GitLab internal analytics purposes. This data is not shared with any third-party services or tools. GitLab uses components like Snowplow for implementation, but all data is collected, processed, and stored within GitLab infrastructure. User IDs are pseudonymized to protect privacy, and GitLab does not undertake any processes to re-identify users. For more information about data privacy, see [Customer product usage information](https://handbook.gitlab.com/handbook/legal/privacy/customer-product-usage-information/).
## Internal Events Payload
This guide provides payload samples for internal events tracked across frontend and backend services. Each event type includes a detailed breakdown of its fields and descriptions. Internal events use Snowplow to track events. For more information, see [Snowplow event parameters guide](https://docs.snowplow.io/docs/sources/trackers/snowplow-tracker-protocol/going-deeper/event-parameters/).
From GitLab 18.0, Self-Managed and Dedicated instances will be sending structured events, self-describing events, page views, and page pings.
## Event Types
At its core, our Internal Events tracking system is designed for granular tracking of events. Each event is denoted by an `e=...` parameter.
There are three categories of events:
- Standard events, such as page views and page pings
- Custom structured events
- Self-describing events based on a schema
| **Type of tracking** | **Event type (value of e)** |
| ----------------------------------- | --------------------------- |
| Self-describing event | ue |
| Pageview tracking | pv |
| Page pings | pp |
| Custom structured event | se |
## Common Parameters
### Event Parameters
| **Parameter** | **Table Column** | **Type** | **Description** | **Example values** |
| ------------- | ---------------- | -------- | --------------- | ------------------ |
| e | event | text | Event type | pv, pp, ue, se |
| eid | `event_id` | text | Event UUID | 606adff6-9ccc-41f4-8807-db8fdb600df8 |
### Application Parameters
| **Parameter** | **Table Column** | **Type** | **Description** | **Example values** |
| ------------- | ---------------- | -------- | --------------- | ------------------ |
| tna | namespace_tracker | text | The tracker namespace | `gl` |
| aid | `app_id` | text | Unique identifier for the application | `gitlab-sm`|
| p | platform | text | The platform the app runs on | web, srv, app |
| tv | v_tracker | text | Identifier for tracker version | js-3.24.2 |
### Timestamp Parameters
| **Parameter** | **Table Column** | **Type** | **Description** | **Example values** |
| ------------- | --------------------- | -------- | --------------- | ------------------ |
| dtm | dvce_created_tstamp | int | Timestamp when event occurred, as recorded by client device | 1361553733313 |
| stm | dvce_sent_tstamp | int | Timestamp when event was sent by client device to collector | 1361553733371 |
| ttm | true_tstamp | int | User-set exact timestamp | 1361553733371 |
| tz | os_timezone | text | Time zone of client devices OS | Europe%2FLondon |
> **Note**: The Internal Events Collector will also capture `collector_tstamp` which is the time the event arrived at the collector.
### User-Related Parameters
| **Parameter** | **Table Column** | **Type** | **Description** | **Example values** |
| ------------- | ------------------ | -------- | --------------- | ------------------ |
| duid | `domain_userid` | text | Unique rotating identifier for a user, based on a first-party cookie. | aeb1691c5a0ee5a6 |
| uid | `user_id` | text | `user_id`, which gets pseudonymized in the snowplow [pipeline](https://metrics.gitlab.com/identifiers/) | 1234567890 |
| vid | `domain_sessionidx` | int | Index of number of visits that this user has made to the application | 1 |
| sid | `domain_sessionid` | text | Unique identifier (UUID) generated to track a user's activity during a single visit or session. This identifier resets between sessions. The identifier is not linked to personal information. | 9c65e7f3-8e8e-470d-b243-910b5b300da0 |
| `ip` | `user_ipaddress`, we collect Geo information but do not store the IP address in the snowplow pipeline | text | IP address override | 37.157.33.178 |
### Platform Parameters
| **Parameter** | **Table Column** | **Type** | **Description** | **Example values** |
| ------------- | ---------------- | -------- | --------------- | ------------------ |
| `url` | `page_url` | text | Page URL. We pseudonymize sensitive data from the URL ([see examples](https://metrics.gitlab.com/identifiers/)). | `https://gitlab.com/dashboard/projects` |
| `ua` | `useragent` | text | Useragent | `Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:105.0) Gecko/20100101 Firefox/105.0` |
| `page` | page_title | text | This value will always be hardcoded to `GitLab` | GitLab |
| refr | page_referrer | text | Referrer URL, similar to `page_url`. We pseudonymize referrer URL. | `https://gitlab.com/group:123/project:356` |
| cookie | br_cookies | boolean | Does the browser permit cookies? | 1 |
| lang | br_lang | text | Browser language | en-US |
| cd | br_colordepth | integer | Browser color depth | 24 |
| cs | doc_charset | text | Web page's character encoding | UTF-8 |
| ds | doc_width and doc_height | text | Web page width and height | 1090x1152 |
| vp | br_viewwidth and br_viewheight | text | Browser viewport width and height | 1105x390 |
| res | dvce_screenwidth and dvce_screenheight | text | Screen/monitor resolution | 1280x1024 |
## Self-describing Events
Self-describing events are the recommended way to track custom events with Internal Events tracking. They allow tracking of events according to a predefined schema.
When tracking a self-describing event:
- The event type is set to `e=ue`.
- The event data is base64 encoded and included in the payload.
## Specific Event Types
### Page Views
Pageview tracking is used to record views of web pages.
Recording a pageview involves recording an event where `e=pv`. All the fields associated with web events can be tracked.
### Page Pings
Page ping events track user engagement by periodically firing while a user remains active on a page. They measure actual time spent on page.
Page pings are identified by `e=pp` and include these additional fields:
| **Parameter** | **Table Column** | **Type** | **Description** |
| ------------- | ---------------- | -------- | --------------- |
| pp_mix | pp_xoffset_min | integer | Minimum page x offset seen in the last ping period |
| pp_max | pp_xoffset_max | integer | Maximum page x offset seen in the last ping period |
| pp_miy | pp_yoffset_min | integer | Minimum page y offset seen in the last ping period |
| pp_may | pp_yoffset_max | integer | Maximum page y offset seen in the last ping period |
### Structured Event Tracking
As well as setting `e=se`, there are five custom event specific parameters that can be set:
| **Parameter** | **Table Column** | **Type** | **Description** | **Example values** |
| ------------- | ---------------- | -------- | --------------- | ------------------ |
| se_ca | se_category | text | The event category. By default, where the event happened. For frontend events, it is the page name, for backend events it is the controller name. | projects:merge_requests:show |
| se_ac | se_action | text | The action or event name | code_suggestion_accepted |
| se_la | se_label | text | A label often used to refer to the 'object' the action is performed on | `${editor_name}` |
| se_pr | se_property | text | A property associated with either the action or the object | `${suggestion_type}` |
| se_va | se_value | decimal | A value associated with the user action | `${suggestion_shown_duration}` |
| cx | contexts | JSON | It passes base64 encoded context to the event | JSON |
Contexts has some of the predefined fields which will be sent with each event. All the predefined schemas are stored in the [`gitlab-org/iglu`](https://gitlab.com/gitlab-org/iglu) repository.
Most of the self-describing events have `gitlab_standard` context, which is a set of fields that are common to all events. For more information about the `gitlab_standard` context, see [Standard context fields](standard_context_fields.md).
## Internal Events Payload Examples
### Page View
```json
{
"schema": "iglu:com.snowplowanalytics.snowplow/payload_data/jsonschema/1-0-4",
"data": [
{
"e": "pv",
"url": "https://gitlab.com/",
"page": "GitLab",
"refr": "https://gitlab.com/",
"eid": "564f9834-3f98-4d78-a738-b7977d621371",
"tv": "js-3.24.2",
"tna": "gl",
"aid": "gitlab",
"p": "web",
"cookie": "1",
"cs": "UTF-8",
"lang": "en-GB",
"res": "1728x1117",
"cd": "30",
"tz": "Asia/Calcutta",
"dtm": "1742205227525",
"vp": "1920x331",
"ds": "1920x388",
"vid": "720",
"sid": "1574509e-5d6d-43d1-9e76-e42801ae2e55",
"duid": "9e5500ac-3437-4457-a007-351911d54983",
"cx": "eyJzY2hlbWEiOiJpZ2x1OmNvbS5zbm93cGxvd2FuYWx5dGljcy5z...",
"stm": "1742205227528"
}
]
}
```
cx field is base64 encoded and contains the following JSON:
```json
{
"schema": "iglu:com.snowplowanalytics.snowplow/contexts/jsonschema/1-0-0",
"data": [
{
"schema": "iglu:com.gitlab/gitlab_standard/jsonschema/1-1-1",
"data": {
"environment": "production",
"source": "gitlab-javascript",
"correlation_id": "01JPHRC3K30KDDV165EWTCFJ02",
"plan": null,
"extra": {},
"user_id": 11979729,
"global_user_id": "XsZfAb677xjp9zut/lL6X0ZKX5b7pli65uk2wnfu0SY=",
"is_gitlab_team_member": true,
"namespace_id": null,
"project_id": null,
"feature_enabled_by_namespace_ids": null,
"realm": "saas",
"instance_id": "ea8bf810-1d6f-4a6a-b4fd-93e8cbd8b57f",
"host_name": "gitlab-webservice-web-58446c98b5-zprvd",
"instance_version": "17.10.0",
"context_generated_at": "2025-03-17T09:53:46.709Z",
"google_analytics_id": "GA1.1.424273043.1737451027"
}
},
{
"schema": "iglu:com.snowplowanalytics.snowplow/web_page/jsonschema/1-0-0",
"data": {
"id": "90ea98bd-3bdb-48d2-935c-59a4d03a4710"
}
},
{
"schema": "iglu:com.google.analytics/cookies/jsonschema/1-0-0",
"data": {
"_ga": "GA1.1.424273043.1737451027"
}
},
{
"schema": "iglu:com.google.ga4/cookies/jsonschema/1-0-0",
"data": {
"_ga": "GA1.1.424273043.1737451027",
"session_cookies": [
{
"measurement_id": "G-ENFH3X7M5Y",
"session_cookie": "GS1.1.1742200876.45.1.1742202521.0.0.0"
}
]
}
},
{
"schema": "iglu:org.w3/PerformanceTiming/jsonschema/1-0-0",
"data": {
"navigationStart": 1742205226288,
"redirectStart": 0,
"redirectEnd": 0,
"fetchStart": 1742205226289,
"domainLookupStart": 1742205226289,
"domainLookupEnd": 1742205226289,
"connectStart": 1742205226289,
"secureConnectionStart": 0,
"connectEnd": 1742205226289,
"requestStart": 1742205226323,
"responseStart": 1742205226969,
"responseEnd": 1742205226972,
"unloadEventStart": 1742205226975,
"unloadEventEnd": 1742205226975,
"domLoading": 1742205226980,
"domInteractive": 1742205227044,
"domContentLoadedEventStart": 1742205227437,
"domContentLoadedEventEnd": 1742205227437,
"domComplete": 0,
"loadEventStart": 0,
"loadEventEnd": 0
}
},
{
"schema": "iglu:org.ietf/http_client_hints/jsonschema/1-0-0",
"data": {
"isMobile": false,
"brands": [
{
"brand": "Chromium",
"version": "134"
},
{
"brand": "Not:A-Brand",
"version": "24"
},
{
"brand": "Google Chrome",
"version": "134"
}
]
}
}
]
}
```
### Page Ping
```json
{
"schema": "iglu:com.snowplowanalytics.snowplow/payload_data/jsonschema/1-0-4",
"data": [
{
"e": "pp",
"url": "https://gitlab.com/",
"page": "GitLab",
"refr": "https://gitlab.com/",
"eid": "ac958a76-5360-44e1-a9f3-8172d6df0f80",
"tv": "js-3.24.2",
"tna": "gl",
"aid": "gitlab",
"p": "web",
"cookie": "1",
"cs": "UTF-8",
"lang": "en-GB",
"res": "1728x1117",
"cd": "30",
"tz": "Asia/Calcutta",
"dtm": "1742205324496",
"vp": "1920x331",
"ds": "1920x1694",
"vid": "720",
"sid": "1574509e-5d6d-43d1-9e76-e42801ae2e55",
"duid": "9e5500ac-3437-4457-a007-351911d54983",
"cx": "eyJzY2hlbWEiOiJpZ2x1OmNvbS5zbm93cGxvd2FuYWx5dGljcy...",
"stm": "1742205324501"
}
]
}
```
### Self-describing Events
```json
{
"schema": "iglu:com.snowplowanalytics.snowplow/payload_data/jsonschema/1-0-4",
"data": [
{
"e": "ue",
"eid": "67ae8ec1-3ec0-46b7-89e0-fd944d90acc6",
"tv": "js-3.24.2",
"tna": "gl",
"aid": "gitlab",
"p": "web",
"cookie": "1",
"cs": "UTF-8",
"lang": "en-GB",
"res": "1728x1117",
"cd": "30",
"tz": "Asia/Calcutta",
"dtm": "1742205393772",
"vp": "1920x331",
"ds": "1920x1694",
"vid": "720",
"sid": "1574509e-5d6d-43d1-9e76-e42801ae2e55",
"duid": "9e5500ac-3437-4457-a007-351911d54983",
"refr": "https://gitlab.com/",
"url": "https://gitlab.com/",
"ue_px": "eyJzY2hlbWEiOiJpZ2x1OmNvbS5zbm93cGxvd2FuYWx5dGljcy...",
"cx": "eyJzY2hlbWEiOiJpZ2x1OmNvbS5zbm93cGxvd2FuYWx5dGljcy...",
"stm": "1742205393774"
}
]
}
```
This is part of link click tracking. The `ue_px` field is base64 encoded and contains the following JSON:
```json
{
"schema": "iglu:com.snowplowanalytics.snowplow/unstruct_event/jsonschema/1-0-0",
"data": {
"schema": "iglu:com.snowplowanalytics.snowplow/link_click/jsonschema/1-0-1",
"data": {
"targetUrl": "https://gitlab.com/",
"elementId": "",
"elementClasses": [
"brand-logo"
],
"elementTarget": ""
}
}
}
```
### Structured Events
```json
{
"schema": "iglu:com.snowplowanalytics.snowplow/payload_data/jsonschema/1-0-4",
"data": [
{
"e": "se",
"se_ca": "root:index",
"se_ac": "render_duo_chat_callout",
"eid": "12c18f54-ef65-489e-99f8-00922f9c3249",
"tv": "js-3.24.2",
"tna": "gl",
"aid": "gitlab",
"p": "web",
"cookie": "1",
"cs": "UTF-8",
"lang": "en-GB",
"res": "1728x1117",
"cd": "30",
"tz": "Asia/Calcutta",
"dtm": "1742205394848",
"vp": "1920x331",
"ds": "1920x388",
"vid": "720",
"sid": "1574509e-5d6d-43d1-9e76-e42801ae2e55",
"duid": "9e5500ac-3437-4457-a007-351911d54983",
"refr": "https://gitlab.com/",
"url": "https://gitlab.com/",
"cx": "eyJzY2hlbWEiOiJpZ2x1OmNvbS5zbm93cGxvd2FuYWx5dGljcy...",
"stm": "1742205395080"
}
]
}
```
### Backend Events
```json
{
"e": "se",
"eid": "2e78c447-c18e-4087-a3a8-35723ecfb602",
"aid": "asdfsadf",
"cx": "eyJzY2hlbWEiOiJpZ2x1OmNvbS5zbm93cGxvd2FuYWx5dGljcy...",
"tna": "gl",
"stm": "1742268163018",
"tv": "rb-0.8.0",
"se_ac": "perform_action",
"se_la": "redis_hll_counters.manage.unique_active_users_monthly",
"se_ca": "Users::ActivityService",
"p": "srv",
"dtm": "1742268163016"
}
```
cx field is base64 encoded and contains the following JSON:
```json
{
"schema": "iglu:com.snowplowanalytics.snowplow/contexts/jsonschema/1-0-1",
"data": [
{
"schema": "iglu:com.gitlab/gitlab_standard/jsonschema/1-1-1",
"data": {
"environment": "development",
"source": "gitlab-rails",
"correlation_id": "01JPKMCRCBSMB07DPGVSJJ708F",
"plan": null,
"extra": {},
"user_id": 1,
"global_user_id": "KaAjqePKpCsnc6P40up8ZOi4+BUwEUIyab6W5jWIg5M=",
"is_gitlab_team_member": null,
"namespace_id": null,
"project_id": null,
"feature_enabled_by_namespace_ids": null,
"realm": "self-managed",
"instance_id": "e1baa3de-7e45-4fbc-b17e-95995935cf09",
"host_name": "nbelokolodov--20220811-Y26WJ",
"instance_version": "17.10.0",
"context_generated_at": "2025-03-18 03:22:43 UTC"
}
},
{
"schema": "iglu:com.gitlab/gitlab_service_ping/jsonschema/1-0-1",
"data": {
"data_source": "redis_hll",
"event_name": "unique_active_user"
}
}
]
}
```
---
stage: Monitor
group: Analytics Instrumentation
info: Any user with at least the Maintainer role can merge updates to this content. For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Migrating existing tracking to internal event tracking
---
GitLab Internal Events Tracking exposes a unified API on top of the deprecated Snowplow and Redis/RedisHLL event tracking options.
This page describes how you can switch from one of the previous methods to using Internal Events Tracking.
{{< alert type="note" >}}
Tracking events directly via Snowplow or Redis/RedisHLL is deprecated but won't be removed in the foreseeable future.
While we encourage you to migrate to Internal Events Tracking, the deprecated methods continue to work for existing events and metrics.
{{< /alert >}}
## Migrating from existing Snowplow tracking
If you are already tracking events in Snowplow, you can also start collecting metrics from GitLab Self-Managed instances by switching to Internal Events Tracking.
The event triggered by Internal Events has some special properties compared to tracking with Snowplow directly:
1. The `category` is automatically set to the location where the event happened. For frontend events it is the page name, and for backend events it is a class name. If the page name or class name is not used, the default value of `"InternalEventTracking"` is used.
Make sure that you are okay with this change before you migrate, and that dashboards are changed accordingly.
### Backend
If you are already tracking Snowplow events using `Gitlab::Tracking.event` and you want to migrate to Internal Events Tracking you might start with something like this:
```ruby
Gitlab::Tracking.event(name, 'ci_templates_unique', namespace: namespace,
project: project, context: [context], user: user, label: label)
```
The code above can be replaced by this:
```ruby
include Gitlab::InternalEventsTracking
track_internal_event('ci_templates_unique', namespace: namespace, project: project, user: user, additional_properties: { label: label })
```
The `label`, `property`, and `value` attributes must be sent inside the `additional_properties` hash. If they were not included in the original call, the `additional_properties` argument can be omitted.
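For instance, if the original `Gitlab::Tracking.event` call did not send a `label`, `property`, or `value`, the migrated call can omit the hash entirely. A minimal sketch reusing the event name from the example above:
```ruby
include Gitlab::InternalEventsTracking

# No label/property/value in the original call, so additional_properties is omitted.
track_internal_event('ci_templates_unique', namespace: namespace, project: project, user: user)
```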
In addition, you have to create definitions for the metrics that you would like to track.
To generate metric definitions, you can use the generator:
```shell
scripts/internal_events/cli.rb
```
The generator walks you through the required inputs step-by-step.
If the migrated event has been previously used for tracking RedisHLL metrics, test the migration by using the `migrated internal event` [shared examples](https://gitlab.com/gitlab-org/gitlab/-/blob/master/spec/support/shared_examples/controllers/internal_event_tracking_examples.rb) ([example usage](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/182450/diffs)).
### Frontend
If you are using the `Tracking` mixin in a Vue component, you can replace it with the `InternalEvents` mixin.
For example, if your current Vue component looks like this:
```vue
import Tracking from '~/tracking';
...
mixins: [Tracking.mixin()]
...
...
this.track('some_label', options)
```
After converting it to Internal Events Tracking, it should look like this:
```vue
import { InternalEvents } from '~/tracking';
...
mixins: [InternalEvents.mixin()]
...
...
this.trackEvent('action', {}, 'category')
```
If you are currently passing `category` and need to keep it, you can pass it as the third argument to the `trackEvent` method, as illustrated in the previous example. However, we strongly advise against using the `category` parameter for new events, because by default the category field is populated with information about where the event was triggered.
You can use [this MR](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/123901/diffs) as an example. It migrates the `devops_adoption_app` component to use Internal Events Tracking.
If you are using `label`, `value`, and `property` in Snowplow tracking, you can pass them as an object in the second argument to the `trackEvent` function, as shown in the examples below. This argument is optional.
For Vue Mixin:
```javascript
this.trackEvent('i_code_review_user_apply_suggestion', {
label: 'push_event',
property: 'golang',
value: 20
});
```
For raw JavaScript:
```javascript
InternalEvents.trackEvent('i_code_review_user_apply_suggestion', {
label: 'admin',
property: 'system',
value: 20
});
```
If you are using `data-track-action` in the component, you have to change it to `data-event-tracking` to migrate to Internal Events Tracking. If there are additional tracking attributes like `data-track-label`, `data-track-property`, and `data-track-value`, replace them with `data-event-label`, `data-event-property`, and `data-event-value`. If you want to pass any additional property as a custom key-value pair, use the `data-event-additional` attribute.
For example, if a button is defined like this:
```vue
<gl-button
:href="diffFile.external_url"
:title="externalUrlLabel"
:aria-label="externalUrlLabel"
target="_blank"
data-track-action="click_toggle_external_button"
data-track-label="diff_toggle_external_button"
data-track-property="diff_toggle_external"
icon="external-link"
/>
```
This can be converted to Internal Events Tracking like this:
```vue
<gl-button
:href="diffFile.external_url"
:title="externalUrlLabel"
:aria-label="externalUrlLabel"
target="_blank"
data-event-tracking="click_toggle_external_button"
data-event-label="diff_toggle_external_button"
data-event-property="diff_toggle_external"
data-event-additional='{"key1": "value1", "key2": "value2"}'
icon="external-link"
/>
```
Notice that only the action needs to be passed in the `data-event-tracking` attribute, and it is passed to both Snowplow and RedisHLL.
## Migrating from tracking with RedisHLL
### Backend
If you are currently tracking a metric in `RedisHLL` like this:
```ruby
Gitlab::UsageDataCounters::HLLRedisCounter.track_event(:git_write_action, values: current_user.id)
```
To start using Internal Events Tracking, follow these steps:
1. If the event is not being sent to Snowplow, consider renaming it to meet [our naming convention](quick_start.md#defining-event-and-metrics).
1. Create an event definition that describes `git_write_action` ([guide](event_definition_guide.md)).
1. Find metric definitions that list `git_write_action` in the events section (`20210216182041_action_monthly_active_users_git_write.yml` and `20210216184045_git_write_action_weekly.yml`).
1. Change the `data_source` from `redis_hll` to `internal_events` in the metric definition files.
1. Remove the `instrumentation_class` property. It's not used for Internal Events metrics.
1. Add an `events` section to both metric definition files.
```yaml
events:
- name: git_write_action
unique: user.id
```
Use `project.id` or `namespace.id` instead of `user.id` if your metric is counting something other than unique users.
1. Remove the `options` section from both metric definition files.
1. Include the `Gitlab::InternalEventsTracking` module and call `track_internal_event` instead of `HLLRedisCounter.track_event`:
```diff
- Gitlab::UsageDataCounters::HLLRedisCounter.track_event(:git_write_action, values: current_user.id)
+ include Gitlab::InternalEventsTracking
+ track_internal_event('project_created', user: current_user)
```
1. Optional. Add additional values to the event. You typically want to add `project` and `namespace` because they are useful to have in the data warehouse.
```diff
- Gitlab::UsageDataCounters::HLLRedisCounter.track_event(:git_write_action, values: current_user.id)
+ include Gitlab::InternalEventsTracking
+ track_internal_event('project_created', user: current_user, project: project, namespace: namespace)
```
1. Update your test to use the `internal event tracking` shared example (see the sketch after this list).
1. Remove the event's name from [hll_redis_legacy_events](https://gitlab.com/gitlab-org/gitlab/blob/master/lib/gitlab/usage_data_counters/hll_redis_legacy_events.yml).
1. Add the event to the [hll_redis_key_overrides](https://gitlab.com/gitlab-org/gitlab/blob/master/lib/gitlab/usage_data_counters/hll_redis_key_overrides.yml) file. The format used in this file is `project_created-user: 'project_created'`, where `project_created` is the event's name and `user` is the unique value specified in the metric definition files.
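For the test update in the steps above, the spec typically wraps the code path that now emits the event with the shared example. A rough sketch, where `current_user` and `service` stand in for whatever objects your spec already sets up, and the exact `let` names depend on the shared example's current interface:
```ruby
it_behaves_like 'internal event tracking' do
  let(:event) { 'project_created' }
  let(:user) { current_user }

  # `subject` should execute the code path that now calls track_internal_event.
  subject(:track_event) { service.execute }
end
```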
### Frontend
You can convert `trackRedisHllUserEvent` calls to Internal events by using the mixin, raw JavaScript, or the `data-event-tracking` attribute.
[Quick start guide](quick_start.md#frontend-tracking) has examples for each method.
---
stage: Monitor
group: Analytics Instrumentation
info: Any user with at least the Maintainer role can merge updates to this content. For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: GitLab Standard Context Fields
---
Standard context, also referred to as [Cloud context](https://gitlab.com/gitlab-org/analytics-section/analytics-instrumentation/proposals/-/blob/master/doc/data_usage_collection_outside_gitlab_codebase.md?ref_type=heads), describes all the fields available in the GitLab Standard Context schema.
## Required Fields
| Field | Type | Description | Example |
|----------------|--------|------------------------------------|-----------------------|
| `environment` | string | Name of the source environment. | `"production"`, `"staging"` |
## Optional Fields
| Field | Type | Description | Example |
|-------------------|---------------|---------------------------------------------------------------------------------------------------|---------------------|
| `project_id` | integer, null | ID of the associated project. This is available when tracking is done inside any project path. (example: [GitLab project](https://gitlab.com/gitlab-org/gitlab)) | `12345` |
| `namespace_id` | integer, null | ID of the associated namespace. This is available when tracking is done inside any group path. (example: [GitLab-org](https://gitlab.com/gitlab-org)) | `67890` |
| `ultimate_parent_namespace_id` | integer, null | Ultimate parent namespace ID of the associated namespace. This is available when the namespace ID of the event is known. | `67869` |
| `user_id` | integer, null | ID of the associated user. This gets pseudonymized in the Snowplow enricher. Refer to the [metrics dictionary](https://metrics.gitlab.com/identifiers/). | `longhash` |
| `global_user_id` | string, null | An anonymized `user_id` hash unique across instances. | `longhash` |
| `is_gitlab_team_member` | boolean, null | Indicates if the action was triggered by a GitLab team member. | `true`, `false` |
### Instance Information
| Field | Type | Description | Example |
|------------------|---------------|----------------------------------------------------------|---------------------------|
| `instance_id` | string, null | ID of the GitLab instance where the request originated. | `instance_long_uuid` |
| `unique_instance_id` | string, null | Unique ID of the GitLab instance where the request originated. | `instance_long_uuid` |
| `host_name` | string, null | Hostname of the GitLab instance. | `"gitlab-host-id"` |
| `instance_version` | string, null | Version of the GitLab instance. | `"15.8.0"` |
| `realm` | string, null | Deployment type of GitLab. Must be one of: `"self-managed"`, `"saas"`, `"dedicated"`. | `"saas"` |
### Client Information
| Field | Type | Description | Example |
|------------------|---------------|----------------------------------------------------------|---------------------------|
| `client_name` | string, null | Name of the client sending the request. | `"chrome"`, `"jetbrains"` |
| `client_version` | string, null | Version of the client. | `"108.0.5359.124"` |
| `client_type` | string, null | Type of client. | `"browser"`, `"ide"` |
| `interface` | string, null | Interface from which the request originates. | `"Duo Chat"` |
### Feature and Plan Information
| Field | Type | Description | Example |
|-------------------------------|---------------|-----------------------------------------------------------------------------|--------------------------|
| `feature_category` | string, null | Category where the specific feature belongs. | `"duo_chat"` |
| `feature_enabled_by_namespace_ids` | array, null | List of namespace IDs allowing the user to use the tracked feature. | `[123, 456, 789]` |
| `plan` | string, null | Name of the subscription plan (maximum length: 32 characters). | `"free"`, `"ultimate"` |
### Tracking and Context
| Field | Type | Description | Example |
|-----------------------|---------------|----------------------------------------------------------|------------------------------|
| `source` | string, null | Name of the source application. | `"gitlab-rails"`, `"gitlab-javascript"` |
| `google_analytics_id` | string, null | Google Analytics ID from the marketing site. | `"UA-XXXXXXXX-X"` |
| `context_generated_at` | string, null | Timestamp indicating when the context was generated. | `"2023-12-20T10:00:00Z"` |
| `correlation_id` | string, null | Unique request ID for each request. | `uuid` |
| `extra` | object, null | Additional data associated with the event, in key-value pair format. | `{"key": "value"}` |
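Put together, a decoded `gitlab_standard` context payload combines the fields above. The following example is illustrative only; the values are made up and the field set is abbreviated:
```json
{
  "environment": "production",
  "source": "gitlab-rails",
  "correlation_id": "01JPKMCRCBSMB07DPGVSJJ708F",
  "plan": "ultimate",
  "user_id": 12345,
  "is_gitlab_team_member": false,
  "namespace_id": 67890,
  "project_id": 12345,
  "realm": "saas",
  "instance_id": "e1baa3de-7e45-4fbc-b17e-95995935cf09",
  "instance_version": "17.10.0",
  "context_generated_at": "2023-12-20T10:00:00Z"
}
```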
### Adding a New Field to the Standard Context
To add a new field to the standard context:
1. Create a merge request in the [iglu](https://gitlab.com/gitlab-org/iglu/-/tree/master/public/schemas/com.gitlab/gitlab_standard/jsonschema?ref_type=heads) repository to update the schema.
1. If the new field should be pseudonymized, add it to the [ATTRIBUTE_TO_PSEUDONYMISE](https://gitlab.com/gitlab-org/analytics-section/analytics-instrumentation/snowplow-pseudonymization/-/blob/main/lib/snowplow/gitlab_standard_context.rb?ref_type=heads#L9) constant in the `snowplow-pseudonymization` project.
1. Update the `GITLAB_STANDARD_SCHEMA_URL` in [tracking/standard_context.rb](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/tracking/standard_context.rb#L6) to match the new version from `gitlab-org/iglu`.
1. Start sending events that include the new field in Standard Context.
### Related Links
- Descriptions of Unit Primitives are documented in [cloud connector](https://gitlab.com/gitlab-org/cloud-connector/gitlab-cloud-connector/-/tree/main/config/unit_primitives).
---
stage: Monitor
group: Analytics Instrumentation
info: Any user with at least the Maintainer role can merge updates to this content. For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Event definition guide
---
{{< alert type="note" >}}
The event dictionary is a work in progress, and this process is subject to change.
{{< /alert >}}
This guide describes the event dictionary and how it's implemented.
## Event definition and validation
This process is meant to document all internal events and ensure consistency. Every internal event needs to have such a definition. Event definitions must comply with the [JSON Schema](https://gitlab.com/gitlab-org/gitlab/-/blob/master/config/events/schema.json).
All event definitions are stored in the following directories:
- [`config/events`](https://gitlab.com/gitlab-org/gitlab/-/tree/master/config/events)
- [`ee/config/events`](https://gitlab.com/gitlab-org/gitlab/-/tree/master/ee/config/events)
Removed events are stored in the `/removed` subfolders:
- [`config/events/removed`](https://gitlab.com/gitlab-org/gitlab/-/tree/master/config/events/removed)
- [`ee/config/events/removed`](https://gitlab.com/gitlab-org/gitlab/-/tree/master/ee/config/events/removed)
See the [event lifecycle](event_lifecycle.md) guide for more details.
Each event is defined in a separate YAML file consisting of the following fields:
| Field | Required | Additional information |
|---------------------|----------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `description` | yes | A description of the event. |
| `internal_events` | no | Always `true` for events used in Internal Events. |
| `category` | no | Required for legacy events. Should not be used for Internal Events. |
| `action` | yes | A unique name for the event. Only lowercase, numbers, and underscores are allowed. Use the format `<operation>_<target_of_operation>_<where/when>`. <br/><br/> For example: `publish_go_module_to_the_registry_from_pipeline` <br/>`<operation> = publish`<br/>`<target> = go_module`<br/>`<when/where> = to_the_registry_from_pipeline`. |
| `identifiers` | no | A list of identifiers sent with the event. Can be set to one or more of `project`, `user`, `namespace`, or `feature_enabled_by_namespace_ids`. |
| `product_group` | yes | The [group](https://gitlab.com/gitlab-com/www-gitlab-com/blob/master/data/stages.yml) that owns the event. |
| `product_categories` | no | A list of the [feature categories](https://gitlab.com/gitlab-org/gitlab/-/blob/master/config/feature_categories.yml) that the event represents usage of. Some events may correspond to multiple categories or no category. |
| `milestone` | no | The milestone when the event is introduced. |
| `status` | no | The status of the event. Can be set to one of `active`, `removed`, or `null`. |
| `milestone_removed` | no | The milestone when the event is removed. |
| `removed_by_url` | no | The URL to the merge request that removed the event. |
| `introduced_by_url` | no | The URL to the merge request that introduced the event. |
| `tiers` | yes | The [tiers](https://handbook.gitlab.com/handbook/marketing/brand-and-product-marketing/product-and-solution-marketing/tiers/) where the tracked feature is available. Can be set to one or more of `free`, `premium`, or `ultimate`. |
| `additional_properties` | no | A list of additional properties that are sent with the event. Each additional property must have a record entry with a `description` field. It is required to add all the additional properties that would be sent with the event in the event definition file. Built-in properties are: `label` (string), `property` (string) and `value` (numeric). [Custom](quick_start.md#additional-properties) properties can be added if the built-in options are not sufficient. |
## Changing the `action` property in event definitions
When considering changing the `action` field in an event definition, it is important to know that:
- Renaming an event is equivalent to deleting the existing event and creating a new one. This is acceptable if the event is not used in any metrics.
- Ensure that the YAML file's name matches the new `action` name to avoid confusion. This helps maintain clarity and consistency in the event definitions.
### Example event definition
This is an example YAML file for an internal event:
```yaml
description: A user visited a product analytics dashboard
internal_events: true
action: visit_product_analytics_dashboard
identifiers:
- project
- user
- namespace
product_group: group::product analytics
milestone: "16.4"
introduced_by_url: https://gitlab.com/gitlab-org/gitlab/-/merge_requests/128029
tiers:
- ultimate
```
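If the event also sends additional properties, they are declared in the same file. The following is a hypothetical extension of the example above; the property descriptions are illustrative:
```yaml
additional_properties:
  label:
    description: The type of dashboard that was visited
  property:
    description: The source of the visit, for example `sidebar` or `direct_link`
  value:
    description: The number of panels rendered on the dashboard
```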
---
stage: Monitor
group: Analytics Instrumentation
info: Any user with at least the Maintainer role can merge updates to this content. For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Troubleshooting Service Ping
---
## Set up and test Service Ping locally
To set up Service Ping locally, you must:
1. [Set up local repositories](#set-up-local-repositories).
1. [Test local setup](#test-local-setup).
1. Optional. [Test Prometheus-based Service Ping](#test-prometheus-based-service-ping).
### Set up local repositories
1. Clone and start [GitLab](https://gitlab.com/gitlab-org/gitlab-development-kit).
1. Clone and start [Versions Application](https://gitlab.com/gitlab-org/gitlab-services/version.gitlab.com).
Make sure you run `docker-compose up` to start a PostgreSQL and Redis instance.
1. Point GitLab to the Versions Application endpoint instead of the default endpoint:
1. Open [service_ping/submit_service.rb](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/services/service_ping/submit_service.rb#L5) locally and modify `STAGING_BASE_URL`.
1. Set it to the local Versions Application URL: `http://localhost:3000`.
### Test local setup
1. Using the `gitlab` Rails console, manually trigger Service Ping:
```ruby
GitlabServicePingWorker.new.perform('triggered_from_cron' => false)
```
1. Use the `versions` Rails console to check the Service Ping was successfully received,
parsed, and stored in the Versions database:
```ruby
UsageData.last
```
## Test Prometheus-based Service Ping
If the data submitted includes metrics [queried from Prometheus](../metrics/metrics_instrumentation.md#prometheus-metrics) that you want to inspect and verify, you must:
- Ensure that a Prometheus server is running locally.
- Ensure the respective GitLab components are exporting metrics to the Prometheus server.
If you do not need to test data coming from Prometheus, no further action
is necessary. Service Ping should degrade gracefully in the absence of a running Prometheus server.
Three kinds of components may export data to Prometheus, and are included in Service Ping:
- [`node_exporter`](https://github.com/prometheus/node_exporter): Exports node metrics
from the host machine.
- [`gitlab-exporter`](https://gitlab.com/gitlab-org/ruby/gems/gitlab-exporter): Exports process metrics
from various GitLab components.
- Various other GitLab services, such as Sidekiq and the Rails server, which export their own metrics.
### Test with an Omnibus container
This is the recommended approach to test Prometheus-based Service Ping.
To verify your change, build a new Omnibus image from your code branch using CI/CD, download the image,
and run a local container instance:
1. From your merge request, select the `qa` stage, then trigger the `e2e:test-on-omnibus-ee` job. This job triggers an Omnibus
build in a [downstream pipeline of the `omnibus-gitlab-mirror` project](https://gitlab.com/gitlab-org/build/omnibus-gitlab-mirror/-/pipelines).
1. In the downstream pipeline, wait for the `gitlab-docker` job to finish.
1. Open the job logs and locate the full container name including the version. It takes the following form: `registry.gitlab.com/gitlab-org/build/omnibus-gitlab-mirror/gitlab-ee:<VERSION>`.
1. On your local machine, make sure you are signed in to the GitLab Docker registry. You can find the instructions for this in
[Authenticate to the GitLab container registry](../../../user/packages/container_registry/authenticate_with_container_registry.md).
1. Once signed in, download the new image by using `docker pull registry.gitlab.com/gitlab-org/build/omnibus-gitlab-mirror/gitlab-ee:<VERSION>` (see the example commands after this list).
1. For more information about working with and running Omnibus GitLab containers in Docker, refer to the [GitLab Docker images](../../../install/docker/_index.md) documentation.
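For example, a throwaway container can be started from the downloaded image roughly as follows; the container name, hostname, and port mapping are arbitrary choices for a local test:
```shell
docker pull registry.gitlab.com/gitlab-org/build/omnibus-gitlab-mirror/gitlab-ee:<VERSION>

docker run -d \
  --name gitlab-service-ping-test \
  --hostname gitlab.local \
  -p 8080:80 \
  registry.gitlab.com/gitlab-org/build/omnibus-gitlab-mirror/gitlab-ee:<VERSION>
```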
### Test with GitLab development toolkits
This is the less recommended approach, because it comes with a number of difficulties when emulating a real GitLab deployment.
The [GDK](https://gitlab.com/gitlab-org/gitlab-development-kit) is not set up to run a Prometheus server or `node_exporter` alongside other GitLab components. If you would
like to do so, [Monitoring the GDK with Prometheus](https://gitlab.com/gitlab-org/gitlab-development-kit/-/blob/main/doc/howto/prometheus/index.md#monitoring-the-gdk-with-prometheus) is a good start.
The [GCK](https://gitlab.com/gitlab-org/gitlab-compose-kit) has limited support for testing Prometheus based Service Ping.
By default, it comes with a fully configured Prometheus service that is set up to scrape a number of components.
However, it has the following limitations:
- It does not run a `gitlab-exporter` instance, so several `process_*` metrics from services such as Gitaly may be missing.
- While it runs a `node_exporter`, `docker-compose` services emulate hosts, so the `node_exporter` usually reports itself as not being associated with any of the other running services. That is not how node metrics are reported in a production setup, where `node_exporter` always runs as a process alongside other GitLab components on any given node. For Service Ping, none of the node data would therefore appear to be associated with any of the running services, because they all appear to run on different hosts. To alleviate this problem, the `node_exporter` in GCK was arbitrarily "assigned" to the `web` service, meaning that `node_*` metrics appear in Service Ping for this service only.
## Generate Service Ping
### Generate or get the cached Service Ping in rails console
Use the following method in the [rails console](../../../administration/operations/rails_console.md#starting-a-rails-console-session).
```ruby
Gitlab::Usage::ServicePingReport.for(output: :all_metrics_values, cached: true)
```
### Generate a fresh new Service Ping
Use the following method in the [rails console](../../../administration/operations/rails_console.md#starting-a-rails-console-session).
This also refreshes the cached Service Ping displayed in the **Admin** area.
```ruby
Gitlab::Usage::ServicePingReport.for(output: :all_metrics_values)
```
### Generate a new Service Ping including today's usage data
Use the following methods in the [rails console](../../../administration/operations/rails_console.md#starting-a-rails-console-session).
```ruby
require_relative 'spec/support/helpers/service_ping_helpers.rb'
ServicePingHelpers.get_current_service_ping_payload
# To get a single metric's value, provide the metric's key_path like so:
ServicePingHelpers.get_current_usage_metric_value('counts.count_total_render_duo_pro_lead_page')
```
### Generate and print
Generates Service Ping data in JSON format.
```shell
gitlab-rake gitlab:usage_data:generate
```
Generates Service Ping data in YAML format:
```shell
gitlab-rake gitlab:usage_data:dump_sql_in_yaml
```
### Generate and send Service Ping
Prints the metrics saved in `conversational_development_index_metrics`.
```shell
gitlab-rake gitlab:usage_data:generate_and_send
```