GitLab: How to configure backup when using object storage


We are running GitLab in our Kubernetes cluster, using the rook-ceph RADOS Gateway as the S3 storage backend. We want to use the backup-utility shipped in GitLab's tools (toolbox) container. As the backup target we configured an external MinIO instance. When running the backup-utility, these error messages occur:

Bucket not found: gitlab-registry-bucket. Skipping backup of registry ...
Bucket not found: gitlab-uploads-bucket. Skipping backup of uploads ...
Bucket not found: gitlab-artifacts-bucket. Skipping backup of artifacts ...
Bucket not found: gitlab-lfs-bucket. Skipping backup of lfs ...
Bucket not found: gitlab-packages-bucket. Skipping backup of packages ...
Bucket not found: gitlab-mr-diffs. Skipping backup of external_diffs ...
Bucket not found: gitlab-terraform-state. Skipping backup of terraform_state ...
Bucket not found: gitlab-pages-bucket. Skipping backup of pages ...

When I run s3cmd ls, I only see the two backup buckets on our MinIO instance, not the "source" buckets.

Can someone tell me how to configure the backup-utility or s3cmd so it can access both the RADOS Gateway for the source buckets and MinIO as the backup target?
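
For reference, the backup is started from the toolbox pod roughly like this; a minimal sketch, assuming the release runs in a namespace called gitlab (the label and pod name are placeholders and may differ per chart version):

# find the toolbox pod and run the backup inside it
kubectl get pods -n gitlab -lapp=toolbox
kubectl exec -it -n gitlab <toolbox-pod-name> -- backup-utility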

I have tried to insert multiple connections into the .s3cfg file like this:

[target]
host_base = file01.xxx.xxx:80
host_bucket = file01.xxx.xxx:80
use_https = false
bucket_location = us-east-1
access_key = xxx
secret_key = xxx
[source]
host_base = s3.xxx.xxx:80
host_bucket = s3.xxx.xxx:80
use_https = false
bucket_location = us-east-1
access_key = xxx
secret_key = xxx

but that did not show any buckets from the target when using s3cmd ls.
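
As far as I can tell, s3cmd only honours the [default] section of a single configuration file, so extra sections like [source] and [target] are ignored. A workaround sketch with two separate config files selected via -c (the file names are placeholders):

# list the source buckets on the rook-ceph RADOS Gateway
s3cmd -c ~/.s3cfg-source ls
# list the backup buckets on the external MinIO instance
s3cmd -c ~/.s3cfg-target ls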


@Löppinator: Please check the GitLab charts documentation for values.yaml; a sample configuration looks like this:

global:
  .
  .
  .
  pages:  # Pages bucket has to be added together with its connection
    enabled: true
    host: <hostname>
    artifactsServer: true
    objectStore:
      enabled: true
      bucket: <s3-bucket-name>
      # proxy_download: true
      connection:
        secret: <secret-for-s3-connection>
  
    .
    .
    .
  appConfig:
    .
    .
    .
    object_store:
      enabled: true
      proxy_download: true
      connection:
        secret: <secret-for-s3-connection>
    lfs:
      enabled: true
      proxy_download: false
      bucket: <s3-bucket-name>
      connection: {}
    artifacts:
      enabled: true
      proxy_download: true
      bucket: <s3-bucket-name>
      connection: {}
    uploads:
      enabled: true
      proxy_download: true
      bucket: <s3-bucket-name>
      connection: {}
    packages:
      enabled: true
      proxy_download: true
      bucket: <s3-bucket-name>
      connection: {}
    externalDiffs:
      enabled: true
      proxy_download: true
      bucket: <s3-bucket-name>
      connection: {}
    terraformState:
      enabled: true
      bucket: <s3-bucket-name>
      connection: {}
    ciSecureFiles:
      enabled: true
      bucket: <s3-bucket-name>
      connection: {}
    dependencyProxy:
      enabled: true
      proxy_download: true
      bucket: <s3-bucket-name>
      connection: {}
    backups:
      bucket: <s3-bucket-name>
      tmpBucket: <s3-bucket-name>
      
  registry:  # registry bucket also has to exist in S3; no connection block is needed here
    bucket: <s3-bucket-name>

You have to check the indentation: the Pages and registry buckets go directly under the global config, while the rest of the buckets go under appConfig, as you can see in my code above.
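
The <secret-for-s3-connection> placeholder is a Kubernetes Secret holding the S3 connection details GitLab uses to reach the buckets. A minimal sketch, assuming the RADOS Gateway endpoint from the question and a secret key named connection (file name and credentials are placeholders):

# rails.s3.yaml - connection details in the format the chart expects
provider: AWS
region: us-east-1
aws_access_key_id: <rgw-access-key>
aws_secret_access_key: <rgw-secret-key>
aws_signature_version: 4
endpoint: "http://s3.xxx.xxx:80"
path_style: true

kubectl create secret generic <secret-for-s3-connection> --from-file=connection=rails.s3.yaml

For the backup target itself (the MinIO instance), the chart also lets you hand the toolbox a separate s3cmd configuration through a secret, if I read the chart docs correctly; a sketch:

gitlab:
  toolbox:
    backups:
      objectStorage:
        config:
          secret: <secret-with-s3cmd-config>  # secret containing an .s3cfg-style file for MinIO
          key: config

The buckets under global.appConfig.backups (bucket and tmpBucket) are then used by backup-utility as the destination for the backup archives.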

I hope this helps!!!