Is Couchbase backup to S3 possible with a Kubernetes service account IAM role attachment?


I am trying to set up Couchbase backups to AWS S3, following the document https://docs.couchbase.com/operator/current/howto-backup.html

I have a Kubernetes (EKS) cluster in AWS with a Couchbase Server Enterprise cluster deployed on it.

I want to store my cluster backups in S3. The document says a Kubernetes Secret containing an AWS user key/secret/region is a must. However, it also says a backup service account with an IAM role attached is sufficient, which is confusing.

Can I avoid creating an AWS user and storing its credentials in a Secret by instead using an AWS IAM role with metadata-based authentication?

apiVersion: couchbase.com/v2
kind: CouchbaseBackup
metadata:
  name: couchbase-backup-fulls3
  namespace: couchbase
spec:
  strategy: immediate_full
  size: 100Gi
  storageClassName: couchbase-backup-sc
  ephemeralVolume: true
  services:
    data: True
    bucketConfig: True
  objectStore:
    useIAM: true
    #secret: s3-region-secret <----- Want to avoid this?
    uri: s3://couchbase-backup-<environment>

Best answer (Jijo John):

I was able to figure this out. The answer is yes: you can use an EKS service account with an IAM role attached (IAM Roles for Service Accounts, IRSA) for Couchbase backups to S3. No IAM user credentials need to be stored as Kubernetes Secrets.

apiVersion: couchbase.com/v2
kind: CouchbaseBackup
metadata:
  name: couchbase-s3-backup
  namespace: couchbase
spec:
  strategy: full_incremental
  full:
    schedule: "30 22 * * 0-6" # At 10:30 PM every day (0-6 covers Sunday through Saturday).
  incremental:
    schedule: "*/60 * * * *" # Every 60th minute, i.e. hourly at minute 0.
  size: 1000Gi 
  storageClassName: couchbase-backup-sc
  ephemeralVolume: false
  objectStore: 
    useIAM: true
    uri: s3://couchbase-backup-<Env-Name>
  services:
    analytics: True
    bucketConfig: True
    bucketQuery: True
    clusterAnalytics: True
    clusterQuery: True
    data: True
    eventing: True
    ftsAliases: True
    ftsIndexes: True
    gsiIndexes: True
    views: True
  defaultRecoveryMethod: purge
  backoffLimit: 3
  backupRetention: "36h"
  successfulJobsHistoryLimit: 5
  failedJobsHistoryLimit: 10
  ttlSecondsAfterFinished: 300
  threads: 3
  logRetention: "24h"
  autoScaling:
    thresholdPercent: 10
    incrementPercent: 2
    limit: 500Gi
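
Once applied, the Operator runs these schedules as Kubernetes CronJobs. As a quick usage sketch (the file name couchbase-backup.yaml is just a placeholder for wherever you saved the manifest above):

kubectl -n couchbase apply -f couchbase-backup.yaml

# List the backup resource and the CronJobs/Jobs created from its schedules.
kubectl -n couchbase get couchbasebackups
kubectl -n couchbase get cronjobs,jobs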

Since I use Terraform, I am posting the backup service account's role and policy attachment as Terraform code below. Alternatively, you can use an eksctl command to create the service account and attach your custom policy, for example:

eksctl create iamserviceaccount --name backup-service-accountName --namespace some-namespace --region your-region --cluster clustername --attach-policy-arn your-policy-arn --approve --override-existing-serviceaccounts

Here is the Terraform code I used:

resource "aws_iam_role_policy_attachment" "s3backup-role-attachment" {
  role       = aws_iam_role.s3backup.name
  policy_arn = aws_iam_policy.s3backup.arn
}

data "aws_caller_identity" "current" {} # referenced by the trust policy below

resource "aws_iam_role" "s3backup" {
  name               = "eks-couchbase-s3-backup-role"
  description        = "S3 Access from EKS"
  assume_role_policy = <<EOF
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Federated": "arn:aws-cn:iam::${data.aws_caller_identity.current.account_id}:oidc-provider/oidc.eks.cn-northwest-1.amazonaws.com.cn/id/<oidc-id>"
            },
            "Action": "sts:AssumeRoleWithWebIdentity",
            "Condition": {
                "StringEquals": {
                    "oidc.eks.cn-northwest-1.amazonaws.com.cn/id/<oidc-id>:aud": "sts.amazonaws.com",
                    "oidc.eks.cn-northwest-1.amazonaws.com.cn/id/<oidc-id>:sub": "system:serviceaccount:couchbase:couchbase-backup"
                }
            }
        }
    ]
}
EOF
}

resource "aws_iam_policy" "s3backup" {
  name        = "eks-couchbase-s3-backup-policy"
  description = "EKS Couchbase backup s3 policy"

  policy = jsonencode({

    "Version" : "2012-10-17",
    "Statement" : [
      {
        "Effect" : "Allow",
        "Action" : [
          # Several names below are S3 API operations rather than IAM actions
          # (e.g. HeadObject, ListObjectsV2, CreateMultipartUpload); they match
          # no IAM action and grant nothing extra. The effective permissions
          # come from s3:GetObject, s3:PutObject, s3:DeleteObject,
          # s3:ListBucket and s3:AbortMultipartUpload.
          "s3:AbortMultipartUpload",
          "s3:CompleteMultipartUpload",
          "s3:CreateMultipartUpload",
          "s3:DeleteObject",
          "s3:DeleteObjects",
          "s3:GetObject",
          "s3:HeadObject",
          "s3:ListObjectsV2",
          "s3:ListObjects",
          "s3:ListParts",
          "s3:ListBucket",
          "s3:PutObject"
        ],
        # Grants access to every bucket in the partition; scope this to your
        # backup bucket's ARN if you prefer.
        "Resource" : "arn:aws-cn:s3:::*",
      },
    ]
  })
}
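
One caveat with the plain-Terraform route: for IRSA to work, the couchbase-backup service account named in the trust policy's sub condition must carry the eks.amazonaws.com/role-arn annotation. eksctl adds this automatically; if you create the role yourself, a minimal sketch of the annotated service account looks like this (<account-id> is a placeholder):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: couchbase-backup
  namespace: couchbase
  annotations:
    # Points at the role created by the Terraform above; <account-id> is a placeholder.
    eks.amazonaws.com/role-arn: arn:aws-cn:iam::<account-id>:role/eks-couchbase-s3-backup-role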

Note: My Couchbase cluster environment is set up in AWS China, which is why the AWS partition appears as aws-cn in the ARNs above. In standard AWS regions the partition is aws, and the OIDC provider host is oidc.eks.<region>.amazonaws.com.
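
To sanity-check the wiring, you can verify the annotation is present and that the EKS pod identity webhook injected the web-identity credentials into a backup pod (<backup-pod> is a placeholder for one of the backup job pods):

kubectl -n couchbase get sa couchbase-backup -o jsonpath='{.metadata.annotations}'

kubectl -n couchbase exec <backup-pod> -- env | grep -E 'AWS_ROLE_ARN|AWS_WEB_IDENTITY_TOKEN_FILE'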