I am trying to configure CI on Kubernetes with GitLab and Google Cloud, and I'm stuck on Let's Encrypt certificate creation. I have two clusters for two environments:
- Environment scope: production - for the production instance
- Environment scope: * - for the staging and review instances
After deployment, the Ingress has the endpoint I declared, staging.my-domain.com, and a second one that is a mystery to me: le-23830502.my-domain.com. That second host is identical in both environments (staging and production), so I can only generate certificates for one of them: the ACME challenge can never pass on the other, because I can't point one subdomain at two different IP addresses in DNS.
Does anyone know what that host is? Where is it configured, and can I disable it or make it unique per environment?
I noticed that it is my GitLab project ID with a le- prefix. I also found two environment variables ($ADDITIONAL_HOSTS and $ENVIRONMENT_ADDITIONAL_HOSTS) for adding extra hostnames to the Ingress, but the le- host is still there.
Staging deployment:
RESOURCES:
==> v1/Deployment
NAME READY UP-TO-DATE AVAILABLE AGE
staging 1/1 1 1 6d3h
==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
staging-69d9fb68cc-85prp 1/1 Running 0 13s
staging-744bfc8cc5-jc5w9 1/1 Terminating 0 22h
==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
staging-auto-deploy ClusterIP 10.116.8.120 <none> 3030/TCP 6d3h
==> v1beta1/Ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
staging-auto-deploy <none> staging.my-domain.com,le-23830502.my-domain.com 34.121.X.X 80, 443 6d3h
Prod deployment:
RESOURCES:
==> v1/Deployment
NAME READY UP-TO-DATE AVAILABLE AGE
production 1/1 1 1 26h
==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
production-77d9fbdf45-ps6xg 0/1 Terminating 6 10m
production-c7849868f-djhhk 1/1 Running 0 18s
==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
production-auto-deploy ClusterIP 10.27.15.197 <none> 3030/TCP 26h
==> v1beta1/Ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
production-auto-deploy <none> prod.my-domain.com,le-23830502.my-domain.com 34.69.X.X 80, 443 26h
.gitlab-ci.yml:
include:
  - template: Auto-DevOps.gitlab-ci.yml

test:
  variables:
    DB_URL: "mongodb://mongo:27017/kubernetes-poc-app"
  services:
    - name: mongo:4.4.3
      alias: mongo
  stage: test
  image: gliderlabs/herokuish:latest
  needs: []
  script:
    - cp -R . /tmp/app
    - /bin/herokuish buildpack test
  rules:
    - if: '$TEST_DISABLED'
      when: never
    - if: '$CI_COMMIT_TAG || $CI_COMMIT_BRANCH'

.production: &production_template
  extends: .auto-deploy
  stage: production
  script:
    - auto-deploy check_kube_domain
    - auto-deploy download_chart
    - auto-deploy ensure_namespace
    - auto-deploy initialize_tiller
    - auto-deploy create_secret
    - auto-deploy deploy
    - auto-deploy delete canary
    - auto-deploy delete rollout
    - auto-deploy persist_environment_url
  environment:
    name: production
    url: http://prod.$KUBE_INGRESS_BASE_DOMAIN
  artifacts:
    paths: [environment_url.txt, tiller.log]
    when: always

production:
  <<: *production_template
  rules:
    - if: '$CI_KUBERNETES_ACTIVE == null || $CI_KUBERNETES_ACTIVE == ""'
      when: never
    - if: '$STAGING_ENABLED'
      when: never
    - if: '$CANARY_ENABLED'
      when: never
    - if: '$INCREMENTAL_ROLLOUT_ENABLED'
      when: never
    - if: '$INCREMENTAL_ROLLOUT_MODE'
      when: never
    - if: '$CI_COMMIT_BRANCH == "master"'

staging:
  extends: .auto-deploy
  stage: staging
  variables:
    DATABASE_URL: "here should be url"
    DATABASE_NAME: "here should be name"
  script:
    - auto-deploy check_kube_domain
    - auto-deploy download_chart
    - auto-deploy ensure_namespace
    - auto-deploy initialize_tiller
    - auto-deploy create_secret
    - auto-deploy deploy
    - auto-deploy persist_environment_url
  artifacts:
    paths: [environment_url.txt, tiller.log]
    when: always
  environment:
    name: staging
    url: http://staging.$KUBE_INGRESS_BASE_DOMAIN
  rules:
    - if: '$CI_KUBERNETES_ACTIVE == null || $CI_KUBERNETES_ACTIVE == ""'
      when: never
    - if: '$CI_COMMIT_BRANCH != "develop"'
      when: never
    - if: '$STAGING_ENABLED'

review:
  extends: .auto-deploy
  stage: review
  script:
    - auto-deploy check_kube_domain
    - auto-deploy download_chart
    - auto-deploy ensure_namespace
    - auto-deploy initialize_tiller
    - auto-deploy create_secret
    - auto-deploy deploy
    - auto-deploy persist_environment_url
  environment:
    name: review/$CI_COMMIT_REF_NAME
    url: http://review.$KUBE_INGRESS_BASE_DOMAIN
    on_stop: stop_review
  artifacts:
    paths: [environment_url.txt, tiller.log]
    when: always
  rules:
    - if: '$CI_KUBERNETES_ACTIVE == null || $CI_KUBERNETES_ACTIVE == ""'
      when: never
    - if: '$CI_COMMIT_BRANCH == "master" || $CI_COMMIT_BRANCH == "develop"'
      when: never
    - if: '$REVIEW_DISABLED'
      when: never
    - if: '$CI_COMMIT_TAG || $CI_COMMIT_BRANCH'
      when: manual
      allow_failure: true

stop_review:
  extends: .auto-deploy
  stage: cleanup
  variables:
    GIT_STRATEGY: none
  script:
    - auto-deploy initialize_tiller
    - auto-deploy delete
  environment:
    name: review/$CI_COMMIT_REF_NAME
    action: stop
  allow_failure: true
  rules:
    - if: '$CI_KUBERNETES_ACTIVE == null || $CI_KUBERNETES_ACTIVE == ""'
      when: never
    - if: '$CI_COMMIT_BRANCH == "master" || $CI_COMMIT_BRANCH == "develop"'
      when: never
    - if: '$REVIEW_DISABLED'
      when: never
    - if: '$CI_COMMIT_TAG || $CI_COMMIT_BRANCH'
      when: manual
If you need to remove the le-1234567 host that gets added, you have to modify the ingress.yaml template of the auto-deploy chart.
You can find it here: https://gitlab.com/gitlab-org/cluster-integration/auto-deploy-image/-/blob/master/assets/auto-deploy-app/templates/ingress.yaml
You can fork the project and build a new image, or you can follow the guidelines for adding your own chart here: https://docs.gitlab.com/ee/topics/autodevops/customize.html#custom-helm-chart
Here is an example of how I modified it for my Rails apps: https://gitlab.com/leifcr/auto-deploy-image-rails/-/blob/master/assets/auto-deploy-app/templates/ingress.yaml
If you have multiple projects that all need the same config, I recommend changing the image to fit your needs. If you only have one, add a bundled chart.
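To illustrate the idea: the le-&lt;project-id&gt; host reaches the chart through the service.commonName value, so the change boils down to dropping (or guarding) the block that renders it. A rough sketch of the modified rules section, assuming the helper names match the upstream chart (verify against the current template before using):

```yaml
# templates/ingress.yaml (custom chart) - sketch only, adapt to upstream.
# The le-<project-id> host is injected via .Values.service.commonName;
# removing the block that renders it (from both rules and tls) drops
# the host from the Ingress.
spec:
  rules:
    - host: {{ template "hostname" .Values.service.url }}
      http:
        paths:
          - path: /
            backend:
              serviceName: {{ template "fullname" . }}
              servicePort: {{ .Values.service.externalPort }}
    # removed: the rule that rendered .Values.service.commonName as a host
```

With that block gone, each environment only serves the hosts you declared, and the ACME challenge no longer collides between clusters.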