Django App not accessible on host machine when deployed on microk8s environment

I have a Django app that I was previously running with docker-compose and am now trying to test on MicroK8s. I used kompose to convert the docker-compose config to Kubernetes manifests.

This is the deployment definition.

apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: kompose convert -f ../docker-compose.yml
    kompose.version: 1.31.2 (a92241f79)
  creationTimestamp: null
  labels:
    io.kompose.service: app
  name: app
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: app
  strategy:
    type: Recreate
  template:
    metadata:
      annotations:
        kompose.cmd: kompose convert -f ../docker-compose.yml
        kompose.version: 1.31.2 (a92241f79)
      creationTimestamp: null
      labels:
        io.kompose.network/bulabuy-build-default: "true"
        io.kompose.service: app
    spec:
      containers:
        - args:
            - sh
            - -c
            - |-
              python manage.py wait_for_db &&
              python manage.py migrate &&
              python manage.py runserver 0.0.0.0:8000
          env:
            - name: DB_HOST
              value: db
            - name: DB_NAME
              value: devdb
            - name: DB_PASSWORD
              value: password
            - name: DB_USER
              value: devuser
            - name: SELENIUM_HOST
              value: selenium-custom
            - name: SELENIUM_PORT
              value: "5000"
          image: bulabuy-build-app
          imagePullPolicy: Never
          name: app
          ports:
            - containerPort: 8000
              hostPort: 8000
              protocol: TCP
          resources: {}
          volumeMounts:
            - mountPath: /vol/web
              name: dev-static-data
      restartPolicy: Always
      volumes:
        - name: dev-static-data
          persistentVolumeClaim:
            claimName: dev-static-data
status: {}

This is the service definition:

apiVersion: v1
kind: Service
metadata:
  annotations:
    kompose.cmd: kompose convert -f ../docker-compose.yml
    kompose.version: 1.31.2 (a92241f79)
  creationTimestamp: null
  labels:
    io.kompose.service: app
  name: app
spec:
  ports:
    - name: "8000"
      port: 8000
      targetPort: 8000
  selector:
    io.kompose.service: app
status:
  loadBalancer: {}

This is the output of the pod status (app-577dc6d4f4-cjcdw is the Django app pod):

(Screenshot of pod status: https://i.stack.imgur.com/O3mpU.png)

This is the output of the service status:

(Screenshot of service status: https://i.stack.imgur.com/iv6nh.png)

I tried to exec into the pod to run some checks on the app: microk8s kubectl exec app-577dc6d4f4-cjcdw -- bash

This resulted in the following error:

Error from server: error dialing backend: tls: failed to verify certificate: x509: certificate is valid for 192.168.1.72, 172.20.0.1, 172.17.0.1, 172.18.0.1, 172.23.0.1, 172.22.0.1, 172.19.0.1, not 192.168.192.225

I also changed the service definition, setting type: NodePort in the spec section, but still could not access the app on the IP address listed in the services output.
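For reference, a NodePort variant of that service might look like the sketch below (the nodePort value 30800 is just an illustrative choice within the allowed 30000-32767 range). With type: NodePort the app is reached via the node's own IP on that port, e.g. http://<node-ip>:30800, not via the ClusterIP shown in the kubectl get services output.

apiVersion: v1
kind: Service
metadata:
  labels:
    io.kompose.service: app
  name: app
spec:
  type: NodePort
  selector:
    io.kompose.service: app
  ports:
    - name: "8000"
      port: 8000        # service port inside the cluster
      targetPort: 8000  # container port of the Django app
      nodePort: 30800   # illustrative host-reachable port (must be 30000-32767)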

I am not sure what I am missing. Any help or direction is appreciated.

There are 2 answers below.

BEST ANSWER

I managed to get it to work. I cleaned up the deployment manifest for the Django app:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: build-api
spec:
  selector:
    matchLabels:
      app: build-api
  replicas: 1
  template:
    metadata:
      labels:
        app: build-api
    spec:
      containers:
        - name: build-api
          image: bulabuy-api
          imagePullPolicy: Never
          ports:
            - containerPort: 8000
          env:
            - name: DB_HOST
              value: $(DEV_DB_SERVICE_SERVICE_HOST)
            - name: DB_NAME
              value: devdb
            - name: DB_USER
              value: devuser
            - name: DB_PASSWORD
              value: password
            - name: SELENIUM_HOST
              value: selenium-custom
            - name: SELENIUM_PORT
              value: "5000"
            - name: DB_PORT
              value: "5432"

The Django app pod was not able to connect to the database pod because the DB_HOST environment variable did not reflect the db service's actual IP. Kubernetes injects service environment variables for each service in the format <SERVICE_NAME>_SERVICE_HOST, so since my db service was named dev-db-service, Kubernetes automatically injected an environment variable DEV_DB_SERVICE_SERVICE_HOST into my Django app deployment. In the Django app deployment manifest, the DB_HOST value therefore references that injected variable, which holds the db service's IP address.
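One caveat worth noting: service environment variables are only injected into pods created after the service exists, so the db service has to be applied before the app deployment. A simpler alternative (not what this answer uses, but standard Kubernetes behaviour) is to rely on cluster DNS and point DB_HOST at the service name directly, assuming the app and db run in the same namespace:

            - name: DB_HOST
              value: dev-db-service   # resolved by cluster DNS to the service's ClusterIP

For a service in another namespace, the full form dev-db-service.<namespace>.svc.cluster.local would be needed.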

I also found out that I had to create the database user manually. In docker-compose, simply setting POSTGRES_USER and POSTGRES_PASSWORD created the db user, but this did not work for me in MicroK8s. I had to access the pod via microk8s kubectl exec -it <db-pod-name..> -- bash and create the user manually. Once the db deployment and service manifests were correctly configured, I applied the deployment for the Django app.
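As a sketch, the manual user creation could look roughly like this (the pod name is a placeholder, and it assumes the existing data directory was initialised with the default postgres superuser; the official postgres image only applies POSTGRES_USER/POSTGRES_PASSWORD when initialising an empty data directory, which is likely why they had no effect on a pre-existing volume):

# open a shell in the db pod
microk8s kubectl exec -it <db-pod-name..> -- bash
# inside the pod, create the role and grant it access to the database
psql -U postgres -c "CREATE USER devuser WITH PASSWORD 'password';"
psql -U postgres -c "GRANT ALL PRIVILEGES ON DATABASE devdb TO devuser;"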

This is the deployment manifest for the db.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: dev-db
spec:
  selector:
    matchLabels:
      app: dev-db
  template:
    metadata:
      labels:
        app: dev-db
    spec:
      containers:
        - name: dev-db
          image: postgres:13-alpine
          ports:
            - containerPort: 5432
              name: postgres
          volumeMounts:
            - mountPath: "/var/lib/postgresql/data"
              name: dev-db-data-storage
          env:
            - name: POSTGRES_DB
              value: devdb
            - name: POSTGRES_USER
              value: devuser
            - name: POSTGRES_PASSWORD
              value: password
      
      volumes:
        - name: dev-db-data-storage
          persistentVolumeClaim:
            claimName: dev-db-persistent-volume-claim

This is the service manifest for the db:

apiVersion: v1
kind: Service
metadata:
  name: dev-db-service
spec:
  type: NodePort
  selector:
    app: dev-db
  ports:
    - name: "postgres"
      protocol: TCP
      port: 5432
      targetPort: 5432
      nodePort: 30432 

Hope it helps someone facing the same issue.

SECOND ANSWER

To fix this issue, you can try the following:

  • Based on the error message, the certificate used for TLS between the Kubernetes API server and the node it is dialling is not valid for that node's IP address. Check whether the certificate is valid for the IP address being used to access the cluster, and also make sure the certificate is signed by a trusted CA.
  • This error may also occur because the IP.x index you are adding is already taken by another address, which you can check with sudo cat /var/snap/microk8s/current/certs/csr.conf. If that is the issue, using a higher index should work.
  • Update /var/snap/microk8s/current/certs/csr.conf.template on every node so it includes the address in question (see the sketch after this list). The API server endpoint in the kubeconfig file you receive from the MicroK8s configuration should also point at your domain or the correct IP. Check the official MicroK8s troubleshooting steps for more details.
  • Additionally, check whether the host has antivirus software installed and review the firewall rules. Some antivirus products (ESET, for example) can replace certificates with their own, which can cause problems on the command line even when web browsers accept them.
  • As a temporary workaround, you can disable certificate verification by setting the insecure-skip-tls-verify flag to true in the Kubernetes configuration file.
  • If the issue is still not resolved, delete the old certificate, install a new one, and restart the Kubernetes nodes and containers to apply the changes.
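A rough sketch of the csr.conf.template approach from the points above (the file path is the MicroK8s default; the IP.x index, the address, and the refresh command are illustrative and can differ between MicroK8s versions):

# add the node address from the error message to the certificate's alternative names
sudo nano /var/snap/microk8s/current/certs/csr.conf.template
#   under [ alt_names ] add a line such as (pick an IP.x index that is not already taken):
#   IP.100 = 192.168.192.225

# regenerate the server certificate and restart MicroK8s so the change takes effect
sudo microk8s refresh-certs --cert server.crt
sudo microk8s stop && sudo microk8s start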