Automatic restarts on GCP GKE (Workloads, load balancer)


About once a week the load balancer stops communicating with the Workloads.

I can see that the Workload was restarted, since the last restart time is shown. To recover, I restart the Workloads manually and the connection comes back.

My question is: why does it restart, and why does a restart leave the load balancer without communication with the Workloads?

More info:

The load balancer is of type Internal HTTP(S). We provision it from a YAML file deployed in Kubernetes:

    # internal-ingress.yaml
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: ilb-cym-ingress
      namespace: k8s-cym
      annotations:
        kubernetes.io/ingress.class: "gce-internal"
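
The snippet above only shows the metadata; the routing itself lives in the Ingress spec, which points at a Service. As a rough sketch of what that backing piece could look like (the Service name, ports, selector, and path are assumptions, since they are not in the original snippet), a gce-internal Ingress typically fronts a ClusterIP Service using container-native NEGs:

    # Sketch of the Ingress spec plus its backing Service
    # (names, ports, and selector are assumptions; not from the original manifest)
    spec:
      defaultBackend:
        service:
          name: cym-service
          port:
            number: 80
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: cym-service
      namespace: k8s-cym
      annotations:
        cloud.google.com/neg: '{"ingress": true}'   # container-native load balancing (NEGs)
    spec:
      type: ClusterIP
      selector:
        app: cym-app
      ports:
        - port: 80
          targetPort: 8080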

The problem: every so often (1 week, 5 days ...) we receive an alert we configured for when our Workload is restarted, and this causes the backend referring to that Workload in the Ingress to remain in an Unhealthy state.

On the other hand, if we redeploy from the GCP console we do not see any problems.

Regarding the health check configuration, we have a check interval of 60 s with a timeout of 30 s.

As for the thresholds, the healthy threshold is 1 attempt and the unhealthy threshold is 5 attempts. We have been varying these parameters to try to solve it, but without success.
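
For reference, on GKE these load-balancer health-check parameters are usually set through a BackendConfig attached to the backing Service. A minimal sketch with the values described above (the names and the request path are assumptions, not taken from our actual manifests):

    # backendconfig.yaml (sketch; names and requestPath are assumptions)
    apiVersion: cloud.google.com/v1
    kind: BackendConfig
    metadata:
      name: cym-backendconfig
      namespace: k8s-cym
    spec:
      healthCheck:
        checkIntervalSec: 60      # interval of 60 s
        timeoutSec: 30            # timeout of 30 s
        healthyThreshold: 1       # healthy after 1 successful check
        unhealthyThreshold: 5     # unhealthy after 5 failed checks
        type: HTTP
        requestPath: /healthz     # assumed health endpoint
    # The Service would reference it via the annotation:
    #   cloud.google.com/backend-config: '{"default": "cym-backendconfig"}'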

Finally, I wanted to mention that when the Workload starts we have an initial delay of 20 s to give the database time to connect correctly. I don't know if this can interfere with our health checks.
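
If that 20 s delay is implemented as a readiness probe initial delay, this is roughly the relevant fragment of the Deployment's pod spec (a sketch; the container name, image, port, and path are assumptions, and the load balancer's own health check is independent of this probe):

    # Fragment of the Deployment pod spec (sketch; names/ports/paths are assumptions)
    containers:
      - name: cym-app
        image: gcr.io/my-project/cym-app:latest   # placeholder image
        readinessProbe:
          httpGet:
            path: /healthz        # assumed health endpoint
            port: 8080            # assumed container port
          initialDelaySeconds: 20 # the 20 s delay for the database connection
          periodSeconds: 10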
