Kubernetes deployment fails with "Liveness probe failed: HTTP probe failed with statuscode: 503" error


The Kubernetes cluster already exists and I want to deploy the latest version of the code. When I update the Docker image tag, the newly created pod starts throwing Liveness probe failed: HTTP probe failed with statuscode: 503 and Back-off restarting failed container.
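The image is updated with a kubectl set image command roughly like the following (the kubectl-set manager entry in the managedFields of the deployment below reflects this; the new tag is redacted here and shown as a placeholder):

kubectl set image deployment/api ***backend=registry.digitalocean.com/*****-container-registry/***backend:<new-tag>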

When I change the Docker image back to the previous version, it works again without any error.

kubectl get pod shows CrashLoopBackOff for the newly added pod.
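For reference, the failing pod can be inspected with standard kubectl commands like these (pod name taken from the events further down):

kubectl describe pod api-fbd759d7d-27pnp          # probe failures and restart reasons under Events
kubectl logs api-fbd759d7d-27pnp --previous       # logs from the last crashed container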

Updating the probes to timeoutSeconds: 20 and periodSeconds: 30 (sketched after the error below) results in this error:

0/2 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 2 Insufficient memory. preemption: 0/2 nodes are available: 1 No preemption victims found for incoming pod, 1 Preemption is not helpful for scheduling. and Liveness probe failed: Get "http://10.244.3.189:4000/health": dial tcp 10.244.3.189:4000: connect: connection refused
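For clarity, this is roughly what the adjusted liveness probe looks like; everything except timeoutSeconds and periodSeconds is unchanged from the deployment manifest further down, so treat it as a sketch of the change rather than the exact applied manifest:

livenessProbe:
  httpGet:
    path: /health
    port: 4000
    scheme: HTTP
  initialDelaySeconds: 20
  timeoutSeconds: 20
  periodSeconds: 30
  successThreshold: 1
  failureThreshold: 3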

Events:

LAST SEEN   TYPE      REASON              OBJECT                     MESSAGE
12m         Normal    Scheduled           pod/api-fbd759d7d-27pnp    Successfully assigned default/api-fbd759d7d-27pnp to api-ygway
11m         Normal    Pulling             pod/api-fbd759d7d-27pnp    Pulling image "registry.digitalocean.com/*****-container-registry/***backend:******"
12m         Normal    Pulled              pod/api-fbd759d7d-27pnp    Successfully pulled image "registry.digitalocean.com/*****-container-registry/***backend:******" in 2.335557108s
11m         Normal    Created             pod/api-fbd759d7d-27pnp    Created container ***backend
11m         Normal    Started             pod/api-fbd759d7d-27pnp    Started container ***backend
7m13s       Warning   Unhealthy           pod/api-fbd759d7d-27pnp    Liveness probe failed: HTTP probe failed with statuscode: 503
9m53s       Warning   Unhealthy           pod/api-fbd759d7d-27pnp    Readiness probe failed: HTTP probe failed with statuscode: 503
9m53s       Normal    Killing             pod/api-fbd759d7d-27pnp    Container ***backend failed liveness probe, will be restarted
11m         Normal    Pulled              pod/api-fbd759d7d-27pnp    Successfully pulled image "registry.digitalocean.com/*****-container-registry/***backend:******" in 1.515939876s
2m28s       Warning   BackOff             pod/api-fbd759d7d-27pnp    Back-off restarting failed container
12m         Normal    SuccessfulCreate    replicaset/api-fbd759d7d   Created pod: api-fbd759d7d-27pnp
12m         Normal    ScalingReplicaSet   deployment/api             Scaled up replica set api-fbd759d7d to 1 from 0

kubectl top pod returns

NAME                   CPU(cores)   MEMORY(bytes)   
api-6ff7f58f5f-mvjcw   7m           186Mi           
kvr-5b9965f8d8-d7srw   1m           138Mi  

kubectl top node returns

NAME        CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
api-ygway   41m          2%     1228Mi          78%       
kvr-qeimd   42m          2%     1497Mi          95%  

DigitalOcean Droplet config: 1 vCPU, 2 GB RAM
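Since the scheduling error mentions "2 Insufficient memory", the allocatable memory and the memory already requested on a node can be checked with standard kubectl, for example (output not shown here):

kubectl describe node api-ygway | grep -A 7 "Allocated resources"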

In the Kubernetes deployment YAML file the resource requests are CPU: 100m, Memory: 500M.

No limits are set in the YAML file; when I do set limits (sketched below), it still throws the same error.
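The resources block with limits looked roughly like this; the exact limit values are not reproduced here, so treat the limits as example values only, not the applied manifest:

resources:
  requests:
    cpu: 100m
    memory: 500M
  limits:
    cpu: 500m    # example value only
    memory: 1G   # example value only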

Deployment config

kind: Deployment
apiVersion: apps/v1
metadata:
  name: api
  namespace: default
  uid: 652af561-1451-41a4-a3de-56b9a6d9ac8a
  resourceVersion: '64031188'
  generation: 102
  creationTimestamp: '2023-03-06T12:19:51Z'
  labels:
    app: api
  annotations:
    deployment.kubernetes.io/revision: '66'
  managedFields:
    - manager: k8saasapi
      operation: Update
      apiVersion: apps/v1
      time: '2023-09-06T06:45:40Z'
      fieldsType: FieldsV1
      fieldsV1:
        f:metadata:
          f:annotations: {}
          f:labels:
            .: {}
            f:app: {}
        f:spec:
          f:progressDeadlineSeconds: {}
          f:revisionHistoryLimit: {}
          f:selector: {}
          f:strategy:
            f:rollingUpdate:
              .: {}
              f:maxSurge: {}
              f:maxUnavailable: {}
            f:type: {}
          f:template:
            f:metadata:
              f:annotations:
                .: {}
                f:kubectl.kubernetes.io/restartedAt: {}
              f:labels:
                .: {}
                f:app: {}
            f:spec:
              f:containers:
                k:{"name":"***backend"}:
                  .: {}
                  f:envFrom: {}
                  f:imagePullPolicy: {}
                  f:livenessProbe:
                    .: {}
                    f:failureThreshold: {}
                    f:httpGet:
                      .: {}
                      f:path: {}
                      f:port: {}
                      f:scheme: {}
                    f:initialDelaySeconds: {}
                    f:periodSeconds: {}
                    f:successThreshold: {}
                    f:timeoutSeconds: {}
                  f:name: {}
                  f:ports:
                    .: {}
                    k:{"containerPort":4000,"protocol":"TCP"}:
                      .: {}
                      f:containerPort: {}
                      f:protocol: {}
                    k:{"containerPort":4001,"protocol":"TCP"}:
                      .: {}
                      f:containerPort: {}
                      f:protocol: {}
                  f:readinessProbe:
                    .: {}
                    f:failureThreshold: {}
                    f:httpGet:
                      .: {}
                      f:path: {}
                      f:port: {}
                      f:scheme: {}
                    f:initialDelaySeconds: {}
                    f:periodSeconds: {}
                    f:successThreshold: {}
                    f:timeoutSeconds: {}
                  f:resources:
                    .: {}
                    f:requests:
                      .: {}
                      f:cpu: {}
                      f:memory: {}
                  f:securityContext:
                    .: {}
                    f:allowPrivilegeEscalation: {}
                    f:runAsGroup: {}
                    f:runAsUser: {}
                  f:terminationMessagePath: {}
                  f:terminationMessagePolicy: {}
                  f:volumeMounts:
                    .: {}
                    k:{"mountPath":"/home/elixir/app/tmp"}:
                      .: {}
                      f:mountPath: {}
                      f:name: {}
                    k:{"mountPath":"/tmp"}:
                      .: {}
                      f:mountPath: {}
                      f:name: {}
              f:dnsPolicy: {}
              f:nodeSelector: {}
              f:restartPolicy: {}
              f:schedulerName: {}
              f:securityContext: {}
              f:terminationGracePeriodSeconds: {}
              f:volumes:
                .: {}
                k:{"name":"cache-volume"}:
                  .: {}
                  f:emptyDir: {}
                  f:name: {}
    - manager: kube-controller-manager
      operation: Update
      apiVersion: apps/v1
      time: '2023-09-06T07:05:00Z'
      fieldsType: FieldsV1
      fieldsV1:
        f:metadata:
          f:annotations:
            f:deployment.kubernetes.io/revision: {}
        f:status:
          f:availableReplicas: {}
          f:conditions:
            .: {}
            k:{"type":"Available"}:
              .: {}
              f:lastTransitionTime: {}
              f:lastUpdateTime: {}
              f:message: {}
              f:reason: {}
              f:status: {}
              f:type: {}
            k:{"type":"Progressing"}:
              .: {}
              f:lastTransitionTime: {}
              f:lastUpdateTime: {}
              f:message: {}
              f:reason: {}
              f:status: {}
              f:type: {}
          f:observedGeneration: {}
          f:readyReplicas: {}
          f:replicas: {}
          f:updatedReplicas: {}
      subresource: status
    - manager: kubectl-set
      operation: Update
      apiVersion: apps/v1
      time: '2023-09-06T07:05:00Z'
      fieldsType: FieldsV1
      fieldsV1:
        f:spec:
          f:template:
            f:spec:
              f:containers:
                k:{"name":"***backend"}:
                  f:image: {}
spec:
  replicas: 1
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: api
      annotations:
        kubectl.kubernetes.io/restartedAt: '2023-09-06T06:28:57Z'
    spec:
      volumes:
        - name: cache-volume
          emptyDir: {}
      containers:
        - name: ***backend
          image: >-
            registry.digitalocean.com/***-container-registry/***backend:da083d8c677758553c34afd41036d26*******
          ports:
            - containerPort: 4000
              protocol: TCP
          envFrom:
            - configMapRef:
                name: api-config
          resources:
            requests:
              cpu: 100m
              memory: 500M
          volumeMounts:
            - name: cache-volume
              mountPath: /home/elixir/app/tmp
          livenessProbe:
            httpGet:
              path: /health
              port: 4000
              scheme: HTTP
            initialDelaySeconds: 20
            timeoutSeconds: 1
            periodSeconds: 10
            successThreshold: 1
            failureThreshold: 3
          readinessProbe:
            httpGet:
              path: /health
              port: 4000
              scheme: HTTP
            initialDelaySeconds: 30
            timeoutSeconds: 1
            periodSeconds: 20
            successThreshold: 1
            failureThreshold: 3
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          imagePullPolicy: Always
          securityContext:
            runAsUser: 1000
            runAsGroup: 1000
            allowPrivilegeEscalation: false
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      dnsPolicy: ClusterFirst
      nodeSelector:
        doks.digitalocean.com/node-pool: api
      securityContext: {}
      schedulerName: default-scheduler
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%
      maxSurge: 25%
  revisionHistoryLimit: 10
  progressDeadlineSeconds: 600
status:
  observedGeneration: 102
  replicas: 1
  updatedReplicas: 1
  readyReplicas: 1
  availableReplicas: 1
  conditions:
    - type: Available
      status: 'True'
      lastUpdateTime: '2023-09-06T06:51:42Z'
      lastTransitionTime: '2023-09-06T06:51:42Z'
      reason: MinimumReplicasAvailable
      message: Deployment has minimum availability.
    - type: Progressing
      status: 'True'
      lastUpdateTime: '2023-09-06T07:05:00Z'
      lastTransitionTime: '2023-09-06T06:26:30Z'
      reason: NewReplicaSetAvailable
      message: ReplicaSet "api-67f967d5df" has successfully progressed.

Dockerfile

ARG MIX_ENV="prod"

# build stage
FROM hexpm/elixir:1.12.3-erlang-24.1.2-alpine-3.14.2 AS build

# install build dependencies
RUN apk add --no-cache build-base git python3 curl

# sets work dir
WORKDIR /app

# install hex + rebar
RUN mix local.hex --force && \
    mix local.rebar --force

ARG MIX_ENV
ENV MIX_ENV="${MIX_ENV}"

# install mix dependencies
COPY mix.exs mix.lock ./
RUN mix deps.get --only $MIX_ENV

# copy compile configuration files
RUN mkdir config
COPY config/config.exs config/$MIX_ENV.exs config/

# compile dependencies
RUN mix deps.compile

# copy assets
COPY priv priv
COPY assets assets

# Compile assets
RUN mix assets.deploy

# compile project
COPY lib lib
RUN mix compile

# copy runtime configuration file
COPY config/releases.exs config/

# assemble release
RUN mix release

# app stage
FROM alpine:3.14.2 AS app

ARG MIX_ENV

# install runtime dependencies
RUN apk add --no-cache libstdc++ openssl ncurses-libs

ENV USER="elixir"

WORKDIR "/home/${USER}/app"

# Create unprivileged user to run the release
RUN \
    addgroup \
    -g 1000 \
    -S "${USER}" \
    && adduser \
    -s /bin/sh \
    -u 1000 \
    -G "${USER}" \
    -h "/home/${USER}" \
    -D "${USER}" \
    && su "${USER}"

# run as user
USER "${USER}"

# copy release executables
COPY --from=build --chown="${USER}":"${USER}" /app/_build/"${MIX_ENV}"/rel/project ./

ENTRYPOINT ["bin/project"]

CMD ["start"]
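Assuming the release listens on port 4000 as configured in the deployment, the image can also be run locally to see whether /health returns 503 outside the cluster. The tag is a placeholder, and api.env is a hypothetical file holding the same environment variables that the api-config ConfigMap provides via envFrom:

docker run --rm -p 4000:4000 --env-file api.env registry.digitalocean.com/*****-container-registry/***backend:<new-tag>
curl -i http://localhost:4000/health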

I'm new to Kubernetes and don't know why this error occurs. Please help me fix it.
