Kubernetes: pod didn't terminate when memory limit was reached


My environment runs on microk8s. I have set a pod memory limit as shown below, but the pod is not terminated when the limit is reached. For example, I set a 14Gi memory limit and a 10Gi request, yet my pod is now using 20Gi of memory. I expected it to be OOM-killed and replaced with a new pod. What do you think could be the reason?

apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-development
  labels:
    app: example
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      name: example
      labels:
        app: example
    spec:
      volumes:
        - name: appsettings-volume
          configMap:
            name: example-appsettings
        - name: dashboards-volume
          hostPath:
            path: /some path/example/api/data/Dashboards
        - name: reports-volume
          hostPath:
            path: /some path/example/api/data/Reports
      containers:
        - name: example
          image: myregistryurl/example:latest
          volumeMounts:
            - name: appsettings-volume
              mountPath: /app/appsettings.json
              subPath: appsettings.json
            - name: dashboards-volume
              mountPath: /app/Dashboards
            - name: reports-volume
              mountPath: /app/Reports
          resources:
            requests:
              memory: "10Gi"
            limits:
              memory: "14Gi"
          imagePullPolicy: "IfNotPresent"
      imagePullSecrets:
        - name: regcred
      restartPolicy: Always
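One thing worth verifying is whether the kubelet actually applied the limit to the container's cgroup. A quick check, assuming cluster access (the pod name below is a placeholder, and the cgroup path assumes cgroup v2; on cgroup v1 it is /sys/fs/cgroup/memory/memory.limit_in_bytes):

```shell
# Confirm the limit recorded in the pod spec (pod name is a placeholder).
kubectl get pod example-development-xxxx \
  -o jsonpath='{.spec.containers[0].resources.limits.memory}'

# Inspect the limit actually enforced by the kernel inside the container.
# 14Gi should appear as 15032385536 bytes.
kubectl exec example-development-xxxx -- cat /sys/fs/cgroup/memory.max
```

If the cgroup value does not match the spec, the limit is not being enforced at all, which would point at the node/runtime rather than the Deployment.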

and the output of the kubectl describe pod command looks like this:

Name:             ****-development-***************
Namespace:        default
Priority:         0
Service Account:  default
Node:             ****/*************
Start Time:       Mon, 25 Mar 2024 06:59:37 +0000
Labels:           ****=*****
                  pod-template-hash=***************
Annotations:      cni.projectcalico.org/containerID: ******
                  cni.projectcalico.org/podIP: *****
                  cni.projectcalico.org/podIPs: *****
                  kubectl.kubernetes.io/restartedAt: 2024-03-15T11:00:35Z
Status:           Running
IP:               *****
IPs:
  IP:           *****
Controlled By:  ReplicaSet/****-development-***************
Containers:
  ****:
    Container ID:   ****://************************************
    Image:          *****************/***/****/api:backup
    Image ID:       ***************/*****/****/api@sha256:******************************
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Mon, 25 Mar 2024 06:59:38 +0000
    Ready:          True
    Restart Count:  0
    Limits:
      memory:  14Gi
    Requests:
      memory:     10Gi
    Environment:  <none>
    Mounts:
      /app/Dashboards from dashboards-volume (rw)
      /app/Reports from reports-volume (rw)
      /app/appsettings.json from appsettings-volume (rw,path="appsettings.json")
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-m9mhk (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  appsettings-volume:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      ****-appsettings
    Optional:  false
  dashboards-volume:
    Type:          HostPath (bare host directory volume)
    Path:          *****/api/data/Dashboards
    HostPathType:  
  reports-volume:
    Type:          HostPath (bare host directory volume)
    Path:          *****/api/data/Reports
    HostPathType:  
  kube-api-access-m9mhk:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason                           Age                 From     Message
  ----     ------                           ----                ----     -------
  Warning  FailedToRetrieveImagePullSecret  15s (x13 over 12m)  kubelet  Unable to retrieve some image pull secrets (regcred); attempting to pull the image may not succeed.

I have tried HPA, a liveness probe, and other things.
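Another thing I wanted to rule out is a units mixup: Kubernetes limits use binary suffixes (Gi), while some monitoring tools report decimal units (G), and cgroup usage also counts page cache. A minimal Python sketch (assuming only the K/M/G and Ki/Mi/Gi suffixes) to compare the quantities on equal footing:

```python
# Minimal sketch: convert Kubernetes-style memory quantities ("14Gi",
# "20G") to bytes so the limit and the reported usage can be compared
# directly. Handles only the suffixes relevant here.
UNITS = {"Ki": 1024, "Mi": 1024**2, "Gi": 1024**3,
         "K": 1000, "M": 1000**2, "G": 1000**3}

def to_bytes(quantity: str) -> int:
    # Check two-character binary suffixes before single-character
    # decimal ones so "Gi" is not misread as "G".
    for suffix in ("Ki", "Mi", "Gi", "K", "M", "G"):
        if quantity.endswith(suffix):
            return int(quantity[: -len(suffix)]) * UNITS[suffix]
    return int(quantity)  # bare number of bytes

limit = to_bytes("14Gi")  # 15032385536
usage = to_bytes("20Gi")  # 21474836480
print(usage > limit)      # True: usage genuinely exceeds the limit
```

Even reading 20 as decimal G (20000000000 bytes) it is still above the 14Gi limit, so the units are not the explanation in my case.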

What do you think could be the reason?
