I get a CrashLoopBackOff error when creating a pod, but I don't know how to solve it


This is what I keep getting:

al8-1@al8-1:~/kuber_test/pod_nginx$ kubectl get pods
NAME         READY   STATUS             RESTARTS   AGE
nginx        1/1     Running            0          6d2h
pod-apigw2   0/1     CrashLoopBackOff   1          15s

Below is the output from "kubectl describe pods pod-apigw2":

Name:         pod-apigw2
Namespace:    default
Priority:     0
Node:         al8-2/192.168.15.59
Start Time:   Wed, 26 Feb 2020 16:33:30 +0900
Labels:       <none>
Annotations:  cni.projectcalico.org/podIP: 192.168.4.55/32
Status:       Running
IP:           192.168.4.55
IPs:
  IP:  192.168.4.55
Containers:
  apigw2:
    Container ID:   docker://f684ef44ae53fd3176ddd7c051c9670da65da4bec84a1402359561abc646d85d
    Image:          parkdongwoo/apigw_test:v1
    Image ID:       docker-pullable://parkdongwoo/apigw_test@sha256:a447f131f0c9e63bb02a74708f4cbc2f6dd4551b0ba8f737b09072a8cc74c759
    Port:           8080/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Wed, 26 Feb 2020 16:37:00 +0900
      Finished:     Wed, 26 Feb 2020 16:37:00 +0900
    Ready:          False
    Restart Count:  5
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-z72r6 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  default-token-z72r6:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-z72r6
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason     Age                    From               Message
  ----     ------     ----                   ----               -------
  Normal   Scheduled  <unknown>              default-scheduler  Successfully assigned default/pod-apigw2 to al8-2
  Normal   Pulled     4m26s (x4 over 5m23s)  kubelet, al8-2     Successfully pulled image "parkdongwoo/apigw_test:v1"
  Normal   Created    4m26s (x4 over 5m22s)  kubelet, al8-2     Created container apigw2
  Normal   Started    4m25s (x4 over 5m21s)  kubelet, al8-2     Started container apigw2
  Normal   Pulling    3m38s (x5 over 5m26s)  kubelet, al8-2     Pulling image "parkdongwoo/apigw_test:v1"
  Warning  BackOff    19s (x24 over 5m16s)   kubelet, al8-2     Back-off restarting failed container

But when I try to look at the logs, nothing comes out:

al8-1@al8-1:~/kuber_test/pod_nginx$ kubectl logs pod-apigw2
al8-1@al8-1:~/kuber_test/pod_nginx$ kubectl logs pod-apigw2 -p
al8-1@al8-1:~/kuber_test/pod_nginx$

This is my YAML file:

apiVersion: v1
kind: Pod
metadata:
  name: pod-apigw2
spec:
  selector:
      app: apigw2
  containers:
      - name: apigw2
        image: parkdongwoo/apigw_test:v1
        imagePullPolicy: Always
        ports:
                - name: port-apigw2
                  containerPort: 8080

If I run the Docker image with "docker run", it runs without any issue; only through Kubernetes do I get the crash.

Can someone help me out? How can I debug this without seeing any logs?

3 Answers

Answer 1:

Hi, can you please provide the output of this command:

kubectl logs pod-apigw2 -c apigw2

Or, even better, install stern, a log viewer that lets you watch the logs of pods and their containers in real time ;). Just copy the binary, make it executable, and run it, as sketched below.

https://github.com/wercker/stern
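
For example, a minimal install sketch (the release version and platform below are placeholders; check the releases page for the current ones):

# download a stern release binary (version/OS/arch are examples; adjust to the releases page)
curl -Lo stern https://github.com/wercker/stern/releases/download/1.11.0/stern_linux_amd64
chmod +x stern
sudo mv stern /usr/local/bin/

# then tail all containers of the failing pod in real time
stern pod-apigw2 --namespace default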

Answer 2:

The issue here is that the default command and arguments provided by the container image in your pod completed and exited with code 0. Kubernetes then sees that the container is not running, so it restarts it and ends up repeating this loop, which is represented by the pod status CrashLoopBackOff.

This is because containers are treated a little differently in Kubernetes than in Docker, so not every Docker image that works correctly under "docker run" is compatible with Kubernetes out of the box.

According to the Kubernetes documentation:

Define a command and arguments when you create a Pod

When you create a Pod, you can define a command and arguments for the containers that run in the Pod. To define a command, include the command field in the configuration file. To define arguments for the command, include the args field in the configuration file. The command and arguments that you define cannot be changed after the Pod is created.

The command and arguments that you define in the configuration file override the default command and arguments provided by the container image. If you define args, but do not define a command, the default command is used with your new arguments.

So in order to debug this issue you need to look at the Docker image you are using. It most likely needs a slight modification to the Dockerfile so that the default process (command) is actually the web application server process. You can check what the image runs by default as sketched below.
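
For example, a quick check you can run wherever the image is available (the image name is taken from the question):

# show the default entrypoint and command baked into the image
docker image inspect parkdongwoo/apigw_test:v1 \
  --format 'Entrypoint: {{.Config.Entrypoint}}  Cmd: {{.Config.Cmd}}'

# run the image non-interactively, the way kubelet does; if it returns to the
# shell immediately, the main process exits and Kubernetes will keep restarting it
docker run --rm parkdongwoo/apigw_test:v1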

Hope this helps.

Answer 3:

The main process of the Docker image you are using just finished its job and exited. You most probably need to find the script that this Docker image uses to start the service.

Then, in your pod definition, you can call this command via command and args, as in the sketch below.
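
A minimal sketch of that, assuming the image's start script lives at /app/start.sh (a hypothetical path; use whatever the image actually provides):

apiVersion: v1
kind: Pod
metadata:
  name: pod-apigw2
spec:
  containers:
    - name: apigw2
      image: parkdongwoo/apigw_test:v1
      imagePullPolicy: Always
      # override the image's default command so the main process is the
      # long-running server, not a script that exits immediately
      command: ["/bin/sh", "-c"]
      args: ["/app/start.sh"]   # hypothetical start script; adjust to your image
      ports:
        - name: port-apigw2
          containerPort: 8080

Alternatively, bake the same command into the image itself as its default CMD/ENTRYPOINT, so the pod definition does not need to override anything.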