I've recently been having a problem with my local Kubernetes cluster. When I run `kubectl exec -it curl -- bash`
to open a shell in the pod named 'curl', I get these errors:
[screenshot: error info]
And here is the nodes' info: [screenshot: nodes info]
The pod 'curl' is running fine on datanode-2 and the kubelet is listening on port 10250, but I don't know why I get the error. Here is the output of `kubectl describe po curl`: [screenshot: curl pod describe]
And here are the pods in the kube-system namespace (the CNI is flannel): [screenshot: kube-system pods]
The same thing happens when I run `kubectl exec` on other pods (and on datanode-1). How can I solve this?
This error is likely related to communication between kube-apiserver.service (on the control-plane node) and kubelet.service (listening on port 10250 by default).
To troubleshoot, you might want to SSH into the control node and test connectivity to the worker's kubelet port, once using the worker's hostname (or public IP) and once using its private IP.
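For example, a small sketch of those two connectivity tests (the hostname `datanode-2` and IP `10.0.0.12` are placeholders; substitute your worker node's actual values, and `check_port` is just a helper defined here, not an existing tool):

```shell
#!/usr/bin/env bash
# Helper: exits 0 if a TCP connection to host:port opens within 2 seconds.
check_port() {
  timeout 2 bash -c "cat < /dev/null > /dev/tcp/$1/$2" 2>/dev/null
}

# Placeholder hostname/IP - replace with your worker node's values:
check_port datanode-2 10250 && echo "hostname: open"   || echo "hostname: closed"
check_port 10.0.0.12 10250  && echo "private IP: open" || echo "private IP: closed"
```

Plain `telnet datanode-2 10250` works just as well if telnet is installed on the control node.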
If both tests fail, it is probably the firewall on the worker nodes, so you should open port 10250 there. You can also check on the worker itself whether the kubelet is actually listening on that port.
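For example, run on the worker node (the `firewall-cmd` lines assume firewalld; adapt them to whatever firewall your distribution uses):

```shell
# Confirm the kubelet is listening on 10250
sudo ss -tlnp | grep 10250

# If a firewall is blocking it, open the port (firewalld example):
sudo firewall-cmd --add-port=10250/tcp --permanent
sudo firewall-cmd --reload
```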
If the test fails with the hostname or public IP but works with the private IP, you should add a flag to the kube-apiserver unit service file (located at /etc/systemd/system/kube-apiserver.service) so the apiserver reaches kubelets via their internal IPs.
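Assuming a systemd-managed kube-apiserver (as that path suggests), the flag in question is most likely `--kubelet-preferred-address-types`, which controls which node address the apiserver tries first when connecting to a kubelet. A sketch of the unit file change (the binary path is an assumption; keep your existing ExecStart and just add the flag):

```ini
# /etc/systemd/system/kube-apiserver.service (excerpt)
[Service]
ExecStart=/usr/local/bin/kube-apiserver \
  --kubelet-preferred-address-types=InternalIP,Hostname,ExternalIP \
  ...
```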
Save the file, then reload systemd and restart the apiserver.
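That is, something like (assuming the unit name above):

```shell
sudo systemctl daemon-reload
sudo systemctl restart kube-apiserver.service
```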