Worker nodes not available


I have set up and installed IBM Cloud Private CE with two Ubuntu images in VirtualBox. I can ssh into both images and from each ssh into the other. The ICp dashboard shows only one active node; I was expecting two.

I explicitly ran this command (as the root user on the master node):

docker run -e LICENSE=accept --net=host \
  -v "$(pwd)":/installer/cluster \
  ibmcom/cfc-installer install -l \
  192.168.27.101

The result of this command seemed to be a successful addition of the worker node:

PLAY RECAP *********************************************************************
192.168.27.101             : ok=45   changed=11   unreachable=0    failed=0

But still the worker node isn't showing in the dashboard.

What should I be checking to ensure the worker node will work with the master node?


3 Answers


You can check on your worker node with the following steps (see the sketch after this list):

  1. Check cluster node status with kubectl get nodes to see whether the newly added worker node appears.

  2. If it's NotReady, check the kubelet log for error messages about why kubelet is not running properly:

    • ICp 2.1: systemctl status kubelet
    • ICp 1.2: docker ps -a | grep kubelet to get the kubelet container ID, then docker logs kubelet_containerid
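
For example, a quick pass over both checks might look like this (a sketch; it assumes kubelet runs as a systemd unit named kubelet on ICp 2.1, as described above):

  # On the master: is the worker registered and Ready?
  kubectl get nodes

  # On the worker, ICp 2.1: kubelet runs under systemd
  systemctl status kubelet
  journalctl -u kubelet --no-pager | tail -n 50

  # On the worker, ICp 1.2: kubelet runs as a container
  docker ps -a | grep kubelet
  docker logs kubelet_containerid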

If you're using Vagrant to configure IBM Cloud Private, I'd highly recommend trying https://github.com/IBM/deploy-ibm-cloud-private

The project uses a Vagrantfile to configure a master/proxy and then provisions 2 workers within the image using LXD. You'll get better density and performance on your laptop with this configuration than with two full VirtualBox images (1 for master/proxy, 1 for the worker).
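
A minimal sketch of that workflow, assuming the repository's standard Vagrant setup (the clone location is arbitrary; check the repo's README for any required Vagrantfile edits first):

  git clone https://github.com/IBM/deploy-ibm-cloud-private.git
  cd deploy-ibm-cloud-private
  vagrant up

Once provisioning finishes, kubectl get nodes on the master should list the master/proxy and the LXD-provisioned workers.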


Run this to get kubectl working:

ln -sf /opt/kubernetes/hyperkube /usr/local/bin/kubectl 

Then run the command below on the master node to list the pods running in the environment and identify any failed ones:

kubectl -n kube-system get pods -o wide

To restart any failed ICp pods, delete them so they get recreated (grep "0/" matches pods whose READY column shows zero ready containers):

txt="0/";ns="kube-system";type="pods"; kubectl -n $ns get $type | grep "$txt" | awk '{ print $1 }' | xargs kubectl -n $ns delete $type

Now verify the cluster:

kubectl cluster-info

kubectl get nodes

Check whether kubectl is pointing at https://localhost:8080 (the unconfigured default) or https://masternodeip:8001 (the ICp master), and whether kubectl cluster-info gives you any output.

If it doesn't:

Log in to https://masternodeip:8443 with the admin login, click admin on the panel, copy the "Configure client" CLI settings, and paste them into a shell on your master node.

Then run kubectl cluster-info again.
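
For reference, the pasted client configuration looks roughly like this (a sketch; the cluster name, context name, and token below are placeholders, not values from this cluster):

  kubectl config set-cluster cluster.local --server=https://masternodeip:8001 --insecure-skip-tls-verify=true
  kubectl config set-credentials admin --token=<token-from-dashboard>
  kubectl config set-context cluster.local-context --cluster=cluster.local --user=admin
  kubectl config use-context cluster.local-context

After that, kubectl cluster-info should report the master at https://masternodeip:8001 instead of the localhost default.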