Kubectl get nodes returns "the server doesn't have a resource type "nodes""


I installed Kubernetes, ran kubeadm init, and ran kubeadm join from the worker as well. But when I run kubectl get nodes, it gives the following response:

the server doesn't have a resource type "nodes"

What might be the problem here? I could not see anything in /var/log/messages.

Any hints here?

There are 4 answers below.

Best answer (6 votes)

It looks to me like the authentication credentials were not set up correctly. Did you copy the kubeconfig file /etc/kubernetes/admin.conf to ~/.kube/config? If you used kubeadm, the API server should be configured to listen on port 6443, not 8080. Could you also check that the KUBECONFIG environment variable is not set to something else?
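For reference, the usual steps after kubeadm init look like this (these are the commands kubeadm itself prints at the end of init; run them as your regular user on the control-plane node), plus a check that nothing overrides the config:

# Copy the admin kubeconfig generated by kubeadm to your user's config
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Make sure no stale KUBECONFIG variable points kubectl somewhere else
echo $KUBECONFIG
unset KUBECONFIG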

It would also help to increase the verbosity with the flag --v=99. Also, are you running kubectl on the same machine where the Kubernetes master components are installed, or are you accessing the cluster from outside?
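For example, running the failing command with high verbosity prints the server URL kubectl contacts and the raw API requests and responses, which usually makes the misconfiguration obvious:

kubectl get nodes --v=99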

Answer (0 votes)

Since the "nodes" resource always exists in a Kubernetes cluster, this message means that kubectl connected to something other than a real cluster.

It is usually the sign of an error in the ~/.kube/config file, in which case kubectl falls back to its default of connecting to localhost:8080. If anything else happens to be running locally on that port, it will return a 404 to the kubectl call, which kubectl interprets as "resource not found".

This will give errors like:

the server doesn't have a resource type "nodes"
the server doesn't have a resource type "pods"
the server doesn't have a resource type "services"
...
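To confirm which server kubectl is actually talking to in a case like this, you can inspect the active context and the cluster endpoint, for example:

# Show the config for the current context only
kubectl config view --minify

# Print the API server address kubectl resolves to
kubectl cluster-info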
Answer (0 votes)

In my case, I wanted to see the description of my pods.

When I ran kubectl describe postgres-deployment-866647ff76-72kwf, I got error: the server doesn't have a resource type "postgres-deployment-866647ff76-72kwf".

I corrected it by adding pod before the pod name, as follows:

kubectl describe pod postgres-deployment-866647ff76-72kwf
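More generally, kubectl describe expects the resource type before the name, so the same pattern applies to other resources (the Deployment name below is only a guess based on the pod name in this example):

# General form: kubectl describe <resource-type> <resource-name>
kubectl describe pod postgres-deployment-866647ff76-72kwf
# Assuming the owning Deployment is named postgres-deployment (inferred from the pod name):
kubectl describe deployment postgres-deployment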
Answer (1 vote)

I got this message when I was trying to play around with Docker Desktop. I had previously been doing a few experiments with Google Cloud and had run some kubectl commands for that. The result was that my ~/.kube/config file still had stale config for a now non-existent GCP cluster, and my default k8s context was set to that.

Try the following:

# Find what current contexts you have
kubectl config view

I get:

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://kubernetes.docker.internal:6443
  name: docker-desktop
contexts:
- context:
    cluster: docker-desktop
    user: docker-desktop
  name: docker-desktop
current-context: docker-desktop
kind: Config
preferences: {}
users:
- name: docker-desktop
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED

So only one context now. If you have more than one context here, check that the one you expect is set as current-context. If not, change it with:

# Get rid of old contexts that you don't use 
kubectl config delete-context some-old-context

# Selecting the context that I have auth for
kubectl config use-context docker-desktop
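
Afterwards you can verify which context is active and that the cluster responds (using the docker-desktop context from the example above):

# Show the context kubectl will use
kubectl config current-context

# Should now list the node(s) of the selected cluster
kubectl get nodes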