Metrics Server not working: unable to handle the request (get nodes.metrics.k8s.io)


I am running the command kubectl top nodes and getting this error:

node@kubemaster:~/Desktop/metric$ kubectl top nodes
Error from server (ServiceUnavailable): the server is currently unable to handle the request (get nodes.metrics.k8s.io)

The metrics-server pod is running with the following params:

    command:
    - /metrics-server
    - --metric-resolution=30s
    - --requestheader-allowed-names=aggregator
    - --kubelet-insecure-tls
    - --kubelet-preferred-address-types=InternalIP,Hostname,InternalDNS,ExternalDNS,ExternalIP
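
To confirm the pod actually picked up these flags, the running spec can be inspected (the k8s-app=metrics-server label comes from the stock components.yaml):

    kubectl -n kube-system get pods -l k8s-app=metrics-server \
      -o jsonpath='{.items[0].spec.containers[0].command}'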

Most of the answers I have found suggest the params above, but I am still getting this error in the metrics-server logs:

E0601 18:33:22.012798       1 manager.go:111] unable to fully collect metrics: [unable to fully scrape metrics from source kubelet_summary:kubemaster: unable to fetch metrics from Kubelet kubemaster (192.168.56.30): Get https://192.168.56.30:10250/stats/summary?only_cpu_and_memory=true: context deadline exceeded, unable to fully scrape metrics from source kubelet_summary:kubenode1: unable to fetch metrics from Kubelet kubenode1 (192.168.56.31): Get https://192.168.56.31:10250/stats/summary?only_cpu_and_memory=true: dial tcp 192.168.56.31:10250: i/o timeout]
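
Both failures are plain network timeouts on the kubelet port, so a quick reachability probe from another node (or from inside the metrics-server pod) narrows things down; even a 401/403 response would prove the port is reachable:

    # Adjust the IP to the node that times out:
    curl -k 'https://192.168.56.31:10250/stats/summary?only_cpu_and_memory=true'
    # A hang/timeout here points to routing or firewall, not TLS or RBAC.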

I have deployed metrics-server using:

kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.3.6/components.yaml

What am I missing? I am using Calico for pod networking.

On the GitHub page of metrics-server, under FAQ:

[Calico] Check whether the value of CALICO_IPV4POOL_CIDR in the calico.yaml conflicts with the local physical network segment. The default: 192.168.0.0/16.

Could this be the reason? Can someone explain this to me?

I have set up Calico using: kubectl apply -f https://docs.projectcalico.org/v3.14/manifests/calico.yaml

My node IPs are 192.168.56.30 / 192.168.56.31 / 192.168.56.32.

I have initialized the cluster with --pod-network-cidr=20.96.0.0/12, so my pod IPs are 20.96.205.192 and so on.
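
To see which pool Calico actually created (the stock manifest creates one named default-ipv4-ippool), the CRD can be queried directly; if it still shows 192.168.0.0/16, it overlaps my node IPs:

    kubectl get ippools.crd.projectcalico.org default-ipv4-ippool -o yaml | grep cidr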

I am also getting this in the apiserver logs:

E0601 19:29:59.362627       1 available_controller.go:420] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.100.152.145:443/apis/metrics.k8s.io/v1beta1: Get https://10.100.152.145:443/apis/metrics.k8s.io/v1beta1: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)

where 10.100.152.145 is the ClusterIP of service/metrics-server.
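
For reference, the registration state of the aggregated API and the endpoints behind that ClusterIP can be checked with:

    kubectl get apiservice v1beta1.metrics.k8s.io
    kubectl -n kube-system get endpoints metrics-server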

Surprisingly, it works on another cluster with node IPs in the 172.16.0.0 range. Everything else is the same: set up with kubeadm, Calico, and the same pod CIDR.


4 Answers


I had the same issue on my on-prem k8s v1.26 cluster (CNI = Calico). I think the issue was caused by the metrics-server version (v0.6). I solved it by applying metrics-server v0.5.2:

1. Download the YAML file from the official source.
2. Add - --kubelet-insecure-tls=true under the args: section.
3. Apply the YAML.
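
As an alternative to editing the file by hand, the same flag can be appended with a JSON patch (a sketch: it assumes the stock deployment name metrics-server in kube-system, with a single container that already has an args list):

    kubectl -n kube-system patch deployment metrics-server --type=json \
      -p='[{"op":"add","path":"/spec/template/spec/containers/0/args/-","value":"--kubelet-insecure-tls=true"}]'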

enjoy ;)


It started working after I edited the metrics-server deployment YAML to run on the host network (alongside a matching DNS policy):

hostNetwork: true
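
For context, a minimal sketch of where this sits in the Deployment spec; dnsPolicy: ClusterFirstWithHostNet is the usual companion setting for host-network pods and is an assumption here, not taken from the original screenshot:

    spec:
      template:
        spec:
          hostNetwork: true
          dnsPolicy: ClusterFirstWithHostNet  # assumed companion setting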


Refer to the link below: https://www.linuxsysadmins.com/service-unavailable-kubernetes-metrics/


I had the same problem trying to run metrics-server on Docker Desktop. I followed @suren's answer and it worked. The default configuration is:

- --kubelet-preferred-address-types=InternalIP,Hostname,InternalDNS,ExternalDNS,ExternalIP

And I changed it to:

- --kubelet-preferred-address-types=InternalIP
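
To see which of these address types your nodes actually expose (and therefore which entry metrics-server tries first), you can list the node addresses:

    kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.addresses}{"\n"}{end}'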

The default value of the Calico pod network is 192.168.0.0/16. There is a comment about it in the YAML file:

The default IPv4 pool to create on startup if none exists. Pod IPs will be chosen from this range. Changing this value after installation will have no effect. This should fall within --cluster-cidr.

    - name: CALICO_IPV4POOL_CIDR
      value: "192.168.0.0/16"

So it's better to use a different range if your home network falls within 192.168.0.0/16.

Also, if you used kubeadm, you can check your CIDR:

kubeadm config view | grep Subnet

Or you can use kubectl:

kubectl --namespace kube-system get configmap kubeadm-config -o yaml
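
The pod and service ranges appear as podSubnet and serviceSubnet in the ClusterConfiguration, so filtering works too (the values below are only examples, matching the question's setup):

    kubectl --namespace kube-system get configmap kubeadm-config -o yaml | grep -i subnet
    # podSubnet: 20.96.0.0/12
    # serviceSubnet: 10.96.0.0/12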

The default service subnet in a "self-hosted" kubeadm cluster is 10.96.0.0/12.