I'm trying to understand a simple, basic `kubeadm init` control plane setup.
The kubeconfig file in /etc/kubernetes/kubelet.conf is used by the kubelet process at startup time:
ubuntu@c1:~$ ps -ef | grep kubelet | sed s/\\s--/\\n--/g
root 35361 1 1 Mar17 ? 00:51:48 /usr/bin/kubelet
--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf
--kubeconfig=/etc/kubernetes/kubelet.conf
--config=/var/lib/kubelet/config.yaml
--container-runtime=remote
--container-runtime-endpoint=unix:///var/run/containerd/containerd.sock
--pod-infra-container-image=registry.k8s.io/pause:3.8
It tells the kubelet to authenticate as a "user" named "system:node:c1", where "c1" is my node's name:
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: ✂ ✂ ✂
server: https://k8scp:6443
name: kubernetes
contexts:
- context:
cluster: kubernetes
user: system:node:c1
name: system:node:c1@kubernetes
current-context: system:node:c1@kubernetes
kind: Config
preferences: {}
users:
- name: system:node:c1
user:
client-certificate: /var/lib/kubelet/pki/kubelet-client-current.pem
client-key: /var/lib/kubelet/pki/kubelet-client-current.pem
As far as I understand, this kubelet's identity was established by its certificate's CN during the `kubeadm init` certs phase, and it is used by the kubelet to authenticate against the API server.
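To see the identity encoded in such a certificate, you can inspect its subject. Rather than touching the real kubelet PEM, the snippet below builds a throwaway self-signed certificate with the same subject shape (CN carries the user name, O carries the group); the `/tmp` paths are made up for the demo:

```shell
# Create a throwaway key + self-signed cert that mimics a kubelet client
# cert's subject (the real one is signed by the cluster CA during bootstrap).
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/kubelet-demo.key -out /tmp/kubelet-demo.crt \
  -subj "/O=system:nodes/CN=system:node:c1"

# With X.509 client-cert auth, the API server maps the CN to the user name
# and the O fields to the user's groups:
openssl x509 -in /tmp/kubelet-demo.crt -noout -subject
```

On a real node, the same `openssl x509 -noout -subject` check against `/var/lib/kubelet/pki/kubelet-client-current.pem` should show `CN = system:node:c1` and `O = system:nodes`.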
Now, poking around turns up a ClusterRoleBinding named "system:node" whose "roleRef" is of kind "ClusterRole" with name "system:node". But(!) there is no "subjects" entry; it's missing:
ubuntu@c1:~$ kubectl get clusterrolebinding system:node -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
annotations:
rbac.authorization.kubernetes.io/autoupdate: "true"
creationTimestamp: "2023-03-14T14:22:57Z"
labels:
kubernetes.io/bootstrapping: rbac-defaults
name: system:node
resourceVersion: "144"
uid: 256e1f6b-e491-45d0-beda-1e250b260f46
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:node
ubuntu@c1:~$ kubectl describe clusterrolebinding system:node
Name: system:node
Labels: kubernetes.io/bootstrapping=rbac-defaults
Annotations: rbac.authorization.kubernetes.io/autoupdate: true
Role:
Kind: ClusterRole
Name: system:node
Subjects:
Kind Name Namespace
---- ---- ---------
ubuntu@c1:~$
The resource specification of a ClusterRoleBinding says there should be a subjects array. I guess an empty array is still valid, but it means this binding cannot associate its role with the kubelet's authentication context mentioned above.
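As a toy sketch (not the real authorizer's code, and the variable names are made up), the subject matching a binding performs might look like this; with an empty subjects list, the loop never grants anything:

```shell
# Toy model of RBAC binding evaluation: a binding applies to a request only
# if the requesting user appears among the binding's subjects.
subjects=""                 # the default system:node binding has no subjects
request_user="system:node:c1"

granted=no
for s in $subjects; do      # zero iterations when the list is empty
  [ "$s" = "$request_user" ] && granted=yes
done
echo "granted=$granted"     # prints granted=no
```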
Kubelet processes live outside the orchestration boundary; they are managed by systemd (at least on Ubuntu nodes). How do kubelets get authorized, and what roles/rights do they have?
Finally, a little over four months later, I stumbled upon the answer (by myself? by some pitying, compassionate kami-sama? we will never know). For all the lost souls out there, desperately in search of enlightenment, here it is:
As the docs say, there are multiple authorization modes, and Node mode specifically authorizes API requests made by kubelets:
My cluster was set up with kubeadm according to the docs. The critical part here is the API server configuration; mine has both the Node and RBAC authorization modes enabled:
Concerning the "unnecessary ClusterRole and ClusterRoleBinding" mentioned in this question's title, the same doc's section on RBAC Node Permissions sheds some more light on the topic (for the interested, here is their full explanation, which mostly concerns backward compatibility):