I have a few Kubernetes clusters with a different number of nodes in each, and my Deployment config sets "replicas: <number of nodes>". There is no specific scheduling configuration for the pod, but after deployment I see strange behavior in how the pods are distributed across the nodes.
Example:
For a 30-node cluster (30 replicas), all 30 pod replicas were distributed across only 25 nodes, while the other 5 nodes sat idle. The same happens on many other clusters, and the count varies with every new deployment/redeployment.
Question:
I want to distribute my pod replicas across all nodes. If I set "replicas: <number of nodes>", I should get one pod replica on each node, and if I increase/double the replica count, the pods should still distribute evenly. Is there any specific configuration in the Deployment YAML for Kubernetes to achieve this?
Below is my configuration with pod anti-affinity, but it still behaves as described above. I also tried "requiredDuringSchedulingIgnoredDuringExecution", which did place one pod on each node, but if I increase the replicas, or a node goes down during the deployment, then the whole deployment fails.
metadata:
  labels:
    app: test1
spec:
  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: app
              operator: In
              values:
              - test1
          topologyKey: kubernetes.io/hostname
See Pod Topology Spread Constraints (https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/). This feature lets you control precisely how your pods are distributed across your cluster among regions, zones, nodes, and other user-defined topology domains, so you can define your own rules for spreading pods.
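As a sketch (reusing the app: test1 label from the question), a topology spread constraint like the one below asks the scheduler to keep the per-node pod counts within maxSkew of each other. With whenUnsatisfiable: ScheduleAnyway, the scheduler prefers an even spread but still places extra pods when replicas exceed the node count, avoiding the hard failure seen with the required anti-affinity rule:

```yaml
spec:
  topologySpreadConstraints:
  - maxSkew: 1                          # per-node pod counts may differ by at most 1
    topologyKey: kubernetes.io/hostname # spread across individual nodes
    whenUnsatisfiable: ScheduleAnyway   # soft constraint: don't block scheduling
    labelSelector:
      matchLabels:
        app: test1                      # count only this app's pods
```

If you instead set whenUnsatisfiable: DoNotSchedule, the constraint becomes hard and you would get behavior similar to the required anti-affinity variant: extra replicas stay Pending once every node already has a pod.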