I have a Kubernetes cluster with two worker nodes. I have configured CoreDNS to forward any DNS requests that match the ".com" domain to a remote server.
.com:53 {
    forward . <remote machine IP>
}
Let's say, pod-0 sits in worker-0 and pod-1 sits in worker-1.
When I uninstall the pods and reinstall them, there is a chance they get assigned to different worker nodes.
Is there a way for CoreDNS to resolve a pod's hostname to the IP of the worker node it is currently running on?
It would be really helpful if someone has an approach to handle this issue. Thanks in advance!
There is a workaround for this issue: you can use node selectors so that your pods are always deployed onto the same node, as in the sketch below.
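A minimal Deployment sketch, assuming a hypothetical workload called pod-0 and a worker node named worker-0 (adjust the names and image to your setup):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: pod-0                              # hypothetical workload name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pod-0
  template:
    metadata:
      labels:
        app: pod-0
    spec:
      nodeSelector:
        kubernetes.io/hostname: worker-0   # pin the pod to this node
      containers:
      - name: app
        image: nginx                       # placeholder image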
If you don't want to do it that way, and you are deploying via a CI/CD pipeline, you can instead add a few steps to the pipeline to update the DNS entries. The flow goes as below:

Trigger CI/CD pipeline → pod gets deployed → execute a kubectl command to find which node each pod landed on → SSH into the remote machine (with sudo privileges if required) and change the required config files.
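As an illustration only, a hedged shell sketch of the last two steps. It assumes the remote DNS server resolves names from /etc/hosts (e.g. dnsmasq); the pod name, SSH user, and hostname placeholders are assumptions to adapt to your environment:

#!/bin/sh
# Look up the IP of the node that pod-0 is currently scheduled on.
NODE_IP=$(kubectl get pod pod-0 -o jsonpath='{.status.hostIP}')

# Push a host record to the remote DNS machine; assumes it serves records
# from /etc/hosts (adjust for your actual DNS server and record format).
ssh user@<remote machine IP> "echo '${NODE_IP} <pod-0 hostname>' | sudo tee -a /etc/hosts"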
Use a command like the one below (replace <node-name> with the node's name) to get the details of the pods running on a particular node:
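kubectl get pods --all-namespaces -o wide --field-selector spec.nodeName=<node-name>

The -o wide output includes the pod IP and the node each pod is scheduled on, which is the information you need when updating the records on the remote DNS server.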