We have an AKS cluster, and sometimes we end up with an issue where a deployment needs a restart (e.g. cached data has been updated, or the cache is corrupt, and we want to refresh it).
I've been using the approach of scaling the deployment to 0 and then scaling it back up using the commands below:
kubectl scale deployments/<deploymentName> --replicas=0
kubectl scale deployments/<deploymentName> --replicas=1
This does what I expect it to do, but it feels hacky, and it means no pods are running while this process is taking place.
What's a better approach, either for a specific deployment or for all deployments?
If you have a RollingUpdate strategy on your deployments, you can delete the pods in order to replace them; the Deployment controller will create fresh ones.

About the RollingUpdate strategy:

maxSurge specifies the maximum number of Pods that can be created over the desired number of Pods.
maxUnavailable specifies the maximum number of Pods that can be unavailable during the update process.

Delete the pod:

kubectl delete pod <pod-name>
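As a sketch, the two fields above live under the Deployment's spec.strategy block. The names my-app and the image here are placeholders, not anything from your cluster:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                  # placeholder name
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1               # allow at most 1 Pod above the desired count during an update
      maxUnavailable: 0         # keep all desired Pods available, i.e. no downtime
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: registry.example.com/my-app:latest   # placeholder image
```

With maxSurge: 1 and maxUnavailable: 0, the update brings up one new Pod before taking an old one down, which avoids the window with zero running pods that the scale-to-0 approach creates.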
Edit:
Also, you can restart the rollout of the deployment, which will restart the pods but will create a new revision of the deployment as well.
Ex:
kubectl rollout restart deployments/<deployment-name>
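For the "all deployments" part of the question: kubectl rollout restart also accepts a bare resource type, which restarts every deployment in the current namespace, and kubectl rollout status lets you watch a restart until it completes. The namespace and deployment names below are placeholders:

```shell
# Restart every deployment in the current namespace
kubectl rollout restart deployment

# Restart every deployment in a specific namespace
kubectl rollout restart deployment -n <namespace>

# Block until a specific deployment's restart has finished rolling out
kubectl rollout status deployments/<deployment-name>
```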