When you deploy a workload whose image declares a VOLUME in its Dockerfile, that volume is not automatically mapped to a persistent volume (PV/PVC) in Kubernetes.
In fact, unless a Kubernetes volume is mounted at that path, the Docker daemon will create an anonymous Docker volume (driver type: local) when it starts the container. Kubernetes is not aware of it (see: "Are VOLUMEs in Dockerfile persistent in Kubernetes?"). This Docker volume is destroyed when the pod is removed or redeployed.
It is certainly good practice to use a Kubernetes volume instead, even an ephemeral one (or a generic ephemeral volume, still alpha in 1.19).
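For instance, mounting an explicit emptyDir volume over the path the image's VOLUME declares keeps the data's lifecycle visible to Kubernetes. A minimal sketch; the pod name, image, and mount path below are hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example            # hypothetical
spec:
  containers:
  - name: app
    image: mysql:8         # hypothetical image whose Dockerfile declares VOLUME /var/lib/mysql
    volumeMounts:
    - name: data
      mountPath: /var/lib/mysql   # prevents Docker from creating an anonymous volume here
  volumes:
  - name: data
    emptyDir: {}
```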
Q: How to list pods/containers that use such local volumes?
This really matters, since restarting the workload/deployment/stateful-set will cause disruption (loss of the ephemeral volume's data).
Here's a little script you can run on the Kubernetes nodes (for installations that use the Docker daemon).
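The original script is not reproduced here; below is a minimal sketch that serves the same purpose. It assumes a Docker runtime and the standard kubelet container labels `io.kubernetes.pod.name` / `io.kubernetes.pod.namespace`:

```shell
#!/bin/sh
# Sketch: list running containers that mount local-driver (anonymous) Docker
# volumes, together with the Kubernetes pod they belong to.

list_containers() { docker ps -q; }

# Names of local-driver volumes mounted by a container, if any.
local_volumes_of() {
  docker inspect -f \
    '{{range .Mounts}}{{if eq .Driver "local"}}{{.Name}} {{end}}{{end}}' "$1"
}

# namespace/pod of the container (empty for some system containers).
pod_of() {
  docker inspect -f \
    '{{index .Config.Labels "io.kubernetes.pod.namespace"}}/{{index .Config.Labels "io.kubernetes.pod.name"}}' "$1"
}

audit() {
  for c in $(list_containers); do
    vols=$(local_volumes_of "$c")
    if [ -n "$vols" ]; then
      echo "$(pod_of "$c") ($c): $vols"
    fi
  done
  return 0
}

audit
```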
Example:
Note that some system pods/containers don't have a namespace.
This script should be run on each node you want to audit.
An easy way to scan multiple hosts from a central server over SSH is to copy the script above into a file named `local_volumes.sh`, then execute a command like `cat local_volumes.sh | ssh node001 sudo bash -`.
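That one-liner can be wrapped in a small helper to cover a whole list of nodes in one go. This is a hypothetical helper, not from the original post; the node names in the usage comment are placeholders:

```shell
#!/bin/sh
# Run the audit script on each node passed as an argument, over ssh.
audit_nodes() {
  script="$1"
  shift
  for n in "$@"; do
    echo "== $n =="
    ssh "$n" sudo bash - < "$script"
  done
  return 0
}

# usage: audit_nodes local_volumes.sh node001 node002 node003
```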
For Rancher users, this snippet audits all cordoned nodes:

```shell
for s in $(rancher nodes ls | grep cordoned | cut -d " " -f 1); do cat local_volumes.sh | rancher ssh "$s" sudo bash -; done
```