Understanding df output inside a GKE container

I’m trying out ephemeral storage for the first time. I have a GKE node with a single 375G local SSD and a 100G standard boot disk. For the ephemeral storage, I use an emptyDir volume mounted at /workdir, and I set my ephemeral-storage request and limit to 20G.
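The relevant part of my pod spec looks roughly like this (the pod name, container name, and image are simplified placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: workdir-test              # illustrative name
spec:
  containers:
  - name: app
    image: busybox                # placeholder image
    command: ["sleep", "infinity"]
    resources:
      requests:
        ephemeral-storage: "20G"
      limits:
        ephemeral-storage: "20G"
    volumeMounts:
    - name: workdir
      mountPath: /workdir
  volumes:
  - name: workdir
    emptyDir: {}

When I exec into the pod and run df -kh I see: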

Filesystem      Size  Used Avail Use% Mounted on
overlay         369G  7.8G  342G   3% /
tmpfs            64M     0   64M   0% /dev
tmpfs           103G     0  103G   0% /sys/fs/cgroup
/dev/nvme0n1    369G  7.8G  342G   3% /workdir
shm              64M   24K   64M   1% /dev/shm
tmpfs           103G     0  103G   0% /proc/acpi
tmpfs           103G     0  103G   0% /proc/scsi
tmpfs           103G     0  103G   0% /sys/firmware

I expected /workdir to have a size of 20G. Why do / and /workdir appear to be identical, and why do I see nearly the full 375G disk?

Accepted answer (Gari Singh):

When you use emptyDir, Kubernetes does not create a virtual or logical disk for the volume. It's basically just a directory on the underlying local storage used by the node itself (which in this case is the local SSD), so df reports the size of that whole disk. Furthermore, ephemeral-storage requests and limits don't work the way you think: the request is only used by the scheduler to find a node with enough free ephemeral storage, and the limit won't give you a volume backed by a 20G "disk". Instead, the kubelet monitors the storage used by your pod, and if it detects that usage exceeds the limit, it marks the pod for eviction.
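As an aside, emptyDir does accept an optional sizeLimit field in the pod spec (a standard Kubernetes API field; the volume name below is just the one from the question):

volumes:
- name: workdir
  emptyDir:
    sizeLimit: 20Gi   # enforced by eviction, not by shrinking the filesystem

Note that this is enforced the same way: the kubelet evicts the pod once the volume grows past the limit, rather than mounting a smaller filesystem, so df will still show the full disk.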

I believe that to do what you want, you'll need to wait for generic ephemeral volumes to mature.
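For reference, here is a rough sketch of what that could look like, based on the Kubernetes docs for generic ephemeral volumes (the pod name, container name, and image are placeholders, and the storage class name is an assumption that depends on your cluster):

apiVersion: v1
kind: Pod
metadata:
  name: workdir-test              # illustrative name
spec:
  containers:
  - name: app
    image: busybox                # placeholder image
    volumeMounts:
    - name: workdir
      mountPath: /workdir
  volumes:
  - name: workdir
    ephemeral:
      volumeClaimTemplate:
        spec:
          accessModes: ["ReadWriteOnce"]
          storageClassName: standard   # assumption: a PD-backed GKE class
          resources:
            requests:
              storage: 20Gi

Because this provisions a real 20Gi volume behind the scenes, df inside the pod should report a ~20G filesystem at /workdir, unlike the emptyDir case.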