rook-ceph provisions PVs even when there's no capacity left


I'm using rook-ceph in kubernetes. I deployed rook-ceph-operator and rook-ceph-cluster Helm charts.

I have 5 worker nodes with 2 OSDs each, and each OSD has 100GB. Each node therefore contributes 200GB, so in total I have 5 * 200GB = 1TB of raw storage. In the CephBlockPool I set replicated size=3 and failureDomain=host.
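For reference, a pool like the one described would look roughly like this (a minimal sketch; the pool name and namespace are assumptions, not taken from the question):

```yaml
# Hypothetical CephBlockPool matching the setup above:
# 3 replicas, spread across hosts (failureDomain: host).
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replicapool
  namespace: rook-ceph
spec:
  failureDomain: host
  replicated:
    size: 3
```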

I created a Pod with a PVC that asks for 500Gi and it works. I can then create more Pods with PVCs larger than 500Gi and those work too. In fact I can create hundreds of PVCs and get hundreds of Bound PVs, even though their total requested size is far more than 1TB, which is the maximum.
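A PVC like the one described would be something along these lines (the claim and StorageClass names are assumptions for illustration):

```yaml
# Hypothetical 500Gi claim against the rook-ceph block StorageClass.
# Ceph thin-provisions RBD images, so this binds even on a ~1TB cluster.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: big-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: rook-ceph-block
  resources:
    requests:
      storage: 500Gi
```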

What I expected: if creating a PV would push the total beyond 1TB, it should NOT be created. I don't want thin provisioning.

One last thing: my REAL usable storage is 1TB/3, because I use a replica count of 3.

I tried using ResourceQuotas with requests.storage, and it works, but the problem is that it's applied per namespace, and I have a lot of namespaces. I need to limit PVC requests across the whole cluster.
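The per-namespace quota described above looks roughly like this (the quota name, namespace, and 300Gi limit are placeholder assumptions):

```yaml
# Caps the sum of all PVC storage requests in one namespace.
# Works, but must be duplicated in every namespace.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: storage-quota
  namespace: my-namespace   # hypothetical namespace
spec:
  hard:
    requests.storage: 300Gi
```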

My question is: How can I limit the storage requests in k8s to avoid request more than the maximum storage capacity for ALL the cluster?

1 Answer


they SHOULD NOT be created

Says who? You did not consume all that space yet, did you?

With Ceph, you can allocate more storage space than you actually have.

When you reach the nearfull-ratio (85% capacity by default), you'll start seeing warnings. When you reach the full-ratio (95% by default), writes will be blocked. It's up to you to monitor your cluster capacity and add OSDs or purge volumes when needed.

To my knowledge, there is no thick provisioning with RBD.

You could use ResourceQuotas to limit the allocatable storage space per Namespace. OpenShift also has ClusterResourceQuotas, which can ensure no Namespace escapes that quota.
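On OpenShift, a cluster-wide storage cap could be sketched like this (the quota name, the 333Gi figure, and the empty label selector are assumptions; adjust the selector to whatever labels your namespaces actually carry):

```yaml
# OpenShift-only: one quota enforced across all selected namespaces.
apiVersion: quota.openshift.io/v1
kind: ClusterResourceQuota
metadata:
  name: cluster-storage
spec:
  selector:
    labels:
      matchLabels: {}   # empty selector: match every labeled namespace
  quota:
    hard:
      requests.storage: 333Gi   # ~1TB raw / 3 replicas
```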