I'm trying to use an existing NFS server with StatefulSets. Creation of PersistentVolumeClaims seems automatic using volumeClaimTemplates.
Problem:
But since each PersistentVolumeClaim claims an entire PersistentVolume, I have to create the PersistentVolumes manually for all the replicas.
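For reference, each replica currently needs a hand-made PV along these lines (a minimal sketch; the server address and export path are placeholders for my setup, and the type: nfs label is what the claim template's selector matches):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongo-nfs-pv-0
  labels:
    type: nfs                  # matched by the selector in the claim template below
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 10.0.0.1           # placeholder NFS server address
    path: /exports/mongo-0     # placeholder export path, one per replica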
Is there a way to dynamically provision NFS persistent volumes in Kubernetes?
Note: the NFS server itself is static; I just need to create the volumes in K8s dynamically, not the NFS server itself.
I'm using the mongo StatefulSet example:
---
apiVersion: v1
kind: Service
metadata:
  name: mongo
  labels:
    name: mongo
spec:
  ports:
  - port: 27017
    targetPort: 27017
  clusterIP: None
  selector:
    role: mongo
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: mongo
spec:
  serviceName: "mongo"
  replicas: 3
  template:
    metadata:
      labels:
        role: mongo
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: mongo
        image: mongo
        command:
        - mongod
        - "--replSet"
        - rs0
        - "--smallfiles"
        - "--noprealloc"
        ports:
        - containerPort: 27017
        volumeMounts:
        - name: mongo-data
          mountPath: /data/db
      - name: mongo-sidecar
        image: cvallance/mongo-k8s-sidecar
        env:
        - name: MONGO_SIDECAR_POD_LABELS
          value: "role=mongo"
  volumeClaimTemplates:
  - metadata:
      name: mongo-data
    spec:
      accessModes:
      - ReadWriteMany
      resources:
        requests:
          storage: 5Gi
      selector:
        matchLabels:
          type: nfs
It needs 3 PersistentVolumeClaims, so I have to create 3 PVs for it to use. Can these be provisioned dynamically on the NFS server, the way other dynamic provisioners such as aws-ebs do it? Is this the proper way to get a StatefulSet with NFS persistent volumes?
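For comparison, with aws-ebs, dynamic provisioning is just a StorageClass and the claims get volumes created on demand (a sketch; I'm after the NFS equivalent of this):

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: fast
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2    # EBS volume type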
This is a work in progress I came back to just yesterday (my solution), but my advice, if it suits your purposes (or for anyone finding this later), is to check out GlusterFS and Heketi.
Information is included below, but the TL;DR is that GlusterFS is your NFS and Heketi can auto-provision the rest. My GitHub repo automates the setup. It's ugly, but it works for me, and I'll be making it less ugly with what I know now.
https://github.com/stevenaldinger/gke-glusterfs-heketi
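As a rough sketch of where that lands you: once GlusterFS and Heketi are running, you point the in-tree glusterfs provisioner at Heketi's REST API with a StorageClass, and PVCs (including StatefulSet volumeClaimTemplates) get volumes provisioned automatically. The resturl, user, and secret names below are placeholders for your own Heketi deployment:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: glusterfs
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://heketi.default.svc.cluster.local:8080"  # placeholder Heketi endpoint
  restauthenabled: "true"
  restuser: "admin"                # placeholder Heketi user
  secretNamespace: "default"
  secretName: "heketi-secret"      # placeholder secret holding the Heketi key

With that in place, you drop the selector from the volumeClaimTemplates and reference the class instead (via storageClassName, or the beta annotation on older clusters), and the three PVs get created for you.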