I want to create large EBS volumes (e.g. 1TB) dynamically for each jenkins worker pod on my EKS cluster such that each persistentVolume representing an EBS volume only persists for the lifespan of the pod.
I can't change the size of the nodes on the EKS cluster, so I'm trying to use external EBS volumes via the ebs-csi-driver helm chart.
I can't seem to find a configuration for the podTemplate that enables dynamic provisioning of the persistentVolumeClaim and subsequent EBS persistentVolume for hosting my builds. The only way I'm able to successfully provision and mount EBS persistentVolumes dynamically onto my worker pods is by using my own persistentVolumeClaim, which I have to reference in the podTemplate, manually deploy with kubectl apply -f pvc.yaml, and tear down with kubectl delete -f pvc.yaml for each build cycle.
So far I've been able to provision one persistentVolume at a time using the following:
persistentVolumeClaim:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ebs-claim
  namespace: jenkins
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ebs-sc
  resources:
    requests:
      storage: 1000Gi
```

StorageClass:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-sc
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer
```

podTemplate:

```yaml
apiVersion: v1
kind: Pod
spec:
  serviceAccount: jenkins-sa
  securityContext:
    fsGroup: 1000
  containers:
    - name: jenkins-worker
      image: <some-image>
      imagePullPolicy: Always
      command:
        - cat
      tty: true
      securityContext:
        runAsGroup: 1000
        runAsUser: 1000
      volumeMounts:
        - name: build-volume
          mountPath: <some-mount-path>
  volumes:
    - name: build-volume
      persistentVolumeClaim:
        claimName: ebs-claim
```

- I annotated the "jenkins-sa" serviceAccount with an AWS IAM Role ARN (eks.amazonaws.com/role-arn) that has all the AWS IAM policies and trust relationships needed to allow the jenkins worker Pod to provision an AWS EBS persistentVolume (as described in the AWS ebs-csi-driver docs); a sketch of the annotated ServiceAccount follows below.
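For reference, the annotation on the ServiceAccount looks roughly like this (a minimal sketch; the account ID and role name are placeholders for whatever IAM role carries the ebs-csi-driver permissions):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins-sa
  namespace: jenkins
  annotations:
    # Placeholder ARN: substitute the IAM role that has the EBS CSI policies attached
    eks.amazonaws.com/role-arn: arn:aws:iam::<account-id>:role/<jenkins-ebs-csi-role>
```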
Once the pipeline has started and the jenkins worker pod is executing a build,
I'm able to ssh into it and confirm with df that the external EBS volume is mounted to the build directory.
Happy days!
However, I noticed the persistentVolume provisioned by the jenkins worker pod was lingering around
after the build finished and the pod was destroyed. Apparently this is because the PersistentVolumeClaim
needs to be deleted before the persistentVolume it's bound to can be released (see here for details).
After some digging it looks like I need to specify a dynamicPVC() in either the
volumes or workspaceVolume spec in the podTemplate yaml.
However, I'm struggling to find documentation around dynamic PersistentVolumeClaim provisioning
beyond the jenkins kubernetes plugin page.
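That page does show dynamicPVC being passed to the podTemplate pipeline step as its workspaceVolume, roughly along these lines (a rough sketch based on my reading of the plugin examples; the size and StorageClass are my own placeholders):

```groovy
podTemplate(workspaceVolume: dynamicPVC(requestsSize: '1000Gi', storageClassName: 'ebs-sc')) {
    node(POD_LABEL) {
        // The agent workspace should be backed by the dynamically provisioned PVC.
        sh 'df -h'
    }
}
```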
Unfortunately, when I try to create persistentVolumeClaims dynamically from the raw pod YAML (the updated podTemplate below),
it doesn't work and just uses the maximum storage that the node can provision (which is limited to 20Gi).
- Updated podTemplate:

```yaml
apiVersion: v1
kind: Pod
spec:
  serviceAccount: jenkins-sa
  workspaceVolume:
    dynamicPVC:
      accessModes: ReadWriteOnce
      requestsSize: 800Gi
      storageClassName: ebs-sc
  securityContext:
    fsGroup: 1000
  containers:
    - name: jenkins-worker
      image: <some-image>
      imagePullPolicy: Always
      command:
        - cat
      tty: true
      securityContext:
        runAsGroup: 1000
        runAsUser: 1000
```
I am expecting a new PersistentVolumeClaim and PersistentVolume to be created
by the podTemplate when I start my build pipeline, and subsequently destroyed
when the pipeline finishes and the pod is released.
Many thanks to Saifeddine Rajhi for pointing me in the right direction!
After investigating how the Jenkins helm chart implements its StatefulSet in the jenkinsci repo,
and looking at similar issues around provisioning persistentVolumes dynamically for jenkins worker pods, I achieved the behavior I was looking for by specifying dynamicPVC via the workspaceVolume argument (a plugin-level setting rather than a field in the pod spec), as shown in the sketch below.
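A minimal sketch of that configuration, expressed with the kubernetes plugin's podTemplate pipeline step and reusing the placeholders, sizes and ebs-sc StorageClass from above (security contexts omitted for brevity; adapt to however your pod templates are defined):

```groovy
podTemplate(
    serviceAccount: 'jenkins-sa',
    // The workspace PVC is created for the pod and cleaned up when the pod goes away.
    workspaceVolume: dynamicPVC(
        accessModes: 'ReadWriteOnce',
        requestsSize: '800Gi',
        storageClassName: 'ebs-sc'
    ),
    containers: [
        containerTemplate(name: 'jenkins-worker', image: '<some-image>', command: 'cat', ttyEnabled: true)
    ]
) {
    node(POD_LABEL) {
        container('jenkins-worker') {
            // The EBS-backed workspace is mounted at /home/jenkins/agent,
            // so the build directory can simply live underneath it.
            sh 'df -h /home/jenkins/agent'
        }
    }
}
```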
I then just point my build directory to the default agent workspace directory where the dynamically provisioned EBS volume is mounted in my container (/home/jenkins/agent).

Update (30/11/2023), following the comment from Bernard Halas. Thanks for your input! Given that workspaceVolume is not a supported pod spec API element, the podTemplate YAML approach doesn't support dynamic provisioning of PVCs, so it is not appropriate for my use case; this certainly explains why that approach wasn't working.