I have set up the following Helm chart values for the Bitnami Kafka & ZooKeeper charts:
```yaml
kafka:
  persistence:
    enabled: true
    accessModes: ["ReadWriteOnce"]
    size: 50M
    mountPath: /bitnami/kafka
    storageClass: default
    existingClaim: ""
zookeeper:
  volumePermissions:
    enabled: true
  persistence:
    enabled: true
    storageClass: default
    existingClaim: ""
    accessModes: ["ReadWriteOnce"]
    size: 50M
    dataLogDir:
      size: 50M
      existingClaim: ""
```
It seems the PVC created for Kafka is 16 Gi in size, ignoring the `size` I set. Is there a way to set a very small disk size for testing purposes?
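For comparison, the object to inspect is the PersistentVolumeClaim the chart renders for each broker pod; the configured `size` should appear under `spec.resources.requests.storage`. A sketch of what such a claim looks like (the metadata name is hypothetical and depends on the release name):

```yaml
# Hypothetical rendered PVC for a release named "my-kafka";
# if the chart honors the values above, storage should be 50M,
# not the 16Gi actually observed.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-my-kafka-0
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 50M
  storageClassName: default
```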
Once `storageClass: default` is changed to `storageClass: ""` or `storageClass: "-"`, the chart starts to take the supplied values into account. Locally I could go as low as 50M, but on the cluster I was only able to go as low as 1Gi. I think this relates to the existing PV setup.