VerneMQ not retaining messages on restart, even after persistent storage is configured

We're running a cluster with two VerneMQ brokers. Everything works fine as long as we restart one broker at a time, but as soon as both brokers go down together, all the retained messages are lost.

To fix the issue, we tried configuring a persistent volume for VerneMQ, and we can see that the claim is bound to VerneMQ and the volume is created. Even after this, when we tested our scenario by restarting both pods, we found that the retained messages were not restored, which results in data loss.
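For reference, this is roughly how we test the scenario, sketched here with the mosquitto command-line clients (any MQTT client with retain support would do; the broker address is a placeholder and the StatefulSet name is taken from the pod/PVC names below):

# publish a retained message (-r sets the retain flag)
mosquitto_pub -h <broker-address> -t test/retained -m "hello" -r -q 1

# take both brokers down at the same time, then bring them back
kubectl scale statefulset vernemq --replicas=0
kubectl scale statefulset vernemq --replicas=2

# once the pods are Ready again, the retained message should be delivered
# right after subscribing (-C 1 exits after one message)
mosquitto_sub -h <broker-address> -t test/retained -C 1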

Below is the configuration we are using to create the storage class:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: azure-storage
provisioner: kubernetes.io/azure-disk
parameters:
  storageaccounttype: Standard_LRS
  kind: managed

Our VerneMQ configuration (the volumeClaimTemplates section of the chart's StatefulSet) looks like this:

{{- if .Values.persistentVolume.enabled }}
  volumeClaimTemplates:
    - metadata:
        name: data
        annotations:
        {{- range $key, $value := .Values.persistentVolume.annotations }}
          {{ $key }}: {{ $value }}
        {{- end }}
      spec:
        accessModes:
        {{- range .Values.persistentVolume.accessModes }}
          - {{ . | quote }}
        {{- end }}
        resources:
          requests:
            storage: {{ .Values.persistentVolume.size }}
      {{- if .Values.persistentVolume.storageClass }}
      {{- if (eq "-" .Values.persistentVolume.storageClass) }}
        storageClassName: ""
      {{- else }}
        storageClassName: "{{ .Values.persistentVolume.storageClass }}"
      {{- end }}
      {{- end }}
{{- else }}
        - name: data
{{- end }}
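
The claim created by this template only helps if it is actually mounted at VerneMQ's data directory, where the broker keeps its on-disk metadata store (including retained messages). A minimal sketch of the matching container spec, assuming the official Docker image and its default data path /vernemq/data (adjust if your chart overrides it):

containers:
  - name: vernemq
    image: vernemq/vernemq
    volumeMounts:
      - name: data                 # must match the volumeClaimTemplate name above
        mountPath: /vernemq/data   # default data directory of the official image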

The PVCs are created as expected; kubectl get pvc returns them as bound:

NAME             STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
data-vernemq-0   Bound    xxx      5Gi        RWO            default        19h
data-vernemq-1   Bound    xxx      5Gi        RWO            default        18h
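
One sanity check is to confirm that the claim is actually mounted where VerneMQ writes its data, and that the files survive a restart (the path is assumed from the official image, the pod name from the PVCs above):

# the directory should be non-empty once messages have been retained,
# and its contents should be unchanged after a restart
kubectl exec vernemq-0 -- ls -la /vernemq/data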

Storage class:

NAME                    PROVISIONER                RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
azure-storage           kubernetes.io/azure-disk   Delete          Immediate              false                  19h

Is there something I am missing in the configuration?

1 Answer

Klevi Merkuri

I would also suggest using the CSI provisioner disk.csi.azure.com; starting from Kubernetes 1.21 this provisioner is available by default. Also, just as a test, you can use a storage class that points to a storage account (Azure Files) and then have a look at the files the pod creates there. One other thing I can think of is to add the options below to your storage class (note that dir_mode and file_mode are CIFS mount options, so they belong on an Azure Files-backed class):

mountOptions:
  - dir_mode=0777
  - file_mode=0777
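
Putting the first suggestion together, a sketch of what such a CSI-based storage class could look like (the name is illustrative, and reclaimPolicy/volumeBindingMode are optional hardening choices rather than required settings):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: azure-disk-csi                    # illustrative name
provisioner: disk.csi.azure.com           # CSI driver, default from Kubernetes 1.21
parameters:
  skuName: Standard_LRS                   # CSI equivalent of storageaccounttype
reclaimPolicy: Retain                     # keep the disk (and its data) if the PVC is deleted
volumeBindingMode: WaitForFirstConsumer   # provision the disk in the zone where the pod lands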