MongoDB data lost in K8s after restart

I'm running a Kubernetes cluster with a MongoDB StatefulSet of 3 replicas (see the manifests below). While importing a dump, my pods crash because of disk pressure on my nodes. That's fine of course, but when the pods are restarted all my data is gone and my replica set config is gone as well. When I check my primary with rs.status(), this is the response:

rs.status()
MongoServerError[NotYetInitialized]: no replset config has been received

After running rs.initiate(... I can connect again, but without any data. Why does MongoDB clear my /data/db folder? /data is a persistent volume; after a clean shutdown and restart my data is still there in /data/db.
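
For reference, this is roughly how I verify that the claims and volumes survive the restarts (a sketch; the PVC names follow the StatefulSet's default <claimTemplate>-<pod> naming):

kubectl get pvc -n mongodb            # expect mongodb-pvc-mongodb-0/1/2 with status Bound
kubectl get pv                        # check STATUS and RECLAIM POLICY of the backing volumes
kubectl get sc premium-retain -o yaml # confirm the storage class really has reclaimPolicy: Retain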

In the past I used Deployments instead of a StatefulSet, and when I restarted all my pods in one go my data was lost as well. That's the reason I switched to a StatefulSet, but the same behaviour happens over and over again. Has anyone run into the same problem or knows what the issue is?
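
To rule out the restarted pods silently getting fresh volumes, I also check that a pod re-attaches the same claim after a restart, something like this (mongodb-0 being the first pod of the StatefulSet):

kubectl describe pod mongodb-0 -n mongodb | grep -A 2 ClaimName   # should point at mongodb-pvc-mongodb-0
kubectl get pod mongodb-0 -n mongodb -o jsonpath='{.spec.volumes[*].persistentVolumeClaim.claimName}'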

---
apiVersion: v1 
kind: Service 
metadata: 
  namespace: mongodb 
  name: mongodb
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec: 
  type: LoadBalancer 
  ports: 
  - protocol: TCP 
    port: 27017 
    targetPort: 27017 
  selector: 
    app: mongodb

---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongodb
  namespace: mongodb
spec:
  selector:
    matchLabels:
      app: mongodb
  serviceName: mongodb
  replicas: 3
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      containers:
        - name: mongodb
          image: private-repo/mongo:7.0
          imagePullPolicy: Always
          args:
            - --bind_ip_all
            - --replSet
            - rs0
            - --dbpath
            - /data/db
            - --keyFile
            - /etc/mongo/keyfile
            - --setParameter
            - authenticationMechanisms=SCRAM-SHA-256 
            - --auth
            - --wiredTigerCacheSizeGB
            - "6"
          ports:
          - name: mongodb
            containerPort: 27017
          volumeMounts:
          - name: mongodb-pvc 
            mountPath: /data
          envFrom:
          - configMapRef:
              name: mongodb
          - secretRef:
              name: mongodb
  volumeClaimTemplates:
  - metadata:
      name: mongodb-pvc 
      namespace: mongodb 
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: premium-retain
      resources:
        requests:
          storage: 256Gi

I use a private Docker image instead of the official one, with a keyfile available at /etc/mongo/keyfile (MongoDB won't start if this file doesn't exist).

The data definitely exists on the persistent volume: the collection files are visible on the volume mount, and the used disk space shows up on the mounted drive as well (roughly what I check is shown below).
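
A sketch of that check, again assuming mongodb-0 as the first pod of the StatefulSet:

kubectl exec -n mongodb mongodb-0 -- ls -la /data/db   # WiredTiger and collection-*.wt files are present
kubectl exec -n mongodb mongodb-0 -- df -h /data       # used space sits on the mounted 256Gi volume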

Config:

---
apiVersion: v1 
kind: Secret 
metadata: 
  name: mongodb 
  namespace: mongodb
type: Opaque
stringData:
  MONGO_INITDB_ROOT_USERNAME: "USERNAME"
  MONGO_INITDB_ROOT_PASSWORD: "PASSWORD"
  MONGO_INITDB_DATABASE: "db"

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: mongodb
  namespace: mongodb
  labels:
    app: mongodb
data:
  ENV: "prod"

Dockerfile:

FROM mongo:7.0

COPY /mongodb-keyfile /etc/mongo/keyfile
RUN chown -R 999:999 /etc/mongo/keyfile
RUN chmod 400 /etc/mongo/keyfile
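
To rule out a keyfile permission problem after a restart, I check the file inside a running pod, something like:

kubectl exec -n mongodb mongodb-0 -- ls -l /etc/mongo/keyfile   # expect mode 400, owned by 999:999 (mongodb)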