Kubernetes strange behavior with Persistent Volume Claim and volumeMounts


I can't understand why the pod's filesystem is different when the pod is deployed with a volumeMount compared to the same pod without one. In detail, using the following deployment file:

apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: error-manager
  name: error-manager
spec:
  replicas: 1
  selector:
    matchLabels:
      app: error-manager
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: error-manager
    spec:
      containers:
      - image: almaviva.jfrog.io/conversational-tech-docker/error-manager:1.0.4
        name: error-manager
        ports:
          - containerPort: 8001
        resources: {}
        imagePullPolicy: Always
      imagePullSecrets:
        - name: conv-ai-reg
status: {}
---
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: error-manager
  name: error-manager
spec:
  ports:
  - name: 8001-8001
    port: 8001
    protocol: TCP
    targetPort: 8001
  selector:
    app: error-manager
  type: ClusterIP
status:
  loadBalancer: {}

The pod has the following files with their respective sizes in the /sqlite path:

root@error-manager-54db47994c-brfsh:/sqlite# ls -l
total 980
-rw-r--r-- 1 root root 376832 Mar  4 10:26 db.sqlite3
-rw-r--r-- 1 root root 626688 Mar  4 10:26 db2.sqlite3

Instead, defining a PVC and modifying the Deployment as follows:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: sqlite-pv-claim-error-manager
spec:
  storageClassName: nfs-azurefile-csi-premium
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: error-manager
  name: error-manager
spec:
  replicas: 1
  selector:
    matchLabels:
      app: error-manager
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: error-manager
    spec:
      containers:
      - image: almaviva.jfrog.io/conversational-tech-docker/error-manager:1.0.4
        name: error-manager
        ports:
          - containerPort: 8001
        volumeMounts:
          - name: sqlite-persistent-volume
            mountPath: /sqlite
        resources: {}
        imagePullPolicy: Always
      imagePullSecrets:
        - name: conv-ai-reg
      volumes:
      - name: sqlite-persistent-volume
        persistentVolumeClaim:
          claimName: sqlite-pv-claim-error-manager
status: {}
---
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: error-manager
  name: error-manager
spec:
  ports:
  - name: 8001-8001
    port: 8001
    protocol: TCP
    targetPort: 8001
  selector:
    app: error-manager
  type: ClusterIP
status:
  loadBalancer: {}

The pod has the following files with their respective sizes:

root@error-manager-5ff767b9cb-s57wv:/sqlite# ls -lh
total 0
-rw-r--r-- 1 root root 0 Mar  5 00:12 db.sqlite3
root@error-manager-5ff767b9cb-s57wv:/sqlite#

What could be the problem?

2 Answers

Answer from RichardoC:

When you recreated the pod, any data that was on the container's filesystem was deleted.
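
A quick way to confirm that /sqlite is now the freshly provisioned (and therefore empty) volume rather than the directory baked into the image is to look at the mount inside the pod and at the claim itself. This is only a sketch, using the resource names from the manifests above and assuming df is available in the image:

kubectl exec deploy/error-manager -- df -h /sqlite
kubectl describe pvc sqlite-pv-claim-error-manager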

Have you re-run whatever data ingestion put the data into that database, and then checked the size?
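
If the intent is to keep using the database files that are baked into the image, one option is to seed the volume with an initContainer before the main container starts. This is only a sketch, assuming the files really live at /sqlite inside the image (as the first ls output suggests) and that a POSIX sh is available there:

      initContainers:
      - name: seed-sqlite
        image: almaviva.jfrog.io/conversational-tech-docker/error-manager:1.0.4
        # Mount the PVC at /seed so the image's original /sqlite stays visible,
        # then copy the databases only if the volume has not been seeded yet.
        command: ["sh", "-c", "[ -f /seed/db.sqlite3 ] || cp /sqlite/*.sqlite3 /seed/"]
        volumeMounts:
        - name: sqlite-persistent-volume
          mountPath: /seed

The main container keeps its existing volumeMount at /sqlite; the copy runs on the first start only, and later restarts leave whatever is already on the volume untouched.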

Answer from Ron Etch:

You might want to run the following command to inspect the image that was built (the metadata is printed as JSON by default):

docker image inspect almaviva.jfrog.io/conversational-tech-docker/error-manager:1.0.4

If you built the image yourself, you can also pass the --output (-o) flag so BuildKit exports the resulting filesystem to a local directory that you can browse:

docker build -o out .
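
To check directly whether the database files are actually present in the image (assuming local Docker access and pull rights for the registry), something like this also works:

docker pull almaviva.jfrog.io/conversational-tech-docker/error-manager:1.0.4
docker run --rm --entrypoint ls almaviva.jfrog.io/conversational-tech-docker/error-manager:1.0.4 -l /sqlite

If both .sqlite3 files show up there with the expected sizes, the image is fine and the empty /sqlite in the second deployment comes from the volume being mounted on top of that path, not from a broken image.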