GKE - Filestore not accessible


I have a Filestore instance in my custom VPC:

name: my-filler

NFS mount point: 10.165.122.140:/bindata

connect mode: DIRECT_PEERING

I am trying to access it from my GKE cluster, following this guide: https://github.com/kubernetes-sigs/gcp-filestore-csi-driver/blob/master/docs/kubernetes/pre-provisioned-pv.md

PV:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pre-pv
  namespace: sylius
  annotations:
    pv.kubernetes.io/provisioned-by: filestore.csi.storage.gke.io
spec:
  storageClassName: csi-filestore
  capacity:
    storage: 1Ti
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  volumeMode: Filesystem
  csi:
    driver: filestore.csi.storage.gke.io
    # "modeInstance/<zone>/<filestore-instance-name>/<filestore-share-name>"
    volumeHandle: "modeInstance/europe-west3-a/my-filler/bindata"
    volumeAttributes:
      ip: 10.165.122.140
      volume: bindata
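
The deployment below mounts a claim named preprov-pvc, which is not shown in the question. A matching pre-provisioned PVC would look roughly like this (a sketch following the linked CSI driver example; the names preprov-pvc, sylius, my-pre-pv, and csi-filestore are taken from the question):

```yaml
# Sketch of the PVC assumed by the deployment below.
# It binds explicitly to the pre-provisioned PV via volumeName.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: preprov-pvc
  namespace: sylius
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: csi-filestore
  volumeName: my-pre-pv
  resources:
    requests:
      storage: 1Ti
```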

But when I try to access it from a pod, it does not work.

For example, here is a sample deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: busybox
        volumeMounts:
        - name: config
          mountPath: /etc/vol1
      volumes:
      - name: config
        persistentVolumeClaim:
          claimName: preprov-pvc

The pod status is stuck in ContainerCreating.

When I describe the pod, I see the following events:

Events:
  Type     Reason       Age                  From               Message
  ----     ------       ----                 ----               -------
  Normal   Scheduled    9m26s                default-scheduler  Successfully assigned sylius/myapp-789dc79fc9-5wcmj to gke-otcp-sylius-dev-private-pool-458796a9-tk6c
  Warning  FailedMount  39s (x4 over 7m23s)  kubelet            Unable to attach or mount volumes: unmounted volumes=[config], unattached volumes=[config kube-api-access-5wpcd]: timed out waiting for the condition

I have also installed the CSI driver in the GKE cluster.
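
One quick way to double-check that the driver is actually registered and that the managed addon is on (a sketch; CLUSTER_NAME and REGION are placeholders, and the gcloud format path is an assumption based on the addon's config field name):

```shell
# Is the Filestore CSI driver registered with the API server?
kubectl get csidrivers
# Expect filestore.csi.storage.gke.io in the list.

# Is the managed Filestore CSI addon enabled on the cluster?
gcloud container clusters describe CLUSTER_NAME \
  --region REGION \
  --format="value(addonsConfig.gcpFilestoreCsiDriverConfig.enabled)"
```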

The PV seems to be fine:

kubectl get pv

NAME        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                STORAGECLASS    REASON   AGE
my-pre-pv   1Ti        RWX            Retain           Bound    sylius/preprov-pvc   csi-filestore            23m

The PVC is also bound:

kubectl get pvc

NAME          STATUS   VOLUME      CAPACITY   ACCESS MODES   STORAGECLASS    AGE
preprov-pvc   Bound    my-pre-pv   1Ti        RWX            csi-filestore   24m

Any idea why the volume cannot be attached?

Is it because the connect mode is not PRIVATE_SERVICE_ACCESS? If so, how do I set up the peering, and between which two networks?

1 Answer

You need to enable the Filestore CSI driver in your cluster.

Follow this guide: https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/filestore-csi-driver
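
For an existing cluster, that guide boils down to enabling the managed addon (a sketch; CLUSTER_NAME and LOCATION are placeholders):

```shell
# Enable the managed Filestore CSI driver addon on an existing cluster.
gcloud container clusters update CLUSTER_NAME \
  --update-addons=GcpFilestoreCsiDriver=ENABLED \
  --location=LOCATION
```

Note that installing the open-source driver manually (as the question mentions) is different from enabling the GKE-managed addon; the guide above covers the managed path.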