I am wondering if there is an easy way to mount Amazon EFS (Elastic File System) as a volume in a local docker-compose setup.
The reason is that for local development, any volumes I create are persisted on my laptop - if I were to change machines, I couldn't access any of that underlying data. A cloud NFS share would solve this problem, since it would be readily available from anywhere.
The AWS documentation (https://docs.aws.amazon.com/efs/latest/ug/efs-onpremises.html) seems to suggest using AWS Direct Connect / VPN - is there any way to avoid this by opening port 2049 (NFS traffic) to all IP addresses in a security group and applying that security group to a newly created EFS?
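Something along these lines, where the security group ID is just a placeholder:

# hypothetical rule: allow inbound NFS (TCP 2049) from anywhere, then attach
# the group to the EFS mount targets (the group ID below is a placeholder)
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 2049 \
    --cidr 0.0.0.0/0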
Here is my docker-compose.yml:
version: "3.2"
services:
postgres_db:
container_name: "postgres"
image: "postgres:13"
ports:
- 5432:5432
volumes:
- type: volume
source: postgres_data
target: /var/lib/postgresql/data
volume:
nocopy: true
environment:
POSTGRES_USER: 'admin'
POSTGRES_PASSWORD: "password"
volumes:
postgres_data:
driver_opts:
type: "nfs"
o: "addr=xxx.xx.xx.xx,nolock,soft,rw"
device: ":/docker/example"
I am getting the below error:
ERROR: for postgres_db Cannot start service postgres_db: error while mounting volume '/var/lib/docker/volumes/flowstate_example/_data': failed to mount local volume: mount :/docker/example:/var/lib/docker/volumes/flowstate_example/_data, data: addr=xxx.xx.xx.xx,nolock,soft: connection refused
I interpret this to mean that my laptop is not part of the EFS VPC, and hence it is unable to mount the EFS.
For added context, I am looking to dockerize a web scraping setup and have the data volume persisted in the cloud so I can connect to it from anywhere.
EFS assumes NFSv4, so the volume's mount options need to request NFS version 4.
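A minimal sketch of the adjusted volume definition, keeping the placeholder address and export path from the question and using the NFSv4.1 options AWS recommends for EFS:

volumes:
  postgres_data:
    driver_opts:
      # assumption: standard EFS NFSv4.1 mount options; addr is still a placeholder
      type: "nfs"
      o: "addr=xxx.xx.xx.xx,nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,rw"
      device: ":/docker/example"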
Of course, the referenced NFS export/path must exist; Swarm will not automatically create non-existing folders.
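One way to create the path is to mount the EFS root from a host inside the VPC and create the folder there; a sketch, assuming a placeholder mount-target DNS name:

# assumption: run on an EC2 instance inside the EFS VPC; the DNS name is a placeholder
sudo mkdir -p /mnt/efs
sudo mount -t nfs4 -o nfsvers=4.1 fs-xxxxxxxx.efs.eu-west-1.amazonaws.com:/ /mnt/efs
sudo mkdir -p /mnt/efs/docker/example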
Make sure to delete any old Docker volumes of this faulty kind/name manually (on all swarm nodes!) before recreating the stack.
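For example, using the volume name that appears in the error message above:

docker volume ls
# the name below is taken from the error message; adjust it to your project/volume name
docker volume rm flowstate_example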
This is important to understand: Docker NFS volumes are really only a declaration of where to find the data. They are not updated when you change your docker-compose.yml, so you must remove the old volume before any new configuration will take effect.
For more information on why the volume couldn't be mounted, see the output of the failing service's tasks.
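For a swarm stack, one way to see the full error text is (the stack name here is only a guess based on the error message):

# assumption: stack is named "flowstate"; --no-trunc shows the complete mount error
docker service ps --no-trunc flowstate_postgres_db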
Also make sure you can mount the NFS export manually via
mount -t nfs4 ...
and check what is exported with
showmount -e your.efs.ip.address
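A sketch of such a test mount, with a placeholder address and a throwaway mount point:

# assumption: the address is a placeholder; NFSv4.1 is the version EFS supports
sudo mkdir -p /mnt/efs-test
sudo mount -t nfs4 -o nfsvers=4.1 your.efs.ip.address:/ /mnt/efs-test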