I've recently finished developing my app locally and wanted to transfer it to a remote server. There are four containers in total.
Locally I am running WSL 2 with Ubuntu 22.04.2 LTS; the server runs Ubuntu 23.10. The only image I had modified was the Postgres one, but I had trouble importing it and kept getting unexpected EOF errors. After many retries, I decided to recreate the image from the original on the remote server (installing the pgvector extension).
Content for three of my containers is copied at runtime from the source directory, so the only volume I had to deal with was the Postgres one. I copied it from the WSL directory, packed it into a tar archive, and scp'd it to /tmp on the server, where I unpacked it and moved it to /var/lib/docker/volumes. The path to the volume is /var/lib/docker/volumes/hmh_images_postgres_data. It also appears that the volume is correctly mounted in the container.
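A note on the tar step: the Postgres data directory is permission-sensitive, so the archive has to preserve modes and numeric ownership. A minimal local sketch of such a round trip (paths are illustrative stand-ins, not the real volume):

```shell
# Create a stand-in for a volume's _data directory (illustrative paths).
mkdir -p /tmp/volsrc/_data
echo 'demo' > /tmp/volsrc/_data/PG_VERSION
chmod 700 /tmp/volsrc/_data            # postgres requires a private data dir

# -p preserves permission bits; --numeric-owner stores raw uid/gid so the
# archive does not depend on matching user names on the destination host.
tar --numeric-owner -C /tmp -cpzf /tmp/vol.tar.gz volsrc

# Unpack elsewhere, as one would in /tmp on the server (-p again on extract).
mkdir -p /tmp/voldst
tar --numeric-owner -C /tmp/voldst -xpzf /tmp/vol.tar.gz
stat -c '%a' /tmp/voldst/volsrc/_data  # mode survives the round trip
```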
My Dockerfile:

```dockerfile
FROM postgres:latest
```
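The Dockerfile above only pulls the base image; for comparison, installing pgvector baked into the image typically looks like the sketch below. The major version in the package name is an assumption and must match the image tag:

```dockerfile
# Assumes the Debian-based postgres:16 image, whose bundled PGDG apt
# repository packages pgvector as postgresql-16-pgvector.
FROM postgres:16

RUN apt-get update \
    && apt-get install -y --no-install-recommends postgresql-16-pgvector \
    && rm -rf /var/lib/apt/lists/*
```

Note that installing the extension's files only makes it available; each database still needs `CREATE EXTENSION vector;` run once.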
Postgres container inspect:

```json
"Mounts": [
    {
        "Type": "volume",
        "Name": "hmh_images_postgres_data",
        "Source": "/home/podstavek/.local/share/docker/volumes/hmh_images_postgres_data/_data",
        "Destination": "/var/lib/postgresql/data",
        "Driver": "local",
        "Mode": "z",
        "RW": true,
        "Propagation": ""
    }
],
```
My docker compose:

```yaml
postgres:
  image: hmh_postgres
  volumes:
    - "postgres_data:/var/lib/postgresql/data"
  build:
    dockerfile: ./postgres/Dockerfile
  ports:
    - "5432:5432"
  container_name: postgres
  restart: always
  environment:
    POSTGRES_USER: stack
    POSTGRES_PASSWORD: overflow
    POSTGRES_DB: db
  networks:
    - internal

volumes:
  postgres_data:
```
When I run `docker compose up` I get:

```
The PostgreSQL Database directory appears to contain a database; Skipping initialization
```

which proves that there is indeed a database. However, when I query the database, the tables and extensions are not shown. I am having trouble understanding why the data from the volume is not used to create the tables. Is there something missing in my Dockerfile? And moving forward, can I use volumes in the way described above, or should I use pg_dump and pg_restore? Thanks.
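For context on the last question, the pg_dump/pg_restore route sidesteps file-level permission and path issues entirely, since it works over a database connection. A hedged sketch, reusing the container name and credentials from the compose file above (`user@server` and the host paths are placeholders):

```shell
# On the old host: custom-format dump of the "db" database.
docker exec postgres pg_dump -U stack -Fc db > /tmp/db.dump
scp /tmp/db.dump user@server:/tmp/

# On the new host, after starting a fresh postgres container
# (which lets the image initialize its own data directory):
docker exec -i postgres pg_restore -U stack -d db --no-owner < /tmp/db.dump
```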