Dockerfile: how to COPY file from a shared volume


I have two containers, the first creates a file, the second needs to use it. There is a shared volume that should persist the file.

How do I COPY that file in the Dockerfile of the second container? It's an SQL script that Postgres (the second service/container) executes automatically at startup if it's found in the right location.

compose.yml

services:
  fetch:
    build: fetch/.
    env_file: fetch/.env
    volumes:
      - data:/dump/data
  db:
    build: db/.
    restart: always
    environment:
      POSTGRES_PASSWORD: postgres
    ports:
      - "5432:5432"
    volumes:
      - data:/dump/data
    depends_on:
      fetch:
        condition: service_completed_successfully

volumes:
  data:

Dockerfile for the second container/service (Postgres)

FROM postgres:latest

# copy in form of volume_name:path/file destination    
COPY data:/dump/data/restore.sql /docker-entrypoint-initdb.d/

EXPOSE 5432

CMD ["postgres"]
1 Answer

Answered by David Maze:

You can't do this the way you describe. Images are completely built before anything around volumes or mounts is considered. The file-sharing setup you show isn't especially reliable, either: it will ignore any changes in the underlying images, it doesn't work on any setup other than specifically Docker named volumes (for example, it won't work in Kubernetes), and if files exist in both images, you'll get one image's files or the other's, but not both.

For a database initialization, the path you describe in the comments makes sense: make this a complete container in itself, and have its main command connect to the database and run the SQL.

# fetch/Dockerfile
...
CMD ["psql", "-f", "dump.sql"]

# compose.yml
version: '2.4'
services:
  fetch:
    build: ./fetch/
    environment:
      PGHOST: db
    restart: on-failure
    depends_on:
      db: {condition: service_healthy}
  db: { ... }
    # does not depends_on: fetch

You could also COPY the file in your image build, either directly copying from one image to another or using a multistage build.

# Option 1: copy directly from another built image
COPY --from=myproject_fetch dump.sql /docker-entrypoint-initdb.d/

# Option 2: multi-stage build
FROM ... AS fetch
# insert the contents of fetch/Dockerfile here

FROM postgres:14
COPY --from=fetch /dump/data/dump.sql /docker-entrypoint-initdb.d/

Using multiple images has somewhat better separation of concerns, but is harder to manage: neither Compose nor core Docker has any notion that builds need to be orchestrated, and here the "fetch" image needs to be built before the "database" image.
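If you take the cross-image COPY route, you have to sequence the builds yourself outside of Compose. A minimal sketch, assuming the directory layout from the question and the myproject_fetch tag used in the COPY above:

```shell
# Build the fetch image first, under the tag the database Dockerfile expects.
docker build -t myproject_fetch ./fetch

# Only then build the database image, whose COPY --from=myproject_fetch
# reads the dump file out of the already-built fetch image.
docker build -t myproject_db ./db
```

You'd typically put these two commands in a small wrapper script or Makefile, since `docker compose build` alone won't guarantee this ordering.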

In this space, I'd also consider using your application framework's database-migration system, if this file just contains CREATE TABLE and similar DDL statements, over using the container initialization mechanism (see for example How do you perform Django database migrations when using Docker-Compose?). If this is a one-time data load, also consider using psql directly from the host system, or having a script in your application container that you could docker-compose run.
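For the one-time load case, a sketch of running psql directly from the host against the published port (the host, port, user, password, and file name are assumptions taken from the compose.yml above):

```shell
# Connect through the port published as "5432:5432" in compose.yml
# and run the dump as a one-off command from the host.
PGPASSWORD=postgres psql -h localhost -p 5432 -U postgres -f dump.sql
```

This avoids baking the dump into any image at all, which is often simpler for data that only needs to be loaded once.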