Tag multiple targets during one docker build

I have a Dockerfile with multiple targets. For example:

FROM x as frontend
...

FROM y as backend
...

FROM z as runtime
...
COPY --from=frontend ...
COPY --from=backend ...

In order to build and tag the final image, I use:

docker build -t my-project .

To build and tag intermediate targets, I provide the --target argument:

docker build -t my-project-backend --target backend .

But is it possible to build the final image and tag all the intermediate images as well? In other words, the same as:

docker build -t my-project-frontend --target frontend .
docker build -t my-project-backend --target backend .
docker build -t my-project .

But with a single command?

I think a bit of explanation is required. If BuildKit is used (export DOCKER_BUILDKIT=1), all independent targets are built in parallel, so it's simply faster than building them one by one. And I need to tag every target, not just the final one, so that I can push them all to a Docker registry.

Currently I'm building my images in CI without buildkit and I'm trying to speed up the process a bit.
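
For reference, what my CI runs today is roughly the following (registry.example.com is just a placeholder for the actual registry):

docker build -t registry.example.com/my-project-frontend --target frontend .
docker build -t registry.example.com/my-project-backend --target backend .
docker build -t registry.example.com/my-project .
docker push registry.example.com/my-project-frontend
docker push registry.example.com/my-project-backend
docker push registry.example.com/my-project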

There are 4 best solutions below

Answer 1 (3 votes):

I don't think there's a way to do this, and I don't think a single container for two applications is the right way to go anyway.

A Docker container is meant to run a single application: how would you handle the logs of two applications, or stop or restart just one of them?

I think the right way to do it is docker-compose

and use something like:

docker-compose.yml:

version: "3.3"
services:
  my-project-frontend:
    build: "./my-project-frontend/"
    container_name: "front"
    restart: always
    depends_on:
      - my-project-backend

  my-project-backend:
    build: "./my-project-backend/"
    container_name: "back"
    restart: always

And run:

docker-compose up

or

docker-compose build
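
If you also give each service an image: name in the compose file (registry-qualified if you're not pushing to Docker Hub, e.g. registry.example.com/front as a placeholder), you can push everything with one more command:

docker-compose build
docker-compose push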

Answer 2 (0 votes):

I did some searching, but it seems that the docker CLI currently just does not offer any straightforward way to do this. The closest thing is the idea I proposed in my comment: build the main image and tag all intermediate images afterwards.

Take this Dockerfile as an example:

FROM alpine AS frontend
RUN sleep 15 && touch /frontend

FROM alpine AS backend
RUN sleep 15 && touch /backend

FROM alpine AS runtime
COPY --from=frontend /frontend /frontend
COPY --from=backend /backend /backend

(the sleeps are only there to make the speedup by caching obvious)

Building this with:

export DOCKER_BUILDKIT=1 # enable buildkit for parallel builds
docker build -t my-project .
docker build -t my-project-backend --target backend .
docker build -t my-project-frontend --target frontend .

will

  1. build the main image runtime by first building all required intermediate images, i.e. frontend and backend, and tag only the main image as my-project
  2. build the target backend tagged as my-project-backend, reusing the cache from the previous build
  3. do the same for frontend

Every image here will only be built once - but ultimately this is the very same you already did as stated in your question, just in a different order.
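
If it is only about typing a single command, the same sequence can of course be chained in the shell; note this is still three build invocations, just on one line:

export DOCKER_BUILDKIT=1
docker build -t my-project . && \
docker build -t my-project-backend --target backend . && \
docker build -t my-project-frontend --target frontend .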

If you really want to be able to do this in a single command you could use docker-compose to build the "multiple images":

version: "3.8"
services:
  my-project:
    image: my-project
    build: .
  backend:
    image: my-project-backend
    build:
      context: .
      target: backend
  frontend:
    image: my-project-frontend
    build:
      context: .
      target: frontend
and then build with:

export DOCKER_BUILDKIT=1 # enable buildkit for parallel builds
export COMPOSE_DOCKER_CLI_BUILD=1 # use docker cli for building
docker-compose build

Here docker-compose will basically run the same docker build commands as above for you.
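
Depending on your docker-compose version you may also be able to ask compose itself to run the service builds concurrently via the --parallel flag (check docker-compose build --help to see whether your version supports it):

export DOCKER_BUILDKIT=1
export COMPOSE_DOCKER_CLI_BUILD=1
docker-compose build --parallel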

In both cases, though, you should be aware that although the cached layers massively speed up the build, a new build still takes place, which will each time:

  • send the build context - i.e. the content of the current directory - to the docker daemon (a .dockerignore file can shrink this, see the sketch after this list)
  • download any remote files you ADD to the image and only use the cache if the contents are the same again - which for large files/slow networks will be a noticeable slowdown.
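
As mentioned, a .dockerignore file keeps large, irrelevant paths out of the build context. A minimal sketch - the entries are only typical examples, adjust them to your project:

.git
node_modules
dist
*.log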

Another workaround I found in this forum thread was to add a LABEL to the image and use docker image ls --filter to get the image IDs after the build.

But testing this, it seems docker image ls won't show intermediate images when using BuildKit. Also, this approach would require more commands / a dedicated script - which would again be more work than your current approach.
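
For completeness, with the classic builder that workaround looks roughly like this: add a LABEL to the stage in the Dockerfile, build once, then find the intermediate image by its label and tag it (stage=backend is an arbitrary label name chosen here):

# In the Dockerfile:
#   FROM alpine AS backend
#   LABEL stage=backend
docker build -t my-project .
docker tag "$(docker image ls --filter label=stage=backend -q | head -n1)" my-project-backend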

Answer 3 (1 vote):

The closest you'll get to this right now is with Docker's buildx bake command. It allows you to define an HCL file with syntax like:

group "default" {
    targets = ["app", "frontend", "backend"]
}

target "app" {
    dockerfile = "Dockerfile"
    tags = ["docker.io/username/app"]
}

target "frontend" {
    dockerfile = "Dockerfile"
    target = "frontend"
    tags = ["docker.io/username/frontend"]

}

target "backend" {
    dockerfile = "Dockerfile"
    target = "backend"
    tags = ["docker.io/username/backend"]
}

And then you would build with docker buildx bake -f bake.hcl
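
If the goal is to push all of the tags as well, bake can do that in the same invocation, assuming the tags above point at a registry you are logged in to:

docker buildx bake -f bake.hcl --push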

That said, what you are doing is almost certainly a mistake. A multi-stage build is designed to separate the build environment from the runtime environment, not to create multiple distinct images. In other words, you're using a hammer when you need a screwdriver: yes, it will work, but the result is suboptimal.

The preferred and much simpler solution is to create a separate Dockerfile for each image you want to build. If your images have a common base, then consider moving that out to its own image and referencing that in your FROM step.
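
A rough sketch of that layout, using hypothetical file and image names (Dockerfile.base and username/base are not from the question):

# Dockerfile.base holds the shared layers; Dockerfile.frontend and
# Dockerfile.backend then simply start with "FROM username/base".
docker build -t username/base -f Dockerfile.base .
docker build -t username/frontend -f Dockerfile.frontend .
docker build -t username/backend -f Dockerfile.backend .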

To build multiple images in docker as a developer, it's common to use a docker-compose.yml file that defines all three images, and then docker-compose up --build will start the entire stack after building each of the images, with a single command. E.g. the compose file may look like:

version: "2"
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile.app
    image: username/app
    # ...
  frontend:
    build:
      context: .
      dockerfile: Dockerfile.frontend
    image: username/frontend
    # ...
  backend:
    build:
      context: .
      dockerfile: Dockerfile.backend
    image: username/backend
    # ...

And for deploying to production, this would be separate CI/CD pipelines for each image that perform the needed unit tests and build, and then fan in to a deployment step that runs the entire stack with the specified releases of each image.

Answer 4 (0 votes):

In my opinion, the best way to force multiple stages to be built is to create an extra stage that copies "fake" files from every stage we want to build.

# Aggregator stage: the COPY lines exist only to make every referenced stage build.
FROM scratch AS build-all-stages
COPY --from=first-stage-to-build fake-not-exist-file?.txt .
COPY --from=second-stage-to-build fake-not-exist-file?.txt .
...
COPY --from=x-stage-to-build fake-not-exist-file?.txt .

The purpose of the COPY steps is only to force every stage to be built.

From a performance point of view, this is better than the approach in the previous answer because it wastes as little space and time as possible: the base image is the empty scratch image, and the "copied" files do not exist.

Next, we build every image separately with the --target argument (all layers will be taken from the cache).
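
Assuming the question's stage names (frontend, backend, runtime) and the aggregator stage above, the whole flow could look roughly like this; the builds after the first one only re-tag cached layers:

export DOCKER_BUILDKIT=1
docker build --target build-all-stages .
docker build -t my-project-frontend --target frontend .
docker build -t my-project-backend --target backend .
docker build -t my-project --target runtime .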