We use an Ubuntu Docker container for development: we mount different branches of a git repo inside the container and test them. This requires changing code, recompiling it, and starting/stopping the compiled process several times. So we run the container with `ENTRYPOINT ["tail", "-f", "/dev/null"]` so that it won't exit, and connect VS Code to it for development. Once a branch has been tested by the developers, we hand the machine over to someone who carries out several experiments on that branch under different external environmental conditions. That person should not need to run any command, so in that case we want the process to start automatically inside Docker when the system boots. I can ensure that the container itself starts on boot with `docker run --restart always` (or `docker update --restart always` on an existing container). When handed over for experiments, we want the process to autostart inside the container when the container starts. During development, we don't want the process to auto-start on every boot, so that we can make code changes, recompile, and then start the process manually for testing.
My guess was to set up a service inside the container with `systemctl` to start the process on every boot. During development we could turn this service OFF, and during experimentation turn it ON, via `docker exec`.

Q. Is this approach correct? Is there a better approach?
Also, running `systemctl` inside the container gives the following error:

```
# systemctl
System has not been booted with systemd as init system (PID 1). Can't operate.
Failed to connect to bus: Host is down
```

How can I fix this?
Update

Does the solution below sound sensible?

Have a separate script, `entry.sh`:
```bash
#!/bin/bash
if [ "$AUTOSTART" = "true" ]; then
  # Start your process here, backgrounded so the script reaches tail
  your_process_command &
fi
# Keep the script running to keep the container alive
tail -f /dev/null
```
And inside the Dockerfile, I can do:

```dockerfile
COPY entry.sh /entry.sh
RUN chmod +x /entry.sh
ENTRYPOINT ["/entry.sh"]
```
Then we can pass the `AUTOSTART` environment variable while starting the container if we want the process to start automatically:

```shell
docker run -d -e AUTOSTART=true image-name
```
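The `AUTOSTART` branch in `entry.sh` can be exercised outside Docker. Below is a minimal sketch of the same logic, where the `start_process` stub stands in for `your_process_command` (both are placeholders, not real commands):

```shell
#!/bin/bash
# Stub for the real process command, just to exercise the branch logic.
start_process() { echo "process started"; }

# Same check as in entry.sh: only start when AUTOSTART is exactly "true".
autostart_check() {
  if [ "$AUTOSTART" = "true" ]; then
    start_process
  else
    echo "autostart disabled"
  fi
}

AUTOSTART=true autostart_check    # experiments: process starts
AUTOSTART=false autostart_check   # development: nothing starts
```

Note that an unset `AUTOSTART` also takes the "disabled" branch, which matches the development default of running the container without `-e AUTOSTART=true`.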
The container is just a wrapper around a process. I'd argue setting the image's `ENTRYPOINT` to a no-op `tail` command doesn't make sense at all, any more than you'd run `tail -f /dev/null` in a terminal window "to keep it from exiting".

There's a minor Docker style question about using `ENTRYPOINT` vs. `CMD`. I tend to prefer `CMD` for most cases, specifically because it's easy to override. In the Dockerfile, I would not set `ENTRYPOINT`, but I would set `CMD` to the "normal" main container command. You probably do not need a script.

If you really need a container that's not running the normal process, you can override that command when you run the container. Just overriding the command is probably easier and cleaner than having a script like the one you show that tries to deduce the command from environment variables.
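As a sketch of that override workflow (the image name `image-name` is a placeholder carried over from the question, and these commands assume a running Docker daemon):

```shell
# Experiments: run the image's normal CMD, restarting across reboots.
docker run -d --restart always --name experiment image-name

# Development: override CMD with a no-op so nothing starts, then exec in.
docker run -d --name dev image-name tail -f /dev/null
docker exec -it dev bash
```

The same image serves both roles; the only difference is whether the default command is overridden at `docker run` time.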
I'd suggest that a more Docker-native workflow doesn't try to keep a single container alive and change the process inside it, but rather builds a new image for each branch. If the compilation step is in your Dockerfile, then you don't really need to do anything on the host.
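For example, a per-branch image that compiles inside the build could look roughly like this (a sketch only: the `make` step, the paths, and the binary name `your_process` are assumptions about your project):

```dockerfile
# Build stage: compile the branch inside the image.
FROM ubuntu:22.04 AS build
RUN apt-get update && apt-get install -y build-essential
WORKDIR /src
COPY . .
RUN make                      # assumed build command

# Runtime stage: carry over only the compiled artifact.
FROM ubuntu:22.04
COPY --from=build /src/your_process /usr/local/bin/your_process
CMD ["your_process"]
```

Running `docker build` on each branch then produces an image whose ordinary `docker run` starts the process with no manual steps.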
Containers don't "boot" or "run services", and commands like `service` or `systemctl` can behave unintuitively, if they work at all. A typical best practice is to run the program in the container directly as a foreground process, possibly with a lightweight signal-handling, zombie-reaping init system like tini around it, but no more.
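Concretely, a Dockerfile following that pattern might look like this (a sketch; `your_process` is a placeholder, and tini is assumed to come from the Ubuntu `tini` package, which installs `/usr/bin/tini`):

```dockerfile
FROM ubuntu:22.04
# tini runs as PID 1, reaping zombies and forwarding signals to the child.
RUN apt-get update && apt-get install -y tini
COPY your_process /usr/local/bin/your_process
ENTRYPOINT ["/usr/bin/tini", "--"]
CMD ["your_process"]
```

Here `ENTRYPOINT` and `CMD` combine, so `tini` launches the process as a foreground child, and overriding the command at `docker run` time still goes through tini.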