Run webserver on Docker in Docker

I have the following Dockerfile:

FROM ubuntu:bionic

RUN apt-get update
RUN apt-get -y install curl
RUN apt-get install -y sudo

# Install Miniconda
ENV PATH="/root/miniconda3/bin:${PATH}"
ARG PATH="/root/miniconda3/bin:${PATH}"

RUN apt-get install -y wget && rm -rf /var/lib/apt/lists/*

RUN wget \
    https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh \
    && mkdir /root/.conda \
    && bash Miniconda3-latest-Linux-x86_64.sh -b \
    && rm -f Miniconda3-latest-Linux-x86_64.sh 
RUN conda --version

## Install Docker
RUN sudo apt-get update && sudo apt-get install -y \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg \
    lsb-release

RUN curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
RUN echo \
  "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

RUN sudo apt-get update
RUN sudo apt-get install docker-ce docker-ce-cli containerd.io -y
RUN pip install gevent

WORKDIR /mnt

I run that image with:

docker run -v /some/dir:/mnt \
    -v /var/run/docker.sock:/var/run/docker.sock -it <image_tag> /bin/bash

Once inside the running container I do:

docker run -d -p 8501:8501 -v /path/to/model:/models tensorflow/serving 

but when I then run

curl -v http://localhost:8501/v1/models/model

I get:

curl: (7) Failed to connect to localhost port 8501: Connection refused

So how should I run these two containers so that I can curl one of them?

1 Answer

This works the same way as any other setup where two containers need to communicate with each other: they both need to be on the same (non-default) Docker network, and one can use the other's docker run --name as a DNS name. It doesn't matter that one container started the other.
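A minimal command-line sketch of the same idea (the network and container names are illustrative and match the docker run command further down; the calling container must itself be attached to the network for the name to resolve):

docker network create some-network

# Typed inside the outer container, but created by the host's Docker daemon
# through the mounted socket:
docker run -d --net some-network --name tensorflow \
    -v /path/to/model:/models tensorflow/serving

# Resolves the other container by its --name instead of localhost; this works
# once the calling container is attached to some-network as well:
curl -v http://tensorflow:8501/v1/models/model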

More broadly, this works the same way as any other setup where the Docker daemon is "somewhere else"; maybe you've set $DOCKER_HOST to point at Docker running in a VM, or done the work to set up Docker with mutual TLS authentication on a remote host. docker run -v options refer to host-system paths where the Docker daemon is running; docker run -p options publish ports on the host where the Docker daemon is running. In your case you're making a call to localhost in the container's network space, but that's different from the Docker daemon's network space, so you can't reach the other container this way.
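To make that concrete with a hypothetical remote daemon (remote-host is an invented name here), notice how every path and port belongs to the machine the daemon runs on, not the machine where you type the command:

# Point the CLI at a daemon running somewhere else:
export DOCKER_HOST=ssh://user@remote-host

# /path/on/remote-host and port 8501 both live on remote-host:
docker run -d -p 8501:8501 -v /path/on/remote-host:/models tensorflow/serving

# So the service is reachable via the daemon's host, not via localhost:
curl -v http://remote-host:8501/v1/models/model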

Since this setup depends on several details of the host environment, you have to pass them to the launching container, perhaps as environment variables. Using the Python Docker client library, for example:

import os
import docker
import requests

if __name__ == '__main__':
    # Both values describe the host environment and must be passed in,
    # since they cannot be discovered from inside the container.
    shared_host_path = os.environ['SHARED_HOST_PATH']
    docker_network = os.environ['DOCKER_NETWORK']

    # Talks to the daemon through the mounted /var/run/docker.sock.
    client = docker.from_env()
    container = client.containers.run(
        'tensorflow/serving',
        detach=True,
        network=docker_network,
        name='tensorflow',
        # The volume source is a path on the Docker daemon's host, not a path
        # inside this container.
        volumes={shared_host_path: {'bind': '/models', 'mode': 'rw'}},
    )

    # The container name doubles as a DNS name on the shared network. In
    # practice you may need to wait or retry until the model server is up.
    response = requests.get('http://tensorflow:8501/v1/models/model')
    print(response.status_code)

You'd have to pass these details into the container when you launch it:

sudo docker network create some-network
sudo docker run -d \
  --net some-network \
  -e DOCKER_NETWORK=some-network \
  -e SHARED_HOST_PATH=/path/to/model \
  -v /path/to/model:/mnt \
  -v /var/run/docker.sock:/var/run/docker.sock \
  your-image

This is awkward to set up: it is very Docker-specific (it will not work in Kubernetes or other orchestrators), and it requires administrator privileges (anything with access to the Docker socket can trivially root the host). I've found this approach useful for some types of integration testing.

A more typical pattern is to have a long-running process that accepts requests over HTTP or a job queue, and a second process that sends it requests. You can build and test this without Docker, and when you do want to run it in containers you can use normal Docker networking, without any special privileges or complex setup.
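A rough sketch of that more typical pattern, assuming your application image reads the serving endpoint from an environment variable (SERVING_URL is an invented name here):

docker network create app-net

docker run -d --net app-net --name tensorflow \
    -v /path/to/model:/models tensorflow/serving

# No socket mount and no host details; the client simply talks to the other
# container by name over the shared network:
docker run --net app-net -e SERVING_URL=http://tensorflow:8501 your-image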