Browse a specific service container in Docker Swarm mode


I've created 3 VMs using docker-machine:

docker-machine create -d virtualbox manager1
docker-machine create -d virtualbox worker1
docker-machine create -d virtualbox worker2

These are their IPs:

docker-machine ls
NAME       ACTIVE   DRIVER       STATE     URL                         SWARM   DOCKER        ERRORS
manager1   -        virtualbox   Running   tcp://192.168.99.102:2376           v1.12.6
worker1    -        virtualbox   Running   tcp://192.168.99.100:2376           v1.13.0-rc5
worker2    -        virtualbox   Running   tcp://192.168.99.101:2376           v1.13.0-rc5

Then I ran docker-machine ssh manager1 and:

docker swarm init --advertise-addr 192.168.99.102:2377

Then worker1 and worker2 joined the swarm.
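For completeness, the workers join with the command printed by docker swarm init; it can be re-printed on the manager with docker swarm join-token worker. The token below is a placeholder, not a real value:

# run on the manager: re-print the join command for workers
docker swarm join-token worker

# run on each worker; <WORKER-TOKEN> stands in for the real token
docker swarm join --token <WORKER-TOKEN> 192.168.99.102:2377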

Now I've created an overlay network:

docker network create -d overlay skynet

and deployed a service in global mode (1 task per node):

docker service create --name http --network skynet --mode global -p 8200:80 katacoda/docker-http-server

And there is indeed 1 container (task) per node.
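To double-check the placement, docker service ps lists the tasks of the service and the node each one runs on:

docker service ps http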

Now I'd like to access my virtual hosts directly, or at least browse a specific service container directly, because I'd like to build a load balancer for my service with nginx. To do that, my nginx conf file needs to point to a specific service container (i.e. I now have 3 nodes (1 manager and 2 workers) in global mode, so there are 3 tasks running, and I'd like to choose one of these 3 containers). How can I do that?

[edit]: I can reach my swarm nodes simply by browsing to VM_IP:SERVICE_PORT, e.g. 192.168.99.102:8200, but the internal load balancing still applies.

I was expecting that, by pointing to a specific swarm node, I would hit the container running on that node. But that's not the case so far.
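A quick way to see the internal load balancing at work (assuming katacoda/docker-http-server replies with the ID of the container that served the request): repeated requests to a single node are answered by different containers:

# hit the same node repeatedly; the responding container rotates
for i in 1 2 3 4 5; do curl -s 192.168.99.102:8200; done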


2 Answers


Due to the way SwarmMode works with the IPVS Load Balancer (discussed at https://www.katacoda.com/courses/docker-orchestration/load-balance-service-discovery-swarm-mode), it's not possible to just access a single container deployed as a service.

The request for a configurable load balancer has an open GitHub issue at https://github.com/docker/docker/issues/23813

What you may find helpful is to run a proxy on each node. In theory, this could be configured to respond only to requests for certain nodes. Two proxies designed around SwarmMode are:

https://github.com/vfarcic/docker-flow-proxy

https://github.com/tpbowden/swarm-ingress-router


Adding to the answer @ben-hall provided above: Docker 1.13 will introduce an advanced syntax for the --publish flag, which includes a mode=host publish mode for publishing service ports (see the pull requests docker#27917 and docker#28943). Using this mode, ports of the containers (tasks) backing a service are published directly on the host they are running on, bypassing the Routing Mesh (and thus the load balancer).

Keep in mind that as a consequence, only a single task of a service can run on a node.

On Docker 1.13 and up, the following example creates a myservice service, and port 80 of each task is published on port 8080 of the node that the task is deployed on.

docker service create \
  --name=myservice \
  --publish mode=host,target=80,published=8080,protocol=tcp \
  nginx:alpine

Unlike tasks that publish ports through the routing mesh, tasks that use "host-mode" publishing also have their published ports shown by docker ps (see the PORTS column):

CONTAINER ID        IMAGE                                                                           COMMAND                  CREATED              STATUS              PORTS                           NAMES
acca053effcc        nginx@sha256:30e3a72672e4c4f6a5909a69f76f3d8361bd6c00c2605d4bf5fa5298cc6467c2   "nginx -g 'daemon ..."   3 seconds ago        Up 2 seconds        443/tcp, 0.0.0.0:8080->80/tcp   myservice.1.j7nbqov733mlo9hf160ssq8wd
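With host-mode publishing in place, each task is reachable at its own node's IP, so the nginx load balancer the questioner wants can target specific containers by targeting specific nodes. A minimal nginx conf sketch, assuming the service is published with mode=host on port 8080 on all three nodes from the question (the upstream name swarm_tasks is made up for this example):

# minimal nginx.conf sketch: each upstream entry is one specific
# swarm node, and therefore one specific task/container
events {}

http {
    upstream swarm_tasks {
        server 192.168.99.102:8080;  # task on manager1
        server 192.168.99.100:8080;  # task on worker1
        server 192.168.99.101:8080;  # task on worker2
    }

    server {
        listen 80;
        location / {
            proxy_pass http://swarm_tasks;
        }
    }
}

To pin traffic to a single container instead of balancing, the upstream block can simply list just that one node.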