I'm quite new to Docker but have started thinking about production set-ups, so I need to crack the challenge of data persistence when using Docker Swarm. I decided to start by creating my deployment infrastructure (TeamCity for builds and NuGet, plus the official registry image [https://hub.docker.com/_/registry/] for storing images).
I've started with TeamCity. Obviously this needs data persistence in order to work. I can run TeamCity in a container with an EBS-backed volume, and at first everything looks fine: TeamCity works through its set-up steps and my TeamCity volumes appear in AWS EBS. But then the worker node that TeamCity was allocated to shuts down, and the install process stops.
Here are all the steps I'm following:
Phase 1 - Machine Setup:
- Create one AWS instance for master
- Create two AWS instances for workers
- All are 64-bit Ubuntu t2.micro instances
- Create three Elastic IPs for convenience and assign them to the above machines.
- Install Docker on all nodes using this: https://docs.docker.com/install/linux/docker-ce/ubuntu/
- Install Docker Machine on all nodes using this: https://docs.docker.com/machine/install-machine/
- Install Docker Compose on all nodes using this: https://docs.docker.com/compose/install/
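As a quick sanity check (not strictly necessary, but cheap), each node should report sensible versions after the installs:
$ docker --version
$ docker-machine --version
$ docker-compose --version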
Phase 2 - Configure Docker Remote on the Master:
$ sudo docker run -p 2375:2375 --rm -d -v /var/run/docker.sock:/var/run/docker.sock jarkt/docker-remote-api
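A simple way to verify the remote API is answering (assuming port 2375 is open in the security group) is to hit the Engine API's /version endpoint from my local machine:
$ curl http://{master IP address}:2375/version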
Phase 3 - install the rexray/ebs plugin on all machines:
$ sudo docker plugin install --grant-all-permissions rexray/ebs REXRAY_PREEMPT=true EBS_ACCESSKEY=XXX EBS_SECRETKEY=YYY
[I lifted the correct values from AWS for XXX and YYY]
I test this using:
$ sudo docker volume create --driver=rexray/ebs --name=delete --opt=size=2
$ sudo docker volume rm delete
All three nodes are able to create and delete drives in AWS EBS with no issue.
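For completeness, each node also shows the plugin as installed and enabled:
$ sudo docker plugin ls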
Phase 4 - Setup the swarm:
Run this on the master:
$ sudo docker swarm init --advertise-addr eth0:2377
This gives the command to run on each of the workers, which looks like this:
$ sudo docker swarm join --token XXX 1.2.3.4:2377
These execute fine on the worker machines.
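Back on the master, swarm membership can be verified before moving on; all three nodes should be listed with STATUS Ready:
$ sudo docker node ls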
Phase 5 - Set up visualisation using Remote PowerShell on my local machine:
$ $env:DOCKER_HOST="{master IP address}:2375"
$ docker stack deploy --with-registry-auth -c viz.yml viz
viz.yml looks like this:
version: '3.1'
services:
  viz:
    image: dockersamples/visualizer
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
    ports:
      - "8080:8080"
    deploy:
      placement:
        constraints:
          - node.role==manager
This works fine and allows me to visualise my swarm.
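The same can be confirmed from the CLI in the same remote session:
$ docker stack ps viz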
Phase 6 - Install TeamCity using Remote PowerShell on my local machine:
$ docker stack deploy --with-registry-auth -c docker-compose.yml infra
docker-compose.yml looks like this:
version: '3'
services:
  teamcity:
    image: jetbrains/teamcity-server:2017.1.2
    volumes:
      - teamcity-server-datadir:/data/teamcity_server/datadir
      - teamcity-server-logs:/opt/teamcity/logs
    ports:
      - "80:8111"
volumes:
  teamcity-server-datadir:
    driver: rexray/ebs
  teamcity-server-logs:
    driver: rexray/ebs
[Incorporating NGINX as a proxy is a later step on my to-do list.]
Both of the required volumes appear in AWS EBS, and the container appears in my swarm visualisation.
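Checking from the CLI shows the task running too, along with which worker it landed on:
$ docker stack ps infra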
However, after the TeamCity progress screen has been showing for a while, the worker machine hosting the TeamCity container shuts down and the process abruptly ends.
I'm at a loss as to what to do next. I'm not even sure where to look for logs.
Any help gratefully received!
Cheers,
Steve.
I found a way to get logs for my service. First, list the services the stack creates:
$ docker stack services infra
Then view the logs for the service (the service name is the stack name plus the service key, so infra_teamcity here):
$ docker service logs infra_teamcity
Now I just need to wade through the logs and see what went wrong...
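If the service logs don't explain it, two other places should show something (assuming the standard systemd journal on the Ubuntu nodes, and the default ubuntu SSH user on the AWS instances): the stack's task history, which records the error for each failed container attempt, and the Docker daemon journal on the worker that shut down:
$ docker stack ps infra --no-trunc
$ ssh ubuntu@{worker IP address} 'sudo journalctl -u docker.service'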