How to use Docker Swarm Mode to share data between containers?


I have just started using Docker. I was able to create a docker-compose file that deploys the three components of my application, with the necessary number of replicas, on one host. Now I want to do the same thing across multiple hosts. I have three processes: A (7 copies), B (1 copy), and C (1 copy). I followed the "Create a swarm" tutorial on the Docker website and managed to create a manager and attach two workers to it.

So now when I run my command

 docker stack deploy --compose-file docker-compose.yml perf

It does spawn the required number of containers, but all of them on the manager itself. Ideally I would want B and C to run on the manager, and the copies of A to be distributed between worker 1 and worker 2.
Here is my docker-compose file:

version: '3'

services:

  A:
    image: A:host
    tty: true
    volumes:
      - LogFilesLocationFolder:/jmeter/log
      - AntOutLogFolder:/antout
      - ZipFilesLocationFolder:/zip
    deploy:
      replicas: 7
      placement:
        constraints: [node.role == worker]
    networks:
      - perfhost

  B:
    container_name: s1_perfSqlDB
    restart: always
    tty: true
    image: mysql:5.5
    environment:
      MYSQL_ROOT_PASSWORD: ''
    volumes:
      - mysql:/var/lib/mysql
    ports:  
      - "3306:3306"
    deploy:
      placement:
        constraints: [node.role == manager]
    networks:
      - perfhost

  C:
    container_name: s1_scheduler
    image: C:host
    tty: true
    volumes:
      - LogFilesLocationFolder:/log
      - ZipFilesLocationFolder:/zip
      - AntOutLogFolder:/antout
    networks:
      - perfhost
    deploy:
      placement:
        constraints: [node.role == manager]
    ports:
      - "7000:7000"


networks:
  perfhost:

volumes:
  mysql:
  LogFilesLocationFolder:
  ZipFilesLocationFolder:
  AntOutLogFolder:

B) And if I do get this working, how do I use volumes to transfer data between a container for service A and a container for service B, given that they are on different host machines?

3 Answers

namokarm:

First, run docker node ls and check whether all of your nodes are available. If they are, check whether the workers have the images they need to run the containers. I would also try a constraint using the ID of each node instead; you can see the IDs with the previous command.
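
As a quick sanity check (the node ID abc123 and the worker hostname below are purely illustrative), something along these lines:

 docker node ls
 docker node ps <worker-hostname>

All nodes should show STATUS Ready and AVAILABILITY Active. If you want to pin A to one specific node by ID, the placement constraint in the compose file could then look like:

  A:
    deploy:
      placement:
        constraints: [node.id == abc123]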

Bret Fisher:

A few tips and answers:

  • for service names I don't recommend capital letters. Use valid DNS hostnames (lowercase, no special char except -).
  • container_name isn't supported in swarm and shouldn't be needed. Looks like C: should be something like scheduler, etc. Make the service names simple so they are easy to use/remember on their virtual network.
  • All services in a single compose file are always on the same docker network in swarm (and docker-compose for local development), so no need for the network assignment or listing.
  • restart: always isn't needed in swarm. That setting isn't used there, and restarting failed tasks is the default behaviour anyway. If you're using it with docker-compose it's rarely needed either; you usually don't want apps stuck in a respawn loop on errors, which tends to peg the CPU. I recommend leaving it off.
  • Volumes use a "volume driver". The default is local, just like normal docker commands. If you have shared storage you can use a volume driver plugin from store.docker.com to ensure shared storage is connected to the correct node.
  • If you're still having issues with worker/manager task assignment, post the output of docker node ls, and maybe docker service ls and docker node ps <managername>, so we can help troubleshoot.
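
Putting those tips together, a trimmed compose file might look roughly like this (a sketch only: the lowercase service, image, and volume names are illustrative rewrites of the ones in the question, since image repository names must be lowercase, and it still assumes the images are pullable or present on every node):

version: '3'

services:

  jmeter-a:
    image: a:host
    tty: true
    volumes:
      - logfiles:/jmeter/log
      - antout:/antout
      - zipfiles:/zip
    deploy:
      replicas: 7
      placement:
        constraints: [node.role == worker]

  db:
    image: mysql:5.5
    tty: true
    environment:
      MYSQL_ROOT_PASSWORD: ''
    volumes:
      - mysql:/var/lib/mysql
    ports:
      - "3306:3306"
    deploy:
      placement:
        constraints: [node.role == manager]

  scheduler:
    image: c:host
    tty: true
    volumes:
      - logfiles:/log
      - zipfiles:/zip
      - antout:/antout
    ports:
      - "7000:7000"
    deploy:
      placement:
        constraints: [node.role == manager]

# named volumes use the local driver by default; each node gets its own copy
# unless you plug in a shared-storage volume driver
volumes:
  mysql:
  logfiles:
  zipfiles:
  antout:

Since every service in the stack lands on the stack's default overlay network, the database is still reachable from the other services by its service name (db in this sketch), with no explicit network block needed.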

4n70wa:

Before running docker stack deploy, run:

mkdir /srv/service/public
docker run --rm -v /srv/service/public:/srv/service/public my-container-with-data cp -R /var/www/app/public /srv/service/public

Use the directory /srv/service/public as a volume in the containers.
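
A minimal sketch of what that could look like in the compose file (note that a plain bind mount like this only sees the node-local copy of /srv/service/public, so the data is only truly shared across hosts if that path is backed by shared storage such as NFS on every node):

  A:
    image: A:host
    volumes:
      - /srv/service/public:/srv/service/public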