In Docker Swarm you can set resource limits (maximums) for a service like so:
```
my-service:
  image: hello-world
  deploy:
    resources:
      limits:
        cpus: '2'
        memory: 4GB
```
I have a container with minimum system requirements of 2 CPU cores and 4GB of RAM, which is the exact size of the nodes in my Docker Swarm. This means that when this container is running, it needs to be the only container running on that node.
However, when I run this service alongside others, other containers get scheduled onto the same node. How can I ensure that Docker guarantees this container a minimum level of CPU and RAM?
Update
I added reservations as suggested by @yamenk; however, other containers still get started on the same node, which causes performance problems for the container I am trying to protect:
```
my-service:
  image: hello-world
  deploy:
    resources:
      reservations:
        cpus: '2'
        memory: 4GB
```
Update
Apparently the effect of memory reservations in Docker Swarm is not very well documented, and they work on a best-effort basis: a reservation is a soft guarantee that the scheduler takes into account, not a hard limit that keeps other containers off the node. To understand the effect of the memory reservation flag, check the documentation.
To enforce that no other container runs on the same node, you need to set service constraints. What you can do is give the nodes in the swarm specific labels and use these labels to schedule services to run only on nodes that have them.
As described here, node labels can be added to a node using docker node update.
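For example (a sketch; node-1 is a placeholder for your actual node name, and the hello-world=yes label matches the constraints used below):

```
# attach the label hello-world=yes to the node that should host the service
docker node update --label-add hello-world=yes node-1
```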
Then, inside your stack file, you can restrict my-service to run only on nodes that have the specified label, and constrain your other containers to avoid nodes labeled with hello-world=yes, as in the sketch below.
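A minimal sketch of the stack file, where other-service is a hypothetical stand-in for everything that should stay off the labeled node:

```
my-service:
  image: hello-world
  deploy:
    placement:
      constraints:
        # run only on nodes carrying the label
        - node.labels.hello-world == yes

other-service:
  image: hello-world
  deploy:
    placement:
      constraints:
        # keep all other services off the labeled nodes
        - node.labels.hello-world != yes
```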
If you want to start replicas of my-service on multiple nodes, and still have exactly one container running on each node, set my-service to global mode and add the same label to every node where you want a container to run.
The global mode ensures that exactly one container will run on each node that satisfies the service constraints:
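Roughly like this, combining mode: global with the label constraint from above:

```
my-service:
  image: hello-world
  deploy:
    # one container per matching node, instead of a replica count
    mode: global
    placement:
      constraints:
        - node.labels.hello-world == yes
```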
Old Answer:
You can set resource reservations like so:
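For example, mirroring the snippet from the question:

```
my-service:
  image: hello-world
  deploy:
    resources:
      reservations:
        # ask the scheduler to set aside 2 CPUs and 4GB for this service
        cpus: '2'
        memory: 4GB
```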