

This post describes the installation of Docker swarm using Ubuntu 18.04 server virtual machines running on top of VMware vSphere.

After the installation we will spin up some containers on top of the stack of ESXi, Ubuntu Linux and Docker Swarm.

This step is the preparation for a future post that covers monitoring of Docker Swarm and all components.


The setup consists of 3 virtual machines running Ubuntu 18.04 LTS on vSphere 6.7.

As it's a test environment, the VMs have been configured with 4 vCPUs, 8 GB RAM and a 32 GB disk. Please adjust the sizing to fit your needs.

VMs are named swarm1, swarm2 and swarm3.

Install Docker CE

After the common sudo apt update && sudo apt upgrade, we install Docker CE. Add the official Docker repository and its signing key, then install and enable the engine:

sudo apt install apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu bionic stable"
sudo apt update
apt-cache policy docker-ce
sudo apt install docker-ce
sudo systemctl start docker
sudo systemctl enable docker
sudo systemctl status docker
sudo usermod -aG docker ${USER}
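Once the installation is done (and after logging out and back in so the docker group membership of your user takes effect), a quick sanity check confirms that client and daemon are working:

```shell
# verify client/daemon versions and run a throwaway test container
docker --version
docker run --rm hello-world
```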

Install Docker swarm

Swarm is automatically part of the Docker CE installation, but we need to make some preparations.

One node should be the manager, while the other nodes become workers. Let's reflect that in the hosts file (DNS is preferable, of course, but doesn't matter so much in a test environment).

# sudo vi /etc/hosts
<ip-of-swarm1>  swarm1  manager1
<ip-of-swarm2>  swarm2  worker1
<ip-of-swarm3>  swarm3  worker2

Replace the <ip-of-…> placeholders with the actual IP addresses of your three VMs.

Initialize Docker swarm

You might have noticed that we named swarm1 manager1 – please run the following command only on the manager node, passing the manager's IP address:

docker swarm init --advertise-addr <manager-ip>

docker swarm init output

Run the following command on the 2 worker nodes, based on the output of the docker swarm init command.

docker swarm join --token SWMTKN-1-54hsijoskp3urbgeni9fqzz35yjzg527lvxxkg5qdg4ce7dqyz-bf1sladxf3t5v5wtl9droybdc <manager-ip>:2377

Append the manager's address and port 2377 exactly as shown in the init output.

You should receive a message like:

This node joined a swarm as a worker.
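If you no longer have the output of docker swarm init at hand, the full join command (including a valid token) can be printed again at any time on the manager:

```shell
# run on manager1 – prints the complete "docker swarm join ..." command for workers
docker swarm join-token worker
```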

Setup Docker swarm UI

The visualizer image is pulled directly from Docker Hub, so it is enough to start it on the manager node:

docker run -it -d -p 5000:8080 -v /var/run/docker.sock:/var/run/docker.sock dockersamples/visualizer

The UI is then available on port 5000 of the manager.

docker swarm ui
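Without a browser you can still verify that the visualizer answers, assuming it was started on swarm1 as above:

```shell
# the visualizer was published on port 5000
curl -s http://swarm1:5000 | head
```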

Check Docker Swarm status

You can always check your Docker Swarm cluster and how many nodes are available using docker node ls

Docker Swarm status

Use docker service ls to check the running services.

Or more details of a specific service: docker service ps redis
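docker node ps gives you the opposite view – the tasks scheduled on a particular node instead of the tasks of a particular service:

```shell
# tasks on the current node
docker node ps
# tasks on a specific node, e.g. swarm2
docker node ps swarm2
```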

Deploy first service

As Docker is now managing a host cluster for container services and not a single host engine, you need to deploy containers (services) in a different way. Let's deploy a Redis database as a service with 2 replicas.

A replica is essentially an instance of the service's container. The benefit of multiple replicas is that there is always more than one container running to take over a service. If a container or container host goes down, Docker Swarm restarts the containers of that service, but not instantly (it can take some seconds). With multiple replicas, the remaining replicas take over instantly without any restart.

docker service create --name redis --replicas 2 --publish 6379:6379 redis


When Docker publishes a port for a service, it listens on that port on every node within the Swarm cluster (the so-called ingress routing mesh). When traffic arrives on that port, it is routed to a container running for that service. While this is straightforward when all nodes run one of the service's containers, it gets interesting when we have more nodes than replicas: despite not running replicas on all Swarm nodes, the published service is reachable on all of them.
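You can watch the takeover behaviour described above by draining one of the nodes running a redis replica and checking where the tasks end up (swarm2 used as an example here):

```shell
# take swarm2 out of scheduling; its tasks get rescheduled on other nodes
docker node update --availability drain swarm2
docker service ps redis
# bring the node back into the pool
docker node update --availability active swarm2
```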

Docker service deploy

You can also create a service first and scale up later – let's deploy nginx:

# start with one replica
docker service create --name nginx-web --publish 8080:80 nginx
# scale out to 3 replicas
docker service scale nginx-web=3
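Once the scale command returns, docker service ps shows on which nodes the three replicas have been placed:

```shell
# list the tasks (replicas) of the nginx-web service and the nodes they run on
docker service ps nginx-web
```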

Docker Swarm multiple services

user@swarm1:~$ docker service ls
ID            NAME       MODE        REPLICAS  IMAGE         PORTS
8a0wumpvlm7u  nginx-web  replicated  3/3       nginx:latest  *:8080->80/tcp
hya01a0hl6k0  redis      replicated  2/2       redis:latest  *:6379->6379/tcp

You can now simply check whether Redis responds on all nodes, despite the fact that it's only running on swarm1 and swarm2.

redis-cli -h swarm1 -p 6379
redis-cli -h swarm2 -p 6379
redis-cli -h swarm3 -p 6379

Multiple containers – WordPress + Database

This setup consists of multiple components: secrets, network, MariaDB Database service and WordPress service.


Let's create the passwords for the database root account and the WordPress database user first and store them as secrets available on every Swarm node.

openssl rand -base64 20 | docker secret create root_db_password -
openssl rand -base64 20 | docker secret create wp_db_password -

Use docker secret ls to show all secrets.
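Note that inspecting a secret only reveals its metadata – the value itself is never printed and is only made available to containers as a file under /run/secrets:

```shell
# metadata only; the generated password is not shown
docker secret inspect root_db_password
```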


Now we need a network connectivity across the cluster.

docker network create -d overlay wordpress-net
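On the manager you can verify the network right away; on the workers it only shows up once a task attached to it is scheduled there:

```shell
# list overlay networks and show details of the new one
docker network ls --filter driver=overlay
docker network inspect wordpress-net
```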

Create MariaDB service

The two extra variables MYSQL_USER and MYSQL_DATABASE (both set to wordpress here) make the image create the database and user that WordPress will connect with, using the password from the wp_db_password secret; the image name mariadb completes the command:

docker service create \
  --name mariadb \
  --replicas 1 \
  --network wordpress-net \
  --secret source=root_db_password,target=root_db_password \
  --secret source=wp_db_password,target=wp_db_password \
  -e MYSQL_ROOT_PASSWORD_FILE=/run/secrets/root_db_password \
  -e MYSQL_PASSWORD_FILE=/run/secrets/wp_db_password \
  -e MYSQL_USER=wordpress \
  -e MYSQL_DATABASE=wordpress \
  mariadb

Create WordPress service

The WordPress service reaches the database via the service name mariadb on the wordpress-net overlay network, using the same wordpress user and database created above:

docker service create \
  --name wp \
  --replicas 1 \
  --network wordpress-net \
  --publish 80:80 \
  --secret source=wp_db_password,target=wp_db_password,mode=0400 \
  -e WORDPRESS_DB_HOST=mariadb \
  -e WORDPRESS_DB_USER=wordpress \
  -e WORDPRESS_DB_NAME=wordpress \
  -e WORDPRESS_DB_PASSWORD_FILE=/run/secrets/wp_db_password \
  wordpress

If you visit any Docker Swarm node on port 80, you should see the WordPress installer.
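The same check works without a browser; a fresh WordPress instance redirects to its install page, and thanks to the routing mesh any node name will do:

```shell
# expect an HTTP redirect towards the WordPress installer
curl -sI http://swarm1/ | head -n 5
```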