Multi-node Docker deployments with caching
This article provides a sample approach that you can follow to set up the database and other infrastructure of your choice.
note
- For production deployments, it is advised that you use Kubernetes to orchestrate the deployment of the applications. Docker Compose is not ideal for a production setup. Proceed at your own risk.
- For the HA setup to work, you must mount a license file or provide an `XL_LICENSE` environment variable containing the license text encoded in base64 for the Deploy instances. The folders you mount must be owned by user 10001. For example, run `sudo chown -R 10001 xl-deploy-master` if you want to mount directories under the `$PWD/xl-deploy-master` folder.
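For example, here is a minimal sketch of both preparations; it assumes GNU coreutils' `base64` and a license file named `deploy-license.lic`, which is a placeholder name:
```
# Export the license as base64 so the Deploy containers can read it from XL_LICENSE
# (deploy-license.lic is a placeholder for your actual license file)
export XL_LICENSE=$(base64 -w0 deploy-license.lic)

# Make the directories you plan to mount owned by the container user (uid 10001)
sudo chown -R 10001 xl-deploy-master
```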
Setup
The setup includes:
- A load balancer with HAProxy
- A single-node RabbitMQ setup
- A single-node PostgreSQL database setup
- Deploy master nodes
- Deploy worker nodes
- An Infinispan caching server
Limitations
- The database setup is for demo purposes; use your own setup or an external database.
- The MQ setup is for demo purposes; use your own setup or an external MQ.
- The HAProxy setup is for demo purposes; use your own setup.
Steps
The production setup for Deploy described here can be created with the Docker Compose files referenced in the steps below. Follow the steps to deploy the sample:
- Create a directory named "xl-deploy-ha" and download the `docker-compose-xld-ha.yaml` and `docker-compose-xld-ha-workers.yaml` files into it.
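For example, a minimal sketch; the download location is not given here, so `<download-location>` is a placeholder for wherever you obtain the Compose files:
```
mkdir -p xl-deploy-ha && cd xl-deploy-ha
# <download-location> is a placeholder, not a real URL
curl -fsSLO https://<download-location>/docker-compose-xld-ha.yaml
curl -fsSLO https://<download-location>/docker-compose-xld-ha-workers.yaml
```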
- In the Docker Compose file for the master (`docker-compose-xld-ha.yaml`), make the following changes:
- Add `infinispan` to the dependency order of `xl-deploy-master`:
```
xl-deploy-master:
  image: xebialabs/xl-deploy:23.1.0
  depends_on:
    - infinispan
    - rabbitmq
    - postgresql
```
- Set the `USE_CACHE` environment variable to `true` to enable caching.
- Set the `XLD_IN_PROCESS` variable to `false` to disable the in-process worker.
- Set the `XL_CLUSTER_MODE` variable to `full` to set the cluster mode to active-active (see the environment sketch after these changes).
note: `hot-standby` mode is not supported for the caching setup in Docker.
- Add the Infinispan server details:
```
infinispan:
  image: infinispan/server
  networks:
    - xld-network
  ports:
    - "11222:11222"
  environment:
    - USER=admin
    - PASS=admin
```
- Mount the necessary .jar files to the `hotfix/lib` folder and copy the other configuration files:
```
volumes:
  - $PWD/Infinispan-JARs:/opt/xebialabs/xl-deploy-server/hotfix/lib:z
  - $PWD/xl-deploy-master/centralConfiguration/deploy-caches.yaml:/opt/xebialabs/xl-deploy-server/centralConfiguration/deploy-caches.yaml:z
  - $PWD/xl-deploy-master/conf/infinispan-hotrod.properties:/opt/xebialabs/xl-deploy-server/conf/infinispan-hotrod.properties:z
  - $PWD/xl-deploy-master/conf:/opt/xebialabs/xl-deploy-server/conf:z
```
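For reference, the three variables above belong in the `environment` section of the `xl-deploy-master` service. The snippet below is a minimal sketch showing only the cache-related entries; the other environment variables from the original Compose file are omitted:
```
xl-deploy-master:
  environment:
    - USE_CACHE=true        # enable the external Infinispan cache
    - XLD_IN_PROCESS=false  # disable the in-process worker
    - XL_CLUSTER_MODE=full  # active-active cluster mode
```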
- In the Docker Compose file for the worker (`docker-compose-xld-ha-workers.yaml`), make the following changes:
- Set the `XLD_IN_PROCESS` variable to `false` to disable the in-process worker.
- Set the `XL_CLUSTER_MODE` variable to `full` to set the cluster mode to active-active (see the environment sketch after these changes).
note: `hot-standby` mode is not supported for the caching setup in Docker.
- Copy the `infinispan-hotrod.properties` file and the other configuration files to the `/conf` folder:
```
volumes:
  - $PWD/xl-deploy-worker/conf/infinispan-hotrod.properties:/opt/xebialabs/xl-deploy-worker/conf/infinispan-hotrod.properties:z
  - $PWD/xl-deploy-worker/conf:/opt/xebialabs/xl-deploy-server/conf
  - $PWD/xl-deploy-worker/hotfix/lib:/opt/xebialabs/xl-deploy-worker/hotfix/lib:z
```
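Similarly, a minimal sketch of the corresponding `environment` entries for the `xl-deploy-worker` service; the other variables from the original Compose file are omitted:
```
xl-deploy-worker:
  environment:
    - XLD_IN_PROCESS=false  # disable the in-process worker
    - XL_CLUSTER_MODE=full  # active-active cluster mode
```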
- Add the following circuit-breaker configuration to the `infinispan-hotrod.properties` file to handle situations when the Infinispan server is down:
```
infinispan.client.hotrod.connect_timeout=5000
infinispan.client.hotrod.max_retries=1
```
Notes:
- Make sure that the cache names mentioned in the `infinispan-hotrod.properties` file match the original cache names (CI_PK_CACHE, CI_PATH_CACHE, and CI_PROPERTIES_CACHE).
- Ensure that all CI caches (CI_PK_CACHE, CI_PATH_CACHE, and CI_PROPERTIES_CACHE) are enabled or disabled together.
- Some properties, such as `ttl-minutes` in the `deploy-caches.yaml` file, exist under a different name (expiration `lifespan`) in the `infinispan-hotrod.properties` file. In such cases, the value provided in the `infinispan-hotrod.properties` file takes precedence.
- Create a directory named `rabbitmq` in the "xl-deploy-ha" directory and download the `enabled_plugins` file into it.
- To set up using the automated scripts, use the `run_master.sh` and `run_worker.sh` scripts. For a manual setup, follow the steps described below. Note: Change the passwords as required, whether you run the automated scripts or perform the manual setup.
- To shut down and clean up the Docker containers, run the `cleanup.sh` script.
```
# Set passwords
export XLD_ADMIN_PASS=admin
export RABBITMQ_PASS=admin
export POSTGRES_PASS=admin

# Create the Docker network
docker network create xld-network

# Deploy the master nodes, load balancer, MQ, and database
docker-compose -f docker-compose-xld-ha.yaml up --scale xl-deploy-master=3 -d

# Get the IPs of the master nodes; change the container names if you are not inside a folder named "xl-deploy-ha"
export XLD_MASTER_1=$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' xl-deploy-ha_xl-deploy-master_1)
export XLD_MASTER_2=$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' xl-deploy-ha_xl-deploy-master_2)
export XLD_MASTER_3=$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' xl-deploy-ha_xl-deploy-master_3)
export XLD_LB=$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' xl-deploy-lb)

# Deploy the worker nodes (if required, you can change the number of nodes here)
docker-compose -f docker-compose-xld-ha-workers.yaml up --scale xl-deploy-worker=3 -d

# Print the status
docker-compose -f docker-compose-xld-ha.yaml -f docker-compose-xld-ha-workers.yaml ps
```
- To view the logs of individual containers, run the `docker logs <container_name> -f` command.
- To access the Deploy UI, go to http://localhost:8080 (see the optional check below).
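Optionally, you can first check that Deploy answers through the load balancer. This is a rough check that assumes the default `/deployit` REST context path and the admin password exported earlier:
```
# Query the server info endpoint through the load balancer (assumes the default /deployit context path)
curl -sSf -u admin:${XLD_ADMIN_PASS} http://localhost:8080/deployit/server/info
```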
- To shut down the setup, run:
```
# Shutdown deployments
docker-compose -f docker-compose-xld-ha.yaml -f docker-compose-xld-ha-workers.yaml down

# Remove network
docker network rm xld-network
```
Upgrading multi-node deployments
For more information about upgrading a multi-node Deploy setup using Docker Compose, see Multi-node docker deployments.