Cluster Mode
This topic describes how to install and upgrade Release as a cluster. Running Release in cluster mode lets you set up a Highly Available (HA) Release deployment. Release supports the following HA mode:
- Active/active: Three or more nodes (odd number) are running simultaneously to process all requests. A load balancer is needed to distribute requests.
Prerequisites
Using Release in cluster mode requires the following:
- Release must be installed according to the system requirements. For more information, see the requirements for installing Release.
- The Release repository and archive must be stored in an external database, as described in the topic on configuring the Release repository in a database. Cluster mode is not supported for the default configuration with an embedded database.
- A load balancer. For more information, see the HAProxy load balancer documentation.
- A Java Messaging System (JMS) for the Webhooks functionality. For more information, see Webhooks overview.
- The time on all Release nodes must be synchronized through an NTP server.
- The servers running Release must run the same operating system.
- Release servers and load balancers must be on the same network.
- All the Release cluster nodes must reside in the same network segment. This is required for the clustering protocol to function correctly. For optimal performance, it is also recommended that you put the database server in the same network segment to minimize network latency.
- When you are using Release in cluster mode, you must specify a shared directory to store generated reports. You can specify the location of this shared directory in the `xl-release.conf` file with the `xl.reporting.engine.location` parameter. Its default value is `reports`, the path of the reports directory relative to the XL_RELEASE installation directory.
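The shared reports directory described above can be sketched as follows. This is a minimal example, assuming the shared storage is mounted at `/mnt/xlr-shared/reports` on every node; the mount point is illustrative, not a required path:

```
xl {
  reporting {
    engine {
      # Point all cluster nodes at the same shared directory,
      # for example an NFS mount available on every node.
      location = "/mnt/xlr-shared/reports"
    }
  }
}
```

Every node in the cluster must have read and write access to this directory so that any node can generate and serve reports.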
Setup Procedure
The initial cluster setup is:
- A load balancer
- A database server
- Three Release servers
Important: It is recommended to have a multi-node setup with an odd number of nodes to provide fault tolerance in production environments. It is also recommended not to have a cluster with more than five nodes, to prevent database latency issues. You can, however, with some database configuration tuning, run a cluster with more than five nodes. Contact Digital.ai Support for more information about setting up a cluster with more than five nodes.
To set up the cluster, perform the following configuration steps before starting Release.
Step 1 - Set Up External Databases
- See Configure the Release SQL repository in a database.
- If you are upgrading Release on a new cluster, set up the database server in the new cluster, back up your database from the existing Release cluster, and restore the same on the new database server.
Both the `xlrelease` repository and the reporting archive must be configured in an external database.
Note: In Release, you can set up PostgreSQL streaming replication to create a high availability (HA) database configuration with one or more standby servers ready to take over operations if the primary server fails.
Step 2 - Set Up the Cluster in the Release Application Configuration File
All active/active configuration settings are specified in the `XL_RELEASE_SERVER_HOME/conf/xl-release.conf` file, which uses the HOCON format.
- Enable clustering by setting `xl.cluster.mode` to `full` (active/active).
- Define ports for different types of incoming TCP connections in the `xl.cluster.node` section:
| Parameter | Description |
|---|---|
| `xl.cluster.mode` | Possible values: `default` (single node, no cluster) or `full` (active/active). Set this property to `full` to turn on cluster mode. |
| `xl.cluster.name` | A label that identifies the cluster. |
| `xl.cluster.node.id` | Unique ID that identifies this node in the cluster. |
| `xl.cluster.node.hostname` | IP address or host name of the machine where the node is running. Note that a loopback address such as `127.0.0.1` or `localhost` should not be used. |
| `xl.cluster.node.clusterPort` | Port used for cluster-wide communication; defaults to `5531`. |
| `xl.queue.embedded` | Possible values: `true` or `false`. Set this to `false` if you want to use the webhooks feature. |
Sample configuration
This is an example of the `xl-release.conf` configuration for an active/active setup:
```
xl {
  cluster {
    mode = full
    name = "xlr-cluster"
    node {
      clusterPort = 5531
      hostname = "xlrelease-1.example.com"
      id = "xlrelease-1"
    }
  }
  database {
    ...
  }
  queue {
    embedded = false
    ...
  }
}
```
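On the other nodes of the cluster, the configuration is identical except for the node-specific values. As a sketch following the naming in the sample above (the hostname `xlrelease-2.example.com` and ID `xlrelease-2` are assumed examples, not required values), the second node would use:

```
xl {
  cluster {
    mode = full
    name = "xlr-cluster"   # must be the same on every node in the cluster
    node {
      clusterPort = 5531
      hostname = "xlrelease-2.example.com"  # this node's own address, not a loopback address
      id = "xlrelease-2"                    # must be unique per node
    }
  }
  ...
}
```

Keeping `xl.cluster.name` identical across nodes while varying `xl.cluster.node.id` and `xl.cluster.node.hostname` is what lets the nodes discover each other as members of the same cluster.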
Note: If you are upgrading Release, you can reuse the existing Release cluster's `xl-release.conf` file. Copy the existing `xl-release.conf` file to the new server and update it with any changes to the cluster name (`xl.cluster.name`), hostname (`xl.cluster.node.hostname`), and so on.
Important: If you want to use the webhooks feature in a High Availability (cluster mode) setup, the JMS queue cannot be embedded. It must be external and shared by all nodes in the Release cluster.