
Setup ActiveMQ Artemis HA With UDP

This topic outlines the procedure to set up Apache ActiveMQ Artemis in a highly available (HA) configuration using the UDP protocol. It also describes how to connect Digital.ai Deploy to the Artemis nodes.

Prerequisites

The example setup that follows uses the CentOS release 8.1.1911 operating system.

Hostname   IP Address     Purpose
node1      192.168.10.1   Artemis master (live server)
node2      192.168.10.2   Artemis slave (backup server)
note

To test or set up a cluster, you must have at least two physical or virtual machines. However, you can always add more machines to your cluster environment.

If you are setting up the cluster in a development environment, use the following commands to disable the firewalld service:

systemctl stop firewalld.service

systemctl disable firewalld.service

note

However, if you want to continue using the firewalld service, you must allow multicast traffic through firewalld as described in the following sections.

Allowing IGMP traffic

IGMP traffic must be allowed through the firewall so that the system can respond to multicast queries for general group memberships, and for specific groups:

Use the following commands to enable IGMP traffic:

firewall-cmd --add-protocol=igmp

firewall-cmd --permanent --add-protocol=igmp
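
As an optional quick check (not part of the original procedure), you can confirm that the IGMP rule is active:

# list the protocols currently allowed in the default zone
firewall-cmd --list-protocols

# prints "yes" if the igmp rule is in place
firewall-cmd --query-protocol=igmp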

Allowing UDP Multicast

The multicast traffic itself must also be allowed through the firewall so that the system can receive the traffic carrying the data payload.

Use the following commands to enable multicast traffic:

firewall-cmd --direct --add-rule ipv4 filter INPUT 10 -d 231.7.7.7 -j ACCEPT

firewall-cmd --permanent --direct --add-rule ipv4 filter INPUT 10 -d 231.7.7.7 -j ACCEPT
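
To verify that the direct rule was registered, you can optionally list all direct rules (this check is an addition to the original procedure):

# prints the ipv4 filter INPUT rule accepting traffic to 231.7.7.7
firewall-cmd --direct --get-all-rules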

Enabling the port

Use the following commands to enable the multicast UDP port in the firewall:

firewall-cmd --add-port=9876/udp

firewall-cmd --permanent --add-port=9876/udp

Configuring a multicast listener

To enable a multicast listener, create a service XML file in the /etc/firewalld/services/ directory:

vi /etc/firewalld/services/multicastlistener.xml

A sample XML file is shown below; replace the values with the actual values for your environment.

<?xml version="1.0" encoding="utf-8"?>
<service>
  <short>My Multicast Listener</short>
  <description>A service which allows traffic to a fictitious multicast listener.</description>
  <port protocol="udp" port="9876"/>
  <destination ipv4="231.7.7.7"/>
</service>

Once you have created the file, apply the service and open the Artemis TCP ports (61616 for messaging and 8161 for the web console):

firewall-cmd --reload

firewall-cmd --add-service=multicastlistener

firewall-cmd --permanent --add-service=multicastlistener

sudo firewall-cmd --zone=public --permanent --add-port=61616/tcp

sudo firewall-cmd --zone=public --permanent --add-port=8161/tcp

firewall-cmd --reload
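
As an optional sanity check, you can review the resulting firewall configuration on each node; the exact output depends on your zone setup:

# show services, ports, and protocols allowed in the active zone
firewall-cmd --list-all

# show the definition of the custom multicast listener service
firewall-cmd --info-service=multicastlistener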

Preparing the master broker on node1

In this example, the master broker is set up on node1.

  1. Download the latest Apache ActiveMQ Artemis distribution from the following link and unzip it:
    https://activemq.apache.org/components/artemis/download/
  2. Create a message broker using the following commands:

    cd /root/apache-artemis-2.15.0/bin

    ./artemis create /opt/master-broker/

Provide the username and password for the user when prompted during broker creation.
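
If you prefer to supply the credentials non-interactively, the broker can also be created in a single command; this is a sketch using standard artemis create options, and the password shown is a placeholder:

# create the master broker with login required and an explicit admin user
./artemis create --user admin --password <your-password> --require-login /opt/master-broker/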

  3. Change the jolokia-access.xml and bootstrap.xml files to allow access to the Artemis UI.

Navigate to the location of the jolokia-access.xml file and use an editor to edit its contents.

cd /opt/master-broker/etc

vi jolokia-access.xml

Contents of the jolokia-access.xml file used in this example setup are shown below:

<cors>
  <!-- Allow cross origin access from localhost ... -->
  <allow-origin>*://localhost*</allow-origin>
  <!-- Options from this point on are auto-generated by Create.java from the Artemis CLI -->
  <!-- Check for the proper origin on the server side, too -->
  <strict-checking/>
</cors>

In the file, replace the value in the <allow-origin>*://localhost*</allow-origin> element so that it looks like this:

<allow-origin>*://*</allow-origin>

In the bootstrap.xml file, replace the bind value in the <web bind="http://localhost:8161" path="web"> element so that it looks as follows:

<web bind="http://0.0.0.0:8161" path="web">

Contents of the bootstrap.xml file used in this example setup are shown below:

<!-- The web server is only bound to localhost by default -->
<web bind="http://localhost:8161" path="web">
  <app url="activemq-branding" war="activemq-branding.war"/>
  <app url="artemis-plugin" war="artemis-plugin.war"/>
  <app url="console" war="console.war"/>
</web>
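
If you prefer to apply the two edits from the command line, the following sed one-liners are a possible shortcut; they assume the default file contents shown above:

# allow any origin in jolokia-access.xml
sed -i 's|\*://localhost\*|*://*|' /opt/master-broker/etc/jolokia-access.xml

# bind the web console to all interfaces in bootstrap.xml
sed -i 's|http://localhost:8161|http://0.0.0.0:8161|' /opt/master-broker/etc/bootstrap.xml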
  4. Remove the existing broker.xml file using the following commands:

cd /opt/master-broker/etc

rm broker.xml

Create a new broker.xml file.

Note: The section below contains a complete sample broker.xml file. You can copy and paste its contents into your newly created broker.xml. Remember to replace the node1_ip placeholder with your host's IP address and fill in actual values for these elements:

<cluster-user>ACTIVEMQ.CLUSTER.ADMIN.USER</cluster-user>

<cluster-password>CHANGE ME!!</cluster-password>
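
If you are copying the sample file, one way to fill in the node1 address is a simple search and replace; the IP address below is the example value from the table at the top of this topic:

# replace the node1_ip placeholder with the actual host IP
sed -i 's/node1_ip/192.168.10.1/g' /opt/master-broker/etc/broker.xml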

Sample broker.xml

<?xml version='1.0'?>
<!--
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
-->

<configuration xmlns="urn:activemq"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:xi="http://www.w3.org/2001/XInclude"
xsi:schemaLocation="urn:activemq /schema/artemis-configuration.xsd">

<core xmlns="urn:activemq:core" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="urn:activemq:core ">

<name>artemis-master</name>


<persistence-enabled>true</persistence-enabled>

<!-- this could be ASYNCIO, MAPPED, NIO
ASYNCIO: Linux Libaio
MAPPED: mmap files
NIO: Plain Java Files
-->
<journal-type>ASYNCIO</journal-type>

<paging-directory>data/paging</paging-directory>

<bindings-directory>data/bindings</bindings-directory>

<journal-directory>data/journal</journal-directory>

<large-messages-directory>data/large-messages</large-messages-directory>

<journal-datasync>true</journal-datasync>

<journal-min-files>2</journal-min-files>

<journal-pool-files>10</journal-pool-files>

<journal-device-block-size>4096</journal-device-block-size>

<journal-file-size>10M</journal-file-size>

<!--
This value was determined through a calculation.
Your system could perform 15.62 writes per millisecond
on the current journal configuration.
That translates as a sync write every 64000 nanoseconds.

Note: If you specify 0 the system will perform writes directly to the disk.
We recommend this to be 0 if you are using journalType=MAPPED and journal-datasync=false.
-->
<journal-buffer-timeout>64000</journal-buffer-timeout>


<!--
When using ASYNCIO, this will determine the writing queue depth for libaio.
-->
<journal-max-io>4096</journal-max-io>

<!-- how often we are looking for how many bytes are being used on the disk in ms -->
<disk-scan-period>5000</disk-scan-period>

<!-- once the disk hits this limit the system will block, or close the connection in certain protocols
that won't support flow control. -->
<max-disk-usage>90</max-disk-usage>

<!-- should the broker detect dead locks and other issues -->
<critical-analyzer>true</critical-analyzer>

<critical-analyzer-timeout>120000</critical-analyzer-timeout>

<critical-analyzer-check-period>60000</critical-analyzer-check-period>

<critical-analyzer-policy>HALT</critical-analyzer-policy>


<page-sync-timeout>3492000</page-sync-timeout>

<!-- Connectors -->

<connectors>
<connector name="netty-connector">tcp://node1_ip:61616</connector>
</connectors>

<!-- Acceptors -->
<acceptors>
<acceptor name="netty-acceptor">tcp://node1_ip:61616</acceptor>
</acceptors>

<!-- Clustering configuration -->
<broadcast-groups>
<broadcast-group name="Artemis-broadcast-group">
<local-bind-address>node1_ip</local-bind-address>
<local-bind-port>5433</local-bind-port>
<group-address>231.7.7.7</group-address>
<group-port>9876</group-port>
<broadcast-period>100</broadcast-period>
<connector-ref>netty-connector</connector-ref>
</broadcast-group>
</broadcast-groups>

<discovery-groups>
<discovery-group name="Artemis-discovery-group">
<local-bind-address>node1_ip</local-bind-address>
<group-address>231.7.7.7</group-address>
<group-port>9876</group-port>
<refresh-timeout>10000</refresh-timeout>
</discovery-group>
</discovery-groups>

<!-- This is a symmetric cluster -->

<cluster-connections>
<cluster-connection name="Demo-Artemis-Cluster">
<address></address>
<connector-ref>netty-connector</connector-ref>
<check-period>1000</check-period>
<connection-ttl>5000</connection-ttl>
<min-large-message-size>50000</min-large-message-size>
<call-timeout>5000</call-timeout>
<retry-interval>500</retry-interval>
<retry-interval-multiplier>1.0</retry-interval-multiplier>
<max-retry-interval>5000</max-retry-interval>
<initial-connect-attempts>-1</initial-connect-attempts>
<reconnect-attempts>-1</reconnect-attempts>
<use-duplicate-detection>true</use-duplicate-detection>
<message-load-balancing>ON_DEMAND</message-load-balancing>
<max-hops>1</max-hops>
<confirmation-window-size>32000</confirmation-window-size>
<call-failover-timeout>30000</call-failover-timeout>
<notification-interval>1000</notification-interval>
<notification-attempts>2</notification-attempts>
<discovery-group-ref discovery-group-name="Artemis-discovery-group"/>
</cluster-connection>
</cluster-connections>

<cluster-user>admin</cluster-user>
<cluster-password>admin</cluster-password>

<security-settings>
<security-setting match="#">
<permission type="createNonDurableQueue" roles="amq"/>
<permission type="deleteNonDurableQueue" roles="amq"/>
<permission type="createDurableQueue" roles="amq"/>
<permission type="deleteDurableQueue" roles="amq"/>
<permission type="createAddress" roles="amq"/>
<permission type="deleteAddress" roles="amq"/>
<permission type="consume" roles="amq"/>
<permission type="browse" roles="amq"/>
<permission type="send" roles="amq"/>
<!-- we need this otherwise ./artemis data imp wouldn't work -->
<permission type="manage" roles="amq"/>
</security-setting>
</security-settings>

<address-settings>
<!-- if you define auto-create on certain queues, management has to be auto-create -->
<address-setting match="activemq.management#">
<dead-letter-address>DLQ</dead-letter-address>
<expiry-address>ExpiryQueue</expiry-address>
<redelivery-delay>0</redelivery-delay>
<!-- with -1 only the global-max-size is in use for limiting -->
<max-size-bytes>-1</max-size-bytes>
<message-counter-history-day-limit>10</message-counter-history-day-limit>
<address-full-policy>PAGE</address-full-policy>
<auto-create-queues>true</auto-create-queues>
<auto-create-addresses>true</auto-create-addresses>
<auto-create-jms-queues>true</auto-create-jms-queues>
<auto-create-jms-topics>true</auto-create-jms-topics>
</address-setting>
<!--default for catch all-->
<address-setting match="#">
<dead-letter-address>DLQ</dead-letter-address>
<expiry-address>ExpiryQueue</expiry-address>
<redelivery-delay>0</redelivery-delay>
<!-- with -1 only the global-max-size is in use for limiting -->
<max-size-bytes>-1</max-size-bytes>
<message-counter-history-day-limit>10</message-counter-history-day-limit>
<address-full-policy>PAGE</address-full-policy>
<auto-create-queues>true</auto-create-queues>
<auto-create-addresses>true</auto-create-addresses>
<auto-create-jms-queues>true</auto-create-jms-queues>
<auto-create-jms-topics>true</auto-create-jms-topics>
</address-setting>
</address-settings>

<addresses>
<address name="DLQ">
<anycast>
<queue name="DLQ" />
</anycast>
</address>
<address name="ExpiryQueue">
<anycast>
<queue name="ExpiryQueue" />
</anycast>
</address>

</addresses>


<!-- Uncomment the following if you want to use the Standard LoggingActiveMQServerPlugin plugin to log events
<broker-plugins>
<broker-plugin class-name="org.apache.activemq.artemis.core.server.plugin.impl.LoggingActiveMQServerPlugin">
<property key="LOG_ALL_EVENTS" value="true"/>
<property key="LOG_CONNECTION_EVENTS" value="true"/>
<property key="LOG_SESSION_EVENTS" value="true"/>
<property key="LOG_CONSUMER_EVENTS" value="true"/>
<property key="LOG_DELIVERING_EVENTS" value="true"/>
<property key="LOG_SENDING_EVENTS" value="true"/>
<property key="LOG_INTERNAL_EVENTS" value="true"/>
</broker-plugin>
</broker-plugins>
-->
<ha-policy>
<replication>
<master>
<check-for-live-server>true</check-for-live-server>
</master>
</replication>
</ha-policy>

</core>
</configuration>
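
Once broker.xml is in place, you can start the master broker and confirm that it comes up as the live server; the paths below assume the /opt/master-broker instance created earlier:

# start the broker as a background service
/opt/master-broker/bin/artemis-service start

# watch the broker log; the master logs a message indicating the server is now live
tail -f /opt/master-broker/log/artemis.log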
