
How to configure Mule nodes belonging to different clusters instances when sharing the same host


GOAL

The goal of this document is to help you configure two or more Mule clusters so that they form correctly while sharing the same servers. For example, suppose you have two physical servers and want to create clusters laid out like this:

Server 1:
Node 1 - Cluster A
Node 2 - Cluster B

Server 2:
Node 3 - Cluster A
Node 4 - Cluster B

BACKGROUND

Cluster behavior is configured through the mule-cluster.properties file, which controls many aspects of cluster formation. You can configure your cluster to use either multicast or unicast communication, but we suggest unicast because it does not rely on network configuration. Multicast is less effective, requires more interface configuration, and is more resource-intensive than unicast.
When two or more clusters share the same hosts, all of their nodes may try to reuse the default Hazelcast port 5701, preventing the clusters from forming properly. To avoid this, you must explicitly configure a distinct TCP listener port per cluster, keeping the nodes of cluster "A" separate from the nodes of cluster "B".
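
As a minimal sketch of the two toggles that switch a cluster from multicast to unicast communication (assuming a default on-premises installation, where the file typically lives at $MULE_HOME/.mule/mule-cluster.properties on each node):

  # mule-cluster.properties (sketch): switch discovery from multicast to unicast
  mule.cluster.multicastenabled=false
  mule.cluster.tcpipenabled=true

The full set of properties, including the per-cluster ports, is covered in the procedure below.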

PROCEDURE

To make things work properly, and to help you diagnose cluster issues, configure the following:
  1. Switch the cluster to unicast (do not use multicast; it depends on the network infrastructure and is more error-prone).
  2. Specify a "mule.cluster.nodes" list for each cluster, including the host and port of every member (this is required for unicast messaging).
  3. Turn on TCP/IP communication on cluster-enabled members ("mule.cluster.tcpipenabled" set to true).
  4. Specify a different Hazelcast local node TCP listener port for each cluster; this keeps the clusters from clashing over the same ports.
  5. Disable multicast messaging explicitly by setting "mule.cluster.multicastenabled" to false.

This example assumes you have two clusters spread across two boxes (first node IP 172.16.171.135, second node IP 172.16.171.140):

Cluster 1
 
Option                         | Description                                                                      | Example
mule.cluster.multicastenabled  | Enables/disables multicast messaging                                             | false
mule.cluster.nodes             | Specifies the cluster member nodes, in the form H1:P1,H2:P2,...                  | 172.16.171.135:7512,172.16.171.140:7512
mule.cluster.tcpipenabled      | Specifies whether the TCP/IP protocol is used for unicast                        | true
mule.cluster.tcpinboundport    | Specifies the Hazelcast local node listening port                                | 7512
mule.cluster.networkinterfaces | Comma-separated list of IPs of the local network interfaces the node may bind to | 172.16.171.135,172.16.171.140
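
Put together, a sketch of cluster 1's mule-cluster.properties using the example IPs and ports above (only the properties discussed in this article are shown; the same file goes on both of cluster 1's nodes):

  # mule-cluster.properties -- Cluster 1
  mule.cluster.multicastenabled=false
  mule.cluster.tcpipenabled=true
  mule.cluster.nodes=172.16.171.135:7512,172.16.171.140:7512
  mule.cluster.tcpinboundport=7512
  mule.cluster.networkinterfaces=172.16.171.135,172.16.171.140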

Cluster 2
 
Option                         | Description                                                                      | Example
mule.cluster.multicastenabled  | Enables/disables multicast messaging                                             | false
mule.cluster.nodes             | Specifies the cluster member nodes, in the form H1:P1,H2:P2,...                  | 172.16.171.135:7522,172.16.171.140:7522
mule.cluster.tcpipenabled      | Specifies whether the TCP/IP protocol is used for unicast                        | true
mule.cluster.tcpinboundport    | Specifies the Hazelcast local node listening port                                | 7522
mule.cluster.networkinterfaces | Comma-separated list of IPs of the local network interfaces the node may bind to | 172.16.171.135,172.16.171.140
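
Cluster 2's file is identical except for the ports, which is what keeps the two clusters apart on the same hosts:

  # mule-cluster.properties -- Cluster 2 (note the different port, 7522)
  mule.cluster.multicastenabled=false
  mule.cluster.tcpipenabled=true
  mule.cluster.nodes=172.16.171.135:7522,172.16.171.140:7522
  mule.cluster.tcpinboundport=7522
  mule.cluster.networkinterfaces=172.16.171.135,172.16.171.140
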
The key property in this configuration is "mule.cluster.tcpinboundport": it makes a cluster node listen on that specific port instead of the default 5701. Its value must match the port you specified for that node in the "mule.cluster.nodes" property.

DISCLAIMER

PLEASE NOTE: Manually modifying a cluster configuration is not supported and can lead to inconsistencies if the cluster is managed through Runtime Manager. This procedure is provided for informational purposes only.