In my current setup, I'm using the default multicast option of the Hazelcast cluster manager. When I link the instances of my containerized Vert.x modules (via Docker networking links), I can see that they successfully form a Hazelcast cluster. However, when I publish events on the event bus from one module, the other module doesn't react to them. I'm not sure how the network settings of the Hazelcast cluster relate to the network settings of the event bus.
At the moment, I have the following programmatic configuration for each of my Vert.x modules, each deployed inside a Docker container:
import io.vertx.core.VertxOptions;
import io.vertx.core.eventbus.EventBusOptions;
import io.vertx.core.spi.cluster.ClusterManager;
import io.vertx.spi.cluster.hazelcast.HazelcastClusterManager;

ClusterManager clusterManager = new HazelcastClusterManager();
VertxOptions vertxOptions = new VertxOptions()
    .setClustered(true)
    .setClusterManager(clusterManager);
vertxOptions.setEventBusOptions(new EventBusOptions()
    .setClustered(true)
    .setClusterPublicHost("application"));
The Vert.x Core manual states that I may have to configure clusterPublicHost and clusterPublicPort for the event bus, but I'm not sure how those relate to the general network topology.
One answer is here: https://groups.google.com/d/msg/vertx/_2MzDDowMBM/nFoI_k6GAgAJ
I see this question come up a lot, and what a lot of people miss in the documentation (myself included) is that the event bus does not use the cluster manager to send event bus messages. I.e. in your example with Hazelcast as the cluster manager, you have the Hazelcast cluster up and communicating properly (so your cluster manager is fine); however, the event bus is failing to communicate with your other Docker instances due to one or more of the following:
- It is attempting to use an incorrect IP address for the other node (i.e. the IP of the private interface on the Docker instance, not the publicly mapped one)
- It is attempting to communicate on a port Docker is not configured to forward (the event bus picks a dynamic port if you don't specify one)
What you need to do is:
- Tell Vert.x the IP address that the other nodes should use to talk to each instance (using the -cluster-host [command line], setClusterPublicHost [VertxOptions] or "vertx.cluster.public.host" [system property] options)
- Tell Vert.x explicitly the port to use for event bus communication and ensure Docker is forwarding traffic for those ports (using the "vertx.cluster.public.port" [system property], setClusterPublicPort [VertxOptions] or -cluster-port [command line] options). In the past, I have used 15701 because it is easy to remember (just a '1' in front of the Hazelcast port). A sketch of the programmatic variant follows this list.
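For example, reusing the vertxOptions from the question, a minimal sketch against the Vert.x 3.x API (the fixed port 15701 follows the suggestion above; the host name "application" and the bind port are assumptions):

vertxOptions.setEventBusOptions(new EventBusOptions()
    .setClustered(true)
    // The address the OTHER nodes should use to reach this instance,
    // i.e. what Docker exposes, not the container-internal interface
    .setClusterPublicHost("application")  // assumed host name mapped by Docker
    .setClusterPublicPort(15701)
    // Also bind the event bus to a fixed port inside the container,
    // so you know which port to publish
    .setPort(15701));

Then make sure Docker actually forwards that port when starting the container, e.g. docker run -p 15701:15701 ...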
The event bus only uses the cluster manager to manage the IP/port information of the other Vert.x instances and the registration of the consumers/producers. The communications are done independently of the cluster manager, which is why you can have the cluster manager configured properly and communicating, but still have no event bus communications.
You may not need to do both of the steps above if both your containers are running on the same host, but you definitely will once you start running them on separate hosts.
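To make the distinction concrete, here is a minimal sketch (the address "news" is made up): the consumer registration is the part that travels through the cluster manager, while the published message itself travels over a direct event bus connection between the nodes.

// On node A: register a consumer. Only this registration is propagated
// via the cluster manager (Hazelcast).
vertx.eventBus().consumer("news", message ->
    System.out.println("Received: " + message.body()));

// On node B: publish a message. This does NOT go through Hazelcast;
// the event bus connects directly to node A, using the public
// host/port that node A advertised.
vertx.eventBus().publish("news", "hello");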
Something that can also happen is that Vert.x uses the loopback interface when you don't specify the IP that Vert.x (not Hazelcast) should use to communicate over the event bus. The problem here is that you don't know which interface is used for communication (loopback, an interface with an IP, you could even have multiple interfaces with IPs).
To overcome this problem, I once wrote a method: https://github.com/swisspush/vertx-cluster-watchdog/blob/master/src/main/java/org/swisspush/vertx/cluster/ClusterWatchdogRunner.java#L101
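The linked method essentially walks the local network interfaces and picks a suitable non-loopback address; a minimal sketch of the same idea (not the exact code from the link) could look like this:

import java.net.Inet4Address;
import java.net.InetAddress;
import java.net.NetworkInterface;
import java.util.Collections;

// Returns the first non-loopback IPv4 address found on this machine,
// or null if there is none. With multiple candidate interfaces you
// still have to decide which one is right for your topology.
static String firstNonLoopbackAddress() throws Exception {
    for (NetworkInterface nic : Collections.list(NetworkInterface.getNetworkInterfaces())) {
        for (InetAddress addr : Collections.list(nic.getInetAddresses())) {
            if (!addr.isLoopbackAddress() && addr instanceof Inet4Address) {
                return addr.getHostAddress();
            }
        }
    }
    return null;
}

The result can then be passed to setClusterHost(...).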
The cluster manager works fine. The cluster manager configuration has to be the same on each node (machine/Docker container) in your cluster, or you make no configuration at all (i.e. use the default configuration of your cluster manager).
The event bus configuration, however, has to be consistent on each node: set the cluster host on each node to that node's own IP address, and pick any arbitrary port number (but if you run more than one Vert.x instance on the same node, you have to choose a different port for each instance).
For example, if a node's IP address is 192.168.1.12, you would do the following:
VertxOptions options = new VertxOptions()
.setClustered(true)
.setClusterHost("192.168.1.12") // node ip
.setClusterPort(17001) // any arbitrary port, but make sure no other Vert.x instance uses the same port on this node
.setClusterManager(clusterManager);
On another node, whose IP address is 192.168.1.56, you would do the following:
VertxOptions options = new VertxOptions()
.setClustered(true)
.setClusterHost("192.168.1.56") // other node ip
.setClusterPort(17001) // the same port is fine because this is a different node
.setClusterManager(clusterManager);
I found this solution, and it worked perfectly for me; below is my code snippet (the important part is options.setClusterHost()):
import java.net.InetAddress;
import java.net.UnknownHostException;

import io.vertx.core.AbstractVerticle;
import io.vertx.core.Vertx;
import io.vertx.core.VertxOptions;

public class Runner {

    public static void run(Class<? extends AbstractVerticle> clazz) {
        VertxOptions options = new VertxOptions();
        try {
            // For Docker binding: use the container's own address as the cluster host
            String local = InetAddress.getLocalHost().getHostAddress();
            options.setClusterHost(local);
        } catch (UnknownHostException e) {
            // Fall back to Vert.x's default interface selection
        }
        options.setClustered(true);
        Vertx.clusteredVertx(options, res -> {
            if (res.succeeded()) {
                res.result().deployVerticle(clazz.getName());
            } else {
                res.cause().printStackTrace();
            }
        });
    }
}
import io.vertx.core.AbstractVerticle;

public class Publisher extends AbstractVerticle {
public static void main(String[] args) {
Runner.run(Publisher.class);
}
...
}
No need to define anything else.