I'm trying to configure Apache ServiceMix 4 to provide the load balancing feature mentioned in its documentation (for example here: http://servicemix.apache.org/clustering.html). Although the feature is mentioned, I couldn't find out exactly how to set it up.
The idea is to have two ServiceMix instances (on the same LAN, for example) with the same OSGi service installed on both. When a client tries to use the service, the load balancer routes the request to an appropriate service instance on one of the ServiceMix nodes.
Is there an easy way to do that?
I haven't reached this phase of my project yet, so I have no hands-on experience with it, but Karaf has a subproject, Cellar, that is designed around Distributed OSGi.
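In case it helps, here is a minimal sketch of how a bundle might publish a service for Cellar's DOSGi support, assuming Cellar honours the standard OSGi Remote Services property `service.exported.interfaces` (the `GreetingService` interface and the activator are made up purely for illustration):

```java
import java.util.Dictionary;
import java.util.Hashtable;

import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;

// Hypothetical service interface, used only for this sketch.
interface GreetingService {
    String greet(String name);
}

public class Activator implements BundleActivator {

    @Override
    public void start(BundleContext context) {
        Dictionary<String, Object> props = new Hashtable<>();
        // Standard OSGi Remote Services marker; Cellar's DOSGi module picks up
        // services flagged for export and makes them callable from the other
        // nodes in the same cluster group.
        props.put("service.exported.interfaces", "*");

        context.registerService(GreetingService.class, new GreetingService() {
            @Override
            public String greet(String name) {
                return "Hello " + name;
            }
        }, props);
    }

    @Override
    public void stop(BundleContext context) {
        // Services registered by this bundle are unregistered automatically.
    }
}
```

You would still need to install Cellar (and its DOSGi feature) on each node and put the nodes in the same cluster group; check the Cellar documentation for the exact feature names in your version.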
Fabric8 (http://fabric8.io/) can do Karaf/ServiceMix clustering and much more out of the box. It also has additional clustered Camel components such as the master and fabric endpoints.
There is a clustered Camel example that demonstrates this.
The client then load balances between the active nodes that provide the service. So if you have 5 nodes, it balances among those 5. If one of the nodes dies or is stopped, it simply balances between the 4 nodes that are still active. The solution is therefore fully elastic and scalable.
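To give an idea of what that looks like in a route, here is a rough sketch from memory of the fabric8 1.x docs, not a drop-in config: the component scheme has appeared as `fabric` or `fabric-camel` depending on the version, and the cluster name `myCluster`, the port and the path are all made up.

```java
import org.apache.camel.builder.RouteBuilder;

public class FabricClusterRoutes extends RouteBuilder {

    @Override
    public void configure() {
        // Consumer side (deploy on each ServiceMix/Karaf node): register the
        // real endpoint under a logical cluster name in ZooKeeper so it becomes
        // one of the members that producers can be balanced to.
        from("fabric-camel:myCluster:jetty:http://0.0.0.0:9090/service")
            .transform().constant("pong from this node");

        // Producer side (the "client"): send to the logical cluster name only.
        // The component looks up the currently registered members and load
        // balances requests across them.
        from("timer:ping?period=5000")
            .setBody(constant("ping"))
            .to("fabric-camel:myCluster")
            .log("Got reply: ${body}");
    }
}
```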
The principle is illustrated in the image below:
I have no experience with Distributed OSGi, so I will only talk about the JMS-based clustering solutions.
Here is a good ServiceMix 4 JBI cluster example (and please believe him that you have to turn off conduitSubscriptions...): http://trenaman.blogspot.com/2010/04/four-things-you-need-to-know-about-new.html
This one is important too: http://trenaman.blogspot.com/2009/03/new-jms-flow-in-servicemix-4.html
The name "JBI cluster" suggests that you should use this mechanism to cluster your application but it is in the most cases better to use the simple JMS endpoints for the clustering functionality. This is especially true if you can avoid JBI completely.