I have a cluster of 3 ElasticSearch nodes running on AWS EC2. These nodes are setup using OpsWorks/Chef. My intent is to design this cluster to be very resilient and elastic (nodes can come in and out when needed).
From everything I've read about ElasticSearch, it seems like no one recommends putting a load balancer in front of the cluster; instead, it seems like the recommendation is to do one of two things:
1. Point your client at the URL/IP of one node, let ES do the load balancing for you, and hope that node never goes down.
2. Hard-code the URLs/IPs of ALL your nodes into your client app and have the app handle the failover logic.
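A minimal sketch of the second approach, with all node URLs hard-coded in the client and failover handled in application code. The node addresses and the injectable `fetch` hook are hypothetical; a real client library (such as the official Elasticsearch clients) already implements this logic for you:

```python
import urllib.request

# Hypothetical node addresses -- replace with your cluster's URLs.
NODES = ["http://10.0.0.1:9200", "http://10.0.0.2:9200", "http://10.0.0.3:9200"]

def request_with_failover(path, nodes=NODES, fetch=None):
    """Try each node in turn and return the first successful response.

    `fetch` is injectable so the failover logic can be exercised
    without a live cluster; by default it performs a plain HTTP GET.
    """
    if fetch is None:
        fetch = lambda url: urllib.request.urlopen(url, timeout=2).read()
    last_error = None
    for node in nodes:
        try:
            return fetch(node + path)
        except Exception as err:  # node unreachable: remember and try the next
            last_error = err
    raise ConnectionError("all nodes failed") from last_error
```

A production version would also mark dead nodes and retry them later instead of probing them on every request.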
My background is mostly in web farms where it's just common sense to create a huge pool of autonomous web servers, throw an ELB in front of them and let the load balancer decide what nodes are alive or dead. Why does ES not seem to support this same architecture?
You don't need a load balancer; ES already provides that functionality. You'd just be adding another component, which could misbehave and which would add an unnecessary network hop.
ES will shard your data (into 5 shards by default) and try to distribute the shards evenly among your instances. In your case, two instances would hold two shards each and one instance only one, so you might want to raise the shard count to 6 for an even distribution.
By default, replication is set to `"number_of_replicas": 1`, so there is one replica of each shard. Assuming you use 6 shards, the allocation could look something like this (S is a primary shard, R its replica; exact placement is up to ES):

node1: S1 S2 | R3 R5
node2: S3 S4 | R1 R6
node3: S5 S6 | R2 R4

Assuming node1 dies, the replicas of S1 and S2 on the surviving nodes are promoted to primaries, and ES allocates fresh replicas so that every shard again has one copy elsewhere.
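The shard and replica counts described above are set when the index is created; a hedged sketch (the index name `myindex` is a placeholder):

```json
PUT /myindex
{
  "settings": {
    "number_of_shards": 6,
    "number_of_replicas": 1
  }
}
```

Note that `number_of_shards` is fixed at index creation, while `number_of_replicas` can be changed later on a live index.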
Depending on your connection settings, you can either connect to one instance (transport client) or join the cluster (node client). With the node client you avoid double hops, since it always connects to the correct shard/index; with the transport client, your requests are routed to the correct instance by the node you connect to.
So there's nothing to load balance for yourself, you'd just add overhead. The auto-clustering is probably ES's greatest strength.
I believe load balancing an Elasticsearch cluster is a good idea (designing a fault-tolerant system, resilient to single-node failure).
To architect your cluster you'll need background on the two primary functions of Elasticsearch: 1. writing and updating documents, and 2. querying documents.
Writing / indexing documents in Elasticsearch:
Querying documents in Elasticsearch:
Architect a Load Balancer for Writes / Indexing / Updates
Elasticsearch self-manages the location of shards on nodes. The "master node" keeps and updates the "shard routing table" and provides a copy of it to the other nodes in the cluster.
Generally, you don't want your master node doing much more than cluster health checks, routing-table updates, and shard management.
It's probably best to point the load balancer for writes at the "data nodes" (data nodes are the nodes that hold data, i.e. shards) and let the data nodes use their shard routing tables to get the writes to the correct shards.
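As a sketch, using the legacy (pre-5.x) node settings this era of Elasticsearch used, the role split described above could look like this in `elasticsearch.yml` (the exact layout is an assumption, not the only valid one):

```yaml
# elasticsearch.yml on a data node: holds shards, never elected master
node.master: false
node.data: true

# elasticsearch.yml on a dedicated master node: master-eligible, no shards
# node.master: true
# node.data: false
```

With this split, the write load balancer's target group contains only the data nodes.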
Architecting for Queries
Elasticsearch has a special node type for this: the "client node", which holds no data and cannot become a "master node". The client node's function is to perform the final, resource-heavy merge/sort at the end of a query.
For AWS, you'd probably use a c3 or c4 instance type as a "client node".
Best practice is to point the load balancer for queries to client nodes.
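In the same legacy settings, a "client node" is simply a node with both roles disabled; a hedged sketch of its `elasticsearch.yml`:

```yaml
# elasticsearch.yml on a client node: no data, not master-eligible;
# it only routes requests and performs the final merge/sort
node.master: false
node.data: false
```

The query load balancer then targets only these nodes.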
Cheers!
You're quite right to want to design for 'failover', and in AWS, here's how I recommend you do it.
1) Limit the nodes in your cluster that can be elected master. For the rest, set node.client: true. Base your choice of how many master-electable nodes you have on how many you want available for failover.
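A sketch of step 1 in `elasticsearch.yml` terms; the quorum value below assumes three master-electable nodes:

```yaml
# On the nodes that should NOT be master-electable (no data, no master role):
node.client: true

# On all nodes: guard against split brain by requiring a majority
# of master-electable nodes (here 2 of 3) to elect a master
discovery.zen.minimum_master_nodes: 2
```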
2) Create an ELB that includes only the master-electable nodes.
3) In Route 53, create a CNAME for your cluster, with the value set to the DNS name of your ELB.
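Step 3 could be expressed as a Route 53 change batch like the following (the record name and ELB DNS name are placeholders), applied via `aws route53 change-resource-record-sets`:

```json
{
  "Changes": [{
    "Action": "CREATE",
    "ResourceRecordSet": {
      "Name": "es.example.com",
      "Type": "CNAME",
      "TTL": 300,
      "ResourceRecords": [
        {"Value": "my-es-elb-123456789.us-east-1.elb.amazonaws.com"}
      ]
    }
  }]
}
```

Clients then connect to the stable CNAME, and you can swap the ELB or the nodes behind it without reconfiguring the clients.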