- I have an AWS Load Balancer in front of instance x.
- Instance x runs on port 3000. Instance x has two pages i.e. x/abc and x/zyx.
- Currently the load balancer for x has two listeners:
80 -> 3000
8080 -> 3000
and its health check pings /.
Requirement: I have two servers that want to communicate with instance x. Server 1 wants to send an HTTP request to x/abc and server 2 wants to send an HTTP request to x/zyx.
How can I configure the LB to route to particular pages, e.g. x/abc and x/zyx? Or should I write my requests differently?
Code 1: Server 1 wants to make http request to x/abc
// url is the DNS of the load balancer... this should go to x/abc(?)
const request = require("request");
request({
    url: "LoadBalancer-11122232.us-west-2.elb.amazonaws.com:80",
    method: "POST",
    json: true,
    body: tweetJSON
});
Code 2: Server 2 wants to make http request to x/zyx
// url is the DNS of the load balancer... this should go to x/zyx
// DO I EVEN NEED TWO DIFFERENT PORT LISTENERS(?)
const request = require("request");
request({
    url: "LoadBalancer-11122232.us-west-2.elb.amazonaws.com:8080",
    method: "POST",
    json: true,
    body: tweetJSON
});
You don't configure the Load Balancer to route requests to different endpoints; that's more the job of a reverse proxy, like Nginx.
The Load Balancer provides a single endpoint to call and forwards requests from clients to one of many identical servers. The objective is to share high loads across many servers.
In your situation you can still have a Load Balancer in the mix, but as far as routing goes, I suggest that you address the URL in full:
Code 1: Server 1 wants to make http request to x/abc
Code 2: Server 2 wants to make http request to x/zyx
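Concretely, assuming the app on instance x routes by path, both servers can keep hitting the same port-80 listener and only the path changes. A sketch (the DNS name is the placeholder from the question, and `tweetJSON` stands in for the real payload):

```javascript
// Placeholder body, standing in for the real payload from the question.
const tweetJSON = { text: "example payload" };

// Single listener (80 -> 3000); the page is selected by the path alone.
const baseUrl = "http://LoadBalancer-11122232.us-west-2.elb.amazonaws.com";

// Server 1 -> x/abc
const optionsAbc = {
    url: baseUrl + "/abc",
    method: "POST",
    json: true,
    body: tweetJSON
};

// Server 2 -> x/zyx (same listener, no second port needed)
const optionsZyx = {
    url: baseUrl + "/zyx",
    method: "POST",
    json: true,
    body: tweetJSON
};
```

Each options object is then passed to `request(options, callback)` exactly as in your snippets; the 8080 listener becomes unnecessary.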
If you need to prevent clients from going to the backend URL directly, you need some form of authentication to identify server 2.
Normally you use the load-balancer to balance traffic between two node servers.
So you have two pretty easy options. One is to switch from the classic Amazon load balancer (ELB) to an ALB (Application Load Balancer), which supports path-based routing rules.
For setup, follow these Amazon instructions.
Specifically pay attention to the final section:
For more information, see Listener Rules.
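As a rough sketch of what such a path-based listener rule looks like via the AWS CLI (the ARNs below are placeholders, and the target group is assumed to already point at instance x):

```
aws elbv2 create-rule \
    --listener-arn arn:aws:elasticloadbalancing:us-west-2:123456789012:listener/app/my-alb/PLACEHOLDER \
    --priority 10 \
    --conditions Field=path-pattern,Values='/abc*' \
    --actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:us-west-2:123456789012:targetgroup/x-abc/PLACEHOLDER
```

A second rule with `Values='/zyx*'` would cover the other page; in your case both rules can forward to the same target group, since both pages live on instance x.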
A cheaper alternative may be to roll your own load balancer using Nginx. You could spin up an EC2 instance with an Nginx-configured AMI.
Then you'd edit your Nginx configuration to look something like this:
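Something along these lines, proxying by path to instance x on port 3000 (the upstream hostname is an assumption; substitute instance x's private IP or DNS name):

```
server {
    listen 80;

    # Both pages live on instance x, so both locations proxy to it;
    # "x.internal" is a placeholder for instance x's address.
    location /abc {
        proxy_pass http://x.internal:3000;
    }

    location /zyx {
        proxy_pass http://x.internal:3000;
    }
}
```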
Although, even better, and what I think you actually want, is to use Nginx as a proper load-balancing reverse proxy. To do this you would run two copies of the same node.js application, where each one can respond to the routes /abc and /zyx and serve the page. Then use a configuration like this:
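A sketch of that reverse-proxy setup, round-robin across two identical node instances (the hostnames are assumptions; point them at your two servers):

```
# Two copies of the same node.js app; Nginx alternates between them.
upstream node_app {
    server node1.internal:3000;
    server node2.internal:3000;
}

server {
    listen 80;

    location / {
        proxy_pass http://node_app;
        proxy_set_header Host $host;
    }
}
```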
If you use this last configuration, then you don't have to worry about any complex URL rewriting on your pages.
You get the benefit of two separate node instances on separate servers. So if one of your node instances goes down, then your load balancer will use the other node instance. (Add a /_health route to the nodejs app that responds with a 200 OK)
You can easily do A/B testing, and blue-green deploys where you update one instance only with new code before updating the other.
Nginx can be configured for different load-balancing strategies, round-robin being the default. You can also add sticky-sessions, if you need to send users to the same node instance once they begin a session.