I have a few back-end microservices managed by Consul, and to get data from one service to another I use Consul's service discovery feature: get all healthy servers, then get the server address and port from the retrieved entry, and so on. But how should I do this from the front-end side? Just call the needed microservice using its actual IP, or call it using the namespace of its Docker container? It would be very helpful to get a response from someone who knows how to do this, or even better, who has done it before, because I'm a bit stuck with it.
While investigating, I found that there are a few approaches:
Client-side Service Discovery - suppose you have Consul and it knows all about the available servers and their statuses; on the client you write a service layer which calls Consul's API, fetches the healthy servers, and then makes one more HTTP request to the needed server. (Of course it can be a bit smarter, e.g. cache the list of healthy servers; see the sketch after this list.)
Server-side Service Discovery (load balancer) - an additional layer in front of Consul - it can be HAProxy or Nginx, and it will forward requests to the needed server. (From the front-end side you can use Consul DNS names or Docker container DNS names.)
Server-side Service Discovery (API Gateway) - and the last one: you write one more microservice that handles all requests and proxies them to the needed servers after checking their statuses in Consul.
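To make the client-side option concrete, here is a minimal TypeScript sketch of the first approach. It assumes a Consul agent reachable at http://localhost:8500 and the global fetch of Node 18+ / browsers; the service name "orders", the cache TTL, and the round-robin index are illustration only:

```typescript
// Minimal client-side discovery sketch against Consul's HTTP health API.

interface ConsulHealthEntry {
  Node: { Address: string };
  Service: { Address: string; Port: number };
}

let cache: { servers: string[]; fetchedAt: number } = { servers: [], fetchedAt: 0 };
let rrIndex = 0;
const CACHE_TTL_MS = 10_000; // refetch the healthy list every 10 seconds

async function getHealthyServers(service: string): Promise<string[]> {
  if (cache.servers.length > 0 && Date.now() - cache.fetchedAt < CACHE_TTL_MS) {
    return cache.servers; // serve from cache instead of hitting Consul on every call
  }
  // ?passing=true makes Consul return only instances whose health checks pass
  const res = await fetch(`http://localhost:8500/v1/health/service/${service}?passing=true`);
  const entries: ConsulHealthEntry[] = await res.json();
  cache = {
    // fall back to the node address when the service has no explicit address
    servers: entries.map(e => `${e.Service.Address || e.Node.Address}:${e.Service.Port}`),
    fetchedAt: Date.now(),
  };
  return cache.servers;
}

// Round-robin over the healthy instances and issue the actual request
async function callService(service: string, path: string): Promise<Response> {
  const servers = await getHealthyServers(service);
  if (servers.length === 0) throw new Error(`no healthy instances of ${service}`);
  const target = servers[rrIndex++ % servers.length];
  return fetch(`http://${target}${path}`);
}

// Example: callService("orders", "/api/orders/42").then(r => r.json());
```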
But now there is one more question - which approach should you use? I think it depends heavily on the project's complexity, the server load, and the number of microservices.
IMHO, if you have a few microservices and low server load, you can use any of them, but in every other case I think it's better to choose the 2nd approach.
By "frontend" do you mean Javascript running on a web browser or a piece of software you've got running within the same datacenter? I'll assume we are not talking about web browser scenario here.
I think client-side discovery with smart caching and round-robin load balancing scales the best, as there is no single point of failure and it reacts very fast to any disruption within the cluster. But it pushes more logic to the client side and makes logging more difficult than the trivial access log of Nginx.
The 2nd option is very standard and well understood, and Nginx and HAProxy were designed for this workload. Note that you should have a few of them available so you don't have a single point of failure, and upgrading their binaries (especially if you run them on Docker) will cause a short period of downtime. Clients need to discover these load balancers somehow anyway, and DNS is the most common option. DNS works well when the situation is fairly static and everything runs on default ports, so you don't need to tinker too much with TTLs and SRV records.
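For the DNS part, something like this Node sketch could work, assuming your resolver forwards *.consul queries to a Consul agent's DNS interface and that "haproxy.service.consul" is the (hypothetical) name the load balancers are registered under:

```typescript
// DNS-based lookup of the load balancers via Consul DNS.
import { promises as dns } from "node:dns";

async function resolveLoadBalancers(): Promise<{ host: string; port: number }[]> {
  // SRV records also carry the port, which plain A records cannot - this is
  // why non-default ports force you to deal with SRV records
  const records = await dns.resolveSrv("haproxy.service.consul");
  return records.map(r => ({ host: r.name, port: r.port }));
}

resolveLoadBalancers().then(lbs => console.log(lbs));
```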
The 3rd option makes the client logic simpler because the API Gateway can act as a "view" onto the services you've got internally available. But clients still need service discovery to find the gateway itself, so it doesn't really solve the original problem.
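As an illustration of the gateway idea, here is a deliberately minimal sketch - GET-only, no body streaming, retries, or header forwarding - again assuming Node 18+, Consul on localhost:8500, and request URLs of the form /&lt;service&gt;/&lt;rest of path&gt;:

```typescript
// Minimal API-gateway sketch: pick a healthy instance from Consul, proxy to it.
import http from "node:http";

async function pickInstance(service: string): Promise<string> {
  const res = await fetch(`http://localhost:8500/v1/health/service/${service}?passing=true`);
  const entries: any[] = await res.json();
  if (entries.length === 0) throw new Error(`no healthy instances of ${service}`);
  const e = entries[Math.floor(Math.random() * entries.length)]; // random pick
  return `${e.Service.Address || e.Node.Address}:${e.Service.Port}`;
}

http.createServer(async (req, res) => {
  try {
    // the first path segment names the backing service, the rest is forwarded
    const [, service, ...rest] = (req.url ?? "/").split("/");
    const target = await pickInstance(service);
    const upstream = await fetch(`http://${target}/${rest.join("/")}`);
    res.writeHead(upstream.status, {
      "content-type": upstream.headers.get("content-type") ?? "text/plain",
    });
    res.end(Buffer.from(await upstream.arrayBuffer()));
  } catch (err) {
    res.writeHead(502);
    res.end(String(err));
  }
}).listen(8080);
```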
Any feedback is welcome, this is a very broad topic and your mileage may vary.
Update: also, if you are using plain HTTP you might want to secure it with HTTPS. With a load balancer you get the chance to terminate HTTPS there and keep simpler, unencrypted traffic within your VPC or behind a firewall.
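Purely to illustrate the termination idea (in practice Nginx or HAProxy usually does this job), here is a Node sketch; the certificate paths and the internal hostname are placeholders:

```typescript
// Sketch of TLS termination in front of a plain-HTTP backend.
import https from "node:https";
import { readFileSync } from "node:fs";

const options = {
  key: readFileSync("/etc/ssl/private/example.key"), // placeholder path
  cert: readFileSync("/etc/ssl/certs/example.crt"),  // placeholder path
};

// Terminate HTTPS here, then talk plain HTTP to the service inside the VPC
https.createServer(options, async (req, res) => {
  const upstream = await fetch(`http://internal-service:8080${req.url ?? "/"}`);
  res.writeHead(upstream.status);
  res.end(Buffer.from(await upstream.arrayBuffer()));
}).listen(443);
```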