I am thinking of building an application using a Service Oriented Architecture (SOA).
This architecture is not as complex and messy as a microservices solution (I think), but I am facing similar design problems. Imagine I have services of type ServiceA that send work to services of type ServiceB. If I use a queue, load balancing should not be a problem, since consumers will take only what they can handle from the queue. But queues tend to introduce awkward asynchrony into the code (callbacks instead of a linear flow), which takes extra effort to work around. So I was more inclined to use HTTP calls between services, using the efficient and amazing async/await feature of C#. But this creates issues with sharing the workload and detecting services that are saturated or dead.
So my questions are:
- Is there a queue that supports some sort of async/await feature and that functions like an HTTP call, i.e. that returns the result where you need it instead of in some callback where you cannot continue your original execution flow? (See the sketch after this list for the shape I mean.)
- How do I load-balance the traffic between services and detect nodes that are not suitable for new assignments when using HTTP? I mean, I could probably design something myself from scratch, but by now there ought to be a standard way, library, or framework to do that. The best I found online was this, but it is built for microservices, so I am not sure whether I can use it without problems or overkill.
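To make the first bullet concrete, here is roughly the shape I have in mind, as a hypothetical sketch (none of these names come from a real library): a client that correlates each queued message with a TaskCompletionSource, so the caller can await the reply exactly where it sent the request:

using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

public class QueueRequestClient
{
    // Replies still pending, keyed by the correlation id sent with each message
    private readonly ConcurrentDictionary<Guid, TaskCompletionSource<string>> pending =
        new ConcurrentDictionary<Guid, TaskCompletionSource<string>>();

    // Enqueue a message and get a Task that completes when the reply arrives
    public Task<string> RequestAsync(string payload)
    {
        var correlationId = Guid.NewGuid();
        var tcs = new TaskCompletionSource<string>(TaskCreationOptions.RunContinuationsAsynchronously);
        pending[correlationId] = tcs;
        Publish(correlationId, payload); // broker-specific send, omitted here
        return tcs.Task;                 // caller: var result = await client.RequestAsync(...);
    }

    // Called by the queue consumer when a reply message comes in
    public void OnReply(Guid correlationId, string result)
    {
        if (pending.TryRemove(correlationId, out var tcs))
            tcs.TrySetResult(result);
    }

    private void Publish(Guid correlationId, string payload)
    {
        // put (correlationId, payload) on the outgoing queue - omitted
    }
}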
Update:
I have now discovered this question, which also asks for awaitable queues: awaitable Task based queue
...and also discovered Kubernetes, Marathon, and the like.
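For reference, the plain awaitable-queue part of that question can already be had from TPL Dataflow's BufferBlock<T> (inside an async method):

using System.Threading.Tasks.Dataflow;

var queue = new BufferBlock<string>();
queue.Post("work item");               // producer side: enqueue without blocking
var item = await queue.ReceiveAsync(); // consumer side: await instead of blocking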
Regarding your first question: NServiceBus, a commercial framework for .NET that abstracts message transports and adds many features on top of them, has exactly the feature you are looking for. It calls this feature "callbacks", and the usage is as follows:
Assuming you have a Message to send to a backend service and a ResponseMessage that you expect back, in ServiceA you would do:
var message = new Message();
var response = await endpoint.Request<ResponseMessage>(message);
log.Info($"Callback received with response: {response.Result}");
Where endpoint is an NServiceBus artifact that lets you send and receive messages.
What this simple syntax does is put the Message in a queue and wait (asynchronously) until a backend service has handled the message and replied to it. The response arrives as a message of type ResponseMessage on a queue.
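For completeness, I am assuming message contracts along these lines (plain classes marked with NServiceBus's IMessage interface):

public class Message : NServiceBus.IMessage
{
}

public class ResponseMessage : NServiceBus.IMessage
{
    // The value that ServiceA reads back as response.Result
    public string Result { get; set; }
}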
In ServiceB, you would do:
public class Handler : IHandleMessages<Message>
{
    public Task Handle(Message message, IMessageHandlerContext context)
    {
        var responseMessage = new ResponseMessage
        {
            Result = "TheResult"
        };
        // Reply sends the response back to the instance that sent the incoming message
        return context.Reply(responseMessage);
    }
}
This allows you to have multiple ServiceA nodes sending messages to multiple ServiceB nodes (competing consumers on a single queue). NServiceBus takes care of routing each response back to the ServiceA instance that sent the corresponding message.
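One setup detail, as far as I remember it: this callback feature ships in the separate NServiceBus.Callbacks package, and each endpoint instance has to be made uniquely addressable so that a reply can reach the exact instance that is awaiting it, roughly:

var endpointConfiguration = new EndpointConfiguration("ServiceA");
// Give this instance its own id so replies can be routed back to it specifically
endpointConfiguration.MakeInstanceUniquelyAddressable("serviceA-instance-1");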
Note that this has the disadvantage that if the ServiceA instance goes down while waiting for a response, it will never receive that response. For this reason, the pattern is not recommended for most scenarios.
Regarding your second question: a load balancer would do the job, since it spreads requests across the ServiceB nodes and its health checks take dead or unresponsive instances out of rotation. For more complex scenarios, you could look at Service Fabric.
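If you do end up building something yourself over plain HTTP, the core is small: keep the list of backend URLs, periodically probe a health endpoint, and round-robin over the instances that answered. A naive sketch (the /health route and every name here are my own assumptions, not a standard):

using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

public class RoundRobinBalancer
{
    private readonly HttpClient http = new HttpClient { Timeout = TimeSpan.FromSeconds(2) };
    private readonly string[] allBackends;
    private volatile string[] healthyBackends;
    private int counter;

    public RoundRobinBalancer(params string[] backends)
    {
        allBackends = backends;
        healthyBackends = backends;
    }

    // Probe each backend and keep only the ones that answer in time
    public async Task RefreshAsync()
    {
        var alive = new List<string>();
        foreach (var url in allBackends)
        {
            try
            {
                var response = await http.GetAsync(url + "/health");
                if (response.IsSuccessStatusCode)
                    alive.Add(url);
            }
            catch (HttpRequestException) { } // dead or unreachable: leave out of rotation
            catch (TaskCanceledException) { } // timed out: treat a saturated node as dead
        }
        healthyBackends = alive.ToArray();
    }

    // Pick the next healthy backend in round-robin order
    public string PickBackend()
    {
        var snapshot = healthyBackends;
        if (snapshot.Length == 0)
            throw new InvalidOperationException("No healthy backends available");
        var index = (Interlocked.Increment(ref counter) & int.MaxValue) % snapshot.Length;
        return snapshot[index];
    }
}

Call RefreshAsync on a timer and PickBackend per request; a dedicated load balancer does essentially the same thing, just outside your process.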