What is the scenario for deploying multiple applications to the same Service Fabric cluster?

Posted 2019-08-27 01:02

Question:

As per this page, node types in Service Fabric can be seen as analogous to roles in Cloud Services.

If that is the case, then how do we think about deploying multiple applications to the same Service Fabric cluster? E.g. let's say there are 2 applications:

  • The first needs only a web role,
  • The second needs 1 web role and 2 worker roles.

Questions:

  1. Do we then create a Service Fabric cluster with 3 node types (web-1, worker-1, worker-2) and let the web roles of both apps share the web-1 node type?

  2. What if the performance/scalability requirements of the two apps are very different, e.g. App1's web role needs 20 VMs, whereas App2's web role only needs 2? We would still have to change the instance count of the shared web node type to 20, right?

  3. And how does Service Fabric isolate one app from the effects of another? E.g. if App1 starts getting a lot of traffic and ends up consuming most of the CPU/memory, wouldn't that impact App2?

Answer 1:

  1. You would create 2 node types: Web and Worker. Services on the Web node type would be directly accessible from the internet and services in the Worker node type would not. Both applications share the nodes, so they can make optimal use of the available resources.
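To steer each service onto the right node type, services can be tagged with placement constraints that match the built-in `NodeTypeName` node property. A minimal sketch of what this could look like in the `DefaultServices` section of ApplicationManifest.xml (service and type names are illustrative):

```xml
<DefaultServices>
  <!-- Front-end service pinned to the internet-facing "Web" node type;
       InstanceCount="-1" means one instance on every eligible node -->
  <Service Name="App1WebService">
    <StatelessService ServiceTypeName="App1WebServiceType" InstanceCount="-1">
      <SingletonPartition />
      <PlacementConstraints>(NodeTypeName == Web)</PlacementConstraints>
    </StatelessService>
  </Service>
  <!-- Back-end service restricted to the internal "Worker" node type -->
  <Service Name="App2WorkerService">
    <StatelessService ServiceTypeName="App2WorkerServiceType" InstanceCount="2">
      <SingletonPartition />
      <PlacementConstraints>(NodeTypeName == Worker)</PlacementConstraints>
    </StatelessService>
  </Service>
</DefaultServices>
```

With constraints like these, both applications can deploy their front-end services onto the same Web node type while their back-end services land only on Worker nodes.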

  2. Depending on the application characteristics, you'd need 20 to 22 nodes. If one app is memory-heavy and the other CPU-heavy, you might get away with 20. If both are CPU-heavy, you'd likely need 22. So not 22 node types, but 22 nodes (VMs).
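In an Azure-hosted cluster, each node type is backed by a VM scale set, so changing a node type's instance count means scaling that scale set. A hedged sketch using the Azure CLI (the resource group and scale set names below are placeholders):

```shell
# Scale the VM scale set backing the "Web" node type to 20 instances.
# "my-sf-cluster-rg" and "Web" are placeholder names; in an Azure SF
# cluster the scale set is typically named after the node type.
az vmss scale \
  --resource-group my-sf-cluster-rg \
  --name Web \
  --new-capacity 20
```

Scaling down should be done gradually so Service Fabric can move replicas off the nodes being removed.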

  3. SF will perform resource balancing to divide the workload across the available nodes. By using containers and resource governance, you can restrict the impact that one service can have on others.
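Resource governance is declared per service package in ApplicationManifest.xml. A sketch of what capping a service could look like (the package name and limits are illustrative, not taken from the question):

```xml
<ServiceManifestImport>
  <ServiceManifestRef ServiceManifestName="App1WebServicePkg"
                      ServiceManifestVersion="1.0.0" />
  <Policies>
    <!-- Cap this code package so a busy App1 cannot starve App2
         on the nodes they share -->
    <ResourceGovernancePolicy CodePackageRef="Code"
                              CpuShares="512"
                              MemoryInMB="1024" />
  </Policies>
</ServiceManifestImport>
```

With limits like these in place, a traffic spike in App1 is contained within its governed share instead of consuming the whole node.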



Answer 2:

There is no single right answer; for all of your questions it will depend on how your applications consume resources.

One thing you can be sure of: you will need one publicly available node type to receive external calls and redirect them internally to the worker nodes. For the workers, it will depend on how they consume server resources; one might be CPU-intensive, another disk- or network-intensive.

  1. If you create a node type for each service type, you might often have idle or lightly used machines sitting there just to support those services when required. If the load increases or decreases quickly, you might see a delay when scaling up/down to match demand. But this approach keeps the services isolated from each other.

  2. If you deploy them on the same nodes, they might compete for resources like memory, disk, and CPU, but they will use as much as is available until they hit the node limits, making good use of the available resources. This becomes a problem if the contention forces services to move very often, interrupting their processing; that would not be acceptable for long-running operations.
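To help the cluster resource manager balance on actual consumption rather than instance counts (and reduce the thrashing described above), services can declare load metrics when they are created. A sketch of what this could look like for a default service in ApplicationManifest.xml (the metric names, weights, and default loads are illustrative assumptions):

```xml
<Service Name="App2WorkerService">
  <StatelessService ServiceTypeName="App2WorkerServiceType" InstanceCount="2">
    <SingletonPartition />
    <!-- Declared load metrics; the resource manager balances placement
         using these weights and the loads the service later reports -->
    <LoadMetrics>
      <LoadMetric Name="MemoryInMB" Weight="High" DefaultLoad="512" />
      <LoadMetric Name="CpuUsage"   Weight="Medium" DefaultLoad="10" />
    </LoadMetrics>
  </StatelessService>
</Service>
```

Declaring realistic metrics lets the balancer separate a memory-heavy service from a CPU-heavy one instead of moving replicas reactively under contention.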