What does the Azure Web Apps architecture look like?

Posted 2019-01-15 16:33

Question:

I've had a few outages of 10 to 15 minutes because apparently Microsoft had a 'blip' on their storage. They told me it is caused by a shared file system between the instances (making it a single point of failure?).

I didn't understand it and asked how a file share is involved, because I would assume a really dumb, stateless IIS app that communicates with SQL Azure for its data.

I would assume the situation below:

This is their reply to my question (I didn't include the drawing)

The file shares are not necessarily for your web app to communicate with other resources, but they are on our end, where the app content resides. That is what we meant when we said that storage was unavailable on our file servers. The reason the restarts would be triggered for your app on both instances is that the resources are shared: the underlying storage is the same for both instances. That's why, if it goes down for one, the other will eventually follow.

If you really want to improve the availability of the app, you can always use a traffic manager. However, there is no guarantee that the app won't go down even with Traffic Manager in place, but it improves the overall availability of your app.

Also, we have recently rolled out an update to production that should ideally take care of restarts caused by storage blips, but for this feature to kick in you need to make sure an ample amount of memory is available when it needs to kick in. There are a couple of options you can set up in order to avoid any unexpected restarts of the app because of a storage blip on our end:

  • You can evaluate whether you want to move to a bigger instance, so that there might be enough memory for the overlap recycling feature to kick in.

  • If you don't want to move to a bigger instance, you can always use the local cache feature, as outlined in our earlier email.

Because of the time difference the communication takes ages. Can anyone tell me what is wrong with my thinking?

The only thing I can think of is that when you've enabled two instances, they run on the same physical server. But that makes very little sense to me.

I have two instances, each with one core and 1.75 GB of memory.

Answer 1:

My presumption for App Service Plans was that they were automatically split into availability sets (see below for a brief description), largely based on the Web Apps sales pitch, which states:

App Service provides availability and automatic scale on a global data centre infrastructure. Easily scale applications up or down on demand, and get high availability within and across different geographical regions.

Following on from David Ebbo's answer and comments, the underlying architecture of Web Apps appears to be that the VMs themselves are separated into availability sets. However, all of the instances use the same file server for the underlying disk space, and that file server is a significant single point of failure.

To mitigate this, Azure has created the WEBSITE_LOCAL_CACHE_OPTION setting, which caches the contents of the file server onto the individual Web App instances. In other words, caching is used in lieu of solid, high-availability engineering principles.
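For reference, Local Cache is switched on with app settings on the web app, and the platform exposes environment variables the app can inspect at runtime. A minimal sketch in Python (the setting names below are the documented ones as far as I can tell; treat the WEBSITE_LOCALCACHE_READY check as an assumption and verify against your own instance):

```python
import os

# App settings that enable Local Cache (set in the portal, an ARM template, or the CLI):
#   WEBSITE_LOCAL_CACHE_OPTION   = Always
#   WEBSITE_LOCAL_CACHE_SIZEINMB = 300   # optional cache size in MB

def local_cache_active() -> bool:
    """Best-effort check: the platform is reported to set WEBSITE_LOCALCACHE_READY
    to TRUE when the instance is serving content from its local copy."""
    return os.environ.get("WEBSITE_LOCALCACHE_READY", "").upper() == "TRUE"

if __name__ == "__main__":
    print("Local cache option:", os.environ.get("WEBSITE_LOCAL_CACHE_OPTION", "<not set>"))
    print("Serving from local cache:", local_cache_active())
```

Logging this on startup makes it easy to confirm whether a given instance actually survived a storage blip from its cached copy.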

The problem here is that, as customers, we have no visibility into this issue: we have no idea whether there is a plan to fix it, or if or when it will ever be fixed, since it seems unlikely that Azure will publish a document admitting how badly this has been engineered, even if only to say that it is now fixed.

I also can't imagine that this issue is any different between ASM and ARM. It seems exceptionally unlikely that there was originally a high-availability solution at the back end that was scrapped when ARM came along, so it is very likely that Cloud Services suffer from the exact same issue.

The small upside is that, now that we know this is an issue, one possible solution is to deploy multiple web apps and put a traffic manager in front of them. Even if they are in the same region, different apps should have different backend file servers.
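If you go that route, each backend app needs an endpoint that Traffic Manager's endpoint monitoring can probe, so a failed backend is taken out of rotation. A minimal sketch of such a probe endpoint in Python's standard library (the /health path and port 8000 are assumptions for local use; point your monitoring configuration at whatever path and port you actually deploy):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/health":
            # Return 200 only when the app's own dependencies (e.g. the SQL database)
            # are reachable; the monitor treats non-200 responses as degraded.
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(b"OK")
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    # 8000 is just a local default for the sketch; bind to whatever port the host gives you.
    HTTPServer(("0.0.0.0", 8000), HealthHandler).serve_forever()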

My first action would be to reply to that email with a link to the Web Apps page (and this question), include a copy of the quote, and ask how to enable high availability within a geographic region.

After that you'll likely need to rearchitect your solution!

Availability sets

For virtual machines, Azure lets you specify an availability set. An availability set automatically splits VMs into separate update and fault domains, meaning the servers end up in different server racks, and those racks won't receive updates at the same time. (It is a little more complex than that, but those are the basics!)
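To make that split concrete, here is a toy model of the placement idea. This is not Azure code, just an illustration, and the domain counts are assumptions based on the usual defaults:

```python
# Toy illustration: VMs in an availability set are spread roughly round-robin
# across a small number of fault domains (separate racks/power) and update
# domains (separate maintenance windows).
FAULT_DOMAINS = 3    # typical default; some regions only offer 2
UPDATE_DOMAINS = 5   # default is 5, configurable higher

def place(vm_count: int):
    for i in range(vm_count):
        yield {
            "vm": f"vm-{i}",
            "fault_domain": i % FAULT_DOMAINS,
            "update_domain": i % UPDATE_DOMAINS,
        }

if __name__ == "__main__":
    for assignment in place(6):
        print(assignment)
    # Losing one rack (fault domain) or one maintenance window (update domain)
    # therefore never takes out every VM at once.
```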



Answer 2:

Azure Web Apps does use shared file storage. The best way to think about it is that all the instances of your app map to the same network share that holds your files. So if you modify the files by any means (e.g. FTP, msdeploy, git, ...), all the instances instantly get the new files (since there is only one set of files).

And to answer your final question, each instance does run on a separate VM.
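You can see both halves of that from inside the app itself: each scaled-out instance reports its own instance ID, yet they all see the same deployed content. A quick sketch (WEBSITE_INSTANCE_ID is the per-instance variable App Service sets; the content path is just an example for an IIS-style app, adjust for your own layout):

```python
import os
from datetime import datetime, timezone

# Each instance gets its own WEBSITE_INSTANCE_ID, but the content under the app's
# home directory comes from the same shared file server, so a deployed file shows
# the same modification time on every instance.
INSTANCE_ID = os.environ.get("WEBSITE_INSTANCE_ID", "local-dev")
CONTENT_FILE = os.path.join(os.environ.get("HOME", "."), "site", "wwwroot", "web.config")

def describe() -> str:
    try:
        mtime = datetime.fromtimestamp(os.stat(CONTENT_FILE).st_mtime, tz=timezone.utc)
        stamp = mtime.isoformat()
    except OSError:
        stamp = "<file not found>"
    return f"instance={INSTANCE_ID} content={CONTENT_FILE} modified={stamp}"

if __name__ == "__main__":
    print(describe())
```

Hitting an endpoint that returns this string from both instances should show two different instance IDs but an identical file timestamp, which is exactly the shared-storage behaviour described above.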