We are having issues scaling out our ASP.NET Web API on Azure because of our SQL Azure DB. The issue is that our Web/Business SQL Azure DB supports a maximum of 180 concurrent requests. On multiple occasions we've hit that limit, and the following exception is thrown:
Resource ID : 1. The request limit for the database is 180 and has been reached. See 'http://go.microsoft.com/fwlink/?LinkId=267637'
To prevent that from happening, we explicitly set the "Max Pool Size" property of our connection string to 180 so that ADO.NET cannot exceed that number of concurrent requests to the DB. But even after this change, we kept receiving the same error. We then realized it was because we run multiple nodes of our app, and the connection pool limit applies per node, not across the whole deployment. Our next idea is to set the "Max Pool Size" to 180 divided by the number of nodes we are using, but that seems nuts to me. It feels like a very inefficient use of resources: if I push that reasoning to the extreme, it means I will never be able to scale my web app beyond 180 nodes if I want to guarantee I never exceed the max number of concurrent requests to the DB.
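For reference, the cap we set looks roughly like this (server, database, and credentials below are placeholders, not our real values):

```
Server=tcp:myserver.database.windows.net,1433;Database=mydb;User ID=myuser;Password=...;Max Pool Size=180;
```

Note that this limits the pool per connection string per process, which is why it does not bound the total across nodes.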
Is there a better approach to deal with this limitation in a distributed environment?