Azure Redis Cache max connections reached

Published 2020-06-04 13:04

I am using Azure Redis Cache for storing some quick lookup data, and this cache is read/connected by 10 client applications. All the applications are written in .NET 4.6 and include an ASP.NET MVC web application, a Web API, and a few worker roles that run every second. All clients use StackExchange.Redis to connect to the cache. However, I get intermittent timeouts, and I have observed in the Azure Portal that connections have reached the maximum of 1,000 for my pricing tier. Since I have only 10 client applications and none of them are multithreaded, what could be creating 1,000 connections to the cache?

Are there any best practices available which I can follow for the Cache clients?

1 Answer
唯我独甜
#2 · 2020-06-04 13:43

This is very similar to this question: Why are connections to Azure Redis Cache so high?

Here are the best practices we recommend for most customers:

  1. Set abortConnect to false in your connection string.
  2. Create a singleton ConnectionMultiplexer and reuse it. This is sufficient for most scenarios; some advanced scenarios may require multiple ConnectionMultiplexer objects per application, but most applications are fine with just one. I recommend following the coding pattern shown here: https://azure.microsoft.com/en-us/documentation/articles/cache-dotnet-how-to-use-azure-redis-cache/#connect-to-the-cache (a minimal sketch follows this list).
  3. Let the ConnectionMultiplexer handle reconnecting - don't do it yourself unless you have tested your code very thoroughly. Most connection leaks I have seen happen because people re-create the ConnectionMultiplexer but fail to dispose the old one. In most cases, it is best to just let the multiplexer do the reconnecting.
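For reference, here is a minimal sketch of the singleton pattern from points 1 and 2. The host name and access key below are placeholders; the shape of the code follows standard StackExchange.Redis usage, similar to the article linked above.

    using System;
    using StackExchange.Redis;

    public static class RedisConnection
    {
        // Lazy<T> is thread-safe by default, so the multiplexer is created
        // exactly once per process even if many callers race to use it.
        private static readonly Lazy<ConnectionMultiplexer> LazyConnection =
            new Lazy<ConnectionMultiplexer>(() =>
            {
                // Placeholder host name and key - replace with your own cache settings.
                // abortConnect=False lets the multiplexer retry in the background
                // instead of throwing if the cache is briefly unreachable at startup.
                string connectionString =
                    "contoso.redis.cache.windows.net:6380,password=<your-access-key>,ssl=True,abortConnect=False";
                return ConnectionMultiplexer.Connect(connectionString);
            });

        public static ConnectionMultiplexer Connection => LazyConnection.Value;
    }

    // Usage: every caller shares the same underlying connection.
    // IDatabase cache = RedisConnection.Connection.GetDatabase();
    // cache.StringSet("key", "value");

Because every call site goes through the single Lazy-initialized multiplexer, each client process holds only a small, fixed number of physical connections to the cache instead of opening a new one per request.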