My web API must perform a set of heavy operations in order to fulfill a request. To minimize processing time I am offloading the "view counter increment" to a WebJob.
The way I am doing it currently is by enqueueing a message with the userId and productId to Azure Queue Storage at the end of each request. The WebJob function triggers on new queue messages and, after parsing a message, updates a static ConcurrentDictionary (incrementing an existing value or adding a new one).
I am not incrementing and writing to Azure Table Storage directly because I'd like a second, timer-based WebJob to persist the values from the ConcurrentDictionary to the table, to avoid excessive writes.
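For reference, here is a minimal sketch of the queue-triggered function described above (the function name, queue name, and message shape are my assumptions, not taken from my actual code):

```csharp
using System.Collections.Concurrent;
using Microsoft.Azure.WebJobs;
using Newtonsoft.Json;

public class ViewCountMessage
{
    public string UserId { get; set; }
    public string ProductId { get; set; }
}

public class Functions
{
    // In-memory accumulator; a separate timer-triggered job
    // later flushes these counts to Table Storage.
    private static readonly ConcurrentDictionary<string, int> Counters =
        new ConcurrentDictionary<string, int>();

    // Fires once per queue message; "view-counts" is a placeholder queue name.
    public static void IncrementViewCount([QueueTrigger("view-counts")] string message)
    {
        var m = JsonConvert.DeserializeObject<ViewCountMessage>(message);
        var key = m.UserId + "|" + m.ProductId;

        // Atomic add-or-increment; no explicit locking needed.
        Counters.AddOrUpdate(key, 1, (_, current) => current + 1);
    }
}
```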
While no messages get lost and all values get recorded/added to the dictionary properly, I am seeing very poor performance: it takes over 5 minutes to process 1,000 messages. Messages come in bursts and are processed more slowly in parallel than when I set BatchSize to 1. My WebJob settings are:
config.Queues.BatchSize = 32;
config.Queues.NewBatchThreshold = 100;
config.Queues.MaxPollingInterval = TimeSpan.FromSeconds(2);
config.Queues.MaxDequeueCount = 3;
Performance doesn't seem to suffer from dictionary contention; those updates are fast. My timer shows that each increment takes between 0 and 9 ms, so 1,000 messages should take 9 seconds at most. Is the WebJob lagging/idling, and is that by design? At what maximum rate can a WebJob process the queue? Are there any other settings I can adjust to make the WebJob run faster? Are there other ways to go about this "view increment" task, considering that I can have many messages with different userIds/productIds?
As far as I know, WebJobs poll the queue using a random exponential back-off algorithm. When a message is found, the SDK checks for the next one almost immediately; when the queue is empty, it waits progressively longer between polls, up to the configured MaxPollingInterval (which defaults to one minute). For detailed information, please check the "Polling algorithm" topic in this article.
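If the defaults are the issue, the back-off and batching can be tuned when building the host. A sketch against the WebJobs SDK 2.x configuration (values are illustrative; my understanding is that the SDK will process up to BatchSize + NewBatchThreshold messages in parallel):

```csharp
var config = new JobHostConfiguration();

// Cap the empty-queue back-off so bursts are picked up quickly.
config.Queues.MaxPollingInterval = TimeSpan.FromSeconds(2);

// Up to 32 messages are fetched per poll (the Queue Storage maximum).
config.Queues.BatchSize = 32;

// The next batch is fetched once the number of messages still
// being processed drops below this threshold.
config.Queues.NewBatchThreshold = 16;

new JobHost(config).RunAndBlock();
```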
Besides, you could try running the WebJob in an Azure App Service web app and scaling it out to multiple instances. Keep in mind that a static ConcurrentDictionary is per-instance, so each instance would accumulate its own counts for the timer-based job to flush.