I have coded an MVC 3 app hosted in Azure. I am using Session variables to store update status information between HTTP calls in one of my long-running processes. This is then used to update a progress bar. The values can change pretty rapidly.
This all works great when using the InProc session provider. However, whenever I switch to the Azure Cache session provider, the session variable does not get updated from the long-running process.
I am now changing things to use Cache variables directly, which seems to work so far.
Why does the following approach not work when using Session in cache, but work fine InProc?
For example, I might initialise a session variable in one controller ActionResult:
Session["OPERATION_PROGRESS"] = 0;
I then get a handle on the session like this:
HttpSessionStateBase session = Session;
and pass it to my long-running process like this:
LongRunningProcess.Go(session);
Then, from within the LongRunningProcess method, it updates the session variable as it progresses through its task, using the passed session object:
passedSession["OPERATION_PROGRESS"] = 10;
The web client then calls a progress page that passes the session variable's value back to update the progress bar.
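Putting it together, the flow is roughly this (simplified sketch; the Start/Progress action names, the fire-and-forget Task, and the Thread.Sleep standing in for the real work are illustrative, not my actual code):

    using System.Threading;
    using System.Threading.Tasks;
    using System.Web;
    using System.Web.Mvc;

    public class OperationController : Controller
    {
        public ActionResult Start()
        {
            Session["OPERATION_PROGRESS"] = 0;
            HttpSessionStateBase session = Session;   // handle passed to the worker
            Task.Factory.StartNew(() => LongRunningProcess.Go(session));
            return View();
        }

        // Polled by the client to drive the progress bar.
        public ActionResult Progress()
        {
            return Json(Session["OPERATION_PROGRESS"], JsonRequestBehavior.AllowGet);
        }
    }

    public static class LongRunningProcess
    {
        public static void Go(HttpSessionStateBase passedSession)
        {
            for (int i = 10; i <= 100; i += 10)
            {
                Thread.Sleep(500);                        // stand-in for the real work
                passedSession["OPERATION_PROGRESS"] = i;  // visible InProc, but not
                                                          // persisted by the cache provider
            }
        }
    }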
Based on what I've been reading about session providers lately, I suspect that what is happening is that after the request that initiates the long-running process completes, the session provider releases its lock on the session information, effectively disconnecting it. From MSDN:
At the end of a request, if the session-state values have been modified, the SessionStateModule instance calls the SessionStateStoreProviderBase.SetAndReleaseItemExclusive method to write the updated values to the session-state store.
There's still an object for you to talk to (which is why your long-running process still works), but none of the changes to that object are sent to the persistence layer (which is why subsequent requests don't pick up those changes).
What I've done in similar situations is, at the start of the request, generate a request ID and create a row in an Azure table with that ID as the partition key (though you can use any storage you like), pass the ID into the long-running process, and also return it to the client. The long-running process just updates that one row as it goes. All subsequent requests for progress pass in the request ID, so the row is trivial to look up. To stop the table from getting too large, the request that discovers the process is complete deletes the row.
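A rough sketch of that pattern with the Azure Storage client library (the entity shape, table name, and method names are illustrative, and error handling is omitted; the CloudTable would come from CloudStorageAccount.Parse(...).CreateCloudTableClient().GetTableReference(...) after a CreateIfNotExists call):

    using System;
    using Microsoft.WindowsAzure.Storage;
    using Microsoft.WindowsAzure.Storage.Table;

    public class ProgressEntity : TableEntity
    {
        public ProgressEntity() { }                    // parameterless ctor required by the SDK
        public ProgressEntity(string requestId)
        {
            PartitionKey = requestId;                  // one row per request
            RowKey = string.Empty;
        }
        public int PercentComplete { get; set; }
    }

    public static class ProgressStore
    {
        // Start of the request: create the row and return the ID for the client.
        public static string Begin(CloudTable table)
        {
            string requestId = Guid.NewGuid().ToString();
            table.Execute(TableOperation.InsertOrReplace(new ProgressEntity(requestId)));
            return requestId;
        }

        // Called by the long-running process as work proceeds.
        public static void Report(CloudTable table, string requestId, int percent)
        {
            table.Execute(TableOperation.InsertOrReplace(
                new ProgressEntity(requestId) { PercentComplete = percent }));
        }

        // Called by subsequent progress requests; deletes the row once done.
        public static int Check(CloudTable table, string requestId)
        {
            var row = (ProgressEntity)table.Execute(
                TableOperation.Retrieve<ProgressEntity>(requestId, string.Empty)).Result;
            if (row.PercentComplete >= 100)
                table.Execute(TableOperation.Delete(row));
            return row.PercentComplete;
        }
    }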
An improvement on this scheme, if you are going to use Azure Table storage, is to use the current time in ticks as the partition key and another unique ID as the row key. That way it is easy to find rows that have been in the table longer than they should have been and clean them out.
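Sketched out, reusing the ProgressEntity and table from above (the one-hour retention window is just an example):

    // Zero-pad the ticks so lexicographic partition-key order matches numeric order.
    string partitionKey = DateTime.UtcNow.Ticks.ToString("d19");
    string rowKey = Guid.NewGuid().ToString();

    // Cleanup: any row older than the cutoff has a partition key that sorts below it.
    string cutoff = (DateTime.UtcNow - TimeSpan.FromHours(1)).Ticks.ToString("d19");
    var stale = new TableQuery<ProgressEntity>().Where(
        TableQuery.GenerateFilterCondition(
            "PartitionKey", QueryComparisons.LessThan, cutoff));
    foreach (var row in table.ExecuteQuery(stale))
        table.Execute(TableOperation.Delete(row));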