I'm working on a project to gradually phase out a legacy application. In the process, as a temporary solution, we integrate with the legacy application through its database.
The legacy application uses transactions with the serializable isolation level. Because of this database integration, I am for the moment best off using the same pessimistic concurrency model and serializable isolation level.
These serializable transactions should not only be wrapped around the SaveChanges call but include some reads of data as well.
I do this by (see the sketch after this list):
- Creating a TransactionScope around my DbContext with the serializable isolation level
- Creating a DbContext
- Doing some reads
- Making some changes to objects
- Calling SaveChanges on the DbContext
- Committing the transaction scope (thus saving the changes)
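Expressed as code, that sequence looks roughly like this (a sketch; MyDbContext, its Orders set, and the Status property are placeholder names, not from the actual project):

```csharp
using System.Linq;
using System.Transactions;

public static void ProcessOrder(int orderId)
{
    // Serializable TransactionScope wrapping both the reads and the writes.
    using (var scope = new TransactionScope(TransactionScopeOption.Required,
        new TransactionOptions { IsolationLevel = IsolationLevel.Serializable }))
    using (var context = new MyDbContext())
    {
        // Read inside the transaction; the context's connection
        // enlists in the ambient transaction when it opens.
        var order = context.Orders.Single(o => o.Id == orderId);

        // Change tracked objects.
        order.Status = "Processed";

        // SaveChanges runs in the same transaction.
        context.SaveChanges();

        // Complete() marks the scope; the commit happens when the
        // scope is disposed.
        scope.Complete();
    }
}
```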
I am under the impression that this wraps all my reads and writes in one serializable transaction, which is then committed.
I consider this a form of pessimistic concurrency.
However, this article, https://docs.microsoft.com/en-us/aspnet/mvc/overview/getting-started/getting-started-with-ef-using-mvc/handling-concurrency-with-the-entity-framework-in-an-asp-net-mvc-application, states that EF does not support pessimistic concurrency.
My questions are:
- A: Does EF support my way of using a serializable transaction around reads and writes?
- B: Does wrapping the reads and writes in one transaction guarantee that the data I read is not changed before the transaction commits?
- C: This is a form of pessimistic concurrency, right?
One way to achieve pessimistic concurrency is to use something like this:
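For example, a sketch assuming an EF6 MyDbContext with a Counters set (the exact original code is from the blog mentioned below, which is no longer available):

```csharp
using System.Linq;
using System.Transactions;

public static void IncrementCounter(int counterId)
{
    using (var scope = new TransactionScope(TransactionScopeOption.Required,
        new TransactionOptions { IsolationLevel = IsolationLevel.Serializable }))
    using (var context = new MyDbContext())
    {
        // Under SERIALIZABLE the read takes a shared lock that is held
        // until the transaction ends.
        var counter = context.Counters.Single(c => c.Id == counterId);
        counter.Value++;

        // SaveChanges must upgrade that shared lock to an exclusive one.
        context.SaveChanges();
        scope.Complete();
    }
}
```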
In VS2017 it seems you have to right-click TransactionScope and have it add a reference to: Reference Assemblies\Microsoft\Framework.NETFramework\v4.6.1\System.Transactions.dll
However, if you have two threads attempt to increment the same counter, you will find that one succeeds whereas the other throws a timeout after about 10 seconds. The reason is that when both proceed to saving changes, each needs to upgrade its lock to exclusive, but neither can, because the other transaction is still holding a shared lock on the same row. SQL Server then detects the deadlock after a while and fails one of the transactions to resolve it. Failing one transaction releases its shared lock, so the second transaction can upgrade its shared lock to an exclusive lock and proceed with execution.
The way out of this deadlocking is to provide an UPDLOCK hint to the database, using something such as:
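Here is a sketch using EF6's DbSet.SqlQuery; the Counters set and table name carry over from the sketch above and are assumptions, not the original blog code:

```csharp
using System.Linq;
using System.Transactions;

public static void IncrementCounterWithUpdLock(int counterId)
{
    using (var scope = new TransactionScope(TransactionScopeOption.Required,
        new TransactionOptions { IsolationLevel = IsolationLevel.Serializable }))
    using (var context = new MyDbContext())
    {
        // UPDLOCK takes an update lock already at read time, so no
        // shared-to-exclusive upgrade (and hence no deadlock) is needed
        // later. DbSet.SqlQuery tracks the returned entity like a
        // normal query, so SaveChanges picks up the change.
        var counter = context.Counters
            .SqlQuery("SELECT * FROM dbo.Counters WITH (UPDLOCK) WHERE Id = @p0",
                      counterId)
            .Single();

        counter.Value++;
        context.SaveChanges();
        scope.Complete();
    }
}
```

With this hint, the second thread blocks on the SELECT until the first transaction commits, instead of deadlocking at SaveChanges.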
This code came from Ladislav Mrnka's blog, which now appears to be unavailable. The other alternative is to resort to optimistic locking.
The document states that EF does not have built-in pessimistic concurrency support. But this does not mean you can't have pessimistic locking with EF. So YOU CAN HAVE PESSIMISTIC LOCKING WITH EF!
The recipe is simple:
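Piecing it together from the snippets above:
- wrap the work in a TransactionScope with a serializable isolation level,
- read the rows you intend to update through raw SQL with an UPDLOCK hint,
- modify the tracked entities and call SaveChanges,
- call Complete() on the scope so the transaction commits.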
I did a lot of pessimistic locking, but optimistic locking is better. You can't go wrong with it.
A typical example where pessimistic locking can't help is a parent-child relation, where you might lock the parent and treat it like an aggregate (so you assume you are the only one with access to the child too). If another thread tries to access the parent object, it won't work (it will be blocked) until the first thread releases the lock on the parent table. But with an ORM, any other coder can load the child independently, and from that point two threads will make changes to the child object... With pessimistic locking you might mess up the data; with optimistic locking you'll get an exception, and you can reload valid data and try to save again...
So the code:
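A minimal sketch of that reload-and-retry approach, assuming an entity with a [Timestamp] rowversion column as the concurrency token (Child, MyDbContext, and its Children set are placeholder names):

```csharp
using System.ComponentModel.DataAnnotations;
using System.Data.Entity.Infrastructure;
using System.Linq;

public class Child
{
    public int Id { get; set; }
    public string Name { get; set; }

    [Timestamp]   // rowversion column used by EF as the concurrency token
    public byte[] RowVersion { get; set; }
}

public static void RenameChild(int childId, string newName)
{
    using (var context = new MyDbContext())
    {
        var child = context.Children.Find(childId);
        child.Name = newName;

        var saved = false;
        while (!saved)
        {
            try
            {
                context.SaveChanges();
                saved = true;
            }
            catch (DbUpdateConcurrencyException ex)
            {
                // Another thread changed the row first: reload the current
                // database values, reapply our change, and try again.
                var entry = ex.Entries.Single();
                entry.Reload();
                ((Child)entry.Entity).Name = newName;
            }
        }
    }
}
```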