LINQ to SQL: deadlock while updating a row from Parallel.For

Posted 2019-05-24 18:48

I have a problem. I'm trying to update the database in parallel using Parallel.For. Here is the code:

Parallel.For(rCnt, range.Rows.Count + 1, (jrCnt, loopState) =>
{
    var prcI = new Price(); // new entity for this row

    /*bla bla bla bla - bla bla - bla bla - bla */

    if ((!string.IsNullOrEmpty(prcI.name)) && (prcI.prc != 0)) // add or update this product
    {
        prcI.company = nameprice;
        prcI.date = datatimeselect.Text;

        Accessor.AddProductUpdateProduct(prcI); // main func

        /*bla bla bla bla - bla bla - bla bla - bla bla - bla  */
    }
});

Here is the function that adds or updates a record:

public static bool AddProductUpdateProduct(Price price)
{
    bool add = false;
    var db = new PriceDataContext();

    // find the matching product; SingleOrDefault so a missing match returns null instead of throwing
    var matchedprod =
        db.Price.SingleOrDefault(x => x.name == price.name && x.date != price.date && x.company == price.company);

    if (matchedprod != null) // match found
    {
        if (matchedprod.prc != price.prc)
        {
            matchedprod.date = price.date;
            matchedprod.prc = price.prc;
        }
        else
        {
            matchedprod.date = price.date;
        }
        db.SubmitChanges(); // DEADLOCK happens here!!!
    }
    /*bla - bla bla - bla bla - bla bla - bla bla - bla */
}

When I create a new record, everything works fine!

Thank you!

2 Answers
Answer 1 · 虎瘦雄心在 · 2019-05-24 19:47

I guess it could be the same problem I described in this question: Deadlock on SELECT/UPDATE. It is not a problem with LINQ to SQL itself; the limitation in LINQ to SQL is that you can't easily perform a SELECT with an UPDLOCK hint.
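
For what it's worth, here is a minimal sketch of one way to take an update lock from LINQ to SQL anyway, by dropping to raw SQL with DataContext.ExecuteQuery inside a TransactionScope. The query text, and the reuse of the question's Price entity, PriceDataContext, and column names, are my assumptions rather than anything from this answer:

// Sketch only, not the answerer's code: read the row under WITH (UPDLOCK) so the
// later UPDATE issued by SubmitChanges() cannot deadlock with a concurrent reader.
// Assumes the question's Price entity/table and PriceDataContext.
using (var tx = new System.Transactions.TransactionScope())
using (var db = new PriceDataContext())
{
    // LINQ to SQL has no query operator for locking hints, so use raw SQL;
    // {0}, {1}, {2} are turned into SQL parameters by ExecuteQuery.
    var matchedprod = db.ExecuteQuery<Price>(
        "SELECT * FROM Price WITH (UPDLOCK) WHERE name = {0} AND date <> {1} AND company = {2}",
        price.name, price.date, price.company).SingleOrDefault();

    if (matchedprod != null)
    {
        matchedprod.prc = price.prc;
        matchedprod.date = price.date;
        db.SubmitChanges();
    }

    tx.Complete(); // commit; the update lock is held until the scope completes
}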

Answer 2 · chillily · 2019-05-24 19:49

With a record count between 3000 and 10000 (per the comments), I would be looking at a solution that uses SqlBulkCopy to push the data into a staging table (i.e. a table that looks similar to the data you are manipulating, but is not part of your core model). This is the most efficient way of getting a bulk set of data to the server (although you might also look at table-valued parameters).
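
To make the bulk-copy step concrete, here is a rough sketch under my own assumptions: a staging table named dbo.PriceStaging, and column names/types taken from the question's Price entity (prc assumed to be decimal, date kept as text, matching how the question uses them):

using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;

// Sketch: push all collected rows to a staging table in one bulk operation
// instead of thousands of individual updates. "dbo.PriceStaging" is an assumed name.
static void BulkCopyToStaging(string connectionString, IEnumerable<Price> rows)
{
    var table = new DataTable();
    table.Columns.Add("name", typeof(string));
    table.Columns.Add("company", typeof(string));
    table.Columns.Add("date", typeof(string));   // the question stores the date as text
    table.Columns.Add("prc", typeof(decimal));   // assumed type for the price column

    foreach (var p in rows)
        table.Rows.Add(p.name, p.company, p.date, p.prc);

    using (var bulk = new SqlBulkCopy(connectionString))
    {
        bulk.DestinationTableName = "dbo.PriceStaging";
        bulk.WriteToServer(table); // single round trip for 3000-10000 rows
    }
}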

With the data at the server, I would then do either one update (inner join) and one insert (where not exists), or a single MERGE "upsert" (available in SQL Server 2008 and above).
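
A sketch of that server-side step as a single MERGE, issued here through DataContext.ExecuteCommand; the staging table name and the choice of (name, company) as the match key are assumptions carried over from the sketch above:

// Sketch: one MERGE "upsert" from the staging table into the real table,
// executed as a single statement by a single SPID.
db.ExecuteCommand(@"
    MERGE dbo.Price AS target
    USING dbo.PriceStaging AS source
        ON target.name = source.name AND target.company = source.company
    WHEN MATCHED THEN
        UPDATE SET target.prc = source.prc, target.date = source.date
    WHEN NOT MATCHED BY TARGET THEN
        INSERT (name, company, date, prc)
        VALUES (source.name, source.company, source.date, source.prc);");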

This uses less CPU on the app server, less network bandwidth, and fewer database resources. Also, since only one SPID is involved in the insert/update, there is no risk of deadlock.
