Speed up SQL JOIN

Posted 2019-05-25 03:51

First of all, some background.

We have an order processing system, where staff enter billing data about orders in an app that stores it in a SQL Server 2000 database. This database isn't the real billing system: it's just a holding location so that the records can be run into a mainframe system via a nightly batch process.

This batch process is a canned third party package provided by an outside vendor. Part of what it's supposed to do is provide a report for any records that were rejected. The reject report is worked manually.

Unfortunately, it turns out the third party software doesn't catch all the errors. We have separate processes that pull back the data from the mainframe into another table in the database and load the rejected charges into yet another table.

An audit process then runs to make sure everything that was originally entered by the staff can be accounted for somewhere. This audit takes the form of a SQL query we run, and it looks something like this:

SELECT *
FROM [StaffEntry] s with (nolock)
LEFT JOIN [MainFrame] m with (nolock)
    ON m.ItemNumber = s.ItemNumber 
        AND m.Customer=s.Customer 
        AND m.CustomerPO = s.CustomerPO -- purchase order
        AND m.CustPORev = s.CustPORev  -- PO revision number
LEFT JOIN [Rejected] r with (nolock) ON r.OrderID = s.OrderID
WHERE s.EntryDate BETWEEN @StartDate AND @EndDate
    AND r.OrderID IS NULL AND m.MainFrameOrderID IS NULL

That's heavily modified, of course, but I believe the important parts are represented. The problem is that this query is starting to take too long to run, and I'm trying to figure out how to speed it up.

I'm pretty sure the problem is the JOIN from the StaffEntry table to the MainFrame table. Since both hold data for every order since the beginning of time (2003 in this system), they tend to be a little large. The OrderID and EntryDate values used in the StaffEntry table are not preserved when imported to the mainframe, which is why that join is a little more complicated. And finally, since I'm looking for StaffEntry records that have no match in the MainFrame table, after doing the JOIN we have that ugly IS NULL check in the WHERE clause.
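For reference, the same anti-join can also be expressed with NOT EXISTS, which the optimizer will sometimes turn into a better plan. This is only a sketch using the table and column names from the query above, and it assumes the join columns behave the same way (no NULL-matching surprises):

SELECT s.*
FROM [StaffEntry] s with (nolock)
WHERE s.EntryDate BETWEEN @StartDate AND @EndDate
    AND NOT EXISTS (
        SELECT 1
        FROM [MainFrame] m with (nolock)
        WHERE m.ItemNumber = s.ItemNumber
            AND m.Customer = s.Customer
            AND m.CustomerPO = s.CustomerPO  -- purchase order
            AND m.CustPORev = s.CustPORev    -- PO revision number
    )
    AND NOT EXISTS (
        SELECT 1
        FROM [Rejected] r with (nolock)
        WHERE r.OrderID = s.OrderID
    )

In principle this returns the same rows as the LEFT JOIN ... IS NULL form; it just states the condition directly as "no matching row exists".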

The StaffEntry table is indexed by EntryDate (clustered) and separately on Customer/PO/rev. MainFrame is indexed by customer and the mainframe charge number (clustered, this is needed for other systems) and separately by customer/PO/Rev. Rejected is not indexed at all, but it's small and testing shows it's not the problem.
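Spelled out as DDL, that indexing looks roughly like this. The index names (and the mainframe charge number column) are placeholders made up for illustration, not the real object names:

-- StaffEntry: clustered on EntryDate, plus a Customer/PO/rev index
CREATE CLUSTERED INDEX IX_StaffEntry_EntryDate ON [StaffEntry] (EntryDate)
CREATE INDEX IX_StaffEntry_Cust_PO_Rev ON [StaffEntry] (Customer, CustomerPO, CustPORev)

-- MainFrame: clustered on customer + mainframe charge number, plus a Customer/PO/rev index
CREATE CLUSTERED INDEX IX_MainFrame_Cust_Charge ON [MainFrame] (Customer, MainFrameChargeNumber)
CREATE INDEX IX_MainFrame_Cust_PO_Rev ON [MainFrame] (Customer, CustomerPO, CustPORev)

-- Rejected: no indexes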

So, I'm wondering if there is another (hopefully faster) way I can express that relationship?

7 Answers
神经病院院长
#2 · 2019-05-25 04:50

Update:
In case it wasn't already obvious, I made a mistake in the code for the original question. That's now fixed, but unfortunately it means some of the better responses here are actually going in the completely wrong direction.

I also have some statistics updates: I can make the query run nice and fast by severely limiting the date range used with StaffEntry.EntryDate. Unfortunately, I'm only able to do that because, after running it the long way once, I then know exactly which dates I care about. I don't normally know that in advance.

The execution plan from the original run showed a 78% cost for a clustered index scan on the StaffEntry table, an 11% cost for an index seek on the MainFrame table, and 0% cost for the join itself. Running it with the narrow date range, that changes to 1% for an index seek of StaffEntry, 1% for an index seek of MainFrame, and 93% for a table scan of Rejected. These are 'actual' plans, not estimated.
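Given that the narrow-range plan is now dominated by that table scan of Rejected, one cheap experiment would be indexing its OrderID column so the join can seek instead of scan. Purely illustrative (the index name is made up, and I haven't tested this against the real schema):

CREATE INDEX IX_Rejected_OrderID ON [Rejected] (OrderID)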
