Question:

Today again, I have a MAJOR issue with what appears to be parameter sniffing in SQL Server 2005.

I have a query comparing some results with known good results. I added a column to both the results and the known good results so that each month I can load a new month's results on both sides and compare only the current month. The new column is first in the clustered index, so new months will add to the end.

I add a criterion to my WHERE clause - this is code-generated, so it's a literal constant:

WHERE DATA_DT_ID = 20081231 -- Which is redundant because all DATA_DT_ID are 20081231 right now.

Performance goes to pot: from 7 seconds to compare about 1.5m rows to 2 hours with nothing completing. This is the generated SQL run directly in SSMS - no SPs involved.
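
(For reference, the generated statement is shaped roughly like the sketch below. The table and column names are invented for illustration; only DATA_DT_ID and the 20081231 literal come from the real query.)

SELECT r.KEY_COL
FROM dbo.RESULTS r                      -- hypothetical name for the results table
LEFT JOIN dbo.KNOWN_GOOD k              -- hypothetical name for the known-good table
       ON  k.DATA_DT_ID = r.DATA_DT_ID
       AND k.KEY_COL    = r.KEY_COL
WHERE r.DATA_DT_ID = 20081231           -- the redundant, code-generated literal
  AND k.KEY_COL IS NULL;                -- results rows with no matching known-good row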

I've been using SQL Server for going on 12 years now, and I have never had so many problems with parameter sniffing as I have had on this production server since October (build 9.00.3068.00). And in every case, it's not because the query was first run with a different parameter or because the table changed. This is a new table, and it's only ever run with this parameter or with no WHERE clause at all.

And, no, I don't have DBA access, and they haven't given me enough rights to see the execution plans.

It's to the point where I'm not sure I'm going to be able to hand this system off to SQL Server users with only a couple of years' experience.

UPDATE: It turns out that although the statistics claim to be up to date, running UPDATE STATISTICS WITH FULLSCAN clears up the problem.
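
(In other words, something along these lines against the tables involved - the table names here are hypothetical:)

UPDATE STATISTICS dbo.RESULTS WITH FULLSCAN;
UPDATE STATISTICS dbo.KNOWN_GOOD WITH FULLSCAN;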

FINAL UPDATE: Even after recreating the SP, using WITH RECOMPILE, and running UPDATE STATISTICS, the query ultimately had to be rewritten to use a NOT IN instead of a LEFT JOIN with a NULL check.
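
(Roughly, and reusing the hypothetical names from the sketch above, the rewrite replaced the LEFT JOIN anti-join with a NOT IN subquery. This assumes KEY_COL is not nullable, since NOT IN behaves differently when the subquery can return NULLs.)

SELECT r.KEY_COL
FROM dbo.RESULTS r
WHERE r.DATA_DT_ID = 20081231
  AND r.KEY_COL NOT IN (SELECT k.KEY_COL
                        FROM dbo.KNOWN_GOOD k
                        WHERE k.DATA_DT_ID = 20081231);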

Answer 1:

Not quite an answer, but I'll share my experience.

Parameter sniffing took a few years of SQL Server work to come and bite me, when I went back to developer-DBA work after spending time mostly as a production DBA. By then I understood more about the engine, how SQL works, and what was best left to the client, and I was a better SQL coder.

For example, dynamic SQL, CURSORs, or just plain bad SQL code will probably never suffer from parameter sniffing. But better set-based programming, avoiding dynamic SQL, and more elegant SQL more likely will.

I noticed it in complex search code (plenty of conditionals) and in complex reports where parameter defaults affected the plan. Looking at how less experienced developers would write this kind of code, it simply wouldn't suffer from parameter sniffing.

In any event, I prefer parameter masking to WITH RECOMPILE. Updating stats or indexes forces a recompile anyway, so why recompile all the time? I've answered another of your questions with a link mentioning that parameters are sniffed during compilation, so I don't have much faith in WITH RECOMPILE either.

Parameter masking adds some overhead, yes, but it allows the optimiser to evaluate the query case by case rather than blanket recompiling, especially with the statement-level recompilation introduced in SQL Server 2005.
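
For readers who haven't seen the term, parameter masking just means copying the incoming parameter into a local variable, so the optimiser cannot sniff the caller's value at compile time. A minimal sketch, with a hypothetical procedure and table name:

CREATE PROCEDURE dbo.usp_CompareMonth    -- hypothetical procedure
    @DataDtId int
AS
BEGIN
    -- Mask the parameter: the optimiser cannot see a local variable's value
    -- at compile time, so it estimates from density instead of the caller's value.
    DECLARE @MaskedDataDtId int;
    SET @MaskedDataDtId = @DataDtId;

    SELECT COUNT(*)
    FROM dbo.RESULTS
    WHERE DATA_DT_ID = @MaskedDataDtId;
END;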

OPTIMIZE FOR UNKNOWN in SQL Server 2008 also appears to do exactly the same thing as masking. My SQL Server MVP colleague and I spent some time investigating and came to this conclusion.
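
Inside the same hypothetical procedure as the sketch above, the 2008 hint version would look like this:

SELECT COUNT(*)
FROM dbo.RESULTS
WHERE DATA_DT_ID = @DataDtId             -- the procedure's parameter, unmasked
OPTION (OPTIMIZE FOR UNKNOWN);           -- SQL Server 2008+: don't use the sniffed value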



Answer 2:

I suspect your problem is caused by out-of-date statistics. Since you do not have DBA access to the server, I would encourage you to ask the DBA when statistics were last updated. This can have a huge impact on performance. It also sounds like your tables are not indexed very well.

Basically, this does not "feel" like a parameter sniffing issue, but more like a database-health issue.

This article describes how you can determine the last time statistics were updated: Statistics Update Time
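
If the DBA will at least run a query for you, STATS_DATE against sys.stats reports when each statistic on a table was last updated (the table name below is hypothetical):

SELECT s.name AS stats_name,
       STATS_DATE(s.[object_id], s.stats_id) AS last_updated
FROM sys.stats AS s
WHERE s.[object_id] = OBJECT_ID(N'dbo.RESULTS');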



Answer 3:

I second the comment about checking the statistics - I have seen several instances where a query's performance has fallen off a cliff specifically because the statistics are out of date.

Specifically, if you have a date in your PK and SQL Server thinks there are only 10 or 100 records after a specific date when in fact there are thousands, it may choose terribly inefficient query plans because it thinks the dataset is much smaller than it really is.

HTH,

  • Andrew


Answer 4:

I had a production issue exactly like this. A tab in the application that called a stored proc would not show. I ran a trace for that specific proc and saw the call. The application times out after 30 seconds, and the proc would take close to 40-50 seconds to complete (running the proc exactly as called in the trace).

The next step was to figure out which statement was causing the scans I noticed during the procedure's execution. So I scripted out the proc, removed the procedure syntax, declared the parameters as variables, and ran it in Query Analyzer. It ran in 3 seconds!

I'm writing this to let anyone out there looking for answers know that this can happen in SQL Server. It stems from the parameter sniffing issue. I was able to find this thread because I pinpointed the cause as a faulty cached query plan. I've read posts saying it happens only for one specific user/value, but it can happen for any value, and once it starts it can be a continuous thing.

The solution for me was to script out the proc and run it again. Yes, that simple. An ALTER works fine; there is no need to drop and re-create. This causes SQL Server to refresh the cached plan, and things were fine. I have not figured out how to disable this at the server level, and it is too cumbersome to clean up all the procs by hand. Hope this helps.
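
For anyone who wants the same effect without re-running the ALTER script, sp_recompile (standard T-SQL, not something the answer above mentions) marks a single procedure's plan for recompilation on its next execution; the procedure name here is hypothetical:

-- Flush just this procedure's cached plan; it recompiles on the next call
EXEC sp_recompile N'dbo.usp_ShowTabData';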