I understand that the WITH RECOMPILE option forces the optimizer to rebuild the query plan for stored procs but when would you want that to happen?
What are some rules of thumb on when to use the WITH RECOMPILE option and when not to?
What's the effective overhead associated with just putting it on every sproc?
It should only be used when testing with representative data and context demonstrates that doing without it produces invalid query plans (whatever the possible reasons might be). Don't assume beforehand (without testing) that an SP won't optimize properly.
Sole exception, for manual invocation only (i.e. don't code it into the SP): when you know that you've substantially altered the character of the target tables, e.g. after a TRUNCATE, a bulk load, etc.
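For example, a minimal sketch of that kind of manual, one-off invocation (the table, file path, and procedure names here are made up for illustration):

```sql
-- The table's contents were just replaced wholesale, so the cached plan's
-- assumptions may be badly off; request a fresh plan for this call only,
-- without baking WITH RECOMPILE into the procedure definition.
TRUNCATE TABLE dbo.OrderStaging;
BULK INSERT dbo.OrderStaging FROM 'D:\loads\orders.dat';

EXEC dbo.usp_SummarizeStaging WITH RECOMPILE;
```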
It's yet another opportunity for premature optimization.
Note: I have plenty of points. If a newbie submits the same answer below, and you agree, upvote theirs.
Generally, a much better alternative to WITH RECOMPILE is OPTION (RECOMPILE), as you can see in the explanation below, taken from the answer to this question here.

As others have said, you don't want to simply include WITH RECOMPILE in every stored proc as a matter of habit. By doing so, you'd be eliminating one of the primary benefits of stored procedures: the fact that the query plan is saved.

Why is that potentially a big deal? Computing a query plan is a lot more intensive than compiling regular procedural code. Because the syntax of a SQL statement only specifies what you want, and not (generally) how to get it, the database has a wide degree of flexibility when creating the physical plan (that is, the step-by-step instructions to actually gather and modify data). There are lots of "tricks" the database query pre-processor can do and choices it can make: what order to join the tables, which indexes to use, whether to apply WHERE clauses before or after joins, and so on.

For a simple SELECT statement, it might not make a difference, but for any non-trivial query, the database is going to spend some serious time (measured in milliseconds, as opposed to the usual microseconds) coming up with an optimal plan. For really complex queries, it can't even guarantee an optimal plan; it just has to use heuristics to come up with a pretty good one. So by forcing it to recompile every time, you're telling it to go through that process over and over again, even if the plan it got before was perfectly good.
Depending on the vendor, there should be automatic triggers for recompiling query plans - for example, if the statistics on a table change significantly (say, the histogram of values in a certain column starts out evenly distributed but over time becomes highly skewed), then the DB should notice that and recompile the plan. But generally speaking, the implementers of a database are going to be smarter about that on the whole than you are.
As with anything performance related, don't take shots in the dark; figure out where the bottlenecks are that are costing 90% of your performance, and solve them first.
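To make the alternative at the top of this answer concrete, here's a minimal sketch (the procedure and table names are invented) of putting OPTION (RECOMPILE) on the one statement that actually needs a fresh plan, rather than WITH RECOMPILE on the whole procedure:

```sql
-- Only this statement is recompiled on each execution; the rest of the
-- procedure keeps its cached plan.
CREATE PROCEDURE dbo.usp_GetOrdersByRegion
    @Region nvarchar(50)
AS
BEGIN
    SELECT OrderID, OrderDate, TotalDue
    FROM dbo.Orders
    WHERE Region = @Region
    OPTION (RECOMPILE);
END;
```

The per-statement compile cost is still paid on every call, so even this is worth reserving for statements where a reused plan is demonstrably bad.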
Putting it on every stored procedure is NOT a good idea, because compiling a query plan is a relatively expensive operation and you will not see any benefit from the query plans being cached and re-used.
The case of a dynamic WHERE clause built up inside a stored procedure can be handled by using sp_executesql to execute the T-SQL, rather than adding WITH RECOMPILE to the stored procedure.

Another solution (SQL Server 2005 onwards) is to hint with specific parameter values using the OPTIMIZE FOR hint. This works well if the values in the rows are static.

SQL Server 2008 introduced a little-known feature called "OPTIMIZE FOR UNKNOWN", which tells the optimizer to plan against average statistics rather than the sniffed parameter values.

The most common use is when you might have a dynamic WHERE clause in a procedure... you wouldn't want that particular query plan to get compiled and saved for subsequent executions, because it very well might not be the exact same clause the next time the procedure is called.
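As a rough sketch of the sp_executesql approach (the procedure, table, and column names are made up for illustration):

```sql
-- Build only the predicates that are actually needed, then execute the
-- parameterized text with sp_executesql; each distinct query shape gets
-- its own cached, reusable plan, so WITH RECOMPILE isn't needed.
CREATE PROCEDURE dbo.usp_SearchOrders
    @CustomerID int          = NULL,
    @Region     nvarchar(50) = NULL
AS
BEGIN
    DECLARE @sql nvarchar(max);
    SET @sql = N'SELECT OrderID, OrderDate, TotalDue
                 FROM dbo.Orders
                 WHERE 1 = 1';

    IF @CustomerID IS NOT NULL
        SET @sql = @sql + N' AND CustomerID = @CustomerID';
    IF @Region IS NOT NULL
        SET @sql = @sql + N' AND Region = @Region';

    EXEC sp_executesql
        @sql,
        N'@CustomerID int, @Region nvarchar(50)',
        @CustomerID = @CustomerID,
        @Region     = @Region;
END;
```

And the hint-based alternative keeps a single static statement but asks the optimizer to plan for average parameter values rather than the first ones it sees:

```sql
-- Inside a procedure body: plan against column statistics/density
-- instead of sniffing the value passed on the first execution.
SELECT OrderID, OrderDate, TotalDue
FROM dbo.Orders
WHERE Region = @Region
OPTION (OPTIMIZE FOR UNKNOWN);
```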