I have a fairly simple query that I keep getting timeouts on when it runs from code (it takes over three minutes to complete; I stopped it early so I could post this question). However, when I run the same query from the same computer in SQL Server Management Studio, it takes only 2532 ms the first time, when the data is not cached on the server, and 524 ms for repeated runs.
Here is my C# code:
using (var conn = new SqlConnection("Data Source=backend.example.com;Connect Timeout=5;Initial Catalog=Logs;Persist Security Info=True;User ID=backendAPI;Password=Redacted"))
using (var ada = new SqlDataAdapter(String.Format(@"
    SELECT [PK_JOB],[CLIENT_ID],[STATUS],[LOG_NAME],dt
    FROM [ES_HISTORY]
    inner join [es_history_dt] on [PK_JOB] = [es_historyid]
    Where client_id = @clientID and dt > @dt and (job_type > 4 {0}) {1}
    Order by dt desc",
    where.ToString(), (cbShowOnlyFailed.Checked ? "and Status = 1" : "")), conn))
{
    ada.SelectCommand.Parameters.AddWithValue("@clientID", ClientID);
    ada.SelectCommand.Parameters.AddWithValue("@dt", dtpFilter.Value);
    //ada.SelectCommand.CommandTimeout = 60;
    conn.Open();
    Logs.Clear();
    ada.Fill(Logs); // Time out exception for 30 sec limit.
}
Here is the script I am running in SSMS; I pulled the query right from ada.SelectCommand.CommandText:
declare @clientID varchar(200)
set @clientID = '138'
declare @dt datetime
set @dt = '9/19/2011 12:00:00 AM'
SELECT [PK_JOB],[CLIENT_ID],[STATUS],[LOG_NAME],dt
FROM [ES_HISTORY]
inner join [es_history_dt] on [PK_JOB] = [es_historyid]
Where client_id = @clientID and dt > @dt and (job_type > 4 or job_type = 0 or job_type = 1 or job_type = 4 )
Order by dt desc
What is causing the major discrepancy between the two execution times?
To keep the comment section clean, I will answer some FAQs here.
The same computer and logon are used for both the application and SSMS.
Only 15 rows are returned by my example query; however, es_history contains 11,351,699 rows and es_history_dt contains 8,588,493 rows. Both tables are well indexed, and the execution plan in SSMS shows index seeks for the look-ups, so they are fast. The program is behaving as if it is not using the indexes for the C# version of the query.
Run the profiler on your C# connection; there may be other activity going on that you are not aware of.
Capture the execution plan from both SSMS when you manually run your query and then from Profiler when you are running your application. Compare and contrast.
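If Profiler is awkward to set up, one alternative for the comparison step is to pull both plans straight out of the plan cache and diff them. The sketch below is only an illustration and assumes the login has VIEW SERVER STATE permission; DumpCachedPlans and connectionString are placeholder names, not anything from the question. The query_plan column is the plan XML, which can be saved as a .sqlplan file and opened graphically in SSMS.

using System;
using System.Data.SqlClient;

// Illustrative helper: list the cached plans for statements that touch
// ES_HISTORY, so the plan compiled for the application's parameterized
// query can be compared with the one produced in SSMS.
static void DumpCachedPlans(string connectionString)
{
    const string sql = @"
        SELECT cp.objtype, cp.usecounts, st.text, qp.query_plan
        FROM sys.dm_exec_cached_plans AS cp
        CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
        CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
        WHERE st.text LIKE N'%ES_HISTORY%'
          AND st.text NOT LIKE N'%dm_exec_cached_plans%';";

    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand(sql, conn))
    {
        conn.Open();
        using (var reader = cmd.ExecuteReader())
        {
            while (reader.Read())
            {
                Console.WriteLine("objtype={0}, usecounts={1}", reader["objtype"], reader["usecounts"]);
                Console.WriteLine(reader["query_plan"]); // plan XML; save as .sqlplan to view in SSMS
            }
        }
    }
}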
Your code in SSMS is not the same code you run in your application. This line in your application adds an NVARCHAR parameter:

ada.SelectCommand.Parameters.AddWithValue("@clientID", ClientID);

while in the SSMS script you declare it as VARCHAR:

declare @clientID varchar(200)
Due to the rules of Data Type Precedence, the

Where client_id = @clientID

expression in your query is not SARG-able when @clientID is of type NVARCHAR (I'm making a leap of faith and assuming that the client_id column is of type VARCHAR). The application thus forces a table scan where the SSMS query can do a quick key seek. This is a well known and understood issue with using Parameters.AddWithValue and has been discussed in many articles before; e.g. see How Data Access Code Affects Database Performance. Once the problem is understood, the solutions are trivial. Either add the parameter with the Add overload that accepts a type:

Parameters.Add("@clientID", SqlDbType.VarChar, 200)

(and do pass in the explicit length, to prevent cache pollution; see Query performance and plan cache issues when parameter length not specified correctly), or cast the parameter in the SQL text:

where client_id = cast(@clientID as varchar(200))

The first solution is superior because it solves the cache pollution problem in addition to the SARG-ability problem.
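For illustration only, here is roughly what the first fix could look like when applied to the code in the question (connectionString and commandText stand in for the connection string and String.Format result already shown above; the VARCHAR(200) length is an assumption that should match the real definition of client_id):

using System.Data;              // SqlDbType
using System.Data.SqlClient;

using (var conn = new SqlConnection(connectionString))
using (var ada = new SqlDataAdapter(commandText, conn))
{
    // Explicitly typed parameters: @clientID goes over the wire as
    // VARCHAR(200) instead of an inferred NVARCHAR, so the predicate on
    // client_id stays SARG-able and every call reuses one cached plan.
    ada.SelectCommand.Parameters.Add("@clientID", SqlDbType.VarChar, 200).Value = ClientID;
    ada.SelectCommand.Parameters.Add("@dt", SqlDbType.DateTime).Value = dtpFilter.Value;

    conn.Open();
    Logs.Clear();
    ada.Fill(Logs);
}

With the parameter typed to match the column, the index seek you see in SSMS should be available to the application as well.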
I would also recommend you read Slow in the Application, Fast in SSMS? Understanding Performance Mysteries.
Run DBCC FREEPROCCACHE, as suggested here, just to make sure the problem isn't due to a stale query execution plan.
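If you want to issue that from code while diagnosing rather than from SSMS, the call is a one-liner; this is just a sketch using the same connectionString placeholder, and since DBCC FREEPROCCACHE flushes every cached plan on the instance and requires ALTER SERVER STATE, it only makes sense against a development or test server:

using System.Data.SqlClient;

using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand("DBCC FREEPROCCACHE;", conn))
{
    conn.Open();
    cmd.ExecuteNonQuery(); // clears the entire plan cache on this instance
}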