Here's what I use:
```sql
SELECT CAST(FLOOR(CAST(getdate() as FLOAT)) as DATETIME)
```
I'm thinking there may be a better and more elegant way.
Requirements:
- It has to be as fast as possible (the less casting, the better).
- The final result has to be a `datetime` type, not a string.
SQL 2005: I recommend `cast` instead of `dateadd`. For example, the cast-based truncation averaged around 10% faster on my dataset than the `dateadd`-based one (and casting into `smalldatetime` was faster still).
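For reference, the two idioms being compared are presumably along these lines (the cast-based form is the one from the question; the `dateadd`/`datediff` form is the one discussed throughout the thread):

```sql
-- Cast-based truncation (as in the question):
SELECT CAST(FLOOR(CAST(GETDATE() AS float)) AS datetime);

-- DateAdd/DateDiff-based truncation:
SELECT DATEADD(DAY, DATEDIFF(DAY, 0, GETDATE()), 0);
```

Both return a `datetime` at midnight of the current day.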
Your `CAST`-`FLOOR`-`CAST` already seems to be the optimum way, at least on MS SQL Server 2005.

Some other solutions I've seen use a string conversion, like

```sql
Select Convert(varchar(11), getdate(), 101)
```

which is slower by a factor of 10.

SQL Server 2008 has a new `date` data type, which simplifies this problem to `cast(getdate() as date)`.
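A small sketch of the 2008 approach (the variable name is mine):

```sql
-- SQL Server 2008+: truncate the time portion via the date type,
-- then widen back to datetime if a datetime is required.
DECLARE @now datetime;
SET @now = GETDATE();

SELECT CAST(CAST(@now AS date) AS datetime) AS midnight_today;
```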
Itzik Ben-Gan, in DATETIME Calculations, Part 1 (SQL Server Magazine, February 2007), shows three methods of performing such a conversion, from slowest to fastest (the difference between the second and third methods is small).

Your technique (casting to `float`) is suggested by a reader in the April issue of the magazine. According to him, its performance is comparable to that of Ben-Gan's second method.
Please try:
SQL Server 2008 and up
In SQL Server 2008 and up, of course the fastest way is `Convert(date, @date)`. This can be cast back to a `datetime` or `datetime2` if necessary.

What Is Really Best In SQL Server 2005 and Older?
I've seen inconsistent claims about what's fastest for truncating the time from a date in SQL Server, and some people even said they had done testing, but my experience has been different. So let's do some more stringent testing, and let everyone have the script, so that if I make any mistakes people can correct me.
Float Conversions Are Not Accurate
First, I would stay away from converting `datetime` to `float`, because it does not convert correctly. You may get away with doing the time removal accurately, but I think it's a bad idea to use it, because it implicitly communicates to developers that this is a safe operation, and it is not. That is not something we should be teaching people in our code or in our examples online.
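A quick illustration of the hazard (the value is illustrative, and exact results depend on the server; `datetime` has 1/300-second resolution while `float` has a finite mantissa, so equality after a round trip is not guaranteed):

```sql
-- Round-tripping datetime through float can silently change the value.
DECLARE @d datetime;
SET @d = '2010-09-12 00:00:00.003';

SELECT @d                                  AS original,
       CAST(CAST(@d AS float) AS datetime) AS round_tripped;
-- If these two columns differ (here or for other values), float math
-- has corrupted the time portion.
```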
Also, it is not even the fastest way!
Proof – Performance Testing
If you want to perform some tests yourself to see how the different methods really stack up, you'll need this setup script to run the tests further down:

Please note that this creates a 427.57 MB table in your database and will take something like 15-30 minutes to run. If your database is small and set to 10% growth, it will take longer than if you size it big enough first.
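The original setup script is not reproduced in this extract; a minimal sketch of an equivalent one (table, column, and CTE names are my own invention, and the row count matches the 25,920,000 figure used later) could look like:

```sql
-- Illustrative stand-in for the missing setup script: a clustered table
-- of 25,920,000 datetime values, one per second across 300 days.
CREATE TABLE dbo.DateTest (Tm datetime NOT NULL PRIMARY KEY CLUSTERED);

WITH Ten AS (
   SELECT n FROM (VALUES (0),(1),(2),(3),(4),(5),(6),(7),(8),(9)) AS v(n)
),
Nums AS (   -- 10^8 candidate rows; TOP trims to the count we need
   SELECT TOP (25920000)
          ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) - 1 AS i
   FROM Ten a, Ten b, Ten c, Ten d, Ten e, Ten f, Ten g, Ten h
)
INSERT dbo.DateTest (Tm)
SELECT DATEADD(second, i, '20000101')
FROM Nums;
```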
Now for the actual performance testing script. Please note that it's intentional not to return rows back to the client, as doing so is crazy expensive on 26 million rows and would hide the performance differences between the methods.
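The timing script itself is also missing from this extract; per the description above, the shape of each timed run would be something like the following sketch (names match my setup sketch; the expression under test varies per run, the `DateAdd`/`DateDiff` method shown here being one of them):

```sql
-- One timed run (pattern only): group 26M rows by day using the method
-- under test, assigning results to variables so no rows reach the client.
DECLARE @day datetime, @cnt int;

SET STATISTICS TIME ON;

SELECT @day = DATEADD(DAY, DATEDIFF(DAY, 0, Tm), 0),
       @cnt = COUNT(*)
FROM dbo.DateTest
GROUP BY DATEDIFF(DAY, 0, Tm);

SET STATISTICS TIME OFF;  -- CPU and elapsed times appear in the Messages tab
```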
Performance Results
Some Rambling Analysis
Some notes about this. First of all, if just performing a GROUP BY or a comparison, there's no need to convert back to `datetime`. So you can save some CPU by avoiding that, unless you need the final value for display purposes. You can even GROUP BY the unconverted value and put the conversion only in the SELECT clause.

Also, see how the numeric conversions only take slightly more time to convert back to `datetime`, but the `varchar` conversion almost doubles? This reveals the portion of the CPU that is devoted to date calculation in the queries. There are parts of the CPU usage that don't involve date calculation, and this appears to be something close to 19875 ms in the above queries. Then the conversion takes some additional amount, so if there are two conversions, that amount is used up approximately twice.

More examination reveals that, compared to `Convert(, 112)`, the `Convert(, 101)` query has some additional CPU expense (since it uses a longer `varchar`?), because the second conversion back to `date` doesn't cost as much as the initial conversion to `varchar`, but with `Convert(, 112)` it is closer to the same 20000 ms CPU base cost.

Here are the calculations on the CPU time that I used for the above analysis:
- round is the CPU time for a round trip back to `datetime`.
- single is the CPU time for a single conversion to the alternate data type (the one that has the side effect of removing the time portion).
- base is the result of subtracting from single the difference between the two invocations: `single - (round - single)`. It's a ballpark figure that assumes the conversion to and from a given data type and `datetime` costs approximately the same in either direction. This assumption appears not to be perfect, but it is close, because the values are all near 20000 ms, with only one exception.

One more interesting thing is that the base cost is nearly equal to the single `Convert(date)` method (which has to be almost zero cost, as the server can internally extract the integer day portion right out of the first four bytes of the `datetime` data type).

Conclusion
So what it looks like is that the single-direction `varchar` conversion method takes about 1.8 μs and the single-direction `DateDiff` method takes about 0.18 μs. I'm basing this on the most conservative "base CPU" time in my testing, 18458 ms, against a single-conversion time of 23218 ms for 25,920,000 rows: (23218 ms − 18458 ms) / 25,920,000 rows ≈ 0.18 μs. The apparent 10x improvement seems like a lot, but it is frankly pretty small until you are dealing with hundreds of thousands of rows (at about 1.6 μs saved per row, 617k rows ≈ 1 second of savings).

Even given this small absolute improvement, in my opinion, the `DateAdd` method wins because it is the best combination of performance and clarity. The answer that requires a "magic number" of `0.50000004` is going to bite someone some day (five zeroes or six???), plus it's harder to understand.

Additional Notes
When I get some time, I'm going to change `0.50000004` to `'12:00:00.003'` and see how it does. It converts to the same `datetime` value, and I find it much easier to remember.

For those interested, the above tests were run on a server where `@@Version` returns the following: