I have code running in a loop that saves state based on the current time. Sometimes the iterations are just milliseconds apart, but for some reason DateTime.Now seems to always return values at least 10 ms apart, even when only 2 or 3 ms have actually passed. This is a major problem, since the state I'm saving depends on the time it was saved (e.g. recording something).
My test code, which shows values at least 10 ms apart:
public static void Main()
{
    var dt1 = DateTime.Now;
    System.Threading.Thread.Sleep(2);
    var dt2 = DateTime.Now;
    // On my machine the values will be at least 10 ms apart
    Console.WriteLine("First: {0}, Second: {1}", dt1.Millisecond, dt2.Millisecond);
}
Is there another way to get the current time accurately, down to the millisecond?
Someone suggested looking at the Stopwatch class. Although the Stopwatch class is very accurate, it does not tell me the current time, which is something I need in order to save the state of my program.
Curiously, your code works perfectly fine on my quad core under Win7, generating values exactly 2 ms apart almost every time.
So I've done a more thorough test. Here's my example output for Thread.Sleep(1). The code prints the number of ms between consecutive calls to DateTime.UtcNow in a loop. Each row contains 100 characters, and thus represents 100 ms of time on a "clean run"; so this screen covers roughly 2 seconds. The longest preemption was 4 ms; moreover, there was a period lasting around 1 second when every iteration took exactly 1 ms. That's almost real-time OS quality!¹ :)
So I tried again, with Thread.Sleep(2) this time. Again, almost perfect results: this time each row is 200 ms long, and there's a run almost 3 seconds long where the gap was never anything other than exactly 2 ms.
Naturally, the next thing to see is the actual resolution of DateTime.UtcNow on my machine. Here's a run with no sleeping at all; a "." is printed if UtcNow didn't change at all between iterations.
Finally, while investigating a strange case of timestamps being 15 ms apart on the same machine that produced the above results, I ran into some further curious occurrences.
There is a function in the Windows API called timeBeginPeriod, which applications can use to temporarily increase the timer frequency, so this is presumably what happened here. Detailed documentation of the timer resolution is available via the Hardware Dev Center Archive, specifically Timer-Resolution.docx (a Word file).
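For illustration, here is a minimal sketch of how an application might request a higher timer frequency via timeBeginPeriod (the P/Invoke declarations, the demo class, and the 1 ms value are my assumptions; timeEndPeriod must be called with the same value when you are done):

using System;
using System.Runtime.InteropServices;
using System.Threading;

static class TimerResolutionDemo
{
    // winmm.dll exports timeBeginPeriod/timeEndPeriod; 1 ms is the usual minimum.
    [DllImport("winmm.dll", ExactSpelling = true)]
    static extern uint timeBeginPeriod(uint uMilliseconds);

    [DllImport("winmm.dll", ExactSpelling = true)]
    static extern uint timeEndPeriod(uint uMilliseconds);

    public static void Main()
    {
        timeBeginPeriod(1); // ask the OS for ~1 ms timer resolution
        try
        {
            var before = DateTime.UtcNow;
            Thread.Sleep(2); // should now sleep close to the requested 2 ms
            var after = DateTime.UtcNow;
            Console.WriteLine((after - before).TotalMilliseconds);
        }
        finally
        {
            timeEndPeriod(1); // always match the earlier timeBeginPeriod call
        }
    }
}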
Conclusions:

- DateTime.UtcNow can have a much higher resolution than 15 ms.
- Thread.Sleep(1) can sleep for exactly 1 ms.
- UtcNow can grow by exactly 1 ms at a time (give or take a rounding error - Reflector shows that there's a division in UtcNow).

Here's the code:
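(The original listing isn't reproduced here; as a rough sketch, not the author's exact code, a measurement loop in the same spirit prints the gap in ms between consecutive DateTime.UtcNow readings, or a "." when the clock hasn't advanced:)

using System;
using System.Threading;

public static void Main()
{
    var prev = DateTime.UtcNow;
    for (int i = 0; i < 2000; i++)
    {
        Thread.Sleep(1); // also try Sleep(2) or no sleep at all
        var now = DateTime.UtcNow;
        int gapMs = (int)(now - prev).TotalMilliseconds;
        Console.Write(gapMs == 0 ? "." : gapMs.ToString()); // '.' means UtcNow didn't change
        prev = now;
    }
}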
It turns out there exists an undocumented function which can alter the timer resolution. I haven't investigated the details, but I thought I'd post a link here: NtSetTimerResolution.

¹ Of course I made extra certain that the OS was as idle as possible, and that it had four fairly powerful CPU cores at its disposal. If I load all four cores to 100%, the picture changes completely, with long preemptions everywhere.
You should ask yourself whether you really need accurate time, or just close-enough time plus an increasing integer.
You can do good things by getting now() just after a wait event such as a mutex, select, poll, WaitFor*, etc., and then adding a serial number to that, perhaps in the nanosecond range or wherever there is room.
You can also use the rdtsc machine instruction (some libraries provide an API wrapper for this; I'm not sure about doing it in C# or Java) to get cheap time from the CPU and combine that with time from now(). The problem with rdtsc is that on systems with speed scaling you can never be quite sure what it's going to do. It also wraps around fairly quickly.
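As a rough C# illustration of the "close-enough time plus an increasing integer" idea (the type and method names here are mine, not from any library):

using System;

// Hands out strictly increasing timestamps even when the wall clock
// hasn't advanced between calls. Purely illustrative.
static class SequencedClock
{
    static long _lastTicks;
    static readonly object _gate = new object();

    public static DateTime Next()
    {
        lock (_gate)
        {
            long ticks = DateTime.UtcNow.Ticks;
            // If the clock hasn't moved past the last value we handed out,
            // bump by one 100 ns tick instead; the real clock (10-15 ms steps)
            // easily stays ahead of these tiny artificial increments.
            _lastTicks = ticks > _lastTicks ? ticks : _lastTicks + 1;
            return new DateTime(_lastTicks, DateTimeKind.Utc);
        }
    }
}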
All that I used to accomplish this task 100% accurately was a timer control and a label.
The code does not require much explanation; it's fairly simple: a couple of global variables, the timer's Tick event handler, and the form's Load event.
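(Those snippets aren't included above; as a guess at what they might have looked like, using a standard WinForms Timer plus a Label, with all names assumed on my part:)

using System;
using System.Windows.Forms;

public class ClockForm : Form
{
    // "Global variables": the timer and the label that shows the current time.
    readonly Timer timer1 = new Timer();
    readonly Label label1 = new Label { AutoSize = true };

    public ClockForm()
    {
        Controls.Add(label1);
        Load += Form1_Load;
        timer1.Tick += timer1_Tick;
    }

    // The tick event: refresh the label with the current time.
    void timer1_Tick(object sender, EventArgs e)
    {
        label1.Text = DateTime.Now.ToString("HH:mm:ss.fff");
    }

    // The form load: set the interval and start the timer.
    void Form1_Load(object sender, EventArgs e)
    {
        timer1.Interval = 1; // actual firing rate is still limited by the system timer
        timer1.Start();
    }

    [STAThread]
    public static void Main()
    {
        Application.Run(new ClockForm());
    }
}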
The problem with DateTime when dealing with milliseconds isn't due to the DateTime class at all; rather, it has to do with CPU ticks and thread time slices. Essentially, when an operation is paused by the scheduler to allow other threads to execute, it must wait a minimum of one time slice before resuming, which is around 15 ms on modern Windows versions. Therefore, any attempt to pause for less than this 15 ms granularity will lead to unexpected results.
Answering the second part of your question regarding a more precise API: the comment from AnotherUser led me to a solution that, in my scenario, overcomes the DateTime.Now precision issue:
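(The snippet itself isn't reproduced above; my assumption, based on the 1600-year adjustment and the tick-level benchmark described below, is that it wraps the Win32 GetSystemTimePreciseAsFileTime API, roughly like this:)

using System;
using System.Runtime.InteropServices;

static class PreciseDateTime
{
    // Available on Windows 8 / Server 2012 and later.
    [DllImport("Kernel32.dll", CallingConvention = CallingConvention.Winapi)]
    static extern void GetSystemTimePreciseAsFileTime(out long filetime);

    public static DateTime UtcNow
    {
        get
        {
            GetSystemTimePreciseAsFileTime(out long filetime);
            // FILETIME counts 100 ns ticks from 1601-01-01, while DateTime ticks
            // start at 0001-01-01, hence the 1600-year shift mentioned below.
            return new DateTime(filetime, DateTimeKind.Utc).AddYears(1600);
        }
    }
}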
In my own benchmarks, iterating 1M times, it returns in 3 ticks on average versus 2 ticks for DateTime.Now.
Why 1600 is needed is out of my jurisdiction, but I use it to get the correct year.
EDIT: This is still an issue on Win10. Anybody interested can run this piece of evidence:
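(That snippet isn't included above either; a small loop along these lines, my reconstruction rather than the original, makes the granularity visible by printing how much DateTime.Now jumps each time it changes:)

using System;

public static void Main()
{
    DateTime last = DateTime.Now;
    for (int changes = 0; changes < 20; )
    {
        DateTime current = DateTime.Now;
        if (current != last)
        {
            // On a default-configured Windows box this typically prints
            // steps of roughly 1 to 15.6 ms rather than sub-millisecond values.
            Console.WriteLine((current - last).TotalMilliseconds);
            last = current;
            changes++;
        }
    }
}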
If you take a snapshot of the current time before you do anything, you can just add the Stopwatch's elapsed time to the time you stored, no?
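For example, a minimal sketch of that idea:

using System;
using System.Diagnostics;

// One DateTime snapshot plus a Stopwatch gives high-resolution wall-clock
// timestamps without depending on DateTime.Now's coarse update rate.
static class HighResClock
{
    static readonly DateTime _start = DateTime.UtcNow;
    static readonly Stopwatch _watch = Stopwatch.StartNew();

    public static DateTime UtcNow => _start + _watch.Elapsed;
}

One caveat: Stopwatch can drift slightly from the system clock over long runs, so a long-lived process may want to re-take the snapshot periodically.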