In this interesting article about falsehoods programmers believe about time, one of them is
Thread.sleep(1000) sleeps for >= 1000 milliseconds.
When isn't this true?
According to this (the Windows documentation of the Sleep function, which is what Thread.sleep calls underneath): If dwMilliseconds is less than the resolution of the system clock, the thread may sleep for less than the specified length of time. If dwMilliseconds is greater than one tick but less than two, the wait can be anywhere between one and two ticks, and so on. To increase the accuracy of the sleep interval, call the timeGetDevCaps function to determine the supported minimum timer resolution and the timeBeginPeriod function to set the timer resolution to its minimum.
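This is easy to observe from Java. Below is a minimal sketch (not part of the quoted documentation) that times repeated Thread.sleep(1) calls with System.nanoTime; the iteration count is arbitrary, and the numbers you get depend on whatever timer resolution is in effect on your machine while it runs:

    public class SleepResolutionProbe {
        public static void main(String[] args) throws InterruptedException {
            long minNanos = Long.MAX_VALUE;
            long maxNanos = 0;
            // Time Thread.sleep(1) repeatedly; the observed delay depends on
            // the interrupt period / system clock resolution described above.
            for (int i = 0; i < 100; i++) {
                long start = System.nanoTime();
                Thread.sleep(1);
                long elapsed = System.nanoTime() - start;
                minNanos = Math.min(minNanos, elapsed);
                maxNanos = Math.max(maxNanos, elapsed);
            }
            System.out.printf("Thread.sleep(1): min %.3f ms, max %.3f ms%n",
                    minNanos / 1e6, maxNanos / 1e6);
        }
    }

Whether the minimum ever drops below the requested 1 ms depends on the timer resolution in effect while the program runs.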
The OS only reacts at interrupts and therefore handles sleep expiries at the time of an interrupt. It is correct that the interrupt frequency can be increased by means of timeBeginPeriod. The difficulty is that the expiry of the Sleep() function requires two conditions to be met:

1. An interrupt has to occur.
2. The system time elapsed since the call, as sampled at that interrupt, has to exceed dwMilliseconds.

Condition 2 is the problem here. The dwMilliseconds value is compared to the elapsed system time at interrupts. The system time causes the Sleep() function to expire in filetime-format increments, in other words when n times the system-time increment becomes larger than dwMilliseconds. Thus one may never be able to get 1 ms sleep delays. This heavily depends on the system's hardware, software and configuration (system-time increment/granularity).
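To make the "n times the system-time increment" rule concrete, here is a small sketch that computes when a Sleep(1) would expire for a few hypothetical system-time increments; the increment values are assumptions chosen for illustration, not measured values:

    public class SleepExpiryModel {
        public static void main(String[] args) {
            double requestedMs = 1.0; // dwMilliseconds passed to Sleep()
            // Hypothetical system-time increments in milliseconds, chosen only
            // to illustrate the expiry rule described above.
            double[] incrementsMs = {15.625, 1.0, 0.5};
            for (double incrementMs : incrementsMs) {
                // The sleep expires once n * increment becomes larger than the
                // requested delay, so find the smallest such n.
                long n = (long) (requestedMs / incrementMs) + 1;
                System.out.printf("increment %.3f ms -> Sleep(1) expires after ~%.3f ms%n",
                        incrementMs, n * incrementMs);
            }
        }
    }

Note that even with a 1 ms increment this model never expires in exactly 1 ms, which is the point made above about never getting 1 ms sleep delays.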
A closer look with some examples can be found here.
To answer the question: Thread.sleep(1000) sleeps for >= 1000 milliseconds is always TRUE!

Edit: However, Thread.sleep(1) sleeps for >= 1 millisecond may not always be TRUE, e.g. when executed right after a Thread.sleep(1).
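One way to check both claims on a given machine is the following sketch; a single run is not proof, since it only samples whatever timer resolution happens to be in effect at that moment:

    public class SleepClaimCheck {
        static void timeSleep(long millis) throws InterruptedException {
            long start = System.nanoTime();
            Thread.sleep(millis);
            double elapsedMs = (System.nanoTime() - start) / 1e6;
            // Compare the observed delay against the requested one.
            System.out.printf("Thread.sleep(%d): %.3f ms (>= requested: %b)%n",
                    millis, elapsedMs, elapsedMs >= millis);
        }

        public static void main(String[] args) throws InterruptedException {
            timeSleep(1000); // the claim: always >= 1000 ms
            timeSleep(1);    // the claim: may be < 1 ms
        }
    }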