I can give it floating point numbers, such as
time.sleep(0.5)
but how accurate is it? If I give it
time.sleep(0.05)
will it really sleep about 50 ms?
The accuracy of the time.sleep function depends on your underlying OS's sleep accuracy. For non-realtime OSes like stock Windows, the smallest interval you can sleep for is about 10-13 ms. When sleeping above that 10-13 ms minimum, I have seen sleeps accurate to within several milliseconds of the requested time.
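You can get a rough feel for that floor yourself. Here is a minimal sketch (min_sleep is a made-up helper; time.perf_counter needs Python 3.3+) that requests a sleep far below the OS minimum and reports the shortest sleep actually delivered:

import time

def min_sleep(samples=100):
    # request a sleep far below the OS minimum and record the best case
    best = float("inf")
    for _ in range(samples):
        start = time.perf_counter()
        time.sleep(0.0001)  # ask for 0.1 ms
        best = min(best, time.perf_counter() - start)
    return best

print("shortest actual sleep: %.1f ms" % (min_sleep() * 1000))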
Update: As mentioned in the docs cited below, it's common to do the sleep in a loop that makes sure to go back to sleep if it wakes you up early.
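A minimal sketch of that loop idiom (sleep_until is a hypothetical helper; time.monotonic needs Python 3.3+):

import time

def sleep_until(deadline):
    # keep sleeping until the deadline passes, in case sleep() wakes early
    while True:
        remaining = deadline - time.monotonic()
        if remaining <= 0:
            break
        time.sleep(remaining)

sleep_until(time.monotonic() + 0.050)  # sleep roughly 50 ms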
I should also mention that if you are running Ubuntu you can try out a pseudo real-time kernel (with the RT_PREEMPT patch set) by installing the rt kernel package (at least in Ubuntu 10.04 LTS).
EDIT: Correction: non-realtime Linux kernels have a minimum sleep interval much closer to 1 ms than 10 ms, but it varies in a non-deterministic manner.
People are quite right about the differences between operating systems and kernels, but I do not see any granularity in Ubuntu, while I see a 1 ms granularity in Windows 7. That suggests a different implementation of time.sleep, not just a different tick rate. Closer inspection suggests a 1 μs granularity in Ubuntu, by the way, but that is due to the time.time function that I use for measuring the accuracy.
From the documentation:
On the other hand, the precision of time() and sleep() is better than their Unix equivalents: times are expressed as floating point numbers, time() returns the most accurate time available (using Unix gettimeofday where available), and sleep() will accept a time with a nonzero fraction (Unix select is used to implement this, where available).
And more specifically w.r.t. sleep():
Suspend execution for the given number of seconds. The argument may be a floating point number to indicate a more precise sleep time. The actual suspension time may be less than that requested because any caught signal will terminate the sleep() following execution of that signal's catching routine. Also, the suspension time may be longer than requested by an arbitrary amount because of the scheduling of other activity in the system.
Why don't you find out:
from datetime import datetime
import time

def check_sleep(amount):
    # measure how long time.sleep actually slept
    start = datetime.now()
    time.sleep(amount)
    end = datetime.now()
    delta = end - start
    return delta.seconds + delta.microseconds / 1000000.0

# average absolute error over 100 samples, converted to milliseconds
error = sum(abs(check_sleep(0.050) - 0.050) for i in range(100)) * 10
print("Average error is %0.2fms" % error)
For the record, I get around 0.1 ms error on my HTPC and 2 ms on my laptop, both Linux machines.
Here's my follow-up to Wilbert's answer: the same measurement for Mac OS X Yosemite, since it's not been mentioned much yet.
It looks like a lot of the time it sleeps about 1.25 times the requested time, and sometimes between 1 and 1.25 times the requested time. It almost never (roughly twice out of 1000 samples) sleeps significantly more than 1.25 times the requested time.
Also (not shown explicitly), the 1.25 relationship seems to hold pretty well until you get below about 0.2 ms, after which it starts to get a little fuzzy. Additionally, the actual time seems to settle at about 5 ms longer than requested once the requested time gets above 20 ms.
Again, it appears to be a completely different implementation of sleep() in OS X than in Windows or whichever Linux kernel Wilbert was using.
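Wilbert's script measures average error; a sketch of the kind of measurement that produces ratios like those above (sleep_ratio is a made-up name; time.perf_counter needs Python 3.3+):

import time

def sleep_ratio(requested, samples=1000):
    # average ratio of actual to requested sleep time
    total = 0.0
    for _ in range(samples):
        start = time.perf_counter()
        time.sleep(requested)
        total += (time.perf_counter() - start) / requested
    return total / samples

print(sleep_ratio(0.001))  # around 1.25 on the Yosemite machine described above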
A small correction: several people mention that sleep can be ended early by a signal. In the 3.6 docs it says,
Changed in version 3.5: The function now sleeps at least secs even if the sleep is interrupted by a signal, except if the signal handler raises an exception (see PEP 475 for the rationale).
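You can see that change with a sketch like the following (SIGALRM and signal.alarm are Unix-only, so this won't run on Windows):

import signal
import time

def handler(signum, frame):
    # runs when SIGALRM arrives, roughly 1 second into the sleep
    print("signal handled")

signal.signal(signal.SIGALRM, handler)
signal.alarm(1)  # deliver SIGALRM after ~1 second

start = time.monotonic()
time.sleep(3)    # on Python >= 3.5 the sleep resumes after the handler
print("slept %.2fs" % (time.monotonic() - start))  # ~3 s, not ~1 s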
You can't really guarantee anything about sleep(), except that it will at least make a best effort to sleep as long as you told it (signals can kill your sleep before the time is up, and lots more things can make it run long).
For sure the minimum you can get on a standard desktop operating system is going to be around 16 ms (timer granularity plus time to context switch), but chances are that the percentage deviation from the provided argument is going to be significant when you're trying to sleep for tens of milliseconds.
Signals, other threads holding the GIL, kernel scheduling fun, processor speed stepping, etc. can all play havoc with the duration your thread/process actually sleeps.
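If you need tighter timing than a bare sleep gives you, one common workaround (not from the answers above; precise_sleep and spin_threshold are made-up names) is to sleep coarsely for most of the interval and busy-wait the remainder, trading CPU time for accuracy:

import time

def precise_sleep(duration, spin_threshold=0.002):
    # coarse sleep for most of the interval, then busy-wait the last bit
    deadline = time.perf_counter() + duration
    coarse = duration - spin_threshold
    if coarse > 0:
        time.sleep(coarse)
    while time.perf_counter() < deadline:
        pass  # busy-wait until the deadline

precise_sleep(0.010)  # ~10 ms with much tighter tolerance than a bare sleep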
I tested this recently on Python 3.7 on Windows 10; precision was around 1 ms.