I have a while True loop which sends variables to an external function, and then uses the returned values. This send/receive process has a user-configurable frequency, which is saved and read from an external .ini configuration file.
I've tried time.sleep(1 / Frequency), but am not satisfied with the accuracy, given the number of threads being used elsewhere. For example, a frequency of 60 Hz (period of 0.0166667 s) gives an 'actual' time.sleep() period of ~0.0311 s.
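The actual period can be measured with a quick check like this (a minimal sketch; time.perf_counter() needs Python 3.3+, time.time() works too):

import time

requested = 1 / 60  # 60 Hz period
start = time.perf_counter()
time.sleep(requested)
elapsed = time.perf_counter() - start
print('requested {:.6f} s, actually slept {:.6f} s'.format(requested, elapsed))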
My preference would be to use an additional while loop, which compares the current time to the start time plus the period, as follows:
EndTime = time.time() + (1 / Frequency)
while time.time() - EndTime < 0:
    time.sleep(0)
This would fit at the end of my while True loop as follows:
while True:
    A = random.randint(0, 5)
    B = random.randint(0, 10)
    C = random.randint(0, 20)
    Values = ExternalFunction.main(Variable_A = A, Variable_B = B, Variable_C = C)
    Return_A = Values['A_Out']
    Return_B = Values['B_Out']
    Return_C = Values['C_Out']
    # Update other functions with Return_A, Return_B and Return_C
    EndTime = time.time() + (1 / Frequency)
    while time.time() - EndTime < 0:
        time.sleep(0)
I'm missing something, as the addition of the while loop causes the loop body to execute only once. How can I get the above to function correctly? Is this the best approach to 'accurate' frequency control on a non-real-time operating system? Should I be using threading for this particular component? I'm testing this function on both Windows 7 (64-bit) and Ubuntu (64-bit).
If I understood your question correctly, you want to execute ExternalFunction.main at a given frequency. The problem is that the execution of ExternalFunction.main itself takes some time. If you don't need very fine precision -- it seems that you don't -- my suggestion is to do something like this:
import time

frequency = 1  # Hz
period = 1.0 / frequency

while True:
    time_before = time.time()
    [...]
    ExternalFunction.main([...])
    [...]
    while (time.time() - time_before) < period:
        time.sleep(0.001)  # precision here
You may tune the precision to your needs: a smaller sleep value gives greater precision, but makes the inner while loop wake and check more often.
This achieves decent results when not using threads. However, when using Python threads, the GIL (Global Interpreter Lock) ensures that only one thread runs at a time. If you have a large number of threads, it may take far too long for the interpreter to get back to your main thread. Increasing the frequency at which Python switches between threads may give you more accurate delays.
Add this to the beginning of your code to increase the thread switching frequency.
import sys
sys.setcheckinterval(1)
Here, 1 is the number of bytecode instructions executed on each thread before switching (the default is 100); a larger number improves performance but increases the thread-switching latency.
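Note that sys.setcheckinterval() applies to Python 2; it was deprecated in Python 3.2 in favor of a time-based switch interval. A minimal equivalent on Python 3.2+ (the 0.001 value is an illustrative choice, not from the answer above):

import sys

# Ask the interpreter to consider switching threads every 1 ms
# (the default is 0.005 s = 5 ms).
sys.setswitchinterval(0.001)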
You may want to try python-pause.
Pause until a Unix time, with millisecond precision:
import pause
pause.until(1370640569.7747359)
Pause using datetime:
import pause, datetime
dt = datetime.datetime(2013, 6, 2, 14, 36, 34, 383752)
pause.until(dt)
You may use it like this:

import pause
import datetime

freqHz = 60.0
td = datetime.timedelta(seconds=1 / freqHz)
dt = datetime.datetime.now()
while True:
    # Your code here
    dt += td
    pause.until(dt)
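Note that incrementing dt by a fixed timedelta schedules each iteration against an absolute deadline, so small timing errors don't accumulate into drift. If you'd rather avoid the third-party dependency, here is a minimal sketch of the same pattern using only the standard library (assuming Python 3.3+ for time.monotonic()):

import time

freqHz = 60.0
period = 1 / freqHz
deadline = time.monotonic() + period
while True:
    # Your code here
    # Sleep until the absolute deadline, then advance it by one period.
    time.sleep(max(0.0, deadline - time.monotonic()))
    deadline += period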
Another solution for an accurate delay is to use the perf_counter() function from the time module. It is especially useful on Windows, where time.sleep() is not accurate at millisecond resolution. In the example below, the function accurate_delay busy-waits to create a delay in milliseconds.
import time

def accurate_delay(delay):
    '''Provide an accurate time delay of `delay` milliseconds by busy-waiting.'''
    end = time.perf_counter() + delay / 1000
    while time.perf_counter() < end:
        pass

delay = 10

t_start = time.perf_counter()
print('Wait for {:.0f} ms. Start: {:.5f}'.format(delay, t_start))
accurate_delay(delay)
t_end = time.perf_counter()
print('End time: {:.5f}. Delay is {:.5f} ms'.format(t_end, 1000 * (t_end - t_start)))

# Measure the average overshoot over many runs.
total = 0
ntests = 1000
for i in range(ntests):
    t_start = time.perf_counter()
    accurate_delay(delay)
    t_end = time.perf_counter()
    print('Test completed: {:.2f}%'.format(i / ntests * 100), end='\r', flush=True)
    total = total + 1000 * (t_end - t_start) - delay

print('Average difference in time delay is {:.5f} ms.'.format(total / ntests))
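The busy-wait above is accurate but burns a full CPU core for the entire delay. A common compromise, sketched below (the hybrid_delay name and the 1 ms spin margin are illustrative assumptions, not from the answer above), is to sleep through most of the interval and spin only at the end:

import time

def hybrid_delay(delay_ms):
    '''Sleep for most of the interval, then busy-wait the remainder.'''
    end = time.perf_counter() + delay_ms / 1000
    # Coarse phase: let the OS scheduler handle all but the last millisecond.
    remaining = end - time.perf_counter()
    if remaining > 0.001:
        time.sleep(remaining - 0.001)
    # Fine phase: spin for the final stretch to hit the deadline precisely.
    while time.perf_counter() < end:
        pass

The 1 ms margin should cover typical scheduler jitter on Windows and Linux, but it may need tuning per platform.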