On Darwin, the POSIX standard `clock_gettime(CLOCK_MONOTONIC)` timer is not available. Instead, the highest-resolution monotonic timer is obtained through the `mach_absolute_time` function from `mach/mach_time.h`.
The result returned may be an unadjusted tick count from the processor, in which case the time units could be a strange multiple. For example, on a CPU with a 33 MHz tick count, Darwin returns 1000000000/33333335 as the exact units of the returned result (i.e., multiply the `mach_absolute_time` by that fraction to obtain a nanosecond value).
We usually wish to convert from exact ticks to "standard" (decimal) units, but unfortunately, naively multiplying the absolute time by the fraction will overflow even in 64-bit arithmetic. This is an error that Apple's sole piece of documentation on `mach_absolute_time` falls into (Technical Q&A QA1398).¹
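To make the failure mode concrete, the naive conversion looks something like this (a sketch of the pattern rather than QA1398's exact code):

```c
#include <mach/mach_time.h>
#include <stdint.h>

// Naive conversion from a tick difference to nanoseconds.  The product
// (end - start) * tb.numer is computed in 64-bit arithmetic, so with
// numer = 1000000000 it wraps once the difference exceeds 2^64 / 10^9 ticks,
// which is only a few minutes of real time on the PowerPC timebases below.
uint64_t elapsedNanosNaive(uint64_t start, uint64_t end) {
  mach_timebase_info_data_t tb;
  mach_timebase_info(&tb);
  return (end - start) * tb.numer / tb.denom;
}
```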
How should I write a function that correctly uses `mach_absolute_time`?
- ¹ Note that this is not a theoretical problem: the sample code in QA1398 completely fails to work on PowerPC-based Macs. On Intel Macs, `mach_timebase_info` always returns 1/1 as the scaling factor because the CPU's raw tick count is unreliable (dynamic speed-stepping), so the API does the scaling for you. On PowerPC Macs, `mach_timebase_info` returns either 1000000000/33333335 or 1000000000/25000000, so Apple's provided code definitely overflows every few minutes. Oops.
Most-precise (best) answer
Perform the arithmetic at 128-bit precision to avoid the overflow!
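A minimal sketch of this approach, assuming a compiler that provides the non-standard `__uint128_t` type (clang and gcc on 64-bit targets do); the function names here are illustrative, not part of any Apple API:

```c
#include <mach/mach_time.h>
#include <stdint.h>

// Rescale a tick count to nanoseconds, doing the multiplication at 128-bit
// precision so that (ticks * numer) cannot overflow.
static uint64_t scaleHighPrecision(uint64_t ticks, uint32_t numer, uint32_t denom) {
  return (uint64_t)(((__uint128_t)ticks * numer) / denom);
}

// Monotonic time in nanoseconds, measured from the first call in the process.
// (First-call initialisation is not thread-safe in this sketch.)
uint64_t monotonicTimeNanos(void) {
  uint64_t now = mach_absolute_time();
  static mach_timebase_info_data_t tb;
  static uint64_t bias;
  if (tb.denom == 0) {          // first call: cache the timebase and the bias
    mach_timebase_info(&tb);
    bias = now;
  }
  return scaleHighPrecision(now - bias, tb.numer, tb.denom);
}
```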
A simple low-resolution answer
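A sketch of one way to do this (illustrative only, not necessarily the best choice of constants): throw away some low bits of the tick count before scaling. Every bit dropped doubles the timespan that can be handled before the multiplication overflows, and coarsens the result by the same factor.

```c
#include <mach/mach_time.h>
#include <stdint.h>

// Monotonic nanoseconds at reduced resolution.  Dropping the low 20 bits of
// the tick count stretches the usable span by a factor of 2^20 (decades rather
// than minutes on the PowerPC timebases above), at the cost of a granularity
// of about 1 ms on Intel's 1/1 timebase and about 30 ms at 33 MHz.
uint64_t monotonicTimeCoarseNanos(void) {
  static mach_timebase_info_data_t tb;
  if (tb.denom == 0) mach_timebase_info(&tb);   // cache the timebase on first use
  uint64_t coarseTicks = mach_absolute_time() >> 20;
  return (coarseTicks * tb.numer / tb.denom) << 20;
}
```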
A fiddly solution using low-precision arithmetic but using continued fractions to avoid loss of accuracy
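The derivation below talks about helpers called `getExpressibleSpan` and `bestFrac`. Roughly, and only as a sketch of how the pieces fit together (with `UINT64_MAX / numer` standing in for the rough lower-bound calculation mentioned in the derivation), they look like this:

```c
#include <assert.h>
#include <mach/mach_time.h>
#include <math.h>
#include <stdint.h>

// How many seconds of tick difference we can rescale before the
// multiplication by numer overflows 64 bits.
static double getExpressibleSpan(uint32_t numer, uint32_t denom) {
  uint64_t maxDiffWithoutOverflow = UINT64_MAX / numer;
  return (double)maxDiffWithoutOverflow * numer / denom / 1e9;
}

// The fraction with the smallest denominator that lies within [a, b],
// found by walking the continued-fraction expansion of the interval.
static mach_timebase_info_data_t bestFrac(double a, double b) {
  if (ceil(a) <= b) {                      // the interval contains an integer
    mach_timebase_info_data_t rv = {(uint32_t)ceil(a), 1};
    return rv;
  }
  double m = floor(a);
  mach_timebase_info_data_t next = bestFrac(1 / (b - m), 1 / (a - m));
  mach_timebase_info_data_t rv = {(uint32_t)(m * next.numer + next.denom), next.numer};
  return rv;
}

// Monotonic time in nanoseconds since the first call.  The reduced fraction
// may make the clock run up to 0.1% faster or slower than the exact tick rate.
// (First-call initialisation is not thread-safe in this sketch.)
uint64_t monotonicTimeNanos(void) {
  uint64_t now = mach_absolute_time();
  static mach_timebase_info_data_t tb;
  static uint64_t bias;
  if (tb.denom == 0) {                     // first call: fetch and reduce the timebase
    bias = now;
    mach_timebase_info(&tb);
    double frac = (double)tb.numer / tb.denom;
    double spanTargetSecs = 10 * 365.25 * 86400;   // aim to cover ten years
    if (getExpressibleSpan(tb.numer, tb.denom) < spanTargetSecs) {
      // Swap in the nearby fraction (within 0.1%) with the smallest denominator.
      tb = bestFrac(0.999 * frac, 1.001 * frac);
      assert(getExpressibleSpan(tb.numer, tb.denom) >= spanTargetSecs);
    }
  }
  return (now - bias) * tb.numer / tb.denom;
}
```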
The derivation
We aim to reduce the fraction returned by `mach_timebase_info` to one that is essentially the same, but with a small denominator. The size of the timespan that we can handle is limited only by the size of the denominator, not the numerator, of the fraction we shall multiply by: if denom=33333335 as returned by `mach_timebase_info`, we can handle differences of only about nine minutes (2^64/denom nanoseconds) before the multiplication by numer overflows. As `getExpressibleSpan` shows by calculating a rough lower bound for this, the size of numer doesn't matter: halving numer doubles `maxDiffWithoutOverflow`.

The only goal therefore is to produce a fraction close to numer/denom that has a smaller denominator. The simplest method to do this is using continued fractions.
The continued fractions method is rather handy. `bestFrac` clearly works correctly if the provided interval contains an integer: it returns the least integer in the interval over 1. Otherwise, it calls itself recursively with a strictly larger interval and returns m + 1/next. The final result is a continued fraction that can be shown by induction to have the correct property: it is optimal, the fraction inside the given interval with the least denominator.
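For example, the PowerPC fraction 1000000000/33333335 is about 29.9999985, so with the 0.1% error budget we adopt below the interval handed to `bestFrac` is roughly [29.97, 30.03]; that interval contains the integer 30, so `bestFrac` returns 30/1 in a single step (and 1000000000/25000000 is exactly 40, which reduces to 40/1). With a denominator of 1 the rescaling multiplication cannot overflow until the nanosecond result itself would.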
Finally, we reduce the fraction Darwin passes us to a smaller one to use when rescaling the `mach_absolute_time` to nanoseconds. We may introduce an error here because we can't reduce the fraction in general without losing accuracy. We set ourselves the target of 0.1% error, and check that we've reduced the fraction enough for common timespans (up to ten years) to be handled correctly.

Arguably the method is over-complicated for what it does, but it handles correctly anything the API can throw at it, and the resulting code is still short and extremely fast (`bestFrac` typically recurses only three or four iterations deep before returning a denominator less than 1000 for random intervals [a, a*1.002]).